Sample records for simple test problem

  1. Simple Test Functions in Meshless Local Petrov-Galerkin Methods

    NASA Technical Reports Server (NTRS)

    Raju, Ivatury S.

    2016-01-01

    Two meshless local Petrov-Galerkin (MLPG) methods based on two different trial functions but that use a simple linear test function were developed for beam and column problems. These methods used generalized moving least squares (GMLS) and radial basis (RB) interpolation functions as trial functions. These two methods were tested on various patch test problems. Both methods passed the patch tests successfully. Then the methods were applied to various beam vibration problems and problems involving Euler and Beck's columns. Both methods yielded accurate solutions for all problems studied. The simple linear test function offers considerable savings in computing effort, as the domain integrals involved in the weak form are avoided. The two methods based on this simple linear test function produced accurate results for frequencies and buckling loads. Of the two methods studied, the method with radial basis trial functions is very attractive as the method is simple, accurate, and robust.

  2. New Tools to Prepare ACE Cross-section Files for MCNP Analytic Test Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    Monte Carlo calculations using one-group cross sections, multigroup cross sections, or simple continuous energy cross sections are often used to: (1) verify production codes against known analytical solutions, (2) verify new methods and algorithms that do not involve detailed collision physics, (3) compare Monte Carlo calculation methods with deterministic methods, and (4) teach fundamentals to students. In this work we describe 2 new tools for preparing the ACE cross-section files to be used by MCNP® for these analytic test problems, simple_ace.pl and simple_ace_mg.pl.

  3. Effects of alcohol on a problem solving task.

    DOT National Transportation Integrated Search

    1972-03-01

    Twenty subjects were tested on two separate days on a simple problem-solving task. Half of the subjects received alcohol on the first day of testing and half on the second day of testing. A control group of 11 subjects was also tested on two days and...

  4. Building and Solving Odd-One-Out Classification Problems: A Systematic Approach

    ERIC Educational Resources Information Center

    Ruiz, Philippe E.

    2011-01-01

    Classification problems ("find the odd-one-out") are frequently used as tests of inductive reasoning to evaluate human or animal intelligence. This paper introduces a systematic method for building the set of all possible classification problems, followed by a simple algorithm for solving the problems of the R-ASCM, a psychometric test derived…

  5. Clairvoyant fusion: a new methodology for designing robust detection algorithms

    NASA Astrophysics Data System (ADS)

    Schaum, Alan

    2016-10-01

    Many realistic detection problems cannot be solved with simple statistical tests for known alternative probability models. Uncontrollable environmental conditions, imperfect sensors, and other uncertainties transform simple detection problems with likelihood ratio solutions into composite hypothesis (CH) testing problems. Recently many multi- and hyperspectral sensing CH problems have been addressed with a new approach. Clairvoyant fusion (CF) integrates the optimal detectors ("clairvoyants") associated with every unspecified value of the parameters appearing in a detection model. For problems with discrete parameter values, logical rules emerge for combining the decisions of the associated clairvoyants. For many problems with continuous parameters, analytic methods of CF have been found that produce closed-form solutions, or approximations for intractable problems. Here the principles of CF are reviewed and mathematical insights are described that have proven useful in the derivation of solutions. It is also shown how a second-stage fusion procedure can be used to create theoretically superior detection algorithms for all discrete parameter problems.

  6. Operator Priming and Generalization of Practice in Adults' Simple Arithmetic

    ERIC Educational Resources Information Center

    Chen, Yalin; Campbell, Jamie I. D.

    2016-01-01

    There is a renewed debate about whether educated adults solve simple addition problems (e.g., 2 + 3) by direct fact retrieval or by fast, automatic counting-based procedures. Recent research testing adults' simple addition and multiplication showed that a 150-ms preview of the operator (+ or ×) facilitated addition, but not multiplication,…

  7. Erratum: Simple Seismic Tests of the Solar Core

    NASA Astrophysics Data System (ADS)

    Kennedy, Dallas C.

    2000-12-01

    In the article ``Simple Seismic Tests of the Solar Core'' by Dallas C. Kennedy (ApJ, 540, 1109 [2000]), Figures 1, 2, and 3 in the print edition of the Journal were unreadable because of problems with the electronic file format. The figures in the electronic edition were unaffected. The figures should have appeared as below. The Press sincerely regrets this error.

  8. Simple and complex mental subtraction: strategy choice and speed-of-processing differences in younger and older adults.

    PubMed

    Geary, D C; Frensch, P A; Wiley, J G

    1993-06-01

    Thirty-six younger adults (10 male, 26 female; ages 18 to 38 years) and 36 older adults (14 male, 22 female; ages 61 to 80 years) completed simple and complex paper-and-pencil subtraction tests and solved a series of simple and complex computer-presented subtraction problems. For the computer task, strategies and solution times were recorded on a trial-by-trial basis. Older Ss used a developmentally more mature mix of problem-solving strategies to solve both simple and complex subtraction problems. Analyses of component scores derived from the solution times suggest that the older Ss are slower at number encoding and number production but faster at executing the borrow procedure. In contrast, groups did not appear to differ in the speed of subtraction fact retrieval. Results from a computational simulation are consistent with the interpretation that older adults' advantage for strategy choices and for the speed of executing the borrow procedure might result from more practice solving subtraction problems.

  9. Intellectual Abilities That Discriminate Good and Poor Problem Solvers.

    ERIC Educational Resources Information Center

    Meyer, Ruth Ann

    1981-01-01

    This study compared good and poor fourth-grade problem solvers on a battery of 19 "reference" tests for verbal, induction, numerical, word fluency, memory, perceptual speed, and simple visualization abilities. Results suggest verbal, numerical, and especially induction abilities are important to successful mathematical problem solving.…

  10. The Development of Student’s Activity Sheets (SAS) Based on Multiple Intelligences and Problem-Solving Skills Using Simple Science Tools

    NASA Astrophysics Data System (ADS)

    Wardani, D. S.; Kirana, T.; Ibrahim, M.

    2018-01-01

    The aim of this research is to produce SAS based on MI and problem-solving skills using simple science tools that are suitable for use by elementary school students. The feasibility of SAS is evaluated based on its validity, practicality, and effectiveness. The completeness of Lesson Plan (LP) implementation and student activities are the indicators of SAS practicality. The effectiveness of SAS is measured by indicators of increased learning outcomes and problem-solving skills. The development of SAS follows the 4-D (define, design, develop, and disseminate) phases. However, this study was carried out only through the third stage (develop). The written SAS was then validated through expert evaluation by two science experts before it was tested on the target students. The try-out of SAS used a one-group pre-test and post-test design. The results of this research show that SAS is valid, with a “good” rating. In addition, SAS is considered practical, as seen from the increase in student activity at each meeting and in LP implementation. Moreover, it was considered effective due to the significant difference between pre-test and post-test results for the learning outcomes and the problem-solving skill test. Therefore, SAS is feasible for use in learning.

  11. Investigating student understanding of simple harmonic motion

    NASA Astrophysics Data System (ADS)

    Somroob, S.; Wattanakasiwich, P.

    2017-09-01

    This study aimed to investigate students’ understanding and develop instructional material on the topic of simple harmonic motion. Participants were 60 students taking a course on vibrations and waves, 46 students taking a course on Physics 2, and 28 students taking a course on Fundamental Physics 2 in the 2nd semester of the 2016 academic year. A 16-question conceptual test and tutorial activities had been developed from previous research findings and evaluated by three physics experts in teaching mechanics before being used in a real classroom. Data collection included both qualitative and quantitative methods. Item analysis and whole-test analysis were determined from student responses on the conceptual test. As a result, most students had misconceptions about restoring force, and they had problems connecting mathematical solutions to real motions, especially phase angle. Moreover, they had problems with interpreting mechanical energy from graphs and diagrams of the motion. These results were used to develop effective instructional materials to enhance student abilities in understanding simple harmonic motion in terms of multiple representations.

  12. Operator priming and generalization of practice in adults' simple arithmetic.

    PubMed

    Chen, Yalin; Campbell, Jamie I D

    2016-04-01

    There is a renewed debate about whether educated adults solve simple addition problems (e.g., 2 + 3) by direct fact retrieval or by fast, automatic counting-based procedures. Recent research testing adults' simple addition and multiplication showed that a 150-ms preview of the operator (+ or ×) facilitated addition, but not multiplication, suggesting that a general addition procedure was primed by the + sign. In Experiment 1 (n = 36), we applied this operator-priming paradigm to rule-based problems (0 + N = N, 1 × N = N, 0 × N = 0) and 1 + N problems with N ranging from 0 to 9. For the rule-based problems, we found both operator-preview facilitation and generalization of practice (e.g., practicing 0 + 3 sped up unpracticed 0 + 8), the latter being a signature of procedure use; however, we also found operator-preview facilitation for 1 + N in the absence of generalization, which implies the 1 + N problems were solved by fact retrieval but nonetheless were facilitated by an operator preview. Thus, the operator preview effect does not discriminate procedure use from fact retrieval. Experiment 2 (n = 36) investigated whether a population with advanced mathematical training (engineering and computer science students) would show generalization of practice for nonrule-based simple addition problems (e.g., 1 + 4, 4 + 7). The 0 + N problems again presented generalization, whereas no nonzero problem type did; but all nonzero problems sped up when the identical problems were retested, as predicted by item-specific fact retrieval. The results pose a strong challenge to the generality of the proposal that skilled adults' simple addition is based on fast procedural algorithms, and instead support a fact-retrieval model of fast addition performance. (c) 2016 APA, all rights reserved.

  13. Pharmacotherapeutic education through problem based learning and its impact on cognitive and motivational attitude of Indian students.

    PubMed

    Chandra, D; Sharma, S; Sethi, G; Dkhar, S

    1996-01-01

    The cognitive and motivational attitudes to problem based learning (i.e., a simple didactic problem stated in written form and a Programmed Patient) have been compared with those to didactic lectures (DL), the traditional teaching method. The change in recall performance measured in MCQ tests was considered as a change in the cognitive domain. The first test was conducted one week after completion of the topic and the second test was taken 3 months later, without prior information. The motivational change was recorded by open-ended questions about the learning method. Three groups of students at the second MBBS professional year level, consisting of 55, 57 and 59 students, were assigned a simple didactic problem stated in written form (SDP), programmed patients (PP), and didactic lectures (DL), respectively. The average scores obtained by the learners in the problem based learning (PBL) groups were similar to those of the students in the DL group in both tests. Most of the students in the PBL groups appreciated the exercise and suggested including more such exercises in the curriculum. These exercises helped them to better understand patient problems and prescribing behaviour as well as to develop communication skills. However, these exercises were time consuming and were not examination oriented. Pharmacotherapeutic teaching through PBL could be used within a traditional curriculum to develop relevant and rational use of drugs, provided the evaluation method was also modified.

  14. Complexity and compositionality in fluid intelligence.

    PubMed

    Duncan, John; Chylinski, Daphne; Mitchell, Daniel J; Bhandari, Apoorva

    2017-05-16

    Compositionality, or the ability to build complex cognitive structures from simple parts, is fundamental to the power of the human mind. Here we relate this principle to the psychometric concept of fluid intelligence, traditionally measured with tests of complex reasoning. Following the principle of compositionality, we propose that the critical function in fluid intelligence is splitting a complex whole into simple, separately attended parts. To test this proposal, we modify traditional matrix reasoning problems to minimize requirements on information integration, working memory, and processing speed, creating problems that are trivial once effectively divided into parts. Performance remains poor in participants with low fluid intelligence, but is radically improved by problem layout that aids cognitive segmentation. In line with the principle of compositionality, we suggest that effective cognitive segmentation is important in all organized behavior, explaining the broad role of fluid intelligence in successful cognition.

  15. The continuum fusion theory of signal detection applied to a bi-modal fusion problem

    NASA Astrophysics Data System (ADS)

    Schaum, A.

    2011-05-01

    A new formalism has been developed that produces detection algorithms for model-based problems in which one or more parameter values are unknown. Continuum Fusion can be used to generate different flavors of algorithm for any composite hypothesis testing problem. The methodology is defined by a fusion logic that can be translated into max/min conditions. Here it is applied to a simple sensor fusion model, but one for which the generalized likelihood ratio test is intractable. By contrast, a fusion-based response to the same problem can be devised that is solvable in closed form and represents a good approximation to the GLR test.

  16. What Are the Signs of Alzheimer's Disease? | NIH MedlinePlus the Magazine

    MedlinePlus

    … changes in behavior and personality; conduct tests of memory, problem solving, attention, counting, and language; carry out standard medical … over and over; having trouble paying bills or solving simple math problems; getting lost; losing things or putting them in …

  17. Multiobjective Optimization Using a Pareto Differential Evolution Approach

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.; Biegel, Bryan A. (Technical Monitor)

    2002-01-01

    Differential Evolution is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. In this paper, the Differential Evolution algorithm is extended to multiobjective optimization problems by using a Pareto-based approach. The algorithm performs well when applied to several test optimization problems from the literature.
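
    As a concrete illustration of the underlying operators, the sketch below shows one generation of classic single-objective Differential Evolution in Python; the Pareto-based extension described above would replace the greedy selection with nondominated sorting. The population handling and the F and CR values here are arbitrary illustrative choices, not the settings used by the authors.

    ```python
    import numpy as np

    def de_step(pop, f, F=0.8, CR=0.9, rng=None):
        """One generation of classic differential evolution (single-objective form).
        Only the basic mutation/crossover/selection operators are sketched; the
        paper's Pareto-based multiobjective selection is not reproduced here."""
        rng = np.random.default_rng(rng)
        n, d = pop.shape
        new_pop = pop.copy()
        for i in range(n):
            # pick three distinct population members other than i
            a, b, c = pop[rng.choice([j for j in range(n) if j != i], 3, replace=False)]
            mutant = a + F * (b - c)                 # differential mutation
            cross = rng.random(d) < CR
            cross[rng.integers(d)] = True            # guarantee at least one gene crosses
            trial = np.where(cross, mutant, pop[i])  # binomial crossover
            if f(trial) <= f(pop[i]):                # greedy one-to-one selection
                new_pop[i] = trial
        return new_pop

    # usage sketch: pop = np.random.uniform(-5, 5, size=(20, 3))
    #               pop = de_step(pop, lambda x: np.sum(x**2))
    ```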

  18. Complexity and compositionality in fluid intelligence

    PubMed Central

    Duncan, John; Chylinski, Daphne

    2017-01-01

    Compositionality, or the ability to build complex cognitive structures from simple parts, is fundamental to the power of the human mind. Here we relate this principle to the psychometric concept of fluid intelligence, traditionally measured with tests of complex reasoning. Following the principle of compositionality, we propose that the critical function in fluid intelligence is splitting a complex whole into simple, separately attended parts. To test this proposal, we modify traditional matrix reasoning problems to minimize requirements on information integration, working memory, and processing speed, creating problems that are trivial once effectively divided into parts. Performance remains poor in participants with low fluid intelligence, but is radically improved by problem layout that aids cognitive segmentation. In line with the principle of compositionality, we suggest that effective cognitive segmentation is important in all organized behavior, explaining the broad role of fluid intelligence in successful cognition. PMID:28461462

  19. Boundary condition computational procedures for inviscid, supersonic steady flow field calculations

    NASA Technical Reports Server (NTRS)

    Abbett, M. J.

    1971-01-01

    Results are given of a comparative study of numerical procedures for computing solid wall boundary points in supersonic inviscid flow calculations. Twenty-five different calculation procedures were tested on two sample problems: a simple expansion wave and a simple compression (two-dimensional steady flow). A simple calculation procedure was developed. The merits and shortcomings of the various procedures are discussed, along with complications for three-dimensional and time-dependent flows.

  20. Will Testing Solve Our Schools' Problems?

    ERIC Educational Resources Information Center

    James, David

    2002-01-01

    On January 25, 2001, at an elementary school in Washington, DC, President Bush said that testing is crucial "to determine whether or not children are learning." Testing is appealing to many because it is simple and easy. Americans want to believe that instituting something as routine and common as yearly testing will miraculously provide…

  1. Beyond Testing: Seven Assessments of Students and Schools More Effective than Standardized Tests

    ERIC Educational Resources Information Center

    Meier, Deborah; Knoester, Matthew

    2017-01-01

    The authors of the book argue that a fundamentally complex problem--how to assess the knowledge of a child--cannot be reduced to a simple test score. "Beyond Testing" describes seven forms of assessment that are more effective than standardized test results: (1) student self-assessments, (2) direct teacher observations of students and…

  2. Problem-Solving Test: Submitochondrial Localization of Proteins

    ERIC Educational Resources Information Center

    Szeberenyi, Jozsef

    2011-01-01

    Mitochondria are surrounded by two membranes (outer and inner mitochondrial membrane) that separate two mitochondrial compartments (intermembrane space and matrix). Hundreds of proteins are distributed among these submitochondrial components. A simple biochemical/immunological procedure is described in this test to determine the localization of…

  3. A simple implementation of a normal mixture approach to differential gene expression in multiclass microarrays.

    PubMed

    McLachlan, G J; Bean, R W; Jones, L Ben-Tovim

    2006-07-01

    An important problem in microarray experiments is the detection of genes that are differentially expressed in a given number of classes. We provide a straightforward and easily implemented method for estimating the posterior probability that an individual gene is null. The problem can be expressed in a two-component mixture framework, using an empirical Bayes approach. Current methods of implementing this approach either have some limitations due to the minimal assumptions made or with more specific assumptions are computationally intensive. By converting to a z-score the value of the test statistic used to test the significance of each gene, we propose a simple two-component normal mixture that models adequately the distribution of this score. The usefulness of our approach is demonstrated on three real datasets.
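
    The following Python sketch illustrates the kind of two-component normal mixture fit described above: a small EM loop over gene-wise z-scores that returns, for each gene, the posterior probability of the null component. It is only a hedged illustration of the empirical-Bayes idea, not the authors' implementation, and the initial values and iteration count are arbitrary.

    ```python
    import numpy as np
    from scipy.stats import norm

    def posterior_null_prob(z, n_iter=200):
        """Fit a two-component normal mixture to z-scores with EM and return the
        posterior probability that each gene belongs to component 0, which is
        initialised as the null N(0,1). Illustrative sketch only."""
        z = np.asarray(z, dtype=float)
        pi = 0.9                                  # initial null proportion (assumption)
        mu = np.array([0.0, 2.0])                 # null mean, non-null mean
        sd = np.array([1.0, np.std(z) + 1e-6])
        for _ in range(n_iter):
            # E-step: responsibilities of the null (f0) and non-null (f1) components
            f0 = pi * norm.pdf(z, mu[0], sd[0])
            f1 = (1 - pi) * norm.pdf(z, mu[1], sd[1])
            tau0 = f0 / (f0 + f1 + 1e-300)
            # M-step: update mixing proportion, means, and standard deviations
            pi = tau0.mean()
            mu[0] = np.sum(tau0 * z) / np.sum(tau0)
            mu[1] = np.sum((1 - tau0) * z) / np.sum(1 - tau0)
            sd[0] = np.sqrt(np.sum(tau0 * (z - mu[0]) ** 2) / np.sum(tau0)) + 1e-9
            sd[1] = np.sqrt(np.sum((1 - tau0) * (z - mu[1]) ** 2) / np.sum(1 - tau0)) + 1e-9
        return tau0

    # usage sketch: z = norm.ppf(per_gene_p_values); null_probs = posterior_null_prob(z)
    ```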

  4. Simulation of Stagnation Region Heating in Hypersonic Flow on Tetrahedral Grids

    NASA Technical Reports Server (NTRS)

    Gnoffo, Peter A.

    2007-01-01

    Hypersonic flow simulations using the node based, unstructured grid code FUN3D are presented. Applications include simple (cylinder) and complex (towed ballute) configurations. Emphasis throughout is on computation of stagnation region heating in hypersonic flow on tetrahedral grids. Hypersonic flow over a cylinder provides a simple test problem for exposing any flaws in a simulation algorithm with regard to its ability to compute accurate heating on such grids. Such flaws predominantly derive from the quality of the captured shock. The importance of pure tetrahedral formulations is discussed. Algorithm adjustments for the baseline Roe / Symmetric, Total-Variation-Diminishing (STVD) formulation to deal with simulation accuracy are presented. Formulations of surface normal gradients to compute heating and diffusion to the surface as needed for a radiative equilibrium wall boundary condition and finite catalytic wall boundary in the node-based unstructured environment are developed. A satisfactory resolution of the heating problem on tetrahedral grids is not realized here; however, a definition of a test problem, and discussion of observed algorithm behaviors to date are presented in order to promote further research on this important problem.

  5. Integrated Application of Active Controls (IAAC) technology to an advanced subsonic transport project: Test act system validation

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The primary objective of the Test Active Control Technology (ACT) System laboratory tests was to verify and validate the system concept, hardware, and software. The initial lab tests were open loop hardware tests of the Test ACT System as designed and built. During the course of the testing, minor problems were uncovered and corrected. Major software tests were run. The initial software testing was also open loop. These tests examined pitch control laws, wing load alleviation, signal selection/fault detection (SSFD), and output management. The Test ACT System was modified to interface with the direct drive valve (DDV) modules. The initial testing identified problem areas with DDV nonlinearities, valve friction induced limit cycling, DDV control loop instability, and channel command mismatch. The other DDV issue investigated was the ability to detect and isolate failures. Some simple schemes for failure detection were tested but were not completely satisfactory. The Test ACT System architecture continues to appear promising for ACT/FBW applications in systems that must be immune to worst case generic digital faults, and be able to tolerate two sequential nongeneric faults with no reduction in performance. The challenge in such an implementation would be to keep the analog element sufficiently simple to achieve the necessary reliability.

  6. Corrected goodness-of-fit test in covariance structure analysis.

    PubMed

    Hayakawa, Kazuhiko

    2018-05-17

    Many previous studies report simulation evidence that the goodness-of-fit test in covariance structure analysis or structural equation modeling suffers from the overrejection problem when the number of manifest variables is large compared with the sample size. In this study, we demonstrate that one of the tests considered in Browne (1974) can address this long-standing problem. We also propose a simple modification of Satorra and Bentler's mean and variance adjusted test for non-normal data. A Monte Carlo simulation is carried out to investigate the performance of the corrected tests in the context of a confirmatory factor model, a panel autoregressive model, and a cross-lagged panel (panel vector autoregressive) model. The simulation results reveal that the corrected tests overcome the overrejection problem and outperform existing tests in most cases. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  7. Simple simulation training system for short-wave radio station

    NASA Astrophysics Data System (ADS)

    Tan, Xianglin; Shao, Zhichao; Tu, Jianhua; Qu, Fuqi

    2018-04-01

    The short-wave radio station is one of the most important transmission equipment items of our signal corps, but in actual teaching there are few sets of equipment and many students, so the students' time for short-wave radio operation and practice is very limited. To solve this problem, the development of a simple simulation training system for the short-wave radio station is necessary. The system is developed by combining hardware and software to simulate the voice communication operation and signal principles of the short-wave radio station, and it can test the signal flow of the station. The test results indicate that the system is simple to operate, has a friendly human-machine interface, and can improve teaching efficiency.

  8. Easy way to determine quantitative spatial resolution distribution for a general inverse problem

    NASA Astrophysics Data System (ADS)

    An, M.; Feng, M.

    2013-12-01

    Computing the spatial resolution of a solution is nontrivial and can be more difficult than solving the inverse problem itself. Most geophysical studies, except tomographic studies, almost uniformly neglect the calculation of a practical spatial resolution. In seismic tomography studies, a qualitative resolution length can be indicated via visual inspection of the restoration of a synthetic structure (e.g., checkerboard tests). An effective strategy for obtaining a quantitative resolution length is to calculate Backus-Gilbert resolution kernels (also referred to as a resolution matrix) by matrix operations. However, not all resolution matrices can provide resolution length information, and the computation of the resolution matrix is often difficult for very large inverse problems. A new class of resolution matrices, called statistical resolution matrices (An, 2012, GJI), can be determined directly via a simple one-parameter nonlinear inversion performed on a limited number of pairs of random synthetic models and their inverse solutions. The whole procedure is restricted to the forward/inversion processes used in the real inverse problem and is independent of the inversion scheme used to obtain the solution. Spatial resolution lengths can be given directly during the inversion. Tests on 1D/2D/3D model inversions demonstrate that this simple method is valid at least for general linear inverse problems.

  9. An application of traveling salesman problem using the improved genetic algorithm on android google maps

    NASA Astrophysics Data System (ADS)

    Narwadi, Teguh; Subiyanto

    2017-03-01

    The Travelling Salesman Problem (TSP) is one of the best known NP-hard problems, which means that no exact algorithm is known to solve it in polynomial time. This paper presents a new application of a genetic algorithm combined with a local search technique, developed to solve the TSP. For the local search technique, an iterative hill climbing method is used. The system is implemented on the Android OS, because Android is now widely used around the world and is a mobile system. It is also integrated with the Google API to obtain the geographical locations and distances of the cities and to display the route. We then performed experiments to test the behavior of the application. To test its effectiveness, the hybrid genetic algorithm (HGA) is compared with a simple GA on 5 samples of cities in Central Java, Indonesia, with different numbers of cities. The experimental results show that, in terms of average solution quality, HGA is better than the simple GA in 5 tests out of 5 (100%). The results show that the hybrid genetic algorithm outperforms the genetic algorithm, especially for problems of higher complexity.
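
    A minimal, self-contained sketch of the hybrid idea (a genetic algorithm whose offspring are improved by iterative hill climbing) is given below for a symmetric distance matrix. The crossover, mutation rate, and population sizes are illustrative choices rather than the parameters used in the paper, and the Android/Google Maps integration is omitted.

    ```python
    import random

    def tour_length(tour, dist):
        return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

    def hill_climb(tour, dist, tries=100):
        """Iterative hill climbing: accept random segment reversals that shorten the tour."""
        best, best_len = tour[:], tour_length(tour, dist)
        for _ in range(tries):
            i, j = sorted(random.sample(range(len(best)), 2))
            cand = best[:i] + best[i:j + 1][::-1] + best[j + 1:]
            cand_len = tour_length(cand, dist)
            if cand_len < best_len:
                best, best_len = cand, cand_len
        return best

    def hybrid_ga(dist, pop_size=30, generations=200):
        """GA with hill-climbing local search on each offspring (illustrative sketch)."""
        n = len(dist)
        pop = [random.sample(range(n), n) for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=lambda t: tour_length(t, dist))
            next_pop = pop[:2]                       # elitism: keep the two best tours
            while len(next_pop) < pop_size:
                p1, p2 = random.sample(pop[:pop_size // 2], 2)
                cut = random.randrange(1, n)         # simplified order crossover
                child = p1[:cut] + [c for c in p2 if c not in p1[:cut]]
                if random.random() < 0.2:            # mutation: swap two cities
                    a, b = random.sample(range(n), 2)
                    child[a], child[b] = child[b], child[a]
                next_pop.append(hill_climb(child, dist))
            pop = next_pop
        return min(pop, key=lambda t: tour_length(t, dist))
    ```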

  10. Design of efficient and simple interface testing equipment for opto-electric tracking system

    NASA Astrophysics Data System (ADS)

    Liu, Qiong; Deng, Chao; Tian, Jing; Mao, Yao

    2016-10-01

    Interface testing for an opto-electric tracking system is an important task for assuring system performance; its aim is to verify, at different levels, whether the design of every electronic interface matches the communication protocols. Opto-electric tracking systems nowadays are complicated and composed of many functional units. Usually, interface testing is executed between completely manufactured units, so it depends heavily on unit design and manufacturing progress as well as on the people involved, and it often takes days or weeks, which is inefficient. To solve this problem, this paper proposes an efficient and simple interface testing equipment for opto-electric tracking systems, consisting of optional interface circuit cards, a processor, and a test program. The hardware cards provide the matched hardware interface(s), easily supplied by a hardware engineer. Automatic code generation is used to adapt to new communication protocols: automatic acquisition of items, automatic construction of the code architecture, and automatic encoding quickly form a new, adapted program. After a few simple steps, a customized interface testing equipment with a matching test program and interface(s) is ready for a system awaiting test within minutes. The equipment has been used on many opto-electric tracking systems to test all or part of their interfaces, reducing test time from days to hours and greatly improving test efficiency, with high software quality and stability and without manual coding. Used as a common tool, the interface testing equipment proposed in this paper has changed the traditional interface testing method and achieved much higher efficiency.

  11. A simple finite element method for linear hyperbolic problems

    DOE PAGES

    Mu, Lin; Ye, Xiu

    2017-09-14

    Here, we introduce a simple finite element method for solving first order hyperbolic equations with easy implementation and analysis. Our new method, with a symmetric, positive definite system, is designed to use discontinuous approximations on finite element partitions consisting of arbitrary shape of polygons/polyhedra. An error estimate is established. Extensive numerical examples are tested that demonstrate the robustness and flexibility of the method.

  12. A simple finite element method for linear hyperbolic problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mu, Lin; Ye, Xiu

    Here, we introduce a simple finite element method for solving first order hyperbolic equations with easy implementation and analysis. Our new method, with a symmetric, positive definite system, is designed to use discontinuous approximations on finite element partitions consisting of arbitrary shape of polygons/polyhedra. An error estimate is established. Extensive numerical examples are tested that demonstrate the robustness and flexibility of the method.

  13. Approximation algorithms for the min-power symmetric connectivity problem

    NASA Astrophysics Data System (ADS)

    Plotnikov, Roman; Erzin, Adil; Mladenovic, Nenad

    2016-10-01

    We consider the NP-hard problem of synthesizing an optimal spanning communication subgraph in a given arbitrary simple edge-weighted graph. This problem arises in wireless networks when minimizing the total transmission power consumption. We propose several new heuristics, based on the variable neighborhood search metaheuristic, for the approximate solution of the problem. We performed a numerical experiment in which all proposed algorithms were executed on randomly generated test samples. For these instances, on average, our algorithms outperform previously known heuristics.
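
    To make the objective concrete, the sketch below computes the min-power cost of a candidate spanning subgraph (each node transmits at the power of its heaviest incident chosen edge) and evaluates a minimum-spanning-tree baseline with networkx. The VNS heuristics themselves are not reproduced; the function names are illustrative.

    ```python
    import networkx as nx

    def total_power(G, edges):
        """Total transmission power of the spanning subgraph given by `edges`:
        each node's power is the weight of its heaviest incident chosen edge."""
        power = {v: 0.0 for v in G.nodes}
        for u, v in edges:
            w = G[u][v]["weight"]
            power[u] = max(power[u], w)
            power[v] = max(power[v], w)
        return sum(power.values())

    # a common baseline: use the minimum spanning tree as the communication subgraph
    # G = nx.Graph(); G.add_weighted_edges_from([(0, 1, 2.0), (1, 2, 1.5), (0, 2, 3.0)])
    # mst_edges = list(nx.minimum_spanning_edges(G, data=False))
    # print(total_power(G, mst_edges))
    ```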

  14. "Compacted" procedures for adults' simple addition: A review and critique of the evidence.

    PubMed

    Chen, Yalin; Campbell, Jamie I D

    2018-04-01

    We review recent empirical findings and arguments proffered as evidence that educated adults solve elementary addition problems (3 + 2, 4 + 1) using so-called compacted procedures (e.g., unconscious, automatic counting); a conclusion that could have significant pedagogical implications. We begin with the large-sample experiment reported by Uittenhove, Thevenot and Barrouillet (2016, Cognition, 146, 289-303), which tested 90 adults on the 81 single-digit addition problems from 1 + 1 to 9 + 9. They identified the 12 very-small addition problems with different operands both ≤ 4 (e.g., 4 + 3) as a distinct subgroup of problems solved by unconscious, automatic counting: These items yielded a near-perfectly linear increase in answer response time (RT) yoked to the sum of the operands. Using the data reported in the article, however, we show that there are clear violations of the sum-counting model's predictions among the very-small addition problems, and that there is no real RT boundary associated with addends ≤4. Furthermore, we show that a well-known associative retrieval model of addition facts-the network interference theory (Campbell, 1995)-predicts the results observed for these problems with high precision. We also review the other types of evidence adduced for the compacted procedure theory of simple addition and conclude that these findings are unconvincing in their own right and only distantly consistent with automatic counting. We conclude that the cumulative evidence for fast compacted procedures for adults' simple addition does not justify revision of the long-standing assumption that direct memory retrieval is ultimately the most efficient process of simple addition for nonzero problems, let alone sufficient to recommend significant changes to basic addition pedagogy.

  15. Job Proximity and the Urban Employment Problem: Do Suitable Nearby Jobs Improve Neighbourhood Employment Rates?: A Comment.

    ERIC Educational Resources Information Center

    Houston, Donald

    1998-01-01

    Discusses methodology to examine the problem of spatial mismatch of jobs, showing how the simple accessibility measures used by Daniel Immergluck (1998) are poor reflections of the availability of jobs to an individual and explaining why a gravity model is a favorable alternative. Also discusses the unsuitability of aggregate data for testing the…

  16. Introducing Simple Detection of Bioavailable Arsenic at Rafaela (Santa Fe Province, Argentina) Using the ARSOlux Biosensor.

    PubMed

    Siegfried, Konrad; Hahn-Tomer, Sonja; Koelsch, Andreas; Osterwalder, Eva; Mattusch, Juergen; Staerk, Hans-Joachim; Meichtry, Jorge M; De Seta, Graciela E; Reina, Fernando D; Panigatti, Cecilia; Litter, Marta I; Harms, Hauke

    2015-05-21

    Numerous articles have reported the occurrence of arsenic in drinking water in Argentina, and the resulting health effects in severely affected regions of the country. Arsenic in drinking water in Argentina is largely naturally occurring due to elevated background content of the metalloid in volcanic sediments, although, in some regions, mining can contribute. While the origin of arsenic release has been discussed extensively, the problem of drinking water contamination has not yet been solved. One key step in progress towards mitigation of problems related with the consumption of As-containing water is the availability of simple detection tools. A chemical test kit and the ARSOlux biosensor were evaluated as simple analytical tools for field measurements of arsenic in the groundwater of Rafaela (Santa Fe, Argentina), and the results were compared with ICP-MS and HPLC-ICP-MS measurements. A survey of the groundwater chemistry was performed to evaluate possible interferences with the field tests. The results showed that the ARSOlux biosensor performed better than the chemical field test, that the predominant species of arsenic in the study area was arsenate and that arsenic concentration in the studied samples had a positive correlation with fluoride and vanadium, and a negative one with calcium and iron.

  17. Introducing Simple Detection of Bioavailable Arsenic at Rafaela (Santa Fe Province, Argentina) Using the ARSOlux Biosensor

    PubMed Central

    Siegfried, Konrad; Hahn-Tomer, Sonja; Koelsch, Andreas; Osterwalder, Eva; Mattusch, Juergen; Staerk, Hans-Joachim; Meichtry, Jorge M.; De Seta, Graciela E.; Reina, Fernando D.; Panigatti, Cecilia; Litter, Marta I.; Harms, Hauke

    2015-01-01

    Numerous articles have reported the occurrence of arsenic in drinking water in Argentina, and the resulting health effects in severely affected regions of the country. Arsenic in drinking water in Argentina is largely naturally occurring due to elevated background content of the metalloid in volcanic sediments, although, in some regions, mining can contribute. While the origin of arsenic release has been discussed extensively, the problem of drinking water contamination has not yet been solved. One key step in progress towards mitigation of problems related with the consumption of As-containing water is the availability of simple detection tools. A chemical test kit and the ARSOlux biosensor were evaluated as simple analytical tools for field measurements of arsenic in the groundwater of Rafaela (Santa Fe, Argentina), and the results were compared with ICP-MS and HPLC-ICP-MS measurements. A survey of the groundwater chemistry was performed to evaluate possible interferences with the field tests. The results showed that the ARSOlux biosensor performed better than the chemical field test, that the predominant species of arsenic in the study area was arsenate and that arsenic concentration in the studied samples had a positive correlation with fluoride and vanadium, and a negative one with calcium and iron. PMID:26006123

  18. A Method for Automated Detection of Usability Problems from Client User Interface Events

    PubMed Central

    Saadawi, Gilan M.; Legowski, Elizabeth; Medvedeva, Olga; Chavan, Girish; Crowley, Rebecca S.

    2005-01-01

    Think-aloud usability analysis provides extremely useful data but is very time-consuming and expensive to perform because of the extensive manual video analysis that is required. We describe a simple method for automated detection of usability problems from client user interface events for a developing medical intelligent tutoring system. The method incorporates (1) an agent-based method for communication that funnels all interface events and system responses to a centralized database, (2) a simple schema for representing interface events and higher order subgoals, and (3) an algorithm that reproduces the criteria used for manual coding of usability problems. A correction factor was empirically determined to account for the slower task performance of users when thinking aloud. We tested the validity of the method by simultaneously identifying usability problems using think-aloud usability (TAU) analysis and computing them from stored interface event data using the proposed algorithm. All usability problems that did not rely on verbal utterances were detectable with the proposed method. PMID:16779121

  19. A CLIPS based personal computer hardware diagnostic system

    NASA Technical Reports Server (NTRS)

    Whitson, George M.

    1991-01-01

    Often the person designated to repair personal computers has little or no knowledge of how to repair a computer. Described here is a simple expert system to aid these inexperienced repair people. The first component of the system leads the repair person through a number of simple system checks such as making sure that all cables are tight and that the dip switches are set correctly. The second component of the system assists the repair person in evaluating error codes generated by the computer. The final component of the system applies a large knowledge base to attempt to identify the component of the personal computer that is malfunctioning. We have implemented and tested our design with a full system to diagnose problems for an IBM compatible system based on the 8088 chip. In our tests, the inexperienced repair people found the system very useful in diagnosing hardware problems.

  20. Asymptotic Linear Spectral Statistics for Spiked Hermitian Random Matrices

    NASA Astrophysics Data System (ADS)

    Passemier, Damien; McKay, Matthew R.; Chen, Yang

    2015-07-01

    Using the Coulomb Fluid method, this paper derives central limit theorems (CLTs) for linear spectral statistics of three "spiked" Hermitian random matrix ensembles. These include Johnstone's spiked model (i.e., central Wishart with spiked correlation), non-central Wishart with rank-one non-centrality, and a related class of non-central matrices. For a generic linear statistic, we derive simple and explicit CLT expressions as the matrix dimensions grow large. For all three ensembles under consideration, we find that the primary effect of the spike is to introduce a correction term to the asymptotic mean of the linear spectral statistic, which we characterize with simple formulas. The utility of our proposed framework is demonstrated through application to three different linear statistics problems: the classical likelihood ratio test for a population covariance, the capacity analysis of multi-antenna wireless communication systems with a line-of-sight transmission path, and a classical multiple sample significance testing problem.

  1. A Multivariate Randomization Test of Association Applied to Cognitive Test Results

    NASA Technical Reports Server (NTRS)

    Ahumada, Albert; Beard, Bettina

    2009-01-01

    Randomization tests provide a conceptually simple, distribution-free way to implement significance testing. We have applied this method to the problem of evaluating the significance of the association among a number (k) of variables. The randomization method was the random re-ordering of k-1 of the variables. The criterion variable was the value of the largest eigenvalue of the correlation matrix.
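
    A minimal Python sketch of the procedure as described, assuming the k variables are the columns of a data matrix: the statistic is the largest eigenvalue of the correlation matrix, and the null distribution is generated by independently re-ordering k-1 of the columns. The number of permutations is an arbitrary choice.

    ```python
    import numpy as np

    def randomization_test(X, n_perm=2000, rng=None):
        """Randomization test of association among the k columns of X (n rows).
        Returns the observed largest eigenvalue of the correlation matrix and a
        permutation p-value. Illustrative sketch of the described procedure."""
        rng = np.random.default_rng(rng)

        def stat(M):
            return np.linalg.eigvalsh(np.corrcoef(M, rowvar=False)).max()

        observed = stat(X)
        count = 0
        for _ in range(n_perm):
            Xp = X.copy()
            for j in range(1, X.shape[1]):      # re-order k-1 of the variables
                rng.shuffle(Xp[:, j])
            count += stat(Xp) >= observed
        return observed, (count + 1) / (n_perm + 1)
    ```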

  2. Student’s critical thinking skills in authentic problem based learning

    NASA Astrophysics Data System (ADS)

    Yuliati, L.; Fauziah, R.; Hidayat, A.

    2018-05-01

    This study aims to determine students’ critical thinking skills in authentic problem based learning, especially in geometric optics. The study was conducted at a vocational school. The study used a quantitative descriptive method with open questions to measure critical thinking skills. The indicators of critical thinking skills measured in this study are: formulating problems, providing simple answers, applying formulas and procedures, analyzing information, making conclusions, and synthesizing ideas. The results showed that there was a positive change in students’ critical thinking skills, with an average N-Gain of 0.59 and an effect size of 3.73. Students’ critical thinking skills need to be trained more intensively using authentic problems from daily life.

  3. Device-Independent Tests of Classical and Quantum Dimensions

    NASA Astrophysics Data System (ADS)

    Gallego, Rodrigo; Brunner, Nicolas; Hadley, Christopher; Acín, Antonio

    2010-12-01

    We address the problem of testing the dimensionality of classical and quantum systems in a “black-box” scenario. We develop a general formalism for tackling this problem. This allows us to derive lower bounds on the classical dimension necessary to reproduce given measurement data. Furthermore, we generalize the concept of quantum dimension witnesses to arbitrary quantum systems, allowing one to place a lower bound on the Hilbert space dimension necessary to reproduce certain data. Illustrating these ideas, we provide simple examples of classical and quantum dimension witnesses.

  4. Correlation of Three Techniques for Determining Soil Permeability

    ERIC Educational Resources Information Center

    Winneberger, John T.

    1974-01-01

    Discusses problems of acquiring adequate results when measuring for soil permeability. Correlates three relatively simple techniques that could be helpful to the inexperienced technician dealing with septic tank practices. An appendix includes procedures for valid percolation tests. (MLB)

  5. Restricted random search method based on taboo search in the multiple minima problem

    NASA Astrophysics Data System (ADS)

    Hong, Seung Do; Jhon, Mu Shik

    1997-03-01

    The restricted random search method is proposed as a simple Monte Carlo sampling method for quickly finding minima in the multiple-minima problem. This method is based on taboo search, recently applied to continuous test functions. The concept of a taboo region, instead of a taboo list, is used, so that sampling of a region near an old configuration is restricted. The method is applied to 2-dimensional test functions and to argon clusters. It is found to be a practical and efficient way to find near-global configurations of the test functions and the argon clusters.
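
    A hedged sketch of the taboo-region idea on a continuous test function: uniform random candidates are rejected if they fall within a fixed radius of any previously visited point. The radius, sample budget, and uniform proposal are illustrative assumptions, not the settings of the paper.

    ```python
    import numpy as np

    def restricted_random_search(f, bounds, n_samples=500, taboo_radius=0.1, rng=None):
        """Restricted random search sketch: accept only candidates outside the
        taboo balls around already-visited points; track the best objective value."""
        rng = np.random.default_rng(rng)
        lo, hi = np.asarray(bounds, dtype=float).T
        visited, best_x, best_f = [], None, np.inf
        draws = 0
        while len(visited) < n_samples and draws < 20 * n_samples:
            draws += 1
            x = rng.uniform(lo, hi)
            if any(np.linalg.norm(x - v) < taboo_radius for v in visited):
                continue                    # inside a taboo region: resample
            visited.append(x)
            fx = f(x)
            if fx < best_f:
                best_x, best_f = x, fx
        return best_x, best_f

    # e.g. restricted_random_search(lambda x: np.sum(x**2), bounds=[(-5, 5), (-5, 5)])
    ```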

  6. Neuropsychological study of FASD in a sample of American Indian children: processing simple versus complex information.

    PubMed

    Aragón, Alfredo S; Kalberg, Wendy O; Buckley, David; Barela-Scott, Lindsey M; Tabachnick, Barbara G; May, Philip A

    2008-12-01

    Although a large body of literature exists on cognitive functioning in alcohol-exposed children, it is unclear if there is a signature neuropsychological profile in children with Fetal Alcohol Spectrum Disorders (FASD). This study assesses cognitive functioning in children with FASD from several American Indian reservations in the Northern Plains States, and it applies a hierarchical model of simple versus complex information processing to further examine cognitive function. We hypothesized that complex tests would discriminate between children with FASD and culturally similar controls, while children with FASD would perform similar to controls on relatively simple tests. Our sample includes 32 control children and 24 children with a form of FASD [fetal alcohol syndrome (FAS) = 10, partial fetal alcohol syndrome (PFAS) = 14]. The test battery measures general cognitive ability, verbal fluency, executive functioning, memory, and fine-motor skills. Many of the neuropsychological tests produced results consistent with a hierarchical model of simple versus complex processing. The complexity of the tests was determined "a priori" based on the number of cognitive processes involved in them. Multidimensional scaling was used to statistically analyze the accuracy of classifying the neurocognitive tests into a simple versus complex dichotomy. Hierarchical logistic regression models were then used to define the contribution made by complex versus simple tests in predicting the significant differences between children with FASD and controls. Complex test items discriminated better than simple test items. The tests that conformed well to the model were the Verbal Fluency, Progressive Planning Test (PPT), the Lhermitte memory tasks, and the Grooved Pegboard Test (GPT). The FASD-grouped children, when compared with controls, demonstrated impaired performance on letter fluency, while their performance was similar on category fluency. On the more complex PPT trials (problems 5 to 8), as well as the Lhermitte logical tasks, the FASD group performed the worst. The differential performance between children with FASD and controls was evident across various neuropsychological measures. The children with FASD performed significantly more poorly on the complex tasks than did the controls. The identification of a neurobehavioral profile in children with prenatal alcohol exposure will help clinicians identify and diagnose children with FASD.

  7. Simulated parallel annealing within a neighborhood for optimization of biomechanical systems.

    PubMed

    Higginson, J S; Neptune, R R; Anderson, F C

    2005-09-01

    Optimization problems for biomechanical systems have become extremely complex. Simulated annealing (SA) algorithms have performed well in a variety of test problems and biomechanical applications; however, despite advances in computer speed, convergence to optimal solutions for systems of even moderate complexity has remained prohibitive. The objective of this study was to develop a portable parallel version of a SA algorithm for solving optimization problems in biomechanics. The algorithm for simulated parallel annealing within a neighborhood (SPAN) was designed to minimize interprocessor communication time and closely retain the heuristics of the serial SA algorithm. The computational speed of the SPAN algorithm scaled linearly with the number of processors on different computer platforms for a simple quadratic test problem and for a more complex forward dynamic simulation of human pedaling.
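
    For reference, the sketch below implements the plain serial simulated annealing heuristic on a continuous objective, exercised on a simple quadratic test problem like the one mentioned above. The neighborhood-based parallelization that defines SPAN is not reproduced, and all tuning constants are illustrative.

    ```python
    import math
    import random

    def simulated_annealing(f, x0, step=0.5, t0=1.0, cooling=0.995, n_iter=20000):
        """Plain serial simulated annealing with Metropolis acceptance and
        geometric cooling. Sketch of the serial heuristic that SPAN parallelises."""
        x, fx = list(x0), f(x0)
        best_x, best_f, t = x[:], fx, t0
        for _ in range(n_iter):
            cand = [xi + random.uniform(-step, step) for xi in x]   # random neighbor
            fc = f(cand)
            # accept improvements always, worse moves with Boltzmann probability
            if fc < fx or random.random() < math.exp((fx - fc) / max(t, 1e-12)):
                x, fx = cand, fc
                if fc < best_f:
                    best_x, best_f = cand[:], fc
            t *= cooling
        return best_x, best_f

    # simple quadratic test problem, as in the benchmark mentioned above:
    # print(simulated_annealing(lambda v: sum(vi**2 for vi in v), x0=[3.0, -2.0]))
    ```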

  8. Transform-Based Wideband Array Processing

    DTIC Science & Technology

    1992-01-31

    … Breusch and Pagan [2], it is possible to test which model, AR or random coefficient, will better fit typical array data. The test indicates that … correlations do not obey an AR relationship across the array … Through the use of a binary hypothesis test, it is … (Reference fragments captured with the abstract: "…bearing estimation problems," Proc. IEEE, vol. 70, no. 9, pp. 1018-1028, 1982; [2] T. S. Breusch and A. R. Pagan, "A simple test for het…")

  9. Computational-hydrodynamic studies of the Noh compressible flow problem using non-ideal equations of state

    NASA Astrophysics Data System (ADS)

    Honnell, Kevin; Burnett, Sarah; Yorke, Chloe'; Howard, April; Ramsey, Scott

    2017-06-01

    The Noh problem is a classic verification problem in the field of compressible flows. Simple to conceptualize, it is nonetheless difficult for numerical codes to predict correctly, making it an ideal code-verification test bed. In its original incarnation, the fluid is a simple ideal gas; once validated, however, these codes are often used to study highly non-ideal fluids and solids. In this work the classic Noh problem is extended beyond the commonly studied polytropic ideal gas to more realistic equations of state (EOS) including the stiff gas, the Noble-Abel gas, and the Carnahan-Starling hard-sphere fluid, thus enabling verification studies to be performed on more physically realistic fluids. Exact solutions are compared with numerical results obtained from the Lagrangian hydrocode FLAG, developed at Los Alamos. For these more realistic EOSs, the simulation errors decreased in magnitude both at the origin and at the shock, but also spread more broadly about these points compared to the ideal EOS. The overall spatial convergence rate remained first order.

  10. Bayesian model checking: A comparison of tests

    NASA Astrophysics Data System (ADS)

    Lucy, L. B.

    2018-06-01

    Two procedures for checking Bayesian models are compared using a simple test problem based on the local Hubble expansion. Over four orders of magnitude, p-values derived from a global goodness-of-fit criterion for posterior probability density functions agree closely with posterior predictive p-values. The former can therefore serve as an effective proxy for the difficult-to-calculate posterior predictive p-values.
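
    A generic Python sketch of a posterior predictive p-value, one of the two checking procedures compared above: replicated data sets are simulated from posterior parameter draws and a discrepancy statistic is compared with its observed value. The `simulate` and `statistic` callables are hypothetical interfaces, and the Hubble-expansion test problem itself is not modeled.

    ```python
    import numpy as np

    def posterior_predictive_pvalue(y, posterior_draws, simulate, statistic, rng=None):
        """Estimate a posterior predictive p-value: the fraction of replicated data
        sets whose discrepancy statistic is at least as extreme as the observed one.
        `simulate(theta, rng)` and `statistic(data)` are user-supplied callables."""
        rng = np.random.default_rng(rng)
        t_obs = statistic(y)
        exceed = 0
        for theta in posterior_draws:
            y_rep = simulate(theta, rng)         # replicated data under this draw
            exceed += statistic(y_rep) >= t_obs
        return exceed / len(posterior_draws)
    ```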

  11. Fast determination of soil behavior in the capillary zone using simple laboratory tests.

    DOT National Transportation Integrated Search

    2012-12-01

    Frost heave and thaw weakening are typical problems for engineers building in northern regions. These unsaturated-soil behaviors are : caused by water flowing through the capillary zone to a freezing front, where it forms ice lenses. Although suction...

  12. Correlation of spacecraft thermal mathematical models to reference data

    NASA Astrophysics Data System (ADS)

    Torralbo, Ignacio; Perez-Grande, Isabel; Sanz-Andres, Angel; Piqueras, Javier

    2018-03-01

    Model-to-test correlation is a frequent problem in spacecraft thermal control design. The idea is to determine the values of the parameters of the thermal mathematical model (TMM) that give a good fit between the TMM results and test data, in order to reduce the uncertainty of the mathematical model. Quite often this task is performed manually, mainly because good engineering knowledge and experience are needed to reach a successful compromise, but a mathematical tool could facilitate this work. The correlation process can be considered as the minimization of the error of the model results with respect to the reference data. In this paper, a simple method suitable for solving the TMM-to-test correlation problem is presented, using a Jacobian matrix formulation and the Moore-Penrose pseudo-inverse, generalized to include several load cases. In simple cases this method also allows analytical solutions to be obtained, which helps to analyze some problems that appear when the Jacobian matrix is singular. To show the implementation of the method, two problems have been considered: one more academic, and the other the TMM of an electronic box of the PHI instrument of the ESA Solar Orbiter mission, to be flown in 2019. The use of singular value decomposition of the Jacobian matrix to analyze and reduce these models is also shown.
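
    The core update can be sketched in a few lines of numpy: a Gauss-Newton-style step that multiplies the stacked residuals over all load cases by the Moore-Penrose pseudo-inverse of the Jacobian. The `residual_fn` and `jacobian_fn` interfaces are hypothetical placeholders for the thermal solver, and the fixed iteration count is an arbitrary choice, not the authors' procedure.

    ```python
    import numpy as np

    def correlate_parameters(p0, residual_fn, jacobian_fn, n_iter=10):
        """Iteratively update TMM parameters so model temperatures approach test data.
        residual_fn(p): (model - test) temperatures stacked over all load cases.
        jacobian_fn(p): sensitivities of those residuals to the parameters."""
        p = np.asarray(p0, dtype=float)
        for _ in range(n_iter):
            r = residual_fn(p)                    # stacked residuals, all load cases
            J = jacobian_fn(p)                    # d(residual)/d(parameter) matrix
            p = p - np.linalg.pinv(J) @ r         # pseudo-inverse least-squares step
        return p
    ```

    The pseudo-inverse is computed from the singular value decomposition, which also indicates when the Jacobian is (nearly) singular and some parameter combinations cannot be identified from the test data.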

  13. Simple mental addition in children with and without mild mental retardation.

    PubMed

    Janssen, R; De Boeck, P; Viaene, M; Vallaeys, L

    1999-11-01

    The speeded performance on simple mental addition problems of 6- and 7-year-old children with and without mild mental retardation is modeled from a person perspective and an item perspective. On the person side, it was found that a single cognitive dimension spanned the performance differences between the two ability groups. However, a discontinuity, or "jump," was observed in the performance of the normal ability group on the easier items. On the item side, the addition problems were almost perfectly ordered in difficulty according to their problem size. Differences in difficulty were explained by factors related to the difficulty of executing nonretrieval strategies. All findings were interpreted within the framework of Siegler's (e.g., R. S. Siegler & C. Shipley, 1995) model of children's strategy choices in arithmetic. Models from item response theory were used to test the hypotheses. Copyright 1999 Academic Press.

  14. Resource-Competing Oscillator Network as a Model of Amoeba-Based Neurocomputer

    NASA Astrophysics Data System (ADS)

    Aono, Masashi; Hirata, Yoshito; Hara, Masahiko; Aihara, Kazuyuki

    An amoeboid organism, Physarum, exhibits rich spatiotemporal oscillatory behavior and various computational capabilities. Previously, the authors created a recurrent neurocomputer incorporating the amoeba as a computing substrate to solve optimization problems. In this paper, considering the amoeba to be a network of oscillators coupled such that they compete for constant amounts of resources, we present a model of the amoeba-based neurocomputer. The model generates a number of oscillation modes and produces not only simple behavior that stabilizes a single mode but also complex behavior that spontaneously switches among different modes, which reproduces well the experimentally observed behavior of the amoeba. To explore the significance of the complex behavior, we set up a test problem used to compare the computational performance of the oscillation modes. The problem is a kind of optimization problem of how to allocate a limited amount of resource to oscillators such that conflicts among them can be minimized. We show that the complex behavior makes it possible to attain a wider variety of solutions to the problem and produces better performance compared with the simple behavior.

  15. Fundamental differences between optimization code test problems in engineering applications

    NASA Technical Reports Server (NTRS)

    Eason, E. D.

    1984-01-01

    The purpose here is to suggest that there is at least one fundamental difference between the problems used for testing optimization codes and the problems that engineers often need to solve; in particular, the level of precision that can be practically achieved in the numerical evaluation of the objective function, derivatives, and constraints. This difference affects the performance of optimization codes, as illustrated by two examples. Two classes of optimization problem were defined. Class One functions and constraints can be evaluated to a high precision that depends primarily on the word length of the computer. Class Two functions and/or constraints can only be evaluated to a moderate or a low level of precision for economic or modeling reasons, regardless of the computer word length. Optimization codes have not been adequately tested on Class Two problems. There are very few Class Two test problems in the literature, while there are literally hundreds of Class One test problems. The relative performance of two codes may be markedly different for Class One and Class Two problems. Less sophisticated direct search type codes may be less likely to be confused or to waste many function evaluations on Class Two problems. The analysis accuracy and minimization performance are related in a complex way that probably varies from code to code. On a problem where the analysis precision was varied over a range, the simple Hooke and Jeeves code was more efficient at low precision while the Powell code was more efficient at high precision.
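
    The engineering examples behind the abstract are not reproduced here, but the central point, that Class Two precision limits corrupt derivative information long before they affect simple value comparisons, can be illustrated with a toy objective whose values are only known to a few digits. The test function and rounding level below are assumptions made for illustration.

```python
import numpy as np

def f_exact(x):
    return (x - 1.7) ** 2 + 3.0

def f_low_precision(x, digits=3):
    # Class Two behaviour: the objective is only known to a few digits.
    return round(f_exact(x), digits)

x, h = 0.5, 1e-6
true_grad = 2 * (x - 1.7)
fd_exact = (f_exact(x + h) - f_exact(x)) / h
fd_noisy = (f_low_precision(x + h) - f_low_precision(x)) / h

print(f"analytic gradient      : {true_grad:.4f}")
print(f"finite diff, full prec : {fd_exact:.4f}")
print(f"finite diff, 3 digits  : {fd_noisy:.4f}")  # collapses to 0: the step is lost in rounding
```

    A direct-search code that only compares function values is far less sensitive to this loss of precision, which is consistent with the Hooke and Jeeves versus Powell observation in the abstract.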

  16. Cryptography: Cracking Codes.

    ERIC Educational Resources Information Center

    Myerscough, Don; And Others

    1996-01-01

    Describes an activity whose objectives are to encode and decode messages using linear functions and their inverses; to use modular arithmetic, including use of the reciprocal for simple equation solving; to analyze patterns and make and test conjectures; to communicate procedures and algorithms; and to use problem-solving strategies. (ASK)
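
    The ERIC record summarizes a classroom activity rather than a specific algorithm; a minimal sketch of the kind of linear encoding it describes, an affine cipher over a 26-letter alphabet decoded with the modular reciprocal, might look as follows. The key values A and B are arbitrary choices, not part of the original activity.

```python
# Affine cipher: E(x) = (A*x + B) mod 26, decoded with the modular inverse of A.
A, B, M = 5, 8, 26

def encode(text):
    return "".join(chr((A * (ord(c) - 65) + B) % M + 65)
                   for c in text.upper() if c.isalpha())

def decode(cipher):
    a_inv = pow(A, -1, M)   # modular reciprocal; requires gcd(A, 26) == 1
    return "".join(chr(a_inv * (ord(c) - 65 - B) % M + 65) for c in cipher)

msg = "CRACKINGCODES"
enc = encode(msg)
print(enc, decode(enc))     # decode(encode(msg)) recovers msg
```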

  17. Evacuation of coal from hoppers/silos with low pressure pneumatic blasting systems

    NASA Technical Reports Server (NTRS)

    Fischer, J. S.

    1977-01-01

    The need for an efficient, economical, effective and quiet device for moving coal and other difficult bulk solids was recognized. Thus came the advent of the low pressure pneumatic blasting system - a very efficient means of using a small amount of plant air (up to 125 PSI) to eliminate the most troublesome material hang-ups in storage containers. This simple device has one moving part and uses approximately 3% of the air consumed by a pneumatic vibrator on the same job. The principle of operation is very simple: air stored in the unit's reservoir is expelled directly into the material via a patented quick release valve. The number, size, and placement of the blaster units on the storage vessel is determined by a series of tests to ascertain flowability of the problem material. These tests in conjunction with the hopper or silo configuration determine specification of a low pressure pneumatic blasting system. This concept has often proven effective in solving flow problems when all other means have failed.

  18. A novel approach to sports concussion assessment: Computerized multilimb reaction times and balance control testing.

    PubMed

    Vartiainen, Matti V; Holm, Anu; Lukander, Jani; Lukander, Kristian; Koskinen, Sanna; Bornstein, Robert; Hokkanen, Laura

    2016-01-01

    Mild traumatic brain injuries (MTBI) or concussions often result in problems with attention, executive functions, and motor control. For better identification of these diverse problems, novel approaches integrating tests of cognitive and motor functioning are needed. The aim was to characterize minor changes in motor and cognitive performance after sports-related concussions with a novel test battery, including balance tests and a computerized multilimb reaction time test. The cognitive demands of the battery gradually increase from a simple stimulus response to a complex task requiring executive attention. A total of 113 male ice hockey players (mean age = 24.6 years, SD = 5.7) were assessed before a season. During the season, nine concussed players were retested within 36 hours, four to six days after the concussion, and after the season. A control group of seven nonconcussed players from the same pool of players with comparable demographics was retested after the season. Performance was measured using a balance test and the Motor Cognitive Test battery (MotCoTe) with multilimb responses in simple reaction, choice reaction, inhibition, and conflict resolution conditions. The performance of the concussed group declined at the postconcussion assessment compared to both the baseline measurement and the nonconcussed controls. Significant changes were observed in the concussed group for the multilimb choice reaction and inhibition tests. Tapping and balance showed a similar trend, but no statistically significant difference in performance. In sports-related concussions, complex motor tests can be valuable additions in assessing the outcome and recovery. In the current study, using subtasks with varying cognitive demands, it was shown that while simple motor performance was largely unaffected, the more complex tasks induced impaired reaction times for the concussed subjects. The increased reaction times may reflect the disruption of complex and integrative cognitive function in concussions.

  19. A Meshless Method Using Radial Basis Functions for Beam Bending Problems

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Phillips, D. R.; Krishnamurthy, T.

    2004-01-01

    A meshless local Petrov-Galerkin (MLPG) method that uses radial basis functions (RBFs) as trial functions in the study of Euler-Bernoulli beam problems is presented. RBFs, rather than generalized moving least squares (GMLS) interpolations, are used to develop the trial functions. This choice yields a computationally simpler method, as fewer matrix inversions and multiplications are required than when GMLS interpolations are used. Test functions are chosen as simple weight functions, as they are in the conventional MLPG method. Compactly and noncompactly supported RBFs are considered; noncompactly supported cubic RBFs are found to be preferable. Patch tests, mixed boundary value problems, and problems with complex loading conditions are considered. Results obtained from the radial basis MLPG method are of comparable or better accuracy than those obtained with the conventional MLPG method.

  20. Pruning Neural Networks with Distribution Estimation Algorithms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cantu-Paz, E

    2003-01-15

    This paper describes the application of four evolutionary algorithms to the pruning of neural networks used in classification problems. Besides a simple genetic algorithm (GA), the paper considers three distribution estimation algorithms (DEAs): a compact GA, an extended compact GA, and the Bayesian Optimization Algorithm. The objective is to determine whether the DEAs present advantages over the simple GA in terms of accuracy or speed on this problem. The experiments used a feedforward neural network trained with standard backpropagation and public-domain and artificial data sets. The pruned networks seemed to have better or equal accuracy compared with the original fully connected networks; only in a few cases did pruning result in less accurate networks. We found few differences in the accuracy of the networks pruned by the four EAs, but found important differences in the execution time. The results suggest that a simple GA with a small population might be the best algorithm for pruning networks on the data sets we tested.
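
    No code accompanies the OSTI record; as a heavily simplified illustration of the simple-GA variant, the sketch below evolves a binary pruning mask over a weight vector and scores each mask with a user-supplied evaluation function. The toy network stand-in, population size and operators are placeholders, and the distribution estimation algorithms (compact GA, extended compact GA, BOA) are not shown.

```python
import numpy as np

rng = np.random.default_rng(0)

def simple_ga_prune(weights, evaluate, pop=20, gens=30, p_mut=0.02):
    """Evolve a binary pruning mask over the weight vector; 'evaluate' should
    return the validation accuracy of the network built from the masked weights."""
    n = weights.size
    population = rng.integers(0, 2, size=(pop, n))
    for _ in range(gens):
        scores = np.array([evaluate(weights * ind) for ind in population])
        parents = population[scores.argsort()[-pop // 2:]]    # truncation selection: keep best half
        children = []
        while len(children) < pop:
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n)                           # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            flip = rng.random(n) < p_mut                       # bit-flip mutation
            children.append(np.where(flip, 1 - child, child))
        population = np.array(children)
    scores = np.array([evaluate(weights * ind) for ind in population])
    return population[scores.argmax()]

# Toy stand-in for a trained network: the "accuracy" favours pruning small weights.
w = rng.normal(size=50)
toy_eval = lambda wm: np.sum(np.abs(wm[np.abs(w) >= 0.5])) - np.sum(np.abs(wm[np.abs(w) < 0.5]))
mask = simple_ga_prune(w, toy_eval)
print("kept connections:", int(mask.sum()), "of", w.size)
```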

  1. Evaluation of latex agglutination test (KAtex) for early diagnosis of kala-azar.

    PubMed

    Ahsan, M M; Islam, M N; Mollah, A H; Hoque, M A; Hossain, M A; Begum, Z; Islam, M T

    2010-07-01

    Kala-azar is one of the major public health problems in Bangladesh, but its diagnosis is often difficult and time consuming, so a simple, noninvasive, easy-to-perform, reliable and rapid diagnostic test has been a long-felt need of clinicians. The present study was therefore conducted to assess the sensitivity and specificity of the latex agglutination test (KAtex) for detecting leishmanial antigen in the urine of kala-azar cases. The study was carried out in the Department of Paediatrics, Mymensingh Medical College and Hospital, Bangladesh, during July to December 2008. A total of 100 urine samples were collected, of which 50 were from confirmed kala-azar cases and 50 from age- and sex-matched controls. Of the 50 kala-azar cases, 47 showed a positive KAtex result. The test was also positive in 1 of 30 healthy controls; none of the febrile controls was positive by KAtex. Using the presence of LD bodies in splenic and/or bone marrow aspirate as the gold standard, the sensitivity, specificity, positive predictive value and negative predictive value of the test were 94%, 98%, 97.91% and 94.23%, respectively. KAtex is a simple, noninvasive, easy-to-perform, rapid and reliable test for diagnosing kala-azar in endemic areas and is useful both for small, less equipped laboratories and for laboratories with better facilities.
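
    The accuracy figures quoted above follow directly from the reported counts (47 of 50 cases and 1 of 50 controls positive); a few lines reproduce them:

```python
# Counts taken from the abstract: 47/50 kala-azar cases and 1/50 controls were KAtex positive.
tp, fn = 47, 3      # cases: KAtex positive / negative
fp, tn = 1, 49      # controls: KAtex positive / negative

sensitivity = tp / (tp + fn)   # 0.94
specificity = tn / (tn + fp)   # 0.98
ppv = tp / (tp + fp)           # ~0.9791
npv = tn / (tn + fn)           # ~0.9423

print(f"sensitivity {sensitivity:.2%}, specificity {specificity:.2%}, "
      f"PPV {ppv:.2%}, NPV {npv:.2%}")
```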

  2. Geometrically derived difference formulae for the numerical integration of trajectory problems

    NASA Technical Reports Server (NTRS)

    Mcleod, R. J. Y.; Sanz-Serna, J. M.

    1981-01-01

    The term 'trajectory problem' is taken to include problems that can arise, for instance, in connection with contour plotting, or in the application of continuation methods, or during phase-plane analysis. Geometrical techniques are used to construct difference methods for these problems to produce in turn explicit and implicit circularly exact formulae. Based on these formulae, a predictor-corrector method is derived which, when compared with a closely related standard method, shows improved performance. It is found that this latter method produces spurious limit cycles, and this behavior is partly analyzed. Finally, a simple variable-step algorithm is constructed and tested.

  3. Multiobjective Aerodynamic Shape Optimization Using Pareto Differential Evolution and Generalized Response Surface Metamodels

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple, fast, and robust evolutionary algorithm that has proven effective in determining the global optimum for several difficult single-objective optimization problems. The DE algorithm has recently been extended to multiobjective optimization problems by using a Pareto-based approach. In this paper, a Pareto DE algorithm is applied to multiobjective aerodynamic shape optimization problems that are characterized by computationally expensive objective function evaluations. To reduce the computational expense, the algorithm is coupled with generalized response surface metamodels based on artificial neural networks. Results are presented for some test optimization problems from the literature to demonstrate the capabilities of the method.
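
    The Pareto ranking and neural-network metamodels described in the abstract are not reproduced here; the sketch below shows only the core single-objective DE step (differential mutation, crossover, greedy replacement) on an illustrative objective standing in for an expensive aerodynamic evaluation. The DE constants and test function are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def differential_evolution(f, bounds, pop=20, gens=200, F=0.8, CR=0.9):
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    x = lo + rng.random((pop, dim)) * (hi - lo)
    fx = np.array([f(xi) for xi in x])
    for _ in range(gens):
        for i in range(pop):
            a, b, c = x[rng.choice([j for j in range(pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)         # differential mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                   # guarantee at least one gene
            trial = np.where(cross, mutant, x[i])
            ft = f(trial)
            if ft < fx[i]:                                    # greedy replacement
                x[i], fx[i] = trial, ft
    return x[fx.argmin()], fx.min()

# Illustrative objective (sphere function) standing in for an expensive CFD evaluation.
best_x, best_f = differential_evolution(lambda v: np.sum(v ** 2), [(-5, 5)] * 4)
print(best_x, best_f)
```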

  4. Tin Can Racer Derby.

    ERIC Educational Resources Information Center

    Milson, James L.

    1986-01-01

    Describes directions for constructing "racing" cars out of simple materials like spools and coffee cans. Discusses procedures for students to build cars, then to test and race them. Stresses that the activity allows for self-discovery of problem solving techniques and opportunities to discuss the scientific concepts related to the activity. (TW)

  5. Two Simple Approaches to Overcome a Problem with the Mantel-Haenszel Statistic: Comments on Wang, Bradlow, Wainer, and Muller (2008)

    ERIC Educational Resources Information Center

    Sinharay, Sandip; Dorans, Neil J.

    2010-01-01

    The Mantel-Haenszel (MH) procedure (Mantel and Haenszel) is a popular method for estimating and testing a common two-factor association parameter in a 2 x 2 x K table. Holland and Holland and Thayer described how to use the procedure to detect differential item functioning (DIF) for tests with dichotomously scored items. Wang, Bradlow, Wainer, and…
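
    The abstract is truncated, but the MH procedure it discusses operates on K stratified 2 x 2 tables; a minimal sketch of the standard MH common odds-ratio estimate, with made-up counts and without the modifications debated by the commented-on authors, is:

```python
# Each stratum k is a 2x2 table [[a, b], [c, d]]:
# rows = focal/reference group, columns = item correct/incorrect (made-up counts).
tables = [
    [[30, 10], [25, 15]],
    [[20, 20], [18, 22]],
    [[12, 28], [10, 30]],
]

num = sum(a * d / (a + b + c + d) for (a, b), (c, d) in tables)
den = sum(b * c / (a + b + c + d) for (a, b), (c, d) in tables)
alpha_mh = num / den   # MH common odds-ratio estimate; values near 1 suggest no DIF
print(f"alpha_MH = {alpha_mh:.3f}")
```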

  6. Qualified Fitness and Exercise as Professionals and Exercise Prescription: Evolution of the PAR-Q and Canadian Aerobic Fitness Test.

    PubMed

    Shephard, Roy J

    2015-04-01

    Traditional approaches to exercise prescription have included a preliminary medical screening followed by exercise tests of varying sophistication. To maximize population involvement, qualified fitness and exercise professionals (QFEPs) have used a self-administered screening questionnaire (the Physical Activity Readiness Questionnaire, PAR-Q) and a simple measure of aerobic performance (the Canadian Aerobic Fitness Test, CAFT). However, problems have arisen in applying the original protocol to those with chronic disease. Recent developments have addressed these issues. Evolution of the PAR-Q and CAFT protocol is reviewed from their origins in 1974 to the current electronic decision tree model of exercise screening and prescription. About a fifth of apparently healthy adults responded positively to the original PAR-Q instrument, thus requiring an often unwarranted referral to a physician. Minor changes of wording did not overcome this problem. However, a consensus process has now developed an electronic decision tree for stratification of exercise risk not only for healthy individuals, but also for those with various types of chronic disease. The new approach to clearance greatly reduces physician referrals and extends the role of QFEPs. The availability of effective screening and simple fitness testing should contribute to the goal of maximizing physical activity in the entire population.

  7. Bayesian models based on test statistics for multiple hypothesis testing problems.

    PubMed

    Ji, Yuan; Lu, Yiling; Mills, Gordon B

    2008-04-01

    We propose a Bayesian method for the problem of multiple hypothesis testing that is routinely encountered in bioinformatics research, such as the differential gene expression analysis. Our algorithm is based on modeling the distributions of test statistics under both null and alternative hypotheses. We substantially reduce the complexity of the process of defining posterior model probabilities by modeling the test statistics directly instead of modeling the full data. Computationally, we apply a Bayesian FDR approach to control the number of rejections of null hypotheses. To check if our model assumptions for the test statistics are valid for various bioinformatics experiments, we also propose a simple graphical model-assessment tool. Using extensive simulations, we demonstrate the performance of our models and the utility of the model-assessment tool. In the end, we apply the proposed methodology to an siRNA screening and a gene expression experiment.
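
    The paper's test-statistic mixture model is not reproduced in this record; what can be sketched is the generic Bayesian FDR step it mentions, rejecting hypotheses in order of posterior null probability while the running average stays below the target level. The probabilities below are placeholders for the output of such a model, not results from the paper.

```python
import numpy as np

def bayesian_fdr_reject(post_null_prob, level=0.05):
    """Reject the hypotheses with the smallest posterior null probabilities
    while the average posterior null probability of the rejected set stays <= level."""
    order = np.argsort(post_null_prob)
    running_avg = np.cumsum(post_null_prob[order]) / np.arange(1, len(order) + 1)
    k = np.searchsorted(running_avg, level, side="right")   # largest set meeting the bound
    rejected = np.zeros(len(order), dtype=bool)
    rejected[order[:k]] = True
    return rejected

# Placeholder posterior null probabilities, e.g. from a fitted test-statistic model.
p0 = np.array([0.001, 0.02, 0.40, 0.03, 0.90, 0.004])
print(bayesian_fdr_reject(p0, level=0.05))
```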

  8. Increasing Early Detection of Prostate Cancer in African American Men Through a Culturally Targeted Print Intervention

    DTIC Science & Technology

    2006-03-01

    [Partially recovered report excerpt] Normally, prostate-specific antigen (PSA) is found in the blood at very low levels, and elevated PSA readings can be a sign of cancer. The Prostate Specific Antigen (PSA) test is a simple blood test that measures the level of this protein. The excerpt also mentions lycopene, a compound in cooked tomato products and watermelon, and notes that a number of Black men say they have problems with their...

  9. Authentication: A Standard Problem or a Problem of Standards?

    PubMed

    Capes-Davis, Amanda; Neve, Richard M

    2016-06-01

    Reproducibility and transparency in biomedical sciences have been called into question, and scientists have been found wanting as a result. Putting aside deliberate fraud, there is evidence that a major contributor to lack of reproducibility is insufficient quality assurance of reagents used in preclinical research. Cell lines are widely used in biomedical research to understand fundamental biological processes and disease states, yet most researchers do not perform a simple, affordable test to authenticate these key resources. Here, we provide a synopsis of the problems we face and how standards can contribute to an achievable solution.

  10. Implementation and Performance Issues in Collaborative Optimization

    NASA Technical Reports Server (NTRS)

    Braun, Robert; Gage, Peter; Kroo, Ilan; Sobieski, Ian

    1996-01-01

    Collaborative optimization is a multidisciplinary design architecture that is well-suited to large-scale multidisciplinary optimization problems. This paper compares this approach with other architectures, examines the details of the formulation, and some aspects of its performance. A particular version of the architecture is proposed to better accommodate the occurrence of multiple feasible regions. The use of system level inequality constraints is shown to increase the convergence rate. A series of simple test problems, demonstrated to challenge related optimization architectures, is successfully solved with collaborative optimization.

  11. Quantum annealing of the traveling-salesman problem.

    PubMed

    Martonák, Roman; Santoro, Giuseppe E; Tosatti, Erio

    2004-11-01

    We propose a path-integral Monte Carlo quantum annealing scheme for the symmetric traveling-salesman problem, based on a highly constrained Ising-like representation, and we compare its performance against standard thermal simulated annealing. The Monte Carlo moves implemented are standard, and consist in restructuring a tour by exchanging two links (two-opt moves). The quantum annealing scheme, even with a drastically simple form of kinetic energy, appears definitely superior to the classical one, when tested on a 1002-city instance of the standard TSPLIB.
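
    The annealing schedules themselves are beyond this record, but the two-opt move it describes, reversing the tour segment between two exchanged links, is easy to state; the sketch below applies it with plain greedy acceptance on random cities, which merely stands in for the thermal or quantum acceptance rules compared in the paper.

```python
import math
import random

def tour_length(tour, cities):
    return sum(math.dist(cities[tour[i]], cities[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def two_opt_move(tour):
    """Exchange two links by reversing the segment between them."""
    i, j = sorted(random.sample(range(len(tour)), 2))
    return tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]

random.seed(0)
cities = [(random.random(), random.random()) for _ in range(30)]
tour = list(range(30))
for _ in range(5000):                       # plain greedy acceptance, for illustration only
    candidate = two_opt_move(tour)
    if tour_length(candidate, cities) < tour_length(tour, cities):
        tour = candidate
print(f"tour length: {tour_length(tour, cities):.3f}")
```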

  12. Focusing on the golden ball metaheuristic: an extended study on a wider set of problems.

    PubMed

    Osaba, E; Diaz, F; Carballedo, R; Onieva, E; Perallos, A

    2014-01-01

    Nowadays, the development of new metaheuristics for solving optimization problems is a topic of interest in the scientific community, and a large number of techniques of this kind can be found in the literature. Many techniques have been proposed recently, such as the artificial bee colony and the imperialist competitive algorithm. This paper focuses on one recently published technique, called the Golden Ball (GB). The GB is a multiple-population metaheuristic based on soccer concepts. Although it was designed to solve combinatorial optimization problems, until now it has only been tested on two simple routing problems: the traveling salesman problem and the capacitated vehicle routing problem. In this paper, the GB is applied to four different combinatorial optimization problems. Two of them are routing problems that are more complex than those used previously: the asymmetric traveling salesman problem and the vehicle routing problem with backhauls. Additionally, one constraint satisfaction problem (the n-queens problem) and one combinatorial design problem (the one-dimensional bin packing problem) have also been used. The outcomes obtained by GB are compared with those obtained by two different genetic algorithms and two distributed genetic algorithms. Additionally, two statistical tests are conducted to compare these results.

  13. Focusing on the Golden Ball Metaheuristic: An Extended Study on a Wider Set of Problems

    PubMed Central

    Osaba, E.; Diaz, F.; Carballedo, R.; Onieva, E.; Perallos, A.

    2014-01-01

    Nowadays, the development of new metaheuristics for solving optimization problems is a topic of interest in the scientific community, and a large number of techniques of this kind can be found in the literature. Many techniques have been proposed recently, such as the artificial bee colony and the imperialist competitive algorithm. This paper focuses on one recently published technique, called the Golden Ball (GB). The GB is a multiple-population metaheuristic based on soccer concepts. Although it was designed to solve combinatorial optimization problems, until now it has only been tested on two simple routing problems: the traveling salesman problem and the capacitated vehicle routing problem. In this paper, the GB is applied to four different combinatorial optimization problems. Two of them are routing problems that are more complex than those used previously: the asymmetric traveling salesman problem and the vehicle routing problem with backhauls. Additionally, one constraint satisfaction problem (the n-queens problem) and one combinatorial design problem (the one-dimensional bin packing problem) have also been used. The outcomes obtained by GB are compared with those obtained by two different genetic algorithms and two distributed genetic algorithms. Additionally, two statistical tests are conducted to compare these results. PMID:25165742

  14. Learning from Simple Ebooks, Online Cases or Classroom Teaching When Acquiring Complex Knowledge. A Randomized Controlled Trial in Respiratory Physiology and Pulmonology

    PubMed Central

    Worm, Bjarne Skjødt

    2013-01-01

    Background and Aims E-learning is developing fast because of the rapidly increasing use of smartphones, tablets and portable computers. We might not think of it as e-learning, but today many new e-books are in fact very complex electronic teaching platforms. It is generally accepted that e-learning is as effective as classroom teaching methods, but little is known about its value in relaying contents of different levels of complexity to students. We set out to investigate e-learning effects on simple recall and complex problem-solving compared to classroom teaching. Methods 63 nurses specializing in anesthesiology were evenly randomized into three groups. They were given internet-based knowledge tests before and after attending a teaching module about respiratory physiology and pulmonology. The three groups were an e-learning group with eBook teaching material, an e-learning group with case-based teaching, and a group with face-to-face case-based classroom teaching. After the module the students were required to answer a post-test. Time spent and the number of logins to the system were also measured. Results For simple recall, all methods were equally effective. For problem-solving, the eCase group achieved a knowledge level comparable to classroom teaching, while textbook learning was inferior to both (p<0.01). The textbook group also spent the least amount of time on acquiring knowledge (33 minutes, p<0.001), while the eCase group spent significantly more time on the subject (53 minutes, p<0.001) and logged into the system significantly more often (2.8 vs 1.6, p<0.001). Conclusions E-learning based cases are an effective tool for teaching complex knowledge and problem-solving ability, but future studies using higher-level e-learning are encouraged. Simple recall skills, however, do not require any particular learning method. PMID:24039917

  15. Learning from simple ebooks, online cases or classroom teaching when acquiring complex knowledge. A randomized controlled trial in respiratory physiology and pulmonology.

    PubMed

    Worm, Bjarne Skjødt

    2013-01-01

    E-learning is developing fast because of the rapidly increasing use of smartphones, tablets and portable computers. We might not think of it as e-learning, but today many new e-books are in fact very complex electronic teaching platforms. It is generally accepted that e-learning is as effective as classroom teaching methods, but little is known about its value in relaying contents of different levels of complexity to students. We set out to investigate e-learning effects on simple recall and complex problem-solving compared to classroom teaching. 63 nurses specializing in anesthesiology were evenly randomized into three groups. They were given internet-based knowledge tests before and after attending a teaching module about respiratory physiology and pulmonology. The three groups were an e-learning group with eBook teaching material, an e-learning group with case-based teaching, and a group with face-to-face case-based classroom teaching. After the module the students were required to answer a post-test. Time spent and the number of logins to the system were also measured. For simple recall, all methods were equally effective. For problem-solving, the eCase group achieved a knowledge level comparable to classroom teaching, while textbook learning was inferior to both (p<0.01). The textbook group also spent the least amount of time on acquiring knowledge (33 minutes, p<0.001), while the eCase group spent significantly more time on the subject (53 minutes, p<0.001) and logged into the system significantly more often (2.8 vs 1.6, p<0.001). E-learning based cases are an effective tool for teaching complex knowledge and problem-solving ability, but future studies using higher-level e-learning are encouraged. Simple recall skills, however, do not require any particular learning method.

  16. Tour of a Simple Trigonometry Problem

    ERIC Educational Resources Information Center

    Poon, Kin-Keung

    2012-01-01

    This article focuses on a simple trigonometric problem that generates a strange phenomenon when different methods are applied to tackling it. A series of problem-solving activities are discussed, so that students can be alerted that the precision of diagrams is important when solving geometric problems. In addition, the problem-solving plan was…

  17. Developmental Dissociation in the Neural Responses to Simple Multiplication and Subtraction Problems

    ERIC Educational Resources Information Center

    Prado, Jérôme; Mutreja, Rachna; Booth, James R.

    2014-01-01

    Mastering single-digit arithmetic during school years is commonly thought to depend upon an increasing reliance on verbally memorized facts. An alternative model, however, posits that fluency in single-digit arithmetic might also be achieved via the increasing use of efficient calculation procedures. To test between these hypotheses, we used a…

  18. The Role of Training, Individual Differences and Knowledge Representation in Cognitive-Oriented Task Performance.

    ERIC Educational Resources Information Center

    Koubek, Richard J.

    The roles of training, problem representation, and individual differences on performance of both automated (simple) and controlled (complex) process tasks were studied. The following hypotheses were tested: (1) training and cognitive style affect the representation developed; (2) training and cognitive style affect the development and performance…

  19. Rapid, low-cost fluorescent assay of β-lactamase-derived antibiotic resistance and related antibiotic susceptibility

    NASA Astrophysics Data System (ADS)

    Erdem, S. Sibel; Khan, Shazia; Palanisami, Akilan; Hasan, Tayyaba

    2014-10-01

    Antibiotic resistance (AR) is increasingly prevalent in low and middle income countries (LMICs), but the extent of the problem is poorly understood. This lack of knowledge is a critical deficiency, leaving local health authorities essentially blind to AR outbreaks and crippling their ability to provide effective treatment guidelines. The crux of the problem is the lack of microbiology laboratory capacity available in LMICs. To address this unmet need, we demonstrate a rapid and simple test of β-lactamase resistance (the most common form of AR) that uses a modified β-lactam structure decorated with two fluorophores quenched due to their close proximity. When the β-lactam core is cleaved by β-lactamase, the fluorophores dequench, allowing assay speeds of 20 min to be obtained with a simple, streamlined protocol. Furthermore, by testing in competition with antibiotics, the β-lactamase-associated antibiotic susceptibility can also be extracted. This assay can be easily implemented into standard lab work flows to provide near real-time information of β-lactamase resistance, both for epidemiological purposes as well as individualized patient care.

  20. Language functions in preterm-born children: a systematic review and meta-analysis.

    PubMed

    van Noort-van der Spek, Inge L; Franken, Marie-Christine J P; Weisglas-Kuperus, Nynke

    2012-04-01

    Preterm-born children (<37 weeks' gestation) have higher rates of language function problems compared with term-born children. It is unknown whether these problems decrease, deteriorate, or remain stable over time. The goal of this research was to determine the developmental course of language functions in preterm-born children from 3 to 12 years of age. Computerized databases Embase, PubMed, Web of Knowledge, and PsycInfo were searched for studies published between January 1995 and March 2011 reporting language functions in preterm-born children. Outcome measures were simple language function assessed by using the Peabody Picture Vocabulary Test and complex language function assessed by using the Clinical Evaluation of Language Fundamentals. Pooled effect sizes (in terms of Cohen's d) and 95% confidence intervals (CI) for simple and complex language functions were calculated by using random-effects models. Meta-regression was conducted with mean difference of effect size as the outcome variable and assessment age as the explanatory variable. Preterm-born children scored significantly lower compared with term-born children on simple (d = -0.45 [95% CI: -0.59 to -0.30]; P < .001) and on complex (d = -0.62 [95% CI: -0.82 to -0.43]; P < .001) language function tests, even in the absence of major disabilities and independent of social economic status. For complex language function (but not for simple language function), group differences between preterm- and term-born children increased significantly from 3 to 12 years of age (slope = -0.05; P = .03). While growing up, preterm-born children have increasing difficulties with complex language function.

  1. Linear complementarity formulation for 3D frictional sliding problems

    USGS Publications Warehouse

    Kaven, Joern; Hickman, Stephen H.; Davatzes, Nicholas C.; Mutlu, Ovunc

    2012-01-01

    Frictional sliding on quasi-statically deforming faults and fractures can be modeled efficiently using a linear complementarity formulation. We review the formulation in two dimensions and expand the formulation to three-dimensional problems including problems of orthotropic friction. This formulation accurately reproduces analytical solutions to static Coulomb friction sliding problems. The formulation accounts for opening displacements that can occur near regions of non-planarity even under large confining pressures. Such problems are difficult to solve owing to the coupling of relative displacements and tractions; thus, many geomechanical problems tend to neglect these effects. Simple test cases highlight the importance of including friction and allowing for opening when solving quasi-static fault mechanics models. These results also underscore the importance of considering the effects of non-planarity in modeling processes associated with crustal faulting.
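
    The full 3D frictional formulation is not given in the record; as an illustration of the underlying linear complementarity structure (find z >= 0 with w = Mz + q >= 0 and z.w = 0), a small projected Gauss-Seidel sweep is sketched below. The matrix and vector are illustrative, not derived from a fault model.

```python
import numpy as np

def projected_gauss_seidel(M, q, iters=200):
    """Solve the LCP  w = M z + q,  z >= 0,  w >= 0,  z.w = 0  (M assumed positive definite)."""
    z = np.zeros(len(q))
    for _ in range(iters):
        for i in range(len(q)):
            r = q[i] + M[i] @ z - M[i, i] * z[i]   # contribution of the other unknowns
            z[i] = max(0.0, -r / M[i, i])          # projection onto z_i >= 0
    return z

M = np.array([[4.0, 1.0], [1.0, 3.0]])
q = np.array([-2.0, 1.0])
z = projected_gauss_seidel(M, q)
print("z =", z, " w =", M @ z + q)                 # complementarity: z_i * w_i ~ 0
```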

  2. Thermo-elasto-viscoplastic analysis of problems in extension and shear

    NASA Technical Reports Server (NTRS)

    Riff, R.; Simitses, G. J.

    1987-01-01

    The problems of extension and shear behavior of structural elements made of carbon steel and subjected to large thermomechanical loads are investigated. The analysis is based on nonlinear geometric and constitutive relations, and is expressed in a rate form. The material constitutive equations are capable of reproducing all nonisothermal, elasto-viscoplastic characteristics. The results of the test problems show that: (1) the formulation can accommodate very large strains and rotations; (2) the model incorporates the simplification associated with rate-insensitive elastic response without losing the ability to model a rate-temperature dependent yield strength and plasticity; and (3) the formulation does not display oscillatory behavior in the stresses for the simple shear problem.

  3. Psychometric Properties of the Persian Version of the Simple Shoulder Test (SST) Questionnaire.

    PubMed

    Ebrahimzadeh, Mohammad H; Vahedi, Ehsan; Baradaran, Aslan; Birjandinejad, Ali; Seyyed-Hoseinian, Seyyed-Hadi; Bagheri, Farshid; Kachooei, Amir Reza

    2016-10-01

    To validate the Persian version of the Simple Shoulder Test (SST) in patients with shoulder joint problems. Following Beaton's guideline, translation and back-translation were conducted, and we reached a consensus on the Persian version of the SST. To test the face validity in a pilot study, the Persian SST was administered to 20 individuals with shoulder joint conditions. We then enrolled 148 consecutive patients with shoulder problems, who completed the Persian SST, a shoulder-specific measure (the Oxford Shoulder Score, OSS), and two general measures (the DASH and the SF-36). To measure test-retest reliability, 42 randomly selected patients were asked to complete the Persian SST a second time after one week. Cronbach's alpha was used to assess internal consistency over the 12 items of the Persian SST. The ICC for the total questionnaire was 0.61, indicating good and acceptable test-retest reliability; ICCs for individual items ranged from 0.32 to 0.79. The total Cronbach's alpha was 0.84, showing good internal consistency over the 12 items of the Persian SST. Validity testing showed strong correlations between the SST and both the OSS and the DASH; the correlation with the OSS was positive, while that with the DASH scores was negative. Correlations were also good to strong with all physical and most mental subscales of the SF-36, and the correlation coefficients were higher with the DASH and OSS than with the SF-36. The Persian version of the SST was found to be a valid and reliable instrument for assessing shoulder joint pain and function in the Iranian population.

  4. A simple technique to increase profits in wood products marketing

    Treesearch

    George B. Harpole

    1971-01-01

    Mathematical models can be used to solve quickly some simple day-to-day marketing problems. This note explains how a sawmill production manager, who has an essentially fixed-capacity mill, can solve several optimization problems by using pencil and paper, a forecast of market prices, and a simple algorithm. One such problem is to maximize profits in an operating period...

  5. Passive hand movements disrupt adults' counting strategies.

    PubMed

    Imbo, Ineke; Vandierendonck, André; Fias, Wim

    2011-01-01

    In the present study, we experimentally tested the role of hand motor circuits in simple-arithmetic strategies. Educated adults solved simple additions (e.g., 8 + 3) or simple subtractions (e.g., 11 - 3) while they were required to retrieve the answer from long-term memory (e.g., knowing that 8 + 3 = 11), to transform the problem by making an intermediate step (e.g., 8 + 3 = 8 + 2 + 1 = 10 + 1 = 11) or to count one-by-one (e.g., 8 + 3 = 8…9…10…11). During the process of solving the arithmetic problems, the experimenter did or did not move the participants' hand on a four-point matrix. The results show that passive hand movements disrupted the counting strategy while leaving the other strategies unaffected. This pattern of results is in agreement with a procedural account, showing that the involvement of hand motor circuits in adults' mathematical abilities is reminiscent of finger counting during childhood.

  6. New fast least-squares algorithm for estimating the best-fitting parameters due to simple geometric-structures from gravity anomalies.

    PubMed

    Essa, Khalid S

    2014-01-01

    A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produced gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted-parameters are in good agreement with the known actual values.
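
    The anomaly expressions that define f(q) for the cylinder and sphere models are in the paper itself; as a hedged illustration of the "reduce the estimation to f(q) = 0 and solve" step, a simple bisection on a placeholder function is shown below.

```python
def bisect(f, lo, hi, tol=1e-8):
    """Find a root of f on [lo, hi]; f(lo) and f(hi) must have opposite signs."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if flo * f(mid) <= 0:
            hi = mid
        else:
            lo, flo = mid, f(mid)
    return 0.5 * (lo + hi)

# Placeholder f(q): in the paper this is built from the normalized residual anomaly
# at the origin and at offset points on the profile; here it is purely illustrative.
f = lambda q: q**3 - 1.5 * q - 0.2
q_hat = bisect(f, 0.0, 2.0)
print(f"estimated shape factor q = {q_hat:.5f}")
```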

  7. New fast least-squares algorithm for estimating the best-fitting parameters due to simple geometric-structures from gravity anomalies

    PubMed Central

    Essa, Khalid S.

    2013-01-01

    A new fast least-squares method is developed to estimate the shape factor (q-parameter) of a buried structure using normalized residual anomalies obtained from gravity data. The problem of shape factor estimation is transformed into a problem of finding a solution of a non-linear equation of the form f(q) = 0 by defining the anomaly value at the origin and at different points on the profile (N-value). Procedures are also formulated to estimate the depth (z-parameter) and the amplitude coefficient (A-parameter) of the buried structure. The method is simple and rapid for estimating parameters that produced gravity anomalies. This technique is used for a class of geometrically simple anomalous bodies, including the semi-infinite vertical cylinder, the infinitely long horizontal cylinder, and the sphere. The technique is tested and verified on theoretical models with and without random errors. It is also successfully applied to real data sets from Senegal and India, and the inverted-parameters are in good agreement with the known actual values. PMID:25685472

  8. Simple Sample Processing Enhances Malaria Rapid Diagnostic Test Performance

    PubMed Central

    Davis, K. M.; Gibson, L. E.; Haselton, F. R.; Wright, D. W.

    2016-01-01

    Lateral flow immunochromatographic rapid diagnostic tests (RDTs) are the primary form of medical diagnostic used for malaria in underdeveloped nations. Unfortunately, many of these tests do not detect asymptomatic malaria carriers. In order for eradication of the disease to be achieved, this problem must be solved. In this study, we demonstrate enhancement in the performance of six RDT brands when a simple sample-processing step is added to the front of the diagnostic process. Greater than a 4-fold RDT signal enhancement was observed as a result of the sample processing step. This lowered the limit of detection for RDT brands to submicroscopic parasitemias. For the best performing RDTs the limits of detection were found to be as low as 3 parasites/μL. Finally, through individual donor samples, the correlations between donor source, WHO panel detection scores and RDT signal intensities were explored. PMID:24787948

  9. Simple sample processing enhances malaria rapid diagnostic test performance.

    PubMed

    Davis, K M; Gibson, L E; Haselton, F R; Wright, D W

    2014-06-21

    Lateral flow immunochromatographic rapid diagnostic tests (RDTs) are the primary form of medical diagnostic used for malaria in underdeveloped nations. Unfortunately, many of these tests do not detect asymptomatic malaria carriers. In order for eradication of the disease to be achieved, this problem must be solved. In this study, we demonstrate enhancement in the performance of six RDT brands when a simple sample-processing step is added to the front of the diagnostic process. Greater than a 4-fold RDT signal enhancement was observed as a result of the sample processing step. This lowered the limit of detection for RDT brands to submicroscopic parasitemias. For the best performing RDTs the limits of detection were found to be as low as 3 parasites per μL. Finally, through individual donor samples, the correlations between donor source, WHO panel detection scores and RDT signal intensities were explored.

  10. An Experimental Realization of a Chaos-Based Secure Communication Using Arduino Microcontrollers.

    PubMed

    Zapateiro De la Hoz, Mauricio; Acho, Leonardo; Vidal, Yolanda

    2015-01-01

    Security and secrecy are some of the important concerns in the communications world. In the last years, several encryption techniques have been proposed in order to improve the secrecy of the information transmitted. Chaos-based encryption techniques are being widely studied as part of the problem because of the highly unpredictable and random-look nature of the chaotic signals. In this paper we propose a digital-based communication system that uses the logistic map which is a mathematically simple model that is chaotic under certain conditions. The input message signal is modulated using a simple Delta modulator and encrypted using a logistic map. The key signal is also encrypted using the same logistic map with different initial conditions. In the receiver side, the binary-coded message is decrypted using the encrypted key signal that is sent through one of the communication channels. The proposed scheme is experimentally tested using Arduino shields which are simple yet powerful development kits that allows for the implementation of the communication system for testing purposes.
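
    The Arduino implementation and the Delta modulator are not reproduced here; a toy sketch of the logistic-map keystream idea, masking a bit stream with bits derived from the chaotic iterates and recovering it with the same map and initial condition, is shown below. The parameter r = 3.99, the thresholding rule, and the XOR masking are illustrative assumptions, not the paper's exact scheme.

```python
def logistic_bits(x0, n, r=3.99):
    """Generate n pseudo-random bits by thresholding logistic-map iterates."""
    x, bits = x0, []
    for _ in range(n):
        x = r * x * (1.0 - x)            # chaotic for r near 4
        bits.append(1 if x > 0.5 else 0)
    return bits

def xor_mask(bits, key_bits):
    return [b ^ k for b, k in zip(bits, key_bits)]

message = [1, 0, 1, 1, 0, 0, 1, 0]       # e.g. Delta-modulated samples
key = logistic_bits(x0=0.61803, n=len(message))
cipher = xor_mask(message, key)
recovered = xor_mask(cipher, logistic_bits(0.61803, len(message)))  # receiver uses same x0
print(cipher, recovered == message)
```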

  11. The effects of cumulative practice on mathematics problem solving.

    PubMed

    Mayfield, Kristin H; Chase, Philip N

    2002-01-01

    This study compared three different methods of teaching five basic algebra rules to college students. All methods used the same procedures to teach the rules and included four 50-question review sessions interspersed among the training of the individual rules. The differences among methods involved the kinds of practice provided during the four review sessions. Participants who received cumulative practice answered 50 questions covering a mix of the rules learned prior to each review session. Participants who received a simple review answered 50 questions on one previously trained rule. Participants who received extra practice answered 50 extra questions on the rule they had just learned. Tests administered after each review included new questions for applying each rule (application items) and problems that required novel combinations of the rules (problem-solving items). On the final test, the cumulative group outscored the other groups on application and problem-solving items. In addition, the cumulative group solved the problem-solving items significantly faster than the other groups. These results suggest that cumulative practice of component skills is an effective method of training problem solving.

  12. The effects of cumulative practice on mathematics problem solving.

    PubMed Central

    Mayfield, Kristin H; Chase, Philip N

    2002-01-01

    This study compared three different methods of teaching five basic algebra rules to college students. All methods used the same procedures to teach the rules and included four 50-question review sessions interspersed among the training of the individual rules. The differences among methods involved the kinds of practice provided during the four review sessions. Participants who received cumulative practice answered 50 questions covering a mix of the rules learned prior to each review session. Participants who received a simple review answered 50 questions on one previously trained rule. Participants who received extra practice answered 50 extra questions on the rule they had just learned. Tests administered after each review included new questions for applying each rule (application items) and problems that required novel combinations of the rules (problem-solving items). On the final test, the cumulative group outscored the other groups on application and problem-solving items. In addition, the cumulative group solved the problem-solving items significantly faster than the other groups. These results suggest that cumulative practice of component skills is an effective method of training problem solving. PMID:12102132

  13. A Coupling Strategy of FEM and BEM for the Solution of a 3D Industrial Crack Problem

    NASA Astrophysics Data System (ADS)

    Kouitat Njiwa, Richard; Taha Niane, Ngadia; Frey, Jeremy; Schwartz, Martin; Bristiel, Philippe

    2015-03-01

    Analyzing crack stability in an industrial context is challenging due to the geometry of the structure. The finite element method is effective for defect-free problems. The boundary element method is effective for problems in simple geometries with singularities. We present a strategy that takes advantage of both approaches. Within the iterative solution procedure, the FEM solves a defect-free problem over the structure while the BEM solves the crack problem over a fictitious domain with simple geometry. The effectiveness of the approach is demonstrated on some simple examples which allow comparison with literature results and on an industrial problem.

  14. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.

  15. A Matrix-Free Algorithm for Multidisciplinary Design Optimization

    NASA Astrophysics Data System (ADS)

    Lambe, Andrew Borean

    Multidisciplinary design optimization (MDO) is an approach to engineering design that exploits the coupling between components or knowledge disciplines in a complex system to improve the final product. In aircraft design, MDO methods can be used to simultaneously design the outer shape of the aircraft and the internal structure, taking into account the complex interaction between the aerodynamic forces and the structural flexibility. Efficient strategies are needed to solve such design optimization problems and guarantee convergence to an optimal design. This work begins with a comprehensive review of MDO problem formulations and solution algorithms. First, a fundamental MDO problem formulation is defined from which other formulations may be obtained through simple transformations. Using these fundamental problem formulations, decomposition methods from the literature are reviewed and classified. All MDO methods are presented in a unified mathematical notation to facilitate greater understanding. In addition, a novel set of diagrams, called extended design structure matrices, are used to simultaneously visualize both data communication and process flow between the many software components of each method. For aerostructural design optimization, modern decomposition-based MDO methods cannot efficiently handle the tight coupling between the aerodynamic and structural states. This fact motivates the exploration of methods that can reduce the computational cost. A particular structure in the direct and adjoint methods for gradient computation motivates the idea of a matrix-free optimization method. A simple matrix-free optimizer is developed based on the augmented Lagrangian algorithm. This new matrix-free optimizer is tested on two structural optimization problems and one aerostructural optimization problem. The results indicate that the matrix-free optimizer is able to efficiently solve structural and multidisciplinary design problems with thousands of variables and constraints. On the aerostructural test problem formulated with thousands of constraints, the matrix-free optimizer is estimated to reduce the total computational time by up to 90% compared to conventional optimizers.

  16. Protecting drinking water: water quality testing and PHAST in South Africa.

    PubMed

    Breslin, E D

    2000-01-01

    The paper presents an innovative field-based programme that uses a simple total coliform test and the PHAST (Participatory Hygiene And Sanitation Transformation) approach to help communities explore possible water quality problems and the actions that can be taken to address them. The programme was developed by the Mvula Trust, a South African water and environmental sanitation NGO, and is currently being tested throughout South Africa. The paper provides two case studies of its implementation in the field and suggests ways in which the initiative can be improved in the future.

  17. PLEMT: A NOVEL PSEUDOLIKELIHOOD BASED EM TEST FOR HOMOGENEITY IN GENERALIZED EXPONENTIAL TILT MIXTURE MODELS.

    PubMed

    Hong, Chuan; Chen, Yong; Ning, Yang; Wang, Shuang; Wu, Hao; Carroll, Raymond J

    2017-01-01

    Motivated by analyses of DNA methylation data, we propose a semiparametric mixture model, namely the generalized exponential tilt mixture model, to account for heterogeneity between differentially methylated and non-differentially methylated subjects in the cancer group, and capture the differences in higher order moments (e.g. mean and variance) between subjects in cancer and normal groups. A pairwise pseudolikelihood is constructed to eliminate the unknown nuisance function. To circumvent boundary and non-identifiability problems as in parametric mixture models, we modify the pseudolikelihood by adding a penalty function. In addition, the test with simple asymptotic distribution has computational advantages compared with permutation-based test for high-dimensional genetic or epigenetic data. We propose a pseudolikelihood based expectation-maximization test, and show the proposed test follows a simple chi-squared limiting distribution. Simulation studies show that the proposed test controls Type I errors well and has better power compared to several current tests. In particular, the proposed test outperforms the commonly used tests under all simulation settings considered, especially when there are variance differences between two groups. The proposed test is applied to a real data set to identify differentially methylated sites between ovarian cancer subjects and normal subjects.

  18. The performance of ravens on simple discrimination tasks: a preliminary study

    PubMed Central

    Range, Friederike; Bugnyar, Thomas; Kotrschal, Kurt

    2015-01-01

    Recent studies suggest the existence of primate-like cognitive abilities in corvids. Although the learning abilities of corvids in comparison to other species have been investigated before, little is known on how corvids perform on simple discrimination tasks if tested in experimental settings comparable to those that have been used for studying complex cognitive abilities. In this study, we tested a captive group of 12 ravens (Corvus corax) on four discrimination problems and their reversals. In contrast to other studies investigating learning abilities, our ravens were not food deprived and participation in experiments was voluntary. This preliminary study showed that all ravens successfully solved feature and position discriminations and several of the ravens could solve new tasks in a few trials, making very few mistakes. PMID:25948877

  19. Multiple Use One-Sided Hypotheses Testing in Univariate Linear Calibration

    NASA Technical Reports Server (NTRS)

    Krishnamoorthy, K.; Kulkarni, Pandurang M.; Mathew, Thomas

    1996-01-01

    Consider a normally distributed response variable, related to an explanatory variable through the simple linear regression model. Data obtained on the response variable, corresponding to known values of the explanatory variable (i.e., calibration data), are to be used for testing hypotheses concerning unknown values of the explanatory variable. We consider the problem of testing an unlimited sequence of one sided hypotheses concerning the explanatory variable, using the corresponding sequence of values of the response variable and the same set of calibration data. This is the situation of multiple use of the calibration data. The tests derived in this context are characterized by two types of uncertainties: one uncertainty associated with the sequence of values of the response variable, and a second uncertainty associated with the calibration data. We derive tests based on a condition that incorporates both of these uncertainties. The solution has practical applications in the decision limit problem. We illustrate our results using an example dealing with the estimation of blood alcohol concentration based on breath estimates of the alcohol concentration. In the example, the problem is to test if the unknown blood alcohol concentration of an individual exceeds a threshold that is safe for driving.

  20. Scalability problems of simple genetic algorithms.

    PubMed

    Thierens, D

    1999-01-01

    Scalable evolutionary computation has become an intensively studied research topic in recent years. The issue of scalability is predominant in any field of algorithmic design, but it became particularly relevant for the design of competent genetic algorithms once the scalability problems of simple genetic algorithms were understood. Here we present some of the work that has aided in gaining a clear insight into the scalability problems of simple genetic algorithms. In particular, we discuss the important issue of building block mixing. We show how the need for mixing places a boundary in the GA parameter space that, together with the boundary from the schema theorem, delimits the region where the GA converges reliably to the optimum in problems of bounded difficulty. This region shrinks rapidly with increasing problem size unless the building blocks are tightly linked in the problem coding structure. In addition, we look at how straightforward extensions of the simple genetic algorithm, namely elitism, niching, and restricted mating, do not significantly improve its scalability.

  1. A Comparison of Trajectory Optimization Methods for the Impulsive Minimum Fuel Rendezvous Problem

    NASA Technical Reports Server (NTRS)

    Hughes, Steven P.; Mailhe, Laurie M.; Guzman, Jose J.

    2003-01-01

    In this paper we present a comparison of trajectory optimization approaches for the minimum-fuel rendezvous problem. Both indirect and direct methods are compared for a variety of test cases. The indirect approach is based on primer vector theory. The direct approaches are implemented numerically and include Sequential Quadratic Programming (SQP), quasi-Newton, and Nelder-Mead simplex methods. Several cost function parameterizations are considered for the direct approach, and we choose the direct approach that appears to be the most flexible. Both the direct and indirect methods are applied to a variety of test cases chosen to demonstrate the performance of each method in different flight regimes. The first test case is a simple circular-to-circular coplanar rendezvous. The second test case is an elliptic-to-elliptic line-of-apsides rotation. The final test case is an orbit phasing maneuver sequence in a highly elliptic orbit. For each test case we present a comparison of the performance of all the methods considered in this paper.

  2. Numerical limitations in application of vector autoregressive modeling and Granger causality to analysis of EEG time series

    NASA Astrophysics Data System (ADS)

    Kammerdiner, Alla; Xanthopoulos, Petros; Pardalos, Panos M.

    2007-11-01

    In this chapter a potential problem with the application of Granger causality based on simple vector autoregressive (VAR) modeling to EEG data is investigated. Although some initial studies tested whether the data support the stationarity assumption of VAR, the stability of the estimated model has rarely (if ever) been verified. In fact, in cases when the stability condition is violated the process may exhibit random-walk-like behavior or even be explosive. The problem is illustrated by an example.
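
    A minimal sketch of the stability check at issue: estimate a VAR(p) by least squares and verify that all eigenvalues of the companion matrix lie strictly inside the unit circle. The fitting code and variable names are illustrative, not tied to any particular EEG dataset.

```python
import numpy as np

def fit_var(y, p):
    """y: (T, k) multivariate series; returns intercept c and lag matrices A_1..A_p."""
    T, k = y.shape
    rows = [np.concatenate([[1.0]] + [y[t - i] for i in range(1, p + 1)]) for t in range(p, T)]
    X, Y = np.array(rows), y[p:]
    B, *_ = np.linalg.lstsq(X, Y, rcond=None)      # shape (1 + k*p, k)
    c = B[0]
    A = [B[1 + i * k: 1 + (i + 1) * k].T for i in range(p)]
    return c, A

def is_stable(A):
    """Stack A_1..A_p into the companion matrix and check its spectral radius."""
    k, p = A[0].shape[0], len(A)
    companion = np.zeros((k * p, k * p))
    companion[:k, :] = np.hstack(A)
    companion[k:, :-k] = np.eye(k * (p - 1))
    return np.all(np.abs(np.linalg.eigvals(companion)) < 1.0)

rng = np.random.default_rng(1)
y = np.cumsum(rng.normal(size=(500, 2)), axis=0)    # random-walk-like "EEG" channels
_, A = fit_var(y, p=2)
print("stable:", is_stable(A))                      # may be False or borderline for near-unit-root data
```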

  3. Inverse kinematics of a dual linear actuator pitch/roll heliostat

    NASA Astrophysics Data System (ADS)

    Freeman, Joshua; Shankar, Balakrishnan; Sundaram, Ganesh

    2017-06-01

    This work presents a simple, computationally efficient inverse kinematics solution for a pitch/roll heliostat using two linear actuators. The heliostat design and kinematics have been developed, modeled and tested using computer simulation software. A physical heliostat prototype was fabricated to validate the theoretical computations and data. Pitch/roll heliostats have numerous advantages including reduced cost potential and reduced space requirements, with a primary disadvantage being the significantly more complicated kinematics, which are solved here. Novel methods are used to simplify the inverse kinematics problem and could be applied to other similar problems.

  4. A single-scattering correction for the seismo-acoustic parabolic equation.

    PubMed

    Collins, Michael D

    2012-04-01

    An efficient single-scattering correction that does not require iterations is derived and tested for the seismo-acoustic parabolic equation. The approach is applicable to problems involving gradual range dependence in a waveguide with fluid and solid layers, including the key case of a sloping fluid-solid interface. The single-scattering correction is asymptotically equivalent to a special case of a single-scattering correction for problems that only have solid layers [Küsel et al., J. Acoust. Soc. Am. 121, 808-813 (2007)]. The single-scattering correction has a simple interpretation (conservation of interface conditions in an average sense) that facilitated its generalization to problems involving fluid layers. Promising results are obtained for problems in which the ocean bottom interface has a small slope.

  5. A genetic algorithm solution to the unit commitment problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kazarlis, S.A.; Bakirtzis, A.G.; Petridis, V.

    1996-02-01

    This paper presents a Genetic Algorithm (GA) solution to the Unit Commitment problem. GAs are general purpose optimization techniques based on principles inspired from the biological evolution using metaphors of mechanisms such as natural selection, genetic recombination and survival of the fittest. A simple GA implementation using the standard crossover and mutation operators could locate near-optimal solutions but in most cases failed to converge to the optimal solution. However, using the Varying Quality Function technique and adding problem-specific operators, satisfactory solutions to the Unit Commitment problem were obtained. Test results for systems of up to 100 units and comparisons with results obtained using Lagrangian Relaxation and Dynamic Programming are also reported.

  6. Simplest chronoscope. III. Further comparisons between reaction times obtained by meterstick versus machine.

    PubMed

    Montare, Alberto

    2013-06-01

    The three classical Donders' reaction time (RT) tasks (simple, choice, and discriminative RTs) were employed to compare reaction time scores from college students obtained by use of Montare's simplest chronoscope (meterstick) methodology to scores obtained by use of a digital-readout multi-choice reaction timer (machine). Five hypotheses were tested. Simple RT, choice RT, and discriminative RT were faster when obtained by meterstick than by machine. The meterstick method showed higher reliability than the machine method and was less variable. The meterstick method of the simplest chronoscope may help to alleviate the longstanding problems of low reliability and high variability of reaction time performances, while at the same time producing faster performance on Donders' simple, choice, and discriminative RT tasks than the machine method.
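
    The conversion underlying a meterstick (ruler-drop) measurement is simple free fall: the distance the stick falls before being caught gives the reaction time through d = (1/2) g t^2. A minimal sketch of that conversion:

```python
# Catch distance -> reaction time via free fall; the distances below are
# arbitrary example values.
import math

G = 9.81  # m/s^2

def reaction_time_from_distance(d_cm):
    """Catch distance in centimetres -> reaction time in seconds."""
    d = d_cm / 100.0
    return math.sqrt(2.0 * d / G)

for d in (10, 15, 20, 30):
    print(f"{d:3d} cm  ->  {reaction_time_from_distance(d) * 1000:5.0f} ms")
```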

  7. Advanced Joining of Aerospace Metallic Materials.

    DTIC Science & Technology

    1986-07-01

    uniaxial tensile test with varying temperature and cyclic loading. This simple test problem exercises many aspects of the phenomena. ... the second configuration appears more detrimental. 5.3. Lessons on the dynamics of the weld pools: in practice it became apparent that ... a scanning system for fast and exact alignment of the EB-gun is used. In a fixture the cleaned detail parts are positioned exactly and clamped for welding. At

  8. Deep Reconditioning Testing for near Earth Orbits

    NASA Technical Reports Server (NTRS)

    Betz, F. E.; Barnes, W. L.

    1984-01-01

    The problems and benefits of deep reconditioning to near Earth orbit missions with high cycle life and shallow discharge depth requirements are discussed. A simple battery level approach to deep reconditioning of nickel cadmium batteries in near Earth orbit is considered. A test plan was developed to perform deep reconditioning in direct comparison with an alternative trickle charge approach. The results demonstrate that the deep reconditioning procedure described for near Earth orbit application is inferior to the alternative of trickle charging.

  9. Using model order tests to determine sensory inputs in a motion study

    NASA Technical Reports Server (NTRS)

    Repperger, D. W.; Junker, A. M.

    1977-01-01

    In the study of motion effects on tracking performance, a problem of interest is the determination of what sensory inputs a human uses in controlling his tracking task. In the approach presented here a simple canonical model (PID, or a proportional, integral, derivative structure) is used to model the human's input-output time series. A study of significant changes in reduction of the output error loss functional is conducted as different permutations of parameters are considered. Since this canonical model includes parameters which are related to inputs to the human (such as the error signal, its derivatives and integration), the study of model order is equivalent to the study of which sensory inputs are being used by the tracker. The parameters which have the greatest effect on significantly reducing the loss function are obtained. In this manner the identification procedure converts the problem of testing for model order into the problem of determining sensory inputs.

  10. Simulated annealing with probabilistic analysis for solving traveling salesman problems

    NASA Astrophysics Data System (ADS)

    Hong, Pei-Yee; Lim, Yai-Fung; Ramli, Razamin; Khalid, Ruzelan

    2013-09-01

    Simulated Annealing (SA) is a widely used meta-heuristic that was inspired from the annealing process of recrystallization of metals. Therefore, the efficiency of SA is highly affected by the annealing schedule. As a result, in this paper we present an empirical study to provide a comparable annealing schedule for solving symmetric traveling salesman problems (TSP). Randomized complete block design is also used in this study. The results show that different parameters do affect the efficiency of SA and thus, we propose the best found annealing schedule based on the Post Hoc test. SA was tested on seven selected benchmark problems of symmetric TSP with the proposed annealing schedule. The performance of SA was evaluated empirically alongside benchmark solutions and a simple analysis to validate the quality of solutions. Computational results show that the proposed annealing schedule provides a good quality of solution.
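
    For illustration, here is a bare-bones simulated annealing run on a small random symmetric TSP with a geometric cooling schedule and segment-reversal (2-opt style) moves; the schedule parameters below are placeholders, not the schedule proposed in the paper.

```python
import math, random

def tour_length(tour, dist):
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def anneal(dist, t0=100.0, alpha=0.995, iters=20000):
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)
    best, best_len = tour[:], tour_length(tour, dist)
    t = t0
    for _ in range(iters):
        i, j = sorted(random.sample(range(n), 2))
        cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]   # reverse a segment (2-opt move)
        delta = tour_length(cand, dist) - tour_length(tour, dist)
        if delta < 0 or random.random() < math.exp(-delta / t):
            tour = cand
            if tour_length(tour, dist) < best_len:
                best, best_len = tour[:], tour_length(tour, dist)
        t *= alpha                                             # geometric cooling schedule
    return best, best_len

random.seed(0)
pts = [(random.random(), random.random()) for _ in range(30)]
dist = [[math.dist(p, q) for q in pts] for p in pts]
print(anneal(dist)[1])
```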

  11. Modified reactive tabu search for the symmetric traveling salesman problems

    NASA Astrophysics Data System (ADS)

    Lim, Yai-Fung; Hong, Pei-Yee; Ramli, Razamin; Khalid, Ruzelan

    2013-09-01

    Reactive tabu search (RTS) is an improvement of tabu search (TS) that dynamically adjusts the tabu list size based on how the search is performing. RTS avoids a disadvantage of TS, namely the need to tune the tabu list size parameter. In this paper, we propose a modified RTS approach for solving symmetric traveling salesman problems (TSP). The tabu list size of the proposed algorithm depends on the number of iterations during which the solutions do not override the aspiration level, in order to achieve a good balance between diversification and intensification. The proposed algorithm was tested on seven chosen benchmark problems of symmetric TSP. The performance of the proposed algorithm is compared with that of TS using empirical testing, benchmark solutions and a simple probabilistic analysis in order to validate the quality of the solutions. The computational results and comparisons show that the proposed algorithm provides better quality solutions than TS.
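
    A rough sketch of the reactive idea (not the authors' exact scheme): tabu search on a small random TSP in which the tabu tenure grows while no solution overrides the aspiration level (i.e. no new best is found) and shrinks after an improvement. All tenure bounds and move-sampling choices are placeholders.

```python
import math, random
from collections import deque

def length(tour, d):
    return sum(d[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def reactive_tabu(d, iters=2000, min_ten=5, max_ten=60):
    n = len(d)
    tour = list(range(n))
    random.shuffle(tour)
    best_len = length(tour, d)
    tenure, stagnation = min_ten, 0
    tabu = deque(maxlen=tenure)
    for _ in range(iters):
        # sample candidate segment-reversal moves and evaluate them
        moves = {tuple(sorted(random.sample(range(n), 2))) for _ in range(60)}
        cands = []
        for i, j in moves:
            cand = tour[:i] + tour[i:j + 1][::-1] + tour[j + 1:]
            cands.append((length(cand, d), (i, j), cand))
        cands.sort(key=lambda c: c[0])
        # take the best non-tabu move, or a tabu move that beats the best (aspiration)
        chosen = next((c for c in cands if c[1] not in tabu or c[0] < best_len), cands[0])
        new_len, move, tour = chosen
        tabu.append(move)
        if new_len < best_len:
            best_len = new_len
            stagnation, tenure = 0, max(min_ten, tenure - 1)     # improvement: shrink tenure
        else:
            stagnation += 1
            if stagnation % 50 == 0:                             # stagnation: grow tenure
                tenure = min(max_ten, tenure + 5)
        tabu = deque(tabu, maxlen=tenure)
    return best_len

random.seed(1)
pts = [(random.random(), random.random()) for _ in range(25)]
d = [[math.dist(p, q) for q in pts] for p in pts]
print(reactive_tabu(d))
```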

  12. A genuinely discontinuous approach for multiphase EHD problems

    NASA Astrophysics Data System (ADS)

    Natarajan, Mahesh; Desjardins, Olivier

    2017-11-01

    Electrohydrodynamics (EHD) involves solving the Poisson equation for the electric field potential. For multiphase flows, although the electric field potential is a continuous quantity, the discontinuity in the electric permittivity between the phases means that additional jump conditions for the normal and tangential components of the electric field need to be satisfied at the interface. All approaches to date either ignore the jump conditions or involve simplifying assumptions, and hence yield unconvincing results even for simple test problems. In the present work, we develop a genuinely discontinuous approach for the Poisson equation for multiphase flows using a Finite Volume Unsplit Volume of Fluid method. The governing equation and the jump conditions without assumptions are used to develop the method, and its efficiency is demonstrated by comparison of the numerical results with canonical test problems having exact solutions.
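
    For intuition only, here is a much simpler one-dimensional analogue of the interface problem (not the paper's unsplit volume-of-fluid scheme): a finite-volume solve of a Poisson problem with a permittivity jump, using harmonic averaging of the permittivity at cell faces so that the normal flux stays continuous across the jump. Grid size and the permittivity values are arbitrary.

```python
# Solve d/dx( eps(x) dphi/dx ) = 0 on [0,1] with phi(0)=0, phi(1)=1 and a
# permittivity jump at x = 0.5, discretized by finite volumes.
import numpy as np

N = 100
x = (np.arange(N) + 0.5) / N                 # cell centres
eps = np.where(x < 0.5, 1.0, 5.0)            # discontinuous permittivity
h = 1.0 / N

# face coefficients: harmonic mean preserves continuity of eps * dphi/dx
face = 2 * eps[:-1] * eps[1:] / (eps[:-1] + eps[1:])

A = np.zeros((N, N))
b = np.zeros(N)
for i in range(N):
    wl = face[i - 1] if i > 0 else 2 * eps[0]      # half-cell to the left boundary
    wr = face[i] if i < N - 1 else 2 * eps[-1]     # half-cell to the right boundary
    A[i, i] = -(wl + wr)
    if i > 0:
        A[i, i - 1] = wl
    if i < N - 1:
        A[i, i + 1] = wr
    if i == N - 1:
        b[i] -= wr * 1.0                           # Dirichlet condition phi(1) = 1
phi = np.linalg.solve(A, b)

# exact solution is piecewise linear (slopes 5/3 and 1/3), so phi(0.25) ~ 0.417
print(phi[N // 4], phi[3 * N // 4])
```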

  13. Two-way ANOVA Problems with Simple Numbers.

    ERIC Educational Resources Information Center

    Read, K. L. Q.; Shihab, L. H.

    1998-01-01

    Describes how to construct simple numerical examples in two-way ANOVAs, specifically randomized blocks, balanced two-way layouts, and Latin squares. Indicates that working through simple numerical problems is helpful to students meeting a technique for the first time and should be followed by computer-based analysis of larger, real datasets when…

  14. Collaboratively Conceived, Designed and Implemented: Matching Visualization Tools with Geoscience Data Collections and Geoscience Data Collections with Visualization Tools via the ToolMatch Service.

    NASA Astrophysics Data System (ADS)

    Hoebelheinrich, N. J.; Lynnes, C.; West, P.; Ferritto, M.

    2014-12-01

    Two problems common to many geoscience domains are the difficulties in finding tools to work with a given dataset collection, and conversely, the difficulties in finding data for a known tool. A collaborative team from the Earth Science Information Partnership (ESIP) has gotten together to design and create a web service, called ToolMatch, to address these problems. The team began their efforts by defining an initial, relatively simple conceptual model that addressed the two use cases briefly described above. The conceptual model is expressed as an ontology using OWL (Web Ontology Language) and DCterms (Dublin Core Terms), and utilizing standard ontologies such as DOAP (Description of a Project), FOAF (Friend of a Friend), SKOS (Simple Knowledge Organization System) and DCAT (Data Catalog Vocabulary). The ToolMatch service will be taking advantage of various Semantic Web and Web standards, such as OpenSearch, RESTful web services, SWRL (Semantic Web Rule Language) and SPARQL (Simple Protocol and RDF Query Language). The first version of the ToolMatch service was deployed in early fall 2014. While more complete testing is required, a number of communities besides ESIP member organizations have expressed interest in collaborating to create, test and use the service and incorporate it into their own web pages, tools and/or services, including the USGS Data Catalog service, DataONE, the Deep Carbon Observatory, Virtual Solar Terrestrial Observatory (VSTO), and the U.S. Global Change Research Program. In this session, presenters will discuss the inception and development of the ToolMatch service, the collaborative process used to design, refine, and test the service, and future plans for the service.

  15. Discrete sequence prediction and its applications

    NASA Technical Reports Server (NTRS)

    Laird, Philip

    1992-01-01

    Learning from experience to predict sequences of discrete symbols is a fundamental problem in machine learning with many applications. We apply sequence prediction using a simple and practical sequence-prediction algorithm, called TDAG. The TDAG algorithm is first tested by comparing its performance with some common data compression algorithms. Then it is adapted to the detailed requirements of dynamic program optimization, with excellent results.

  16. A Novel Approach to Hardness Testing

    NASA Technical Reports Server (NTRS)

    Spiegel, F. Xavier; West, Harvey A.

    1996-01-01

    This paper gives a description of the application of a simple rebound time measuring device and relates the determination of relative hardness of a variety of common engineering metals. A relation between rebound time and hardness will be sought. The effect of geometry and surface condition will also be discussed in order to acquaint the student with the problems associated with this type of method.

  17. What Sensing Tells Us: Towards a Formal Theory of Testing for Dynamical Systems

    NASA Technical Reports Server (NTRS)

    McIlraith, Sheila; Scherl, Richard

    2005-01-01

    Just as actions can have indirect effects on the state of the world, so too can sensing actions have indirect effects on an agent's state of knowledge. In this paper, we investigate "what sensing actions tell us", i.e., what an agent comes to know indirectly from the outcome of a sensing action, given knowledge of its actions and state constraints that hold in the world. To this end, we propose a formalization of the notion of testing within a dialect of the situation calculus that includes knowledge and sensing actions. Realizing this formalization requires addressing the ramification problem for sensing actions. We formalize simple tests as sensing actions. Complex tests are expressed in the logic programming language Golog. We examine what it means to perform a test, and how the outcome of a test affects an agent's state of knowledge. Finally, we propose automated reasoning techniques for test generation and complex-test verification, under certain restrictions. The work presented in this paper is relevant to a number of application domains including diagnostic problem solving, natural language understanding, plan recognition, and active vision.

  18. Evaluation of seismic spatial interaction effects through an impact testing program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Thomas, B.D.; Driesen, G.E.

    The consequences of non-seismically qualified objects falling and striking essential, seismically qualified objects are analytically difficult to assess. Analytical solutions to impact problems are conservative and only available for simple situations. In a nuclear facility, the numerous "sources" and "targets" requiring evaluation often have complex geometric configurations, which makes calculations and computer modeling difficult. Few industry or regulatory rules are available for this specialized assessment. A drop test program was recently conducted to "calibrate" the judgment of seismic qualification engineers who perform interaction evaluations and to further develop seismic interaction criteria. Impact tests on varying combinations of sources and targets were performed by dropping the sources from various heights onto targets that were connected to instruments. This paper summarizes the scope, test configurations, and some results of the drop test program. Force and acceleration time history data and general observations are presented on the ruggedness of various targets when subjected to impacts from different types of sources.

  20. Assessing student written problem solutions: A problem-solving rubric with application to introductory physics

    NASA Astrophysics Data System (ADS)

    Docktor, Jennifer L.; Dornfeld, Jay; Frodermann, Evan; Heller, Kenneth; Hsu, Leonardo; Jackson, Koblar Alan; Mason, Andrew; Ryan, Qing X.; Yang, Jie

    2016-06-01

    Problem solving is a complex process valuable in everyday life and crucial for learning in the STEM fields. To support the development of problem-solving skills it is important for researchers and curriculum developers to have practical tools that can measure the difference between novice and expert problem-solving performance in authentic classroom work. It is also useful if such tools can be employed by instructors to guide their pedagogy. We describe the design, development, and testing of a simple rubric to assess written solutions to problems given in undergraduate introductory physics courses. In particular, we present evidence for the validity, reliability, and utility of the instrument. The rubric identifies five general problem-solving processes and defines the criteria to attain a score in each: organizing problem information into a Useful Description, selecting appropriate principles (Physics Approach), applying those principles to the specific conditions in the problem (Specific Application of Physics), using Mathematical Procedures appropriately, and displaying evidence of an organized reasoning pattern (Logical Progression).

  1. No Generalization of Practice for Nonzero Simple Addition

    ERIC Educational Resources Information Center

    Campbell, Jamie I. D.; Beech, Leah C.

    2014-01-01

    Several types of converging evidence have suggested recently that skilled adults solve very simple addition problems (e.g., 2 + 1, 4 + 2) using a fast, unconscious counting algorithm. These results stand in opposition to the long-held assumption in the cognitive arithmetic literature that such simple addition problems normally are solved by fact…

  2. Meshless Local Petrov-Galerkin Euler-Bernoulli Beam Problems: A Radial Basis Function Approach

    NASA Technical Reports Server (NTRS)

    Raju, I. S.; Phillips, D. R.; Krishnamurthy, T.

    2003-01-01

    A radial basis function implementation of the meshless local Petrov-Galerkin (MLPG) method is presented to study Euler-Bernoulli beam problems. Radial basis functions, rather than generalized moving least squares (GMLS) interpolations, are used to develop the trial functions. This choice yields a computationally simpler method as fewer matrix inversions and multiplications are required than when GMLS interpolations are used. Test functions are chosen as simple weight functions as in the conventional MLPG method. Compactly and noncompactly supported radial basis functions are considered. The non-compactly supported cubic radial basis function is found to perform very well. Results obtained from the radial basis MLPG method are comparable to those obtained using the conventional MLPG method for mixed boundary value problems and problems with discontinuous loading conditions.
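
    As a loose illustration of the interpolation ingredient only (not the MLPG weak form or the beam equations), the sketch below builds a trial function from non-compactly supported cubic radial basis functions by interpolating scattered one-dimensional nodal values; the node count and the test function are made up.

```python
import numpy as np

def rbf_interpolate(xs, fs, x_eval):
    """Solve for weights w so that sum_j w_j * |x_i - x_j|^3 = f_i, then evaluate."""
    A = np.abs(xs[:, None] - xs[None, :]) ** 3          # cubic RBF, phi(r) = r^3
    w = np.linalg.solve(A, fs)
    return (np.abs(x_eval[:, None] - xs[None, :]) ** 3) @ w

xs = np.linspace(0.0, 1.0, 9)                 # nodes
fs = np.sin(2 * np.pi * xs)                   # nodal values (e.g. a deflection shape)
xe = np.linspace(0.0, 1.0, 201)
err = np.max(np.abs(rbf_interpolate(xs, fs, xe) - np.sin(2 * np.pi * xe)))
print(f"max interpolation error: {err:.3e}")
```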

  3. An Experimental Realization of a Chaos-Based Secure Communication Using Arduino Microcontrollers

    PubMed Central

    Zapateiro De la Hoz, Mauricio; Vidal, Yolanda

    2015-01-01

    Security and secrecy are some of the important concerns in the communications world. In the last years, several encryption techniques have been proposed in order to improve the secrecy of the information transmitted. Chaos-based encryption techniques are being widely studied to address this problem because of the highly unpredictable, random-looking nature of chaotic signals. In this paper we propose a digital-based communication system that uses the logistic map, which is a mathematically simple model that is chaotic under certain conditions. The input message signal is modulated using a simple Delta modulator and encrypted using a logistic map. The key signal is also encrypted using the same logistic map with different initial conditions. On the receiver side, the binary-coded message is decrypted using the encrypted key signal that is sent through one of the communication channels. The proposed scheme is experimentally tested using Arduino shields, which are simple yet powerful development kits that allow for the implementation of the communication system for testing purposes. PMID:26413563
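
    A hedged sketch of the chaotic-masking idea (not the authors' Arduino implementation or their Delta-modulation front end): a keystream generated by thresholding logistic-map iterates is XORed with a binary message, and the receiver decrypts by regenerating the same keystream from the shared parameters.

```python
def logistic_keystream(n_bits, x0=0.37, r=3.99):
    """Generate n_bits by thresholding iterates of x_{k+1} = r*x_k*(1-x_k)."""
    bits, x = [], x0
    for _ in range(n_bits):
        x = r * x * (1.0 - x)          # chaotic for r near 4
        bits.append(1 if x > 0.5 else 0)
    return bits

def xor_bits(msg, key):
    return [m ^ k for m, k in zip(msg, key)]

message = [1, 0, 1, 1, 0, 0, 1, 0]
cipher = xor_bits(message, logistic_keystream(len(message)))
# the receiver regenerates the same keystream from the shared (x0, r) and decrypts
recovered = xor_bits(cipher, logistic_keystream(len(message)))
assert recovered == message
print(cipher, recovered)
```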

  4. Designing instruction to support mechanical reasoning: Three alternatives in the simple machines learning environment

    NASA Astrophysics Data System (ADS)

    McKenna, Ann Frances

    2001-07-01

    Creating a classroom environment that fosters a productive learning experience and engages students in the learning process is a complex endeavor. A classroom environment is dynamic and requires a unique synergy among students, teacher, classroom artifacts and events to achieve robust understanding and knowledge integration. This dissertation addresses this complex issue by developing, implementing, and investigating the simple machines learning environment (SIMALE) to support students' mechanical reasoning and understanding. SIMALE was designed to support reflection, collaborative learning, and to engage students in generative learning through multiple representations of concepts and successive experimentation and design activities. Two key components of SIMALE are an original web-based software tool and hands-on Lego activities. A research study consisting of three treatment groups was created to investigate the benefits of hands-on and web-based computer activities on students' analytic problem solving ability, drawing/modeling ability, and conceptual understanding. The study was conducted with two populations of students that represent a diverse group with respect to gender, ethnicity, academic achievement and social/economic status. One population of students in this dissertation study participated from the Mathematics, Engineering, and Science Achievement (MESA) program that serves minorities and under-represented groups in science and mathematics. The second group was recruited from the Academic Talent Development Program (ATDP) that is an academically competitive outreach program offered through the University of California at Berkeley. Results from this dissertation show success of the SIMALE along several dimensions. First, students in both populations achieved significant gains in analytic problem solving ability, drawing/modeling ability, and conceptual understanding. Second, significant differences that were found on pre-test measures were eliminated on post-test measures. Specifically, female students scored significantly lower than males on the overall pre-tests but scored as well as males on the same post-test measures. MESA students also scored significantly lower than ATDP students on pre-test measures but both populations scored equally well on the post-tests. This dissertation has therefore shown the SIMALE to support a collaborative, reflective, and generative learning environment. Furthermore, the SIMALE clearly contributes to students' mechanical reasoning and understanding of simple machines concepts for a diverse population of students.

  5. Tour of a simple trigonometry problem

    NASA Astrophysics Data System (ADS)

    Poon, Kin-Keung

    2012-06-01

    This article focuses on a simple trigonometric problem that generates a strange phenomenon when different methods are applied to tackling it. A series of problem-solving activities are discussed, so that students can be alerted that the precision of diagrams is important when solving geometric problems. In addition, the problem-solving plan was implemented in a high school and the results indicated that students are relatively weak in problem-solving abilities but they understand and appreciate the thinking process in different stages and steps of the activities.

  6. Curcumin-Eudragit® E PO solid dispersion: A simple and potent method to solve the problems of curcumin.

    PubMed

    Li, Jinglei; Lee, Il Woo; Shin, Gye Hwa; Chen, Xiguang; Park, Hyun Jin

    2015-08-01

    Using a simple solution mixing method, curcumin was dispersed in the matrix of Eudragit® E PO polymer. The water solubility of curcumin in the curcumin-Eudragit® E PO solid dispersion (Cur@EPO) was greatly increased. Based on the results of several tests, curcumin was shown to exist in the polymer matrix in an amorphous state. The interaction between curcumin and the polymer was investigated through Fourier transform infrared spectroscopy and (1)H NMR, which implied that the OH group of curcumin and the carbonyl group of the polymer are involved in hydrogen bond formation. Cur@EPO also protected curcumin, as verified by pH challenge and UV irradiation tests. The pH value influenced the curcumin release profile, which showed a sustained release pattern. Additionally, an in vitro transdermal test was conducted to assess the potential of Cur@EPO as a vehicle to deliver curcumin through this alternative administration route. Copyright © 2015 Elsevier B.V. All rights reserved.

  7. Crystal separation from mother solution and conservation under microgravity conditions using inert liquid

    NASA Astrophysics Data System (ADS)

    Regel, L. L.; Vedernikov, A. A.; Queeckers, P.; Legros, J.-C.

    1991-12-01

    The problem of the separation of crystals from their feeding solutions and their conservation at the end of the crystallization under microgravity is investigated. The goal to be reached is to propose an efficient and simple system. This method has to be applicable for an automatic separation on board a spacecraft, without using a centrifuge. The injection of an immiscible and inert liquid into the cell is proposed to solve the problem. The results of numerical modeling, earth simulation tests and experiments under short durations of weightlessness (using aircraft parabolic flights) are described.

  8. Misconceptions of Mexican Teachers in the Solution of Simple Pendulum

    ERIC Educational Resources Information Center

    Garcia Trujillo, Luis Antonio; Ramirez Díaz, Mario H.; Rodriguez Castillo, Mario

    2013-01-01

    Solving the position of a simple pendulum at any time is apparently one of the most simple and basic problems to solve in high school and college physics courses. However, because of this apparent simplicity, teachers and physics texts often assume that the solution is immediate without pausing to reflect on the problem formulation or verifying…

  9. The analysis and compensation of errors of precise simple harmonic motion control under high speed and large load conditions based on servo electric cylinder

    NASA Astrophysics Data System (ADS)

    Ma, Chen-xi; Ding, Guo-qing

    2017-10-01

    Simple harmonic waves and synthesized simple harmonic waves are widely used in the testing of instruments. However, because of errors caused by gear clearance and the time-delay error of the FPGA, it is difficult to control a servo electric cylinder in precise simple harmonic motion under high-speed, high-frequency and large-load conditions. To solve the problem, a method of error compensation is proposed in this paper. In the method, a displacement sensor is fitted on the piston rod of the electric cylinder. Using the displacement sensor, the real-time displacement of the piston rod is obtained and fed back to the input of the servo motor, realizing closed-loop control. Compensation pulses are applied in the next period of the synthesized waves. This paper uses an FPGA as the processing core. The software mainly comprises a waveform generator, an Ethernet module, a memory module, a pulse generator, a pulse selector, a protection module, and an error compensation module. A shock absorber durability test rig is used as the testing platform; the rig mainly comprises a single electric cylinder, a servo motor for driving the electric cylinder, and the servo motor driver.

  10. Anxiety, Stress and Coping Patterns in Children in Dental Settings.

    PubMed

    Pop-Jordanova, Nadica; Sarakinova, Olivera; Pop-Stefanova-Trposka, Maja; Zabokova-Bilbilova, Efka; Kostadinovska, Emilija

    2018-04-15

    Fear of the dentist and dental treatment is a common problem. It can cause treatment difficulties for the practitioner, as well as severe consequences for the patient. As is known, the level of stress can be evaluated through electrodermal activity, cortisol measured in saliva, or indirectly by psychometric tests. The present study examined the psychological influence of dental interventions on the child as well as coping patterns used for stress diminution. We examined two matched groups of patients: a) children with orthodontic problems (anomalies in shape, position and function of dentomaxillofacial structures) (N = 31, mean age 10.3 ± 2.02 years); and b) children with ordinary dental problems (N = 31, mean age 10.3 ± 2.4 years). As psychometric instruments, we used the 45-item Sarason scale for anxiety, the 20-item simple Stress test adapted for children, and the A-cope test for evaluating coping patterns. The obtained scores confirmed the presence of moderate anxiety in both groups as well as a moderate stress level. On Sarason's test, scores for the group with dental problems were 20.63 ± 8.37 (out of a maximum of 45) and on the Stress test 7.63 ± 3.45 (out of a maximum of 20); for the orthodontic group, scores were 18.66 ± 6.85 on Sarason's test and 7.76 ± 3.78 on the Stress test. One-way ANOVA confirmed a significant difference in the obtained scores related to age and gender. Student's t-test showed non-significant differences in test results between the two groups of examinees. Coping mechanisms evaluated by the A-cope test show that in both groups the most important patterns used for stress relief were developing self-reliance and optimism, avoiding problems, and engaging in demanding activity. This study confirmed that a moderate stress level and anxiety are present in both groups of patients (orthodontic and dental). The obtained scores depend on gender and age. The most used coping patterns in both groups were developing self-reliance and optimism, avoiding problems, and engaging in demanding activity. Some strategies for managing this problem are discussed.

  11. Failure Study of Composite Materials by the Yeh-Stratton Criterion

    NASA Technical Reports Server (NTRS)

    Yeh, Hsien-Yang; Richards, W. Lance

    1997-01-01

    The newly developed Yeh-Stratton (Y-S) Strength Criterion was used to study the failure of composite materials with central holes and normal cracks. To evaluate the interaction parameters for the Y-S failure theory, it is necessary to perform several biaxial loading tests. However, it is indisputable that the inhomogeneous and anisotropic nature of composite materials has made its own contribution to the complication of the biaxial testing problem. To avoid the difficulties of performing many biaxial tests and still consider the effects of the interaction term in the Y-S Criterion, a simple modification of the Y-S Criterion was developed. The preliminary predictions by the modified Y-S Criterion were relatively conservative compared to the testing data. Thus, the modified Y-S Criterion could be used as a design tool. To further understand the composite failure problem, an investigation of the damage zone in front of the crack tip coupled with the Y-S Criterion is imperative.

  12. Development of a Quantitative Decision Metric for Selecting the Most Suitable Discretization Method for SN Transport Problems

    NASA Astrophysics Data System (ADS)

    Schunert, Sebastian

    In this work we develop a quantitative decision metric for spatial discretization methods of the SN equations. The quantitative decision metric utilizes performance data from selected test problems for computing a fitness score that is used for the selection of the most suitable discretization method for a particular SN transport application. The fitness score is aggregated as a weighted geometric mean of single performance indicators representing various performance aspects relevant to the user. Thus, the fitness function can be adjusted to the particular needs of the code practitioner by adding/removing single performance indicators or changing their importance via the supplied weights. Within this work a special, broad class of methods is considered, referred to as nodal methods. This class is naturally comprised of the DGFEM methods of all function space families. Within this work it is also shown that the Higher Order Diamond Difference (HODD) method is a nodal method. Building on earlier findings that the Arbitrarily High Order Method of the Nodal type (AHOTN) is also a nodal method, a generalized finite-element framework is created to yield as special cases various methods that were developed independently using profoundly different formalisms. A selection of test problems, each related to a certain performance aspect, is considered: a Method of Manufactured Solutions (MMS) test suite for assessing accuracy and execution time, Lathrop's test problem for assessing resilience against occurrence of negative fluxes, and a simple, homogeneous cube test problem to verify if a method possesses the thick diffusive limit. The contending methods are implemented as efficiently as possible under a common SN transport code framework to level the playing field for a fair comparison of their computational load. Numerical results are presented for all three test problems and a qualitative rating of each method's performance is provided for each aspect: accuracy/efficiency, resilience against negative fluxes, and possession of the thick diffusion limit, separately. The choice of the most efficient method depends on the utilized error norm: in Lp error norms higher order methods such as the AHOTN method of order three perform best, while for computing integral quantities the linear nodal (LN) method is most efficient. The most resilient method against occurrence of negative fluxes is the simple corner balance (SCB) method. A validation of the quantitative decision metric is performed based on the NEA box-in-box suite of test problems. The validation exercise comprises two stages: first, prediction of the contending methods' performance via the decision metric and, second, computing the actual scores based on data obtained from the NEA benchmark problem. The comparison of predicted and actual scores via a penalty function (ratio of predicted best performer's score to actual best score) completes the validation exercise. It is found that the decision metric is capable of very accurate predictions (penalty < 10%) in more than 83% of the considered cases and features penalties up to 20% for the remaining cases. An exception to this rule is the third test case NEA-III, intentionally set up to incorporate a poor match of the benchmark with the "data" problems. However, even under these worst case conditions the decision metric's suggestions are never detrimental.
Suggestions for improving the decision metric's accuracy are to increase the pool of employed data, to refine the mapping of a given configuration to a case in the database, and to better characterize the desired target quantities.
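
    A minimal sketch of the aggregation step described above, assuming the fitness score is a weighted geometric mean of normalized single performance indicators; the indicator names, values, and weights below are invented for illustration.

```python
import math

def fitness_score(indicators, weights):
    """indicators, weights: dicts keyed by aspect; returns the weighted geometric mean."""
    total_w = sum(weights.values())
    return math.prod(indicators[k] ** (weights[k] / total_w) for k in indicators)

indicators = {"accuracy": 0.92, "runtime": 0.70, "positivity": 0.85, "diffusion_limit": 1.0}
weights    = {"accuracy": 3.0,  "runtime": 2.0,  "positivity": 1.0,  "diffusion_limit": 1.0}
print(f"fitness: {fitness_score(indicators, weights):.3f}")
```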

  13. Simple assay for staphylococcal enterotoxins A, B, and C: modification of enzyme-linked immunosorbent assay.

    PubMed Central

    Stiffler-Rosenberg, G; Fey, H

    1978-01-01

    The enzyme-linked immunosorbent assay (ELISA) introduced for the detection of staphylococcal enterotoxins by Saunders et al., Simon and Terplan, and ourselves has proved to be a simple, reliable, and sensitive test. A new modification is described that uses polystyrene balls (diameter, 6 mm) coated individually with antibody against one of the toxins A, B, or C. In a single tube, 20 ml of the food extract was incubated with the three differently stained balls, which were then each tested for the uptake of enterotoxin by a competitive ELISA. A concentration of 0.1 ng or less of enterotoxin per ml can be measured, making tedious concentration procedures of the extracts superfluous. Culture supernatants and extracts from foods artificially or naturally contaminated with toxin were successfully examined. Cross-reactions did not occur, and nonspecific interfering substances did not create serious problems. PMID:365877

  14. Simple arithmetic: not so simple for highly math anxious individuals.

    PubMed

    Chang, Hyesang; Sprute, Lisa; Maloney, Erin A; Beilock, Sian L; Berman, Marc G

    2017-12-01

    Fluency with simple arithmetic, typically achieved in early elementary school, is thought to be one of the building blocks of mathematical competence. Behavioral studies with adults indicate that math anxiety (feelings of tension or apprehension about math) is associated with poor performance on cognitively demanding math problems. However, it remains unclear whether there are fundamental differences in how high and low math anxious individuals approach overlearned simple arithmetic problems that are less reliant on cognitive control. The current study used functional magnetic resonance imaging to examine the neural correlates of simple arithmetic performance across high and low math anxious individuals. We implemented a partial least squares analysis, a data-driven, multivariate analysis method to measure distributed patterns of whole-brain activity associated with performance. Despite overall high simple arithmetic performance across high and low math anxious individuals, performance was differentially dependent on the fronto-parietal attentional network as a function of math anxiety. Specifically, low math anxious individuals, compared to high math anxious individuals, perform better when they activate this network less, a potential indication of more automatic problem-solving. These findings suggest that low and high math anxious individuals approach even the most fundamental math problems differently. © The Author (2017). Published by Oxford University Press.

  15. Simple arithmetic: not so simple for highly math anxious individuals

    PubMed Central

    Sprute, Lisa; Maloney, Erin A; Beilock, Sian L; Berman, Marc G

    2017-01-01

    Abstract Fluency with simple arithmetic, typically achieved in early elementary school, is thought to be one of the building blocks of mathematical competence. Behavioral studies with adults indicate that math anxiety (feelings of tension or apprehension about math) is associated with poor performance on cognitively demanding math problems. However, it remains unclear whether there are fundamental differences in how high and low math anxious individuals approach overlearned simple arithmetic problems that are less reliant on cognitive control. The current study used functional magnetic resonance imaging to examine the neural correlates of simple arithmetic performance across high and low math anxious individuals. We implemented a partial least squares analysis, a data-driven, multivariate analysis method to measure distributed patterns of whole-brain activity associated with performance. Despite overall high simple arithmetic performance across high and low math anxious individuals, performance was differentially dependent on the fronto-parietal attentional network as a function of math anxiety. Specifically, low—compared to high—math anxious individuals perform better when they activate this network less—a potential indication of more automatic problem-solving. These findings suggest that low and high math anxious individuals approach even the most fundamental math problems differently. PMID:29140499

  16. Testing framework for embedded languages

    NASA Astrophysics Data System (ADS)

    Leskó, Dániel; Tejfel, Máté

    2012-09-01

    Embedding a new programming language into an existing one is a widely used technique, because it speeds up the development process and gives part of the language infrastructure for free (e.g. lexical and syntactical analyzers). In this paper we present a further advantage of this development approach with regard to adding testing support for these new languages. Tool support for testing is a crucial point for a newly designed programming language. It could be done the hard way by creating a testing tool from scratch, or we could try to reuse existing testing tools by extending them with an interface to our new language. The second approach requires less work, and it also fits very well with the embedded approach. The problem is that the creation of such interfaces is not straightforward at all, because existing testing tools were mostly not designed to be extendable or to deal with new languages. This paper presents an extendable and modular model of a testing framework, in which the most basic design decision was to keep the previously mentioned interface creation simple and straightforward. Other important aspects of our model are test data generation, the oracle problem and the customizability of the whole testing phase.

  17. Two-fluid dusty shocks: simple benchmarking problems and applications to protoplanetary discs

    NASA Astrophysics Data System (ADS)

    Lehmann, Andrew; Wardle, Mark

    2018-05-01

    The key role that dust plays in the interstellar medium has motivated the development of numerical codes designed to study the coupled evolution of dust and gas in systems such as turbulent molecular clouds and protoplanetary discs. Drift between dust and gas has proven to be important as well as numerically challenging. We provide simple benchmarking problems for dusty gas codes by numerically solving the two-fluid dust-gas equations for steady, plane-parallel shock waves. The two distinct shock solutions to these equations allow a numerical code to test different forms of drag between the two fluids, the strength of that drag and the dust to gas ratio. We also provide an astrophysical application of J-type dust-gas shocks to studying the structure of accretion shocks on to protoplanetary discs. We find that two-fluid effects are most important for grains larger than 1 μm, and that the peak dust temperature within an accretion shock provides a signature of the dust-to-gas ratio of the infalling material.

  18. The impact of group therapy training on social communications of Afghan immigrants

    PubMed Central

    Mehrabi, Tayebeh; Musavi, Tayebeh; Ghazavi, Zahra; Zandieh, Zahra; Zamani, Ahmadreza

    2011-01-01

    BACKGROUND: Mental training considers the sharing of mental health care information as the primary objective. The secondary objectives include facilitating dialogue about feelings such as isolation, sadness, labeling and loneliness, and possible strategies for confronting these feelings. Group therapy training has a supportive function in creating an accepting environment so that the members are able to be part of the indigenous groups. However, no study has ever been done on the impact of this educational method on the communication problems of this group. This study aimed to determine the impact of group therapy training on the communication problems of Afghan immigrants. METHODS: This was a clinical trial study. Eighty-eight Afghan men were investigated. The sampling method was simple sampling. Thereafter, the study subjects were divided randomly into two groups, test and control, based on the inclusion criteria. The data collection tool was a self-made questionnaire about social problems. For analyzing the data, SPSS software, the independent t-test and the paired t-test were used. RESULTS: Reviewing the data indicated a lower mean score of social problems in social communication after implementing the group therapy training compared with before implementing the group therapy training. The paired t-test showed a significant difference between mean scores of the social communication problems before and after the implementation of group therapy training. CONCLUSIONS: Given the effectiveness of the intervention, group therapy training on social problems in the social communication of Afghan immigrants is recommended. This program should be part of the continuous education and training of Afghan immigrants. PMID:22224098

  19. MPPhys—A many-particle simulation package for computational physics education

    NASA Astrophysics Data System (ADS)

    Müller, Thomas

    2014-03-01

    In a first course on classical mechanics, elementary physical processes like elastic two-body collisions, the mass-spring model, or the gravitational two-body problem are discussed in detail. The continuation to many-body systems, however, is deferred to graduate courses, although the underlying equations of motion are essentially the same and although there is strong motivation, for high-school students in particular, because of the use of particle systems in computer games. The missing link between the simple and the more complex problem is a basic introduction to solving the equations of motion numerically, which could be illustrated by means of the Euler method. The many-particle physics simulation package MPPhys offers a platform to experiment with simple particle simulations. The aim is to give a basic idea of how to implement many-particle simulations and how simulation and visualization can be combined for interactive visual explorations. Catalogue identifier: AERR_v1_0 Program summary URL:http://cpc.cs.qub.ac.uk/summaries/AERR_v1_0.html Program obtainable from: CPC Program Library, Queen’s University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 111327 No. of bytes in distributed program, including test data, etc.: 608411 Distribution format: tar.gz Programming language: C++, OpenGL, GLSL, OpenCL. Computer: Linux and Windows platforms with OpenGL support. Operating system: Linux and Windows. RAM: Source Code 4.5 MB Complete package 242 MB Classification: 14, 16.9. External routines: OpenGL, OpenCL Nature of problem: Integrate N-body simulations, mass-spring models Solution method: Numerical integration of N-body-simulations, 3D-Rendering via OpenGL. Running time: Problem dependent
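
    As a sketch of the "missing link" the abstract refers to, the following integrates the gravitational two-body problem with the explicit Euler method; units, masses, and step size are arbitrary, and the slow energy drift it exhibits is the usual motivation for better integrators.

```python
import numpy as np

G, m1, m2 = 1.0, 1.0, 0.001
r = np.array([1.0, 0.0])            # relative position of body 2 w.r.t. body 1
v = np.array([0.0, 1.0])            # roughly circular initial velocity
dt, steps = 1e-3, 20000

for _ in range(steps):
    a = -G * (m1 + m2) * r / np.linalg.norm(r) ** 3
    r = r + v * dt                  # explicit Euler update
    v = v + a * dt

energy = 0.5 * v @ v - G * (m1 + m2) / np.linalg.norm(r)
print("final radius:", np.linalg.norm(r), " specific energy:", energy)
```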

  20. Class and Home Problems: Humidification, a True "Home" Problem for the Chemical Engineer

    ERIC Educational Resources Information Center

    Condoret, Jean-Stephane

    2012-01-01

    The problem of maintaining hygrothermal comfort in a house is addressed using the chemical engineer's toolbox. A simple dynamic modelling proved to give a good description of the humidification of the house in winter, using a domestic humidifier. Parameters of the model were identified from a simple experiment. Surprising results, especially…

  1. The Effects of Lunar Dust on EVA Systems During the Apollo Missions

    NASA Technical Reports Server (NTRS)

    Gaier, James R.

    2005-01-01

    Mission documents from the six Apollo missions that landed on the lunar surface have been studied in order to catalog the effects of lunar dust on Extra-Vehicular Activity (EVA) systems, primarily the Apollo surface space suit. It was found that the effects could be sorted into nine categories: vision obscuration, false instrument readings, dust coating and contamination, loss of traction, clogging of mechanisms, abrasion, thermal control problems, seal failures, and inhalation and irritation. Although simple dust mitigation measures were sufficient to address some of the problems (e.g., loss of traction), it was found that these measures were ineffective against many of the more serious problems (e.g., clogging, abrasion, diminished heat rejection). The severity of the dust problems was consistently underestimated by ground tests, indicating a need to develop better simulation facilities and procedures.

  2. The Effects of Lunar Dust on EVA Systems During the Apollo Missions

    NASA Technical Reports Server (NTRS)

    Gaier, James R.

    2007-01-01

    Mission documents from the six Apollo missions that landed on the lunar surface have been studied in order to catalog the effects of lunar dust on Extra-Vehicular Activity (EVA) systems, primarily the Apollo surface space suit. It was found that the effects could be sorted into nine categories: vision obscuration, false instrument readings, dust coating and contamination, loss of traction, clogging of mechanisms, abrasion, thermal control problems, seal failures, and inhalation and irritation. Although simple dust mitigation measures were sufficient to address some of the problems (e.g., loss of traction), it was found that these measures were ineffective against many of the more serious problems (e.g., clogging, abrasion, diminished heat rejection). The severity of the dust problems was consistently underestimated by ground tests, indicating a need to develop better simulation facilities and procedures.

  3. Using Laboratory Homework to Facilitate Skill Integration and Assess Understanding in Intermediate Physics Courses

    NASA Astrophysics Data System (ADS)

    Johnston, Marty; Jalkio, Jeffrey

    2013-04-01

    By the time students have reached the intermediate-level physics courses they have been exposed to a broad set of analytical, experimental, and computational skills. However, their ability to independently integrate these skills into the study of a physical system is often weak. To address this weakness and assess their understanding of the underlying physical concepts we have introduced laboratory homework into lecture-based, junior-level theoretical mechanics and electromagnetics courses. A laboratory homework set replaces a traditional one and emphasizes the analysis of a single system. In an exercise, students use analytical and computational tools to predict the behavior of a system and design a simple measurement to test their model. The laboratory portion of the exercises is straightforward and the emphasis is on concept integration and application. The short student reports we collect have revealed misconceptions that were not apparent in reviewing the traditional homework and test problems. Work continues on refining the current problems and expanding the problem sets.

  4. A simple level set method for solving Stefan problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, S.; Merriman, B.; Osher, S.

    1997-07-15

    Discussed in this paper is an implicit finite difference scheme for solving a heat equation and a simple level set method for capturing the interface between solid and liquid phases which are used to solve Stefan problems.
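
    As a sketch of the first ingredient mentioned (the implicit finite difference scheme for the heat equation, without the level set interface capturing), the following solves the one-dimensional heat equation with backward Euler time stepping; grid size and time step are arbitrary choices.

```python
# Backward Euler for u_t = u_xx on (0, 1) with u = 0 at both ends.
import numpy as np

N, dt, steps = 50, 1e-3, 200
h = 1.0 / (N + 1)
x = np.linspace(h, 1.0 - h, N)
u = np.sin(np.pi * x)                          # initial temperature profile

# (I - dt * Laplacian) u_new = u_old, with the standard 3-point Laplacian
A = np.eye(N) * (1 + 2 * dt / h**2)
A += np.diag([-dt / h**2] * (N - 1), 1) + np.diag([-dt / h**2] * (N - 1), -1)

for _ in range(steps):
    u = np.linalg.solve(A, u)

# exact solution decays like exp(-pi^2 t); print the max error at t = steps*dt
print(np.max(np.abs(u - np.exp(-np.pi**2 * dt * steps) * np.sin(np.pi * x))))
```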

  5. Genetic Algorithms and Their Application to the Protein Folding Problem

    DTIC Science & Technology

    1993-12-01

    and symbolic methods, random methods such as Monte Carlo simulation and simulated annealing, distance geometry, and molecular dynamics. Many of these...calculated energies with those obtained using the molecular simulation software package called CHARMm. 9) Test both the simple and parallel simple genetic...homology-based, and simplification techniques. 3.2.1 Molecular Dynamics. Perhaps the most natural approach is to actually simulate the folding process. This

  6. Noninvasive Tests Do Not Accurately Differentiate Nonalcoholic Steatohepatitis From Simple Steatosis: A Systematic Review and Meta-analysis.

    PubMed

    Verhaegh, Pauline; Bavalia, Roisin; Winkens, Bjorn; Masclee, Ad; Jonkers, Daisy; Koek, Ger

    2018-06-01

    Nonalcoholic fatty liver disease is a rapidly increasing health problem. Liver biopsy analysis is the most sensitive test to differentiate between nonalcoholic steatohepatitis (NASH) and simple steatosis (SS), but noninvasive methods are needed. We performed a systematic review and meta-analysis of noninvasive tests for differentiating NASH from SS, focusing on blood markers. We performed a systematic search of the PubMed, Medline and Embase (1990-2016) databases using defined keywords, limited to full-text papers in English and human adults, and identified 2608 articles. Two independent reviewers screened the articles and identified 122 eligible articles that used liver biopsy as the reference standard. If at least 2 studies were available, pooled sensitivity (sens_p) and specificity (spec_p) values were determined using the Meta-Analysis Package for R (metafor). In the 122 studies analyzed, 219 different blood markers (107 single markers and 112 scoring systems) were identified to differentiate NASH from simple steatosis, and 22 other diagnostic tests were studied. Markers identified related to several pathophysiological mechanisms. The markers analyzed in the largest proportions of studies were alanine aminotransferase (sens_p, 63.5% and spec_p, 74.4%) within routine biochemical tests, adiponectin (sens_p, 72.0% and spec_p, 75.7%) within inflammatory markers, CK18-M30 (sens_p, 68.4% and spec_p, 74.2%) within markers of cell death or proliferation and homeostatic model assessment of insulin resistance (sens_p, 69.0% and spec_p, 72.7%) within the metabolic markers. Two scoring systems could also be pooled: the NASH test (differentiated NASH from borderline NASH plus simple steatosis with 22.9% sens_p and 95.3% spec_p) and the GlycoNASH test (67.1% sens_p and 63.8% spec_p). In the meta-analysis, we found no test to differentiate NASH from SS with a high level of pooled sensitivity and specificity (≥80%). However, some blood markers, when included in scoring systems in single studies, identified patients with NASH with ≥80% sensitivity and specificity. Replication studies and more standardized study designs are urgently needed. At present, no marker or scoring system can be recommended for use in clinical practice to differentiate NASH from simple steatosis. Copyright © 2018 AGA Institute. Published by Elsevier Inc. All rights reserved.

  7. Diagnostics and Active Control of Aircraft Interior Noise

    NASA Technical Reports Server (NTRS)

    Fuller, C. R.

    1998-01-01

    This project deals with developing advanced methods for investigating and controlling interior noise in aircraft. The work concentrates on developing and applying the techniques of Near Field Acoustic Holography (NAH) and Principal Component Analysis (PCA) to the aircraft interior noise dynamic problem. This involves investigating the current state of the art, developing new techniques and then applying them to the particular problem being studied. The knowledge gained under the first part of the project was then used to develop and apply new, advanced noise control techniques for reducing interior noise. A new fully active control approach based on the PCA was developed and implemented on a test cylinder. Finally an active-passive approach based on tunable vibration absorbers was to be developed and analytically applied to a range of test structures from simple plates to aircraft fuselages.

  8. The influence of strain rate and the effect of friction on the forging load in simple upsetting and closed die forging

    NASA Astrophysics Data System (ADS)

    Klemz, Francis B.

    Forging provides an elegant solution to the problem of producing complicated shapes from heated metal. This study attempts to relate some of the important parameters involved in simple upsetting, closed-die forging and extrusion forging. A literature survey showed some of the empirical, graphical and statistical methods of load prediction together with analytical methods of estimating load and energy. Investigations of the effects of high strain rate and temperature on the stress-strain properties of materials are also evident. In the present study, special equipment including an experimental drop hammer and various die-sets has been designed and manufactured. Instrumentation to measure the load/time and displacement/time behaviour of the deformed metal has been incorporated and calibrated. A high-speed camera was used to record the deformation behaviour of test pieces used in the simple upsetting tests. Dynamic and quasi-static material properties for the test materials, lead and aluminium alloy, were measured using the drop hammer and a compression-test machine. Analytically, two separate mathematical solutions have been developed: a numerical technique using a lumped-mass model for the analysis of simple upsetting and closed-die forging and, for extrusion forging, an analysis which equates the shear and compression energy requirements to the work done by the forging load. Cylindrical test pieces were used for all the experiments, and both dry and lubricated test conditions were investigated. The static and dynamic tests provide data on load, energy and the profile of the deformed billet. In addition, for the extrusion forging, both single-ended and double-ended tests were conducted. Material dependency was also examined by a further series of tests on aluminium and copper. Comparison of the experimental and theoretical results was made, which shows clearly the effects of friction and high strain rate on the load and energy requirements and the deformation mode of the billet. For the axisymmetric shapes considered, it was found that the load, energy requirement and profile could be predicted with reasonable accuracy.

  9. On inconsistency in frictional granular systems

    NASA Astrophysics Data System (ADS)

    Alart, Pierre; Renouf, Mathieu

    2018-04-01

    Numerical simulation of granular systems is often based on a discrete element method. The nonsmooth contact dynamics approach can be used to solve a broad range of granular problems, especially involving rigid bodies. However, difficulties could be encountered and hamper successful completion of some simulations. The slow convergence of the nonsmooth solver may sometimes be attributed to an ill-conditioned system, but the convergence may also fail. The prime aim of the present study was to identify situations that hamper the consistency of the mathematical problem to solve. Some simple granular systems were investigated in detail while reviewing and applying the related theoretical results. A practical alternative is briefly analyzed and tested.

  10. Crash test for the Copenhagen problem.

    PubMed

    Nagler, Jan

    2004-06-01

    The Copenhagen problem is a simple model in celestial mechanics. It serves to investigate the behavior of a small body under the gravitational influence of two equally heavy primary bodies. We present a partition of orbits into classes of various kinds of regular motion, chaotic motion, escape and crash. Collisions of the small body onto one of the primaries turn out to be unexpectedly frequent, and their probability displays a scale-free dependence on the size of the primaries. The analysis reveals a high degree of complexity so that long term prediction may become a formidable task. Moreover, we link the results to chaotic scattering theory and the theory of leaking Hamiltonian systems.
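
    For reference, the Copenhagen problem is the planar circular restricted three-body problem with equal primary masses (mass parameter μ = 1/2). In the usual rotating, nondimensional frame the small body obeys the standard Newtonian textbook equations below; this is background material, not reproduced from the paper itself:

    ```latex
    \ddot{x} - 2\dot{y} = \frac{\partial \Omega}{\partial x}, \qquad
    \ddot{y} + 2\dot{x} = \frac{\partial \Omega}{\partial y}, \qquad
    \Omega(x,y) = \tfrac{1}{2}\left(x^{2}+y^{2}\right) + \frac{1-\mu}{r_{1}} + \frac{\mu}{r_{2}}
    ```

    where r1 and r2 are the distances to the two primaries and μ = 1/2. A "crash" in the sense of the abstract corresponds to a trajectory entering a small but finite radius around either primary.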

  11. Language-specific memory for everyday arithmetic facts in Chinese-English bilinguals.

    PubMed

    Chen, Yalin; Yanke, Jill; Campbell, Jamie I D

    2016-04-01

    The role of language in memory for arithmetic facts remains controversial. Here, we examined transfer of memory training for evidence that bilinguals may acquire language-specific memory stores for everyday arithmetic facts. Chinese-English bilingual adults (n = 32) were trained on different subsets of simple addition and multiplication problems. Each operation was trained in one language or the other. The subsequent test phase included all problems with addition and multiplication alternating across trials in two blocks, one in each language. Averaging over training language, the response time (RT) gains for trained problems relative to untrained problems were greater in the trained language than in the untrained language. Subsequent analysis showed that English training produced larger RT gains for trained problems relative to untrained problems in English at test relative to the untrained Chinese language. In contrast, there was no evidence with Chinese training that problem-specific RT gains differed between Chinese and the untrained English language. We propose that training in Chinese promoted a translation strategy for English arithmetic (particularly multiplication) that produced strong cross-language generalization of practice, whereas training in English strengthened relatively weak, English-language arithmetic memories and produced little generalization to Chinese (i.e., English training did not induce an English translation strategy for Chinese language trials). The results support the existence of language-specific strengthening of memory for everyday arithmetic facts.

  12. A review on simple assembly line balancing type-e problem

    NASA Astrophysics Data System (ADS)

    Jusop, M.; Rashid, M. F. F. Ab

    2015-12-01

    Simple assembly line balancing (SALB) is an attempt to assign tasks to the various workstations along the line so that the precedence relations are satisfied and some performance measure is optimised. Advanced algorithmic approaches are necessary to solve large-scale problems, as SALB is NP-hard. Only a few studies focus on the simple assembly line balancing problem of Type E (SALB-E), since it is a general and complex problem. The SALB-E problem is the SALB variant that considers the number of workstations and the cycle time simultaneously in order to maximise line efficiency. This paper reviews previous work done to optimise the SALB-E problem. It also reviews the Genetic Algorithm approaches that have been used to optimise SALB-E. From this review, it was found that none of the existing works consider resource constraints in the SALB-E problem, especially machine and tool constraints. Research on SALB-E will contribute to the improvement of productivity in real industrial applications.
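
    For context, in standard SALB notation (not taken from the review itself), line efficiency for m workstations with cycle time c and task times t_i is usually defined as

    ```latex
    E = \frac{\sum_{i=1}^{n} t_{i}}{m \cdot c}
    ```

    so the Type-E problem searches over m and c jointly to drive E toward 1, rather than fixing one of them as in the Type-1 or Type-2 variants.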

  13. Application of artificial neural networks to identify equilibration in computer simulations

    NASA Astrophysics Data System (ADS)

    Leibowitz, Mitchell H.; Miller, Evan D.; Henry, Michael M.; Jankowski, Eric

    2017-11-01

    Determining which microstates generated by a thermodynamic simulation are representative of the ensemble for which sampling is desired is a ubiquitous, underspecified problem. Artificial neural networks are one type of machine learning algorithm that can provide a reproducible way to apply pattern recognition heuristics to underspecified problems. Here we use the open-source TensorFlow machine learning library and apply it to the problem of identifying which hypothetical observation sequences from a computer simulation are “equilibrated” and which are not. We generate training populations and test populations of observation sequences with embedded linear and exponential correlations. We train a two-neuron artificial network to distinguish the correlated and uncorrelated sequences. We find that this simple network is good enough for > 98% accuracy in identifying exponentially-decaying energy trajectories from molecular simulations.
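
    A minimal sketch of the kind of classifier described is shown below, using the tf.keras API and synthetic sequences; the actual network, features, and training data in the paper may differ.

    ```python
    import numpy as np
    import tensorflow as tf

    rng = np.random.default_rng(0)
    n_steps, n_seq = 100, 500
    t = np.linspace(0.0, 1.0, n_steps)

    # Class 1: exponentially decaying trajectories (not yet equilibrated).
    decaying = np.exp(-5.0 * t) + 0.05 * rng.normal(size=(n_seq, n_steps))
    # Class 0: stationary noise around a constant (equilibrated).
    flat = 0.05 * rng.normal(size=(n_seq, n_steps))

    X = np.vstack([decaying, flat]).astype("float32")
    y = np.concatenate([np.ones(n_seq), np.zeros(n_seq)]).astype("float32")

    # Tiny network: a two-neuron hidden layer feeding a sigmoid output.
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(n_steps,)),
        tf.keras.layers.Dense(2, activation="relu"),
        tf.keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
    model.fit(X, y, epochs=20, batch_size=32, validation_split=0.2, verbose=0)
    print(model.evaluate(X, y, verbose=0))
    ```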

  14. ParaExp Using Leapfrog as Integrator for High-Frequency Electromagnetic Simulations

    NASA Astrophysics Data System (ADS)

    Merkel, M.; Niyonzima, I.; Schöps, S.

    2017-12-01

    Recently, ParaExp was proposed for the time integration of linear hyperbolic problems. It splits the time interval of interest into subintervals and computes the solution on each subinterval in parallel. The overall solution is decomposed into a particular solution defined on each subinterval with zero initial conditions and a homogeneous solution propagated by the matrix exponential applied to the initial conditions. The efficiency of the method depends on fast approximations of this matrix exponential based on recent results from numerical linear algebra. This paper deals with the application of ParaExp in combination with Leapfrog to electromagnetic wave problems in time domain. Numerical tests are carried out for a simple toy problem and a realistic spiral inductor model discretized by the Finite Integration Technique.
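
    The homogeneous/particular splitting at the heart of ParaExp can be checked on a small linear system y′ = A y + g(t). The sketch below uses a hypothetical A and forcing g (not the electromagnetic model of the paper): by linearity, the solution equals the matrix exponential applied to the initial condition plus a zero-initial-condition particular solution, which is exactly what ParaExp distributes across subintervals and processors.

    ```python
    import numpy as np
    from scipy.linalg import expm
    from scipy.integrate import solve_ivp

    # Hypothetical linear test system y' = A y + g(t).
    A = np.array([[0.0, 1.0], [-4.0, 0.0]])
    def g(t):
        return np.array([0.0, np.sin(3.0 * t)])

    def rhs(t, y):
        return A @ y + g(t)

    y0 = np.array([1.0, 0.0])
    T = 2.0

    # Reference: integrate the full inhomogeneous problem.
    full = solve_ivp(rhs, (0.0, T), y0, rtol=1e-10, atol=1e-12)

    # Particular part: zero initial condition, forcing included.
    part = solve_ivp(rhs, (0.0, T), np.zeros_like(y0), rtol=1e-10, atol=1e-12)

    # Homogeneous part: propagate the initial condition with the matrix exponential.
    hom = expm(A * T) @ y0

    print(np.allclose(full.y[:, -1], part.y[:, -1] + hom, atol=1e-6))  # True
    ```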

  15. Asymmetrically dominated choice problems, the isolation hypothesis and random incentive mechanisms.

    PubMed

    Cox, James C; Sadiraj, Vjollca; Schmidt, Ulrich

    2014-01-01

    This paper presents an experimental study of the random incentive mechanisms which are a standard procedure in economic and psychological experiments. Random incentive mechanisms have several advantages but are incentive-compatible only if responses to the single tasks are independent. This is true if either the independence axiom of expected utility theory or the isolation hypothesis of prospect theory holds. We present a simple test of this in the context of choice under risk. In the baseline (one task) treatment we observe risk behavior in a given choice problem. We show that by integrating a second, asymmetrically dominated choice problem in a random incentive mechanism risk behavior can be manipulated systematically. This implies that the isolation hypothesis is violated and the random incentive mechanism does not elicit true preferences in our example.

  16. Elasto visco-plastic flow with special attention to boundary conditions

    NASA Technical Reports Server (NTRS)

    Shimazaki, Y.; Thompson, E. G.

    1981-01-01

    A simple but nontrivial steady-state creeping elasto visco-plastic (Maxwell fluid) radial flow problem is analyzed, with special attention given to the effects of the boundary conditions. Solutions are obtained through integration of a governing equation on stress using the Runge-Kutta method for initial value problems and finite differences for boundary value problems. A more general approach through the finite element method, an approach that solves for the velocity field rather than the stress field and that is applicable to a wide range of problems, is presented and tested using the radial flow example. It is found that steady-state flows of elasto visco-plastic materials are strongly influenced by the state of stress of material as it enters the region of interest. The importance of this boundary or initial condition in analyses involving materials coming into control volumes from unusual stress environments is emphasized.

  17. Recent Advances in Agglomerated Multigrid

    NASA Technical Reports Server (NTRS)

    Nishikawa, Hiroaki; Diskin, Boris; Thomas, James L.; Hammond, Dana P.

    2013-01-01

    We report recent advancements of the agglomerated multigrid methodology for complex flow simulations on fully unstructured grids. An agglomerated multigrid solver is applied to a wide range of test problems from simple two-dimensional geometries to realistic three-dimensional configurations. The solver is evaluated against a single-grid solver and, in some cases, against a structured-grid multigrid solver. Grid and solver issues are identified and overcome, leading to significant improvements over single-grid solvers.

  18. Pre/Post Data Analysis - Simple or Is It?

    NASA Technical Reports Server (NTRS)

    Feiveson, Al; Fiedler, James; Ploutz-Snyder, Robert

    2011-01-01

    This slide presentation reviews some of the problems of data analysis in analyzing pre and post data. Using as an example, ankle extensor strength (AES) experiments, to measure bone density loss during bed rest, the presentation discusses several questions: (1) How should we describe change? (2) Common analysis methods for comparing post to pre results. (3) What do we mean by "% change"? and (4) What are we testing when we compare % changes?
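
    As a concrete illustration of the "% change" question, the snippet below uses hypothetical pre/post values (not the AES data from the presentation) and contrasts the two common summaries; the paired test on raw differences and the test on per-subject percent changes need not agree, which is part of the point being raised.

    ```python
    import numpy as np
    from scipy import stats

    # Hypothetical pre/post measurements for 8 subjects.
    pre  = np.array([52.0, 47.5, 60.2, 55.1, 49.9, 58.3, 51.0, 62.4])
    post = np.array([49.1, 44.0, 58.7, 50.3, 48.2, 55.0, 47.9, 60.1])

    diff = post - pre                    # absolute change
    pct  = 100.0 * (post - pre) / pre    # per-subject % change

    print(stats.ttest_rel(post, pre))    # paired test: mean(post - pre) = 0?
    print(stats.ttest_1samp(pct, 0.0))   # one-sample test: mean(% change) = 0?
    ```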

  19. The Shock and Vibration Bulletin. Part 3. Structural Dynamics, Machinery Dynamics and Vibration Problems

    DTIC Science & Technology

    1984-06-01

    and to thermopile, but with a dynamically non-similar control. Response limiting was accomplished by electric heat source. The test transient measuring...pulse Improvements = Final Report, Space tests were found to be reasonably simple to and Communications Group, Hughes implement and control. The time...coolant flow components, experimental studies are generally from the core is constricted by the presence of the control rod drive line (CRDL

  20. A verification library for multibody simulation software

    NASA Technical Reports Server (NTRS)

    Kim, Sung-Soo; Haug, Edward J.; Frisch, Harold P.

    1989-01-01

    A multibody dynamics verification library that maintains and manages test and validation data is proposed, based on RRC Robot arm and CASE backhoe validation and a comparative study of DADS, DISCOS, and CONTOPS, which are existing public-domain and commercial multibody dynamics simulation programs. Using simple representative problems, simulation results from each program are cross-checked, and the validation results are presented. Functionalities of the verification library are defined in order to automate the validation procedure.

  1. Development of new dyes for use in integrated optical sensors.

    PubMed

    Citterio, D; Rásonyi, S; Spichiger, U E

    1996-03-01

    New chromoionophores have been developed, focused on NIR applications so that optode membranes may be used in monolithically integrated optical sensors. The wavelength of maximum absorbance has been estimated for a new model compound by the Pariser-Parr-Pople (PPP) method. Several cyanine type dyes have been tested as membrane chromoionophores. Membrane composition has been altered to overcome solubility problems. In this way, simple pH-sensitive optode membranes have been produced.

  2. Some anticipated contributions to core fluid dynamics from the GRM

    NASA Technical Reports Server (NTRS)

    Vanvorhies, C.

    1985-01-01

    It is broadly maintained that the secular variation (SV) of the large scale geomagnetic field contains information on the fluid dynamics of Earth's electrically conducting outer core. The electromagnetic theory appropriate to a simple Earth model has recently been combined with reduced geomagnetic data in order to extract some of this information and ascertain its significance. The simple Earth model consists of a rigid, electrically insulating mantle surrounding a spherical, inviscid, and perfectly conducting liquid outer core. This model was tested against seismology by using truncated spherical harmonic models of the observed geomagnetic field to locate Earth's core-mantle boundary, CMB. Further electromagnetic theory has been developed and applied to the problem of estimating the horizontal fluid motion just beneath CMB. Of particular geophysical interest are the hypotheses that these motions: (1) include appreciable surface divergence indicative of vertical motion at depth, and (2) are steady for time intervals of a decade or more. In addition to the extended testing of the basic Earth model, the proposed GRM provides a unique opportunity to test these dynamical hypotheses.

  3. Noticing relevant problem features: activating prior knowledge affects problem solving by guiding encoding

    PubMed Central

    Crooks, Noelle M.; Alibali, Martha W.

    2013-01-01

    This study investigated whether activating elements of prior knowledge can influence how problem solvers encode and solve simple mathematical equivalence problems (e.g., 3 + 4 + 5 = 3 + __). Past work has shown that such problems are difficult for elementary school students (McNeil and Alibali, 2000). One possible reason is that children's experiences in math classes may encourage them to think about equations in ways that are ultimately detrimental. Specifically, children learn a set of patterns that are potentially problematic (McNeil and Alibali, 2005a): the perceptual pattern that all equations follow an “operations = answer” format, the conceptual pattern that the equal sign means “calculate the total”, and the procedural pattern that the correct way to solve an equation is to perform all of the given operations on all of the given numbers. Upon viewing an equivalence problem, knowledge of these patterns may be reactivated, leading to incorrect problem solving. We hypothesized that these patterns may negatively affect problem solving by influencing what people encode about a problem. To test this hypothesis in children would require strengthening their misconceptions, and this could be detrimental to their mathematical development. Therefore, we tested this hypothesis in undergraduate participants. Participants completed either control tasks or tasks that activated their knowledge of the three patterns, and were then asked to reconstruct and solve a set of equivalence problems. Participants in the knowledge activation condition encoded the problems less well than control participants. They also made more errors in solving the problems, and their errors resembled the errors children make when solving equivalence problems. Moreover, encoding performance mediated the effect of knowledge activation on equivalence problem solving. Thus, one way in which experience may affect equivalence problem solving is by influencing what students encode about the equations. PMID:24324454

  4. Reducing the two-body problem in scalar-tensor theories to the motion of a test particle: A scalar-tensor effective-one-body approach

    NASA Astrophysics Data System (ADS)

    Julié, Félix-Louis

    2018-01-01

    Starting from the second post-Keplerian (2PK) Hamiltonian describing the conservative part of the two-body dynamics in massless scalar-tensor (ST) theories, we build an effective-one-body (EOB) Hamiltonian which is a ν deformation (where ν =0 is the test mass limit) of the analytically known ST Hamiltonian of a test particle. This ST-EOB Hamiltonian leads to a simple (yet canonically equivalent) formulation of the conservative 2PK two-body problem, but also defines a resummation of the dynamics which is well-suited to ST regimes that depart strongly from general relativity (GR) and which may provide information on the strong field dynamics; in particular, the ST innermost stable circular orbit location and associated orbital frequency. Results will be compared and contrasted with those deduced from the ST-deformation of the (5PN) GR-EOB Hamiltonian previously obtained in [Phys. Rev. D 95, 124054 (2017), 10.1103/PhysRevD.95.124054].

  5. Using hybrid implicit Monte Carlo diffusion to simulate gray radiation hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Gentile, Nick

    This work describes how to couple a hybrid Implicit Monte Carlo Diffusion (HIMCD) method with a Lagrangian hydrodynamics code to evaluate the coupled radiation hydrodynamics equations. This HIMCD method dynamically applies Implicit Monte Carlo Diffusion (IMD) [1] to regions of a problem that are opaque and diffusive while applying standard Implicit Monte Carlo (IMC) [2] to regions where the diffusion approximation is invalid. We show that this method significantly improves the computational efficiency as compared to a standard IMC/Hydrodynamics solver, when optically thick diffusive material is present, while maintaining accuracy. Two test cases are used to demonstrate the accuracy and performance of HIMCD as compared to IMC and IMD. The first is the Lowrie semi-analytic diffusive shock [3]. The second is a simple test case where the source radiation streams through optically thin material and heats a thick diffusive region of material causing it to rapidly expand. We found that HIMCD proves to be accurate, robust, and computationally efficient for these test problems.

  6. Enhanced Conformational Sampling Using Replica Exchange with Collective-Variable Tempering.

    PubMed

    Gil-Ley, Alejandro; Bussi, Giovanni

    2015-03-10

    The computational study of conformational transitions in RNA and proteins with atomistic molecular dynamics often requires suitable enhanced sampling techniques. We here introduce a novel method where concurrent metadynamics are integrated in a Hamiltonian replica-exchange scheme. The ladder of replicas is built with different strengths of the bias potential exploiting the tunability of well-tempered metadynamics. Using this method, free-energy barriers of individual collective variables are significantly reduced compared with simple force-field scaling. The introduced methodology is flexible and allows adaptive bias potentials to be self-consistently constructed for a large number of simple collective variables, such as distances and dihedral angles. The method is tested on alanine dipeptide and applied to the difficult problem of conformational sampling in a tetranucleotide.

  7. Fokker-Planck Equations of Stochastic Acceleration: A Study of Numerical Methods

    NASA Astrophysics Data System (ADS)

    Park, Brian T.; Petrosian, Vahe

    1996-03-01

    Stochastic wave-particle acceleration may be responsible for producing suprathermal particles in many astrophysical situations. The process can be described as a diffusion process through the Fokker-Planck equation. If the acceleration region is homogeneous and the scattering mean free path is much smaller than both the energy change mean free path and the size of the acceleration region, then the Fokker-Planck equation reduces to a simple form involving only the time and energy variables. In an earlier paper (Park & Petrosian 1995, hereafter Paper I), we studied the analytic properties of the Fokker-Planck equation and found analytic solutions for some simple cases. In this paper, we study the numerical methods which must be used to solve more general forms of the equation. Two classes of numerical methods are finite difference methods and Monte Carlo simulations. We examine six finite difference methods, three fully implicit and three semi-implicit, and a stochastic simulation method which uses the exact correspondence between the Fokker-Planck equation and the Itô stochastic differential equation. As discussed in Paper I, Fokker-Planck equations derived under the above approximations are singular, causing problems with boundary conditions and numerical overflow and underflow. We evaluate each method using three sample equations to test its stability, accuracy, efficiency, and robustness for both time-dependent and steady state solutions. We conclude that the most robust finite difference method is the fully implicit Chang-Cooper method, with minor extensions to account for the escape and injection terms. Other methods suffer from stability and accuracy problems when dealing with some Fokker-Planck equations. The stochastic simulation method, although simple to implement, is susceptible to Poisson noise when insufficient test particles are used and is computationally very expensive compared to the finite difference method.
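
    As a generic illustration of the fully implicit finite-difference idea discussed above (not the Chang-Cooper discretization itself, and with a plain diffusion operator standing in for the Fokker-Planck coefficients), a backward-Euler step reduces to one tridiagonal solve per time step, which is what makes the implicit schemes unconditionally stable:

    ```python
    import numpy as np
    from scipy.linalg import solve_banded

    # Hypothetical grid and coefficient for u_t = D u_xx with zero boundary values.
    nx, D, dt = 200, 1.0, 1e-3
    x = np.linspace(0.0, 1.0, nx)
    dx = x[1] - x[0]
    u = np.exp(-200.0 * (x - 0.5) ** 2)   # initial distribution

    # Backward Euler: (I - dt*D*L) u^{n+1} = u^n, with L the second-difference operator.
    r = D * dt / dx**2
    ab = np.zeros((3, nx))
    ab[0, 1:] = -r                        # super-diagonal
    ab[1, :] = 1.0 + 2.0 * r              # main diagonal
    ab[2, :-1] = -r                       # sub-diagonal

    for _ in range(100):
        u = solve_banded((1, 1), ab, u)   # one tridiagonal solve per implicit step
    ```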

  8. Exploring different strategies for imbalanced ADME data problem: case study on Caco-2 permeability modeling.

    PubMed

    Pham-The, Hai; Casañola-Martin, Gerardo; Garrigues, Teresa; Bermejo, Marival; González-Álvarez, Isabel; Nguyen-Hai, Nam; Cabrera-Pérez, Miguel Ángel; Le-Thi-Thu, Huong

    2016-02-01

    In many absorption, distribution, metabolism, and excretion (ADME) modeling problems, imbalanced data could negatively affect classification performance of machine learning algorithms. Solutions for handling imbalanced dataset have been proposed, but their application for ADME modeling tasks is underexplored. In this paper, various strategies including cost-sensitive learning and resampling methods were studied to tackle the moderate imbalance problem of a large Caco-2 cell permeability database. Simple physicochemical molecular descriptors were utilized for data modeling. Support vector machine classifiers were constructed and compared using multiple comparison tests. Results showed that the models developed on the basis of resampling strategies displayed better performance than the cost-sensitive classification models, especially in the case of oversampling data where misclassification rates for minority class have values of 0.11 and 0.14 for training and test set, respectively. A consensus model with enhanced applicability domain was subsequently constructed and showed improved performance. This model was used to predict a set of randomly selected high-permeability reference drugs according to the biopharmaceutics classification system. Overall, this study provides a comparison of numerous rebalancing strategies and displays the effectiveness of oversampling methods to deal with imbalanced permeability data problems.
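
    A minimal sketch of the oversampling strategy highlighted above, using a generic synthetic feature matrix rather than the actual Caco-2 descriptors: minority-class rows are simply resampled with replacement before fitting the support vector machine.

    ```python
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(0)

    # Hypothetical imbalanced dataset: 300 majority-class vs 100 minority-class compounds.
    X = np.vstack([rng.normal(0.0, 1.0, (300, 5)), rng.normal(1.0, 1.0, (100, 5))])
    y = np.concatenate([np.zeros(300), np.ones(100)])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    # Random oversampling: resample minority rows with replacement up to the majority count.
    minority = np.flatnonzero(y_tr == 1)
    extra = rng.choice(minority, size=(y_tr == 0).sum() - minority.size, replace=True)
    X_bal = np.vstack([X_tr, X_tr[extra]])
    y_bal = np.concatenate([y_tr, y_tr[extra]])

    clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_bal, y_bal)
    print("test accuracy:", clf.score(X_te, y_te))
    ```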

  9. Identifiability in N-mixture models: a large-scale screening test with bird data.

    PubMed

    Kéry, Marc

    2018-02-01

    Binomial N-mixture models have proven very useful in ecology, conservation, and monitoring: they allow estimation and modeling of abundance separately from detection probability using simple counts. Recently, doubts about parameter identifiability have been voiced. I conducted a large-scale screening test with 137 bird data sets from 2,037 sites. I found virtually no identifiability problems for Poisson and zero-inflated Poisson (ZIP) binomial N-mixture models, but negative-binomial (NB) models had problems in 25% of all data sets. The corresponding multinomial N-mixture models had no problems. Parameter estimates under Poisson and ZIP binomial and multinomial N-mixture models were extremely similar. Identifiability problems became a little more frequent with smaller sample sizes (267 and 50 sites), but were unaffected by whether the models did or did not include covariates. Hence, binomial N-mixture model parameters with Poisson and ZIP mixtures typically appeared identifiable. In contrast, NB mixtures were often unidentifiable, which is worrying since these were often selected by Akaike's information criterion. Identifiability of binomial N-mixture models should always be checked. If problems are found, simpler models, integrated models that combine different observation models or the use of external information via informative priors or penalized likelihoods, may help. © 2017 by the Ecological Society of America.
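
    The core of the Poisson binomial N-mixture model is a likelihood that marginalises the binomial observation model over the unobserved abundance at each site. The sketch below fits a constant-lambda, constant-p model to simulated counts by truncating the infinite sum over abundance; it illustrates the model structure only, not the covariate models or software used in the screening test.

    ```python
    import numpy as np
    from scipy import stats, optimize

    rng = np.random.default_rng(1)

    # Simulate counts: 100 sites, 3 repeat visits, true lambda = 4, detection p = 0.5.
    n_sites, n_visits, lam_true, p_true = 100, 3, 4.0, 0.5
    N = rng.poisson(lam_true, n_sites)
    counts = rng.binomial(N[:, None], p_true, (n_sites, n_visits))

    N_max = 50  # truncation point for the sum over latent abundance

    def neg_log_lik(params):
        lam = np.exp(params[0])                      # log-link for abundance
        p = 1.0 / (1.0 + np.exp(-params[1]))         # logit-link for detection
        Ns = np.arange(N_max + 1)
        prior = stats.poisson.pmf(Ns, lam)           # P(N = n)
        ll = 0.0
        for i in range(n_sites):
            # P(y_i | N) across visits, then marginalised over N.
            like_N = np.prod(stats.binom.pmf(counts[i][:, None], Ns[None, :], p), axis=0)
            ll += np.log(np.sum(prior * like_N) + 1e-300)
        return -ll

    fit = optimize.minimize(neg_log_lik, x0=[np.log(2.0), 0.0], method="Nelder-Mead")
    print("lambda, p:", np.exp(fit.x[0]), 1.0 / (1.0 + np.exp(-fit.x[1])))
    ```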

  10. Using Algorithms in Solving Synapse Transmission Problems.

    ERIC Educational Resources Information Center

    Stencel, John E.

    1992-01-01

    Explains how a simple three-step algorithm can aid college students in solving synapse transmission problems. Reports that not all of the students completely understood the algorithm. However, many learn a simple working model of synaptic transmission and understand quantitatively why an impulse will pass across a synapse. Students also see…

  11. Simple adaptive control for quadcopters with saturated actuators

    NASA Astrophysics Data System (ADS)

    Borisov, Oleg I.; Bobtsov, Alexey A.; Pyrkin, Anton A.; Gromov, Vladislav S.

    2017-01-01

    The stabilization problem for quadcopters with saturated actuators is considered. A simple adaptive output control approach is proposed. The control law "consecutive compensator" is augmented with the auxiliary integral loop and anti-windup scheme. Efficiency of the obtained regulator was confirmed by simulation of the quadcopter control problem.

  12. High order methods for the integration of the Bateman equations and other problems of the form of y′ = F(y,t)y

    NASA Astrophysics Data System (ADS)

    Josey, C.; Forget, B.; Smith, K.

    2017-12-01

    This paper introduces two families of A-stable algorithms for the integration of y′ = F(y, t)y: the extended predictor-corrector (EPC) and the exponential-linear (EL) methods. The structure of the algorithm families is described, and the method of derivation of the coefficients presented. The new algorithms are then tested on a simple deterministic problem and a Monte Carlo isotopic evolution problem. The EPC family is shown to be only second order for systems of ODEs. However, the EPC-RK45 algorithm had the highest accuracy on the Monte Carlo test, requiring at least a factor of 2 fewer function evaluations to achieve a given accuracy than a second order predictor-corrector method (center extrapolation / center midpoint method) with regard to Gd-157 concentration. Members of the EL family can be derived to at least fourth order. The EL3 and the EL4 algorithms presented are shown to be third and fourth order respectively on the systems-of-ODEs test. In the Monte Carlo test, these methods did not overtake the accuracy of EPC methods before statistical uncertainty dominated the error. The statistical properties of the algorithms were also analyzed during the Monte Carlo problem. The new methods are shown to yield smaller standard deviations on final quantities as compared to the reference predictor-corrector method, by up to a factor of 1.4.
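
    For orientation, the second-order reference scheme mentioned above (a predictor-corrector built on matrix exponentials) can be sketched for y′ = F(y) y with a hypothetical state-dependent matrix standing in for a depletion operator; the EPC and EL families described in the paper refine and extend this basic structure.

    ```python
    import numpy as np
    from scipy.linalg import expm

    def F(y):
        # Hypothetical state-dependent matrix (stands in for a Bateman/depletion operator).
        return np.array([[-1.0 - 0.1 * y[1], 0.0],
                         [ 1.0 + 0.1 * y[1], -0.5]])

    def predictor_corrector_step(y, dt):
        # Predictor: freeze F at the current state and advance with a matrix exponential.
        y_pred = expm(F(y) * dt) @ y
        # Corrector: re-evaluate F at the predicted state and average the two operators.
        A_avg = 0.5 * (F(y) + F(y_pred))
        return expm(A_avg * dt) @ y

    y = np.array([1.0, 0.0])
    for _ in range(100):
        y = predictor_corrector_step(y, 0.05)
    print(y)
    ```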

  13. A Simple Label Switching Algorithm for Semisupervised Structural SVMs.

    PubMed

    Balamurugan, P; Shevade, Shirish; Sundararajan, S

    2015-10-01

    In structured output learning, obtaining labeled data for real-world applications is usually costly, while unlabeled examples are available in abundance. Semisupervised structured classification deals with a small number of labeled examples and a large number of unlabeled structured data. In this work, we consider semisupervised structural support vector machines with domain constraints. The optimization problem, which in general is not convex, contains the loss terms associated with the labeled and unlabeled examples, along with the domain constraints. We propose a simple optimization approach that alternates between solving a supervised learning problem and a constraint matching problem. Solving the constraint matching problem is difficult for structured prediction, and we propose an efficient and effective label switching method to solve it. The alternating optimization is carried out within a deterministic annealing framework, which helps in effective constraint matching and avoiding poor local minima, which are not very useful. The algorithm is simple and easy to implement. Further, it is suitable for any structured output learning problem where exact inference is available. Experiments on benchmark sequence labeling data sets and a natural language parsing data set show that the proposed approach, though simple, achieves comparable generalization performance.

  14. An investigation of rooftop STOL port aerodynamics

    NASA Technical Reports Server (NTRS)

    Blanton, J. N.; Parker, H. M.

    1972-01-01

    An investigation into aerodynamic problems associated with large building rooftop STOLports was performed. Initially, a qualitative flow visualization study indicated two essential problems: (1) the establishment of smooth, steady, attached flow over the rooftop, and (2) the generation of an acceptable crosswind profile once (1) has been achieved. This study indicated that (1) could be achieved by attaching circular-arc rounded edge extensions to the upper edges of the building and that crosswind profiles could be modified by the addition of porous vertical fences to the lateral edges of the rooftop. Important fence parameters associated with crosswind alteration were found to be solidity, fence element number and spacing. Large-scale building-induced velocity fluctuations were discovered for most configurations tested and a possible explanation for their occurrence was postulated. Finally, a simple equation relating fence solidity to the resulting velocity profile was developed and tested for non-uniform single element fences with 30 percent maximum solidity.

  15. Toddlers with Early Behavioral Problems at Higher Family Demographic Risk Benefit the Most from Maternal Emotion Talk.

    PubMed

    Brophy-Herb, Holly E; Bocknek, Erika London; Vallotton, Claire D; Stansbury, Kathy E; Senehi, Neda; Dalimonte-Merckling, Danielle; Lee, Young-Eun

    2015-09-01

    To test the hypothesis that toddlers at highest risk for behavioral problems from the most economically vulnerable families will benefit most from maternal talk about emotions. This study included 89 toddlers and mothers from low-income families. Behavioral problems were rated at 2 time points by masters-level trained Early Head Start home visiting specialists. Maternal emotion talk was coded from a wordless book-sharing task. Coding focused on mothers' emotion bridging, which included labeling emotions, explaining the context of emotions, noting the behavioral cues of emotions, and linking emotions to toddlers' own experiences. Maternal demographic risk reflected a composite score of 5 risk factors. A significant 3-way interaction between Time 1 toddler behavior problems, maternal emotion talk, and maternal demographic risk (p = .001) and examination of slope difference tests revealed that when maternal demographic risk was greater, more maternal emotion talk buffered associations between earlier and later behavior problems. Greater demographic risk and lower maternal emotion talk intensified Time 1 behavior problems as a predictor of Time 2 behavior problems. The model explained 54% of the variance in toddlers' Time 2 behavior problems. Analyses controlled for maternal warmth to better examine the unique contributions of emotion bridging to toddlers' behaviors. Toddlers at highest risk, those with more early behavioral problems from higher demographic-risk families, benefit the most from mothers' emotion talk. Informing parents about the use of emotion talk may be a cost-effective, simple strategy to support at-risk toddlers' social-emotional development and reduce behavioral problems.

  16. Assessing Quantitative Learning With The Math You Need When You Need It

    NASA Astrophysics Data System (ADS)

    Wenner, J. M.; Baer, E. M.; Burn, H.

    2008-12-01

    We present new data from a pilot project using the The Math You Need, When You Need It (TMYN) web resources in conjunction with several introductory geoscience courses. TMYN is a series of NSF-supported, NAGT-sponsored, web-based modular resources designed to help students learn (or relearn) mathematical skills essential for success in introductory geoscience courses. TMYN presents mathematical topics that are relevant to introductory geoscience based on a survey of more than 75 geoscience faculty members. To date, modules include unit conversions, many aspects of graphing, density calculations, rearranging equations and other simple mathematical concepts commonly used in the geosciences. The modular nature of the resources makes it simple to select the units that are appropriate for a given course. In the fall of 2008, nine TMYN modules were tested in three courses taught at Highline Community College (Geology 101) and University of Wisconsin Oshkosh (Physical and Environmental Geology). Over 300 students participated in the study by taking pre- and post-tests and completing modules relevant to their course. Feedback about the use of these modules has been mixed. Initial results confirm anecdotal evidence that students initially have difficulty applying mathematical concepts to geologic problems. Furthermore, pre-test results indicate that, although instructors assume that students can perform simple mathematical manipulations, many students arrive in courses without the skills to apply mathematical concepts in problem solving situations. TMYN resources effectively provide support for learning quantitative problem solving and a mechanism for students to engage in self-teaching. Although we have seen mixed results due to a range of instructor engagement with the material, TMYN can have a significant effect on students who are math phobic or "can't do math" because they can work at their own pace to overcome affective obstacles such as fear and dislike of mathematics. TMYN is most effective when instructors make explicit connections between material in the modules and course content. Instructors who participated in the study in Fall 2008 reacted positively to the use of TMYN in introductory geoscience courses because the resources require minimal class and prep time. Furthermore, when instructors can hold students responsible for the quantitative concepts covered with TMYN, they feel more comfortable including quantitative information without significant loss of geologic content.

  17. Virtual-stereo fringe reflection technique for specular free-form surface testing

    NASA Astrophysics Data System (ADS)

    Ma, Suodong; Li, Bo

    2016-11-01

    Due to their excellent ability to improve the performance of optical systems, free-form optics have attracted extensive interest in many fields, e.g. the optical design of astronomical telescopes, laser beam expanders, spectral imagers, etc. However, compared with traditional simple optics, testing this kind of optics is usually more complex and difficult, which has been a major barrier to the manufacture and application of these optics. Fortunately, owing to the rapid development of electronic devices and computer vision technology, the fringe reflection technique (FRT), with the advantages of a simple system structure, high measurement accuracy and large dynamic range, is becoming a powerful tool for specular free-form surface testing. In order to obtain absolute surface shape distributions of test objects, two or more cameras are often required in the conventional FRT, which makes the system structure more complex and the measurement cost much higher. Furthermore, high-precision synchronization between the cameras is also a troublesome issue. To overcome these drawbacks, a virtual-stereo FRT for specular free-form surface testing is put forward in this paper. It is able to achieve absolute profiles with the help of only a single biprism and one camera, while avoiding the problems of stereo FRT based on binocular or multi-ocular cameras. Preliminary experimental results demonstrate the feasibility of the proposed technique.

  18. Evaluation of commercially available techniques and development of simplified methods for measuring grille airflows in HVAC systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Walker, Iain S.; Wray, Craig P.; Guillot, Cyril

    2003-08-01

    In this report, we discuss the accuracy of flow hoods for residential applications, based on laboratory tests and field studies. The results indicate that commercially available hoods are often inadequate to measure flows in residential systems, and that there can be a wide range of performance between different flow hoods. The errors are due to poor calibrations, sensitivity of existing hoods to grille flow non-uniformities, and flow changes from added flow resistance. We also evaluated several simple techniques for measuring register airflows that could be adopted by the HVAC industry and homeowners as simple diagnostics that are often as accurate as commercially available devices. Our test results also show that current calibration procedures for flow hoods do not account for field application problems. As a result, organizations such as ASHRAE or ASTM need to develop a new standard for flow hood calibration, along with a new measurement standard to address field use of flow hoods.

  19. Use of Docker for deployment and testing of astronomy software

    NASA Astrophysics Data System (ADS)

    Morris, D.; Voutsinas, S.; Hambly, N. C.; Mann, R. G.

    2017-07-01

    We describe preliminary investigations of using Docker for the deployment and testing of astronomy software. Docker is a relatively new containerization technology that is developing rapidly and being adopted across a range of domains. It is based upon virtualization at operating system level, which presents many advantages in comparison to the more traditional hardware virtualization that underpins most cloud computing infrastructure today. A particular strength of Docker is its simple format for describing and managing software containers, which has benefits for software developers, system administrators and end users. We report on our experiences from two projects - a simple activity to demonstrate how Docker works, and a more elaborate set of services that demonstrates more of its capabilities and what they can achieve within an astronomical context - and include an account of how we solved problems through interaction with Docker's very active open source development community, which is currently the key to the most effective use of this rapidly-changing technology.

  20. Statistical mechanics of simple models of protein folding and design.

    PubMed Central

    Pande, V S; Grosberg, A Y; Tanaka, T

    1997-01-01

    It is now believed that the primary equilibrium aspects of simple models of protein folding are understood theoretically. However, current theories often resort to rather heavy mathematics to overcome some technical difficulties inherent in the problem or start from a phenomenological model. To this end, we take a new approach in this pedagogical review of the statistical mechanics of protein folding. The benefit of our approach is a drastic mathematical simplification of the theory, without resort to any new approximations or phenomenological prescriptions. Indeed, the results we obtain agree precisely with previous calculations. Because of this simplification, we are able to present here a thorough and self-contained treatment of the problem. Topics discussed include the statistical mechanics of the random energy model (REM), tests of the validity of REM as a model for heteropolymer freezing, freezing transition of random sequences, phase diagram of designed ("minimally frustrated") sequences, and the degree to which errors in the interactions employed in simulations of either folding or design can still lead to correct folding behavior. PMID:9414231

  1. Network systems security analysis

    NASA Astrophysics Data System (ADS)

    Yilmaz, İsmail

    2015-05-01

    Network systems security analysis is of utmost importance in today's world. Many companies, such as banks that give priority to data management, test their own data security systems with "penetration tests" from time to time. In this context, companies must also test their own network/server systems and take precautions, as data security draws attention. Based on this idea, this study researches cyber-attacks thoroughly and examines penetration test techniques. With this information, the cyber-attacks are classified and the security of network systems is then tested systematically. After the testing period, all data are reported and filed for future reference. Consequently, it is found that human beings are the weakest link in the chain and that simple mistakes may unintentionally cause huge problems. Thus, it is clear that some precautions, such as keeping security software up to date, must be taken to avoid such threats.

  2. A Screening Test for Wilson's Disease and its Application to Psychiatric Patients

    PubMed Central

    Cox, Diane Wilson

    1967-01-01

    Varied modes of onset make the early diagnosis of Wilson's disease difficult. A deficiency of serum ceruloplasmin, usually characteristic of the disease, was used as the basis for a screening test. Simple test materials and provision for handling about 50 plasma samples simultaneously made this test feasible for large-scale screening. The screening test was applied to 336 persons hospitalized for psychiatric disorders, to detect patients with Wilson's disease before the classical symptoms appeared. Two patients with ceruloplasmin levels below the normal limits were detected but did not have Wilson's disease. Further application of the screening test to relatives of patients known to have Wilson's disease and to individuals with any symptoms of the disease (hepatic disease, extrapyramidal dysfunction, psychiatric disorders, behaviour problems in children) would aid in early diagnosis and more effective treatment. PMID:6017170

  3. GRIPs (Group Investigation Problems) for Introductory Physics

    NASA Astrophysics Data System (ADS)

    Moore, Thomas A.

    2006-12-01

    GRIPs lie somewhere between homework problems and simple labs: they are open-ended questions that require a mixture of problem-solving skills and hands-on experimentation to solve practical puzzles involving simple physical objects. In this talk, I will describe three GRIPs that I developed for a first-semester introductory calculus-based physics course based on the "Six Ideas That Shaped Physics" text. I will discuss the design of the three GRIPs we used this past fall, our experience in working with students on these problems, and students' response as reported on course evaluations.

  4. A study of compositional verification based IMA integration method

    NASA Astrophysics Data System (ADS)

    Huang, Hui; Zhang, Guoquan; Xu, Wanmeng

    2018-03-01

    The rapid development of avionics systems is driving the application of integrated modular avionics (IMA) systems. At the same time, it is increasing the complexity of avionics system integration and system testing, so the IMA system test method needs to be simplified. The IMA system supports a modular platform that runs multiple applications and shares processing resources. Compared with a federated avionics system, failures in an IMA system are difficult to isolate. Therefore, IMA system verification faces the critical problem of how to test resources shared by multiple applications. For a simple avionics system, traditional test methods can easily cover the whole system, but for a complex system it is hard to completely test a huge, integrated avionics system. This paper therefore proposes using compositional-verification theory in IMA system testing, reducing the testing process, improving efficiency and consequently lowering the cost of IMA system integration.

  5. Could HPS Improve Problem-Solving?

    NASA Astrophysics Data System (ADS)

    Coelho, Ricardo Lopes

    2013-05-01

    It is generally accepted nowadays that History and Philosophy of Science (HPS) is useful in understanding scientific concepts, theories and even some experiments. Problem-solving strategies are a significant topic, since students' careers depend on their skill to solve problems. These are the reasons for addressing the question of whether problem solving could be improved by means of HPS. Three typical problems in introductory courses of mechanics—the inclined plane, the simple pendulum and the Atwood machine—are taken as the object of the present study. The solving strategies of these problems in the eighteenth and nineteenth century constitute the historical component of the study. Its philosophical component stems from the foundations of mechanics research literature. The use of HPS leads us to see those problems in a different way. These different ways can be tested, for which experiments are proposed. The traditional solving strategies for the incline and pendulum problems are adequate for some situations but not in general. The recourse to apparent weights in the Atwood machine problem leads us to a new insight and a solving strategy for composed Atwood machines. Educational implications also concern the development of logical thinking by means of the variety of lines of thought provided by HPS.
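
    For reference, the standard textbook results that the three problems lead to (the modern closed-form answers, not the historical strategies the paper analyses) are, for the frictionless inclined plane, the ideal Atwood machine, and the small-amplitude simple pendulum:

    ```latex
    a_{\text{incline}} = g\sin\theta, \qquad
    a_{\text{Atwood}} = \frac{(m_{1}-m_{2})\,g}{m_{1}+m_{2}}, \qquad
    T_{\text{pendulum}} = 2\pi\sqrt{\frac{\ell}{g}}.
    ```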

  6. Simulator for multilevel optimization research

    NASA Technical Reports Server (NTRS)

    Padula, S. L.; Young, K. C.

    1986-01-01

    A computer program designed to simulate and improve multilevel optimization techniques is described. By using simple analytic functions to represent complex engineering analyses, the simulator can generate and test a large variety of multilevel decomposition strategies in a relatively short time. This type of research is an essential step toward routine optimization of large aerospace systems. The paper discusses the types of optimization problems handled by the simulator and gives input and output listings and plots for a sample problem. It also describes multilevel implementation techniques which have value beyond the present computer program. Thus, this document serves as a user's manual for the simulator and as a guide for building future multilevel optimization applications.

  7. Infrared thermographic detection of buried grave sites

    NASA Astrophysics Data System (ADS)

    Weil, Gary J.; Graf, Richard J.

    1992-04-01

    Since time began, people have been born and people have died. For a variety of reasons grave sites have had to be located and investigated. These reasons have included legal, criminal, religious, construction and even simple curiosity problems. Destructive testing methods such as shovels and backhoes, have traditionally been used to determine grave site locations in fields, under pavements, and behind hidden locations. These existing techniques are slow, inconvenient, dirty, destructive, visually obtrusive, irritating to relatives, explosive to the media and expensive. A new, nondestructive, non-contact technique, infrared thermography has been developed to address these problems. This paper will describe how infrared thermography works and will be illustrated by several case histories.

  8. Experiment D009: Simple navigation

    NASA Technical Reports Server (NTRS)

    Silva, R. M.; Jorris, T. R.; Vallerie, E. M., III

    1971-01-01

    Space position-fixing techniques have been investigated by collecting data on the observable phenomena of space flight that could be used to solve the problem of autonomous navigation by the use of optical data and manual computations to calculate the position of a spacecraft. After completion of the developmental and test phases, the product of the experiment would be a manual-optical technique of orbital space navigation that could be used as a backup to onboard and ground-based spacecraft-navigation systems.

  9. Verification assessment of piston boundary conditions for Lagrangian simulation of compressible flow similarity solutions

    DOE PAGES

    Ramsey, Scott D.; Ivancic, Philip R.; Lilieholm, Jennifer F.

    2015-12-10

    This work is concerned with the use of similarity solutions of the compressible flow equations as benchmarks or verification test problems for finite-volume compressible flow simulation software. In practice, this effort can be complicated by the infinite spatial/temporal extent of many candidate solutions or “test problems.” Methods can be devised with the intention of ameliorating this inconsistency with the finite nature of computational simulation; the exact strategy will depend on the code and problem archetypes under investigation. For example, self-similar shock wave propagation can be represented in Lagrangian compressible flow simulations as rigid boundary-driven flow, even if no such “piston” is present in the counterpart mathematical similarity solution. The purpose of this work is to investigate in detail the methodology of representing self-similar shock wave propagation as a piston-driven flow in the context of various test problems featuring simple closed-form solutions of infinite spatial/temporal extent. The closed-form solutions allow for the derivation of similarly closed-form piston boundary conditions (BCs) for use in Lagrangian compressible flow solvers. Finally, the consequences of utilizing these BCs (as opposed to directly initializing the self-similar solution in a computational spatial grid) are investigated in terms of common code verification analysis metrics (e.g., shock strength/position errors and global convergence rates).

  10. Verification assessment of piston boundary conditions for Lagrangian simulation of compressible flow similarity solutions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ramsey, Scott D.; Ivancic, Philip R.; Lilieholm, Jennifer F.

    This work is concerned with the use of similarity solutions of the compressible flow equations as benchmarks or verification test problems for finite-volume compressible flow simulation software. In practice, this effort can be complicated by the infinite spatial/temporal extent of many candidate solutions or “test problems.” Methods can be devised with the intention of ameliorating this inconsistency with the finite nature of computational simulation; the exact strategy will depend on the code and problem archetypes under investigation. For example, self-similar shock wave propagation can be represented in Lagrangian compressible flow simulations as rigid boundary-driven flow, even if no such “piston” is present in the counterpart mathematical similarity solution. The purpose of this work is to investigate in detail the methodology of representing self-similar shock wave propagation as a piston-driven flow in the context of various test problems featuring simple closed-form solutions of infinite spatial/temporal extent. The closed-form solutions allow for the derivation of similarly closed-form piston boundary conditions (BCs) for use in Lagrangian compressible flow solvers. Finally, the consequences of utilizing these BCs (as opposed to directly initializing the self-similar solution in a computational spatial grid) are investigated in terms of common code verification analysis metrics (e.g., shock strength/position errors and global convergence rates).

  11. On the Performance Evaluation of 3D Reconstruction Techniques from a Sequence of Images

    NASA Astrophysics Data System (ADS)

    Eid, Ahmed; Farag, Aly

    2005-12-01

    The performance evaluation of 3D reconstruction techniques is not a simple problem to solve. This is not only due to the increased dimensionality of the problem but also due to the lack of standardized and widely accepted testing methodologies. This paper presents a unified framework for the performance evaluation of different 3D reconstruction techniques. This framework includes a general problem formalization, different measuring criteria, and a classification method as a first step in standardizing the evaluation process. Performance characterization of two standard 3D reconstruction techniques, stereo and space carving, is also presented. The evaluation is performed on the same data set using an image reprojection testing methodology to reduce the dimensionality of the evaluation domain. Also, different measuring strategies are presented and applied to the stereo and space carving techniques. These measuring strategies have shown consistent results in quantifying the performance of these techniques. Additional experiments are performed on the space carving technique to study the effect of the number of input images and the camera pose on its performance.

  12. On the Formulation of Weakly Singular Displacement/Traction Integral Equations; and Their Solution by the MLPG Method

    NASA Technical Reports Server (NTRS)

    Atluri, Satya N.; Shen, Shengping

    2002-01-01

    In this paper, a very simple method is used to derive the weakly singular traction boundary integral equation based on the integral relationships for displacement gradients. The concept of the MLPG method is employed to solve the integral equations, especially those arising in solid mechanics. A Moving Least Squares (MLS) interpolation is selected to approximate the trial functions in this paper. Five boundary integral solution methods are introduced: direct solution method; displacement boundary-value problem; traction boundary-value problem; mixed boundary-value problem; and boundary variational principle. Based on the local weak form of the BIE, four different nodal-based local test functions are selected, leading to four different MLPG methods for each BIE solution method. These methods combine the advantages of the MLPG method and the boundary element method.

  13. A Simple Acronym for Doing Calculus: CAL

    ERIC Educational Resources Information Center

    Hathaway, Richard J.

    2008-01-01

    An acronym is presented that provides students a potentially useful, unifying view of the major topics covered in an elementary calculus sequence. The acronym (CAL) is based on viewing the calculus procedure for solving a calculus problem P* in three steps: (1) recognizing that the problem cannot be solved using simple (non-calculus) techniques;…

  14. Using Probabilistic Information in Solving Resource Allocation Problems for a Decentralized Firm

    DTIC Science & Technology

    1978-09-01

    deterministic equivalent form of HIQ’s problem (5) by an approach similar to the one used in stochastic programming with simple recourse. See Ziemba [38] or, in...1964). 38. Ziemba, W.T., "Stochastic Programs with Simple Recourse," Technical Report 72-15, Stanford University, Department of Operations Research

  15. A smart sensor architecture based on emergent computation in an array of outer-totalistic cells

    NASA Astrophysics Data System (ADS)

    Dogaru, Radu; Dogaru, Ioana; Glesner, Manfred

    2005-06-01

    A novel smart-sensor architecture is proposed, capable of segmenting and recognizing characters in a monochrome image. It provides a list of ASCII codes representing the characters recognized in the monochrome visual field, and it can operate as an aid for the blind or in industrial applications. A bio-inspired cellular model with simple linear neurons was found to be the best at performing the nontrivial task of cropping isolated compact objects such as handwritten digits or characters. By attaching a simple outer-totalistic cell to each pixel sensor, emergent computation in the resulting cellular automata lattice provides a straightforward and compact solution to the otherwise computationally intensive problem of character segmentation. A simple and robust recognition algorithm is built into a compact sequential controller accessing the array of cells, so that the integrated device can directly provide a list of codes of the recognized characters. Preliminary simulation tests indicate good performance and robustness to various distortions of the visual field.

  16. Possibilities of rock constitutive modelling and simulations

    NASA Astrophysics Data System (ADS)

    Baranowski, Paweł; Małachowski, Jerzy

    2018-01-01

    The paper deals with the problem of finite element modelling and simulation of rock. The main intention of the authors is to present the possibilities of different approaches to rock constitutive modelling. For this purpose granite was selected, owing to its well-characterized mechanical properties and its prevalence in the literature. Two significantly different constitutive material models were implemented to simulate granite fracture in various configurations: the Johnson-Holmquist ceramic model, which is very often used for predicting the behavior of rock and other brittle materials, and a simple linear elastic model with brittle failure, which can be used for simulating glass fracture. Four cases with different loading conditions were chosen to compare the aforementioned constitutive models: a uniaxial compression test, a notched three-point-bending test, a copper ball impacting a block, and a small-scale blasting test.

  17. Interpretation of diagnostic data: 4. How to do it with a more complex table.

    PubMed

    1983-10-15

    A more complex table is especially useful when a diagnostic test produces a wide range of results and your patient's levels are near one of the extremes. The following guidelines will be useful: Identify the several cut-off points that could be used. Fill in a complex table along the lines of Table I, showing the numbers of patients at each level who have and do not have the target disorder. Generate a simple table for each cut-off point, as in Table II, and determine the sensitivity (TP rate) and specificity (TN rate) at each of them. Select the cut-off point that makes the most sense for your patient's test result and proceed as in parts 2 and 3 of our series. Alternatively, construct an ROC curve by plotting the TP and FP rates that attend each cut-off point. If you keep your tables and ROC curves close at hand, you will gradually accumulate a set of very useful guides. However, if you looked very hard at what was happening, you will probably have noticed that they are not very useful for patients whose test results fall in the middle zones, or for those with just one positive result of two tests; the post-test likelihood of disease in these patients lurches back and forth past 50%, depending on where the cut-off point is. We will show you how to tackle this problem in part 5 of our series. It involves some maths, but you will find that its very powerful clinical application can be achieved with a simple nomogram or with some simple calculations.
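    As a rough illustration of the bookkeeping described above, the following sketch builds a simple table at each candidate cut-off, reports the sensitivity (TP rate) and specificity (TN rate), and finishes with the likelihood-ratio "simple calculations" alluded to at the end. The patient counts and the pre-test probability are made-up numbers, not data from the article.

```python
# Hypothetical test values for patients with and without the target disorder
# (made-up numbers; they only illustrate the bookkeeping, not real data).
diseased     = [8.1, 7.4, 6.9, 6.5, 5.9, 5.2, 4.8, 4.1]
non_diseased = [5.5, 4.9, 4.4, 4.0, 3.6, 3.1, 2.8, 2.2, 1.9, 1.5]

def simple_table(cutoff):
    """Sensitivity (TP rate) and specificity (TN rate) when 'positive' means value >= cutoff."""
    tp = sum(v >= cutoff for v in diseased)
    fn = len(diseased) - tp
    tn = sum(v < cutoff for v in non_diseased)
    fp = len(non_diseased) - tn
    return tp / (tp + fn), tn / (tn + fp)

# One ROC point (TP rate vs FP rate) per candidate cut-off.
for cutoff in [3.0, 4.0, 5.0, 6.0]:
    sens, spec = simple_table(cutoff)
    print(f"cutoff {cutoff}: sensitivity {sens:.2f}, specificity {spec:.2f}, FP rate {1 - spec:.2f}")

# The 'simple calculations' for post-test probability via the likelihood ratio.
pretest = 0.30                      # assumed pre-test probability
sens, spec = simple_table(5.0)
lr_positive = sens / (1 - spec)     # likelihood ratio of a positive result
pre_odds = pretest / (1 - pretest)
post_odds = pre_odds * lr_positive
print("post-test probability after a positive result:", post_odds / (1 + post_odds))
```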

  18. Development of an immunochromatographic strip for simple detection of penicillin-binding protein 2'.

    PubMed

    Matsui, Hidehito; Hanaki, Hideaki; Inoue, Megumi; Akama, Hiroyuki; Nakae, Taiji; Sunakawa, Keisuke; Omura, Satoshi

    2011-02-01

    Infections with methicillin-resistant Staphylococcus aureus (MRSA) and methicillin-resistant coagulase-negative Staphylococcus (MR-CNS) are a serious problem in hospitals because these bacteria produce penicillin-binding protein 2' (PBP2' or PBP2a), which shows low affinity for β-lactam antibiotics. Furthermore, the bacteria show resistance to a variety of antibiotics. Identification of these pathogens has been carried out mainly by the oxacillin susceptibility test, which takes several days to produce a reliable result. We developed a simple immunochromatographic test that enabled the detection of PBP2' within about 20 min. Anti-PBP2' monoclonal antibodies were produced by a hybridoma of recombinant PBP2' (rPBP2')-immunized mouse spleen cells and myeloma cells. The monoclonal antibodies reacted only with PBP2' of whole-cell extracts and showed no detectable cross-reactivity with extracts from other bacterial species tested so far. One of the monoclonal antibodies was conjugated with gold colloid particles, which react with PBP2', and another antibody was immobilized on a nitrocellulose membrane, which captures the PBP2'-gold colloid particle complex on a nitrocellulose strip. This strip was able to detect 1.0 ng of rPBP2' or 2.8 × 10⁵ to 1.7 × 10⁷ CFU of MRSA cells. The cross-reactivity test using 15 bacterial species and a Candida albicans strain showed no detectable false-positive results. The accuracy of this method in the detection of MRSA and MR-CNS appeared to be 100%, compared with the results obtained by PCR amplification of the PBP2' gene, mecA. This newly developed immunochromatographic test can be used for simple and accurate detection of PBP2'-producing cells in clinical laboratories.

  19. Phase-space finite elements in a least-squares solution of the transport equation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Drumm, C.; Fan, W.; Pautz, S.

    2013-07-01

    The linear Boltzmann transport equation is solved using a least-squares finite element approximation in the space, angular and energy phase-space variables. The method is applied to both neutral particle transport and also to charged particle transport in the presence of an electric field, where the angular and energy derivative terms are handled with the energy/angular finite elements approximation, in a manner analogous to the way the spatial streaming term is handled. For multi-dimensional problems, a novel approach is used for the angular finite elements: mapping the surface of a unit sphere to a two-dimensional planar region and using a meshing tool to generate a mesh. In this manner, much of the spatial finite-elements machinery can be easily adapted to handle the angular variable. The energy variable and the angular variable for one-dimensional problems make use of edge/beam elements, also building upon the spatial finite elements capabilities. The methods described here can make use of either continuous or discontinuous finite elements in space, angle and/or energy, with the use of continuous finite elements resulting in a smaller problem size and the use of discontinuous finite elements resulting in more accurate solutions for certain types of problems. The work described in this paper makes use of continuous finite elements, so that the resulting linear system is symmetric positive definite and can be solved with a highly efficient parallel preconditioned conjugate gradients algorithm. The phase-space finite elements capability has been built into the Sceptre code and applied to several test problems, including a simple one-dimensional problem with an analytic solution available, a two-dimensional problem with an isolated source term, showing how the method essentially eliminates ray effects encountered with discrete ordinates, and a simple one-dimensional charged-particle transport problem in the presence of an electric field. (authors)
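    The abstract notes that the continuous-finite-element discretization yields a symmetric positive definite system solved with a preconditioned conjugate gradient algorithm. A minimal serial sketch of that solver component (a generic Jacobi-preconditioned CG on an assumed 1D Laplacian test matrix, not the Sceptre implementation) is:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for SPD A with a diagonal (Jacobi) preconditioner."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r            # apply M^{-1}
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# Small SPD test system (assumed for illustration): a 1D Laplacian.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, M_inv_diag=1.0 / np.diag(A))
print("residual norm:", np.linalg.norm(b - A @ x))
```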

  20. Optimization of Wireless Power Transfer Systems Enhanced by Passive Elements and Metasurfaces

    NASA Astrophysics Data System (ADS)

    Lang, Hans-Dieter; Sarris, Costas D.

    2017-10-01

    This paper presents a rigorous optimization technique for wireless power transfer (WPT) systems enhanced by passive elements, ranging from simple reflectors and intermediate relays all the way to general electromagnetic guiding and focusing structures, such as metasurfaces and metamaterials. At its core is a convex semidefinite relaxation formulation of the otherwise nonconvex optimization problem, whose tightness and optimality can be confirmed by a simple test of its solutions. The resulting method is rigorous, versatile, and general -- it does not rely on any assumptions. As shown in various examples, it is able to efficiently and reliably optimize such WPT systems in order to find their physical limitations on performance and optimal operating parameters, and to inspect their working principles, even for a large number of active transmitters and passive elements.
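    The core ingredient, a semidefinite relaxation whose tightness can be confirmed by inspecting the solution, can be illustrated on a generic nonconvex quadratic problem (this is not the paper's WPT formulation; the objective matrix and the rank-based tightness check are assumptions for illustration):

```python
import numpy as np
import cvxpy as cp

# Generic nonconvex quadratic problem: maximize x^T A x subject to x_i^2 = 1.
# Its standard semidefinite relaxation optimizes over X = x x^T with the rank
# constraint dropped; a (numerically) rank-1 optimal X certifies tightness.
rng = np.random.default_rng(0)
n = 6
B = rng.standard_normal((n, n))
A = (B + B.T) / 2                      # symmetric objective matrix (assumed data)

X = cp.Variable((n, n), symmetric=True)
constraints = [X >> 0, cp.diag(X) == 1]
problem = cp.Problem(cp.Maximize(cp.trace(A @ X)), constraints)
problem.solve()

# Simple tightness test on the solution: is X rank one within tolerance?
eigvals = np.linalg.eigvalsh(X.value)
print("relaxation value:", problem.value)
print("eigenvalues of X:", np.round(eigvals, 4))
print("tight (rank-1 within tolerance):", eigvals[-2] < 1e-6 * eigvals[-1])
```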

  1. Enhanced Conformational Sampling Using Replica Exchange with Collective-Variable Tempering

    PubMed Central

    2015-01-01

    The computational study of conformational transitions in RNA and proteins with atomistic molecular dynamics often requires suitable enhanced sampling techniques. We here introduce a novel method where concurrent metadynamics are integrated in a Hamiltonian replica-exchange scheme. The ladder of replicas is built with different strengths of the bias potential exploiting the tunability of well-tempered metadynamics. Using this method, free-energy barriers of individual collective variables are significantly reduced compared with simple force-field scaling. The introduced methodology is flexible and allows adaptive bias potentials to be self-consistently constructed for a large number of simple collective variables, such as distances and dihedral angles. The method is tested on alanine dipeptide and applied to the difficult problem of conformational sampling in a tetranucleotide. PMID:25838811

  2. High order filtering methods for approximating hyberbolic systems of conservation laws

    NASA Technical Reports Server (NTRS)

    Lafon, F.; Osher, S.

    1990-01-01

    In the computation of discontinuous solutions of hyperbolic systems of conservation laws, the recently developed essentially non-oscillatory (ENO) schemes appear to be very useful. However, they are computationally costly compared to simple central difference methods. The filtering method developed here uses simple central differencing of arbitrarily high-order accuracy, except where a novel local test indicates the development of spurious oscillations. At those points, the full ENO apparatus is used, maintaining the high order of accuracy while removing the spurious oscillations. Numerical results indicate the success of the method: high-order accuracy was obtained in regions of smooth flow without spurious oscillations for a wide range of problems, with a significant speed-up, generally a factor of almost three, over the full ENO method.
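    A minimal sketch of the hybrid idea for a scalar field on a periodic grid follows; it is not the authors' scheme: the smoothness test and the first-order upwind fallback (standing in for the ENO correction) are assumptions.

```python
import numpy as np

# Hybrid derivative evaluation of u_x on a periodic grid: cheap fourth-order
# central differences everywhere, except at points where a simple local
# oscillation test flags a discontinuity; there a first-order upwind stencil
# (a stand-in for the ENO correction of the paper) is used instead.
n = 200
x = np.linspace(0.0, 1.0, n, endpoint=False)
dx = x[1] - x[0]
u = np.where(np.abs(x - 0.5) < 0.1, 1.0, 0.0)       # square pulse (assumed test profile)

def hybrid_dudx(u, dx, a=1.0, flag_ratio=2.0):
    up1, up2 = np.roll(u, -1), np.roll(u, -2)
    um1, um2 = np.roll(u, 1), np.roll(u, 2)
    central4 = (-up2 + 8 * up1 - 8 * um1 + um2) / (12 * dx)   # fourth-order central
    upwind1 = (u - um1) / dx if a > 0 else (up1 - u) / dx      # robust fallback
    # Local test: flag points where the one-sided slopes disagree strongly.
    fwd, bwd = (up1 - u) / dx, (u - um1) / dx
    flagged = np.abs(fwd - bwd) > flag_ratio * np.minimum(np.abs(fwd), np.abs(bwd)) + 1e-12
    return np.where(flagged, upwind1, central4), flagged

dudx, flagged = hybrid_dudx(u, dx)
print("points using the robust fallback:", int(flagged.sum()), "of", n)
```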

  3. A Solution Adaptive Technique Using Tetrahedral Unstructured Grids

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2000-01-01

    An adaptive unstructured grid refinement technique has been developed and successfully applied to several three-dimensional inviscid flow test cases. The method is based on a combination of surface mesh subdivision and local remeshing of the volume grid. Simple functions of flow quantities are employed to detect dominant features of the flowfield. The method is designed for modular coupling with various error/feature analyzers and flow solvers. Several steady-state, inviscid flow test cases are presented to demonstrate the applicability of the method for solving practical three-dimensional problems. In all cases, accurate solutions featuring complex, nonlinear flow phenomena such as shock waves and vortices have been generated automatically and efficiently.

  4. Formal language constrained path problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barrett, C.; Jacob, R.; Marathe, M.

    1997-07-08

    In many path finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by such applications as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context free language is solvable efficiently in polynomial time, when the mode choice is specified as a regular language they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth bounded graphs, they show that (i) the problem of finding a regular language constrained simple path between source and a destination is solvable in polynomial time and (ii) the extension to finding context free language constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary of the results, they obtain a polynomial time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.
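    Result (1) for regular languages rests on searching the product of the labeled graph with an automaton for the language. A minimal sketch (the toy graph, edge labels, and DFA are assumptions for illustration):

```python
import heapq

# Labeled graph: edges are (source, target, label, weight).  Assumed toy data.
edges = [
    ("s", "a", "walk", 1.0), ("a", "t", "walk", 5.0),
    ("s", "b", "walk", 1.0), ("b", "c", "train", 1.0),
    ("c", "t", "walk", 1.0),
]

# DFA for the regular language "any number of walk legs, at most one train leg".
# States: 0 = no train used yet, 1 = train used; both accepting.
def dfa_step(state, label):
    if label == "walk":
        return state
    if label == "train" and state == 0:
        return 1
    return None                      # transition not allowed

accepting = {0, 1}

def constrained_shortest_path(source, target):
    """Dijkstra on the product of the labeled graph and the DFA."""
    adj = {}
    for u, v, lbl, w in edges:
        adj.setdefault(u, []).append((v, lbl, w))
    dist = {(source, 0): 0.0}
    heap = [(0.0, source, 0)]
    while heap:
        d, node, q = heapq.heappop(heap)
        if d > dist.get((node, q), float("inf")):
            continue
        if node == target and q in accepting:
            return d
        for v, lbl, w in adj.get(node, []):
            q2 = dfa_step(q, lbl)
            if q2 is None:
                continue
            if d + w < dist.get((v, q2), float("inf")):
                dist[(v, q2)] = d + w
                heapq.heappush(heap, (d + w, v, q2))
    return None

print(constrained_shortest_path("s", "t"))   # 3.0 via the train leg
```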

  5. Turning Points of the Spherical Pendulum and the Golden Ratio

    ERIC Educational Resources Information Center

    Essen, Hanno; Apazidis, Nicholas

    2009-01-01

    We study the turning point problem of a spherical pendulum. The special cases of the simple pendulum and the conical pendulum are noted. For simple initial conditions the solution to this problem involves the golden ratio, also called the golden section, or the golden number. This number often appears in mathematics where you least expect it. To…

  6. String and Sticky Tape Experiments: Simple Self-Lubricated Electric Motor for Elementary Physics Lab.

    ERIC Educational Resources Information Center

    Entrikin, Jerry; Griffiths, David

    1983-01-01

    The main problem in constructing functioning electric motors from simple parts is the mounting of the axle (which is too flimsy to maintain good electrical contacts or too tight, imposing excessive friction at the supports). This problem is solved by using a pencil sharpened at both ends as the axle. (JN)

  7. Special Relativity as a Simple Geometry Problem

    ERIC Educational Resources Information Center

    de Abreu, Rodrigo; Guerra, Vasco

    2009-01-01

    The null result of the Michelson-Morley experiment and the constancy of the one-way speed of light in the "rest system" are used to formulate a simple problem, to be solved by elementary geometry techniques using a pair of compasses and non-graduated rulers. The solution consists of a drawing allowing a direct visualization of all the fundamental…

  8. Terahertz reflection imaging using Kirchhoff migration.

    PubMed

    Dorney, T D; Johnson, J L; Van Rudd, J; Baraniuk, R G; Symes, W W; Mittleman, D M

    2001-10-01

    We describe a new imaging method that uses single-cycle pulses of terahertz (THz) radiation. This technique emulates data-collection and image-processing procedures developed for geophysical prospecting and is made possible by the availability of fiber-coupled THz receiver antennas. We use a simple migration procedure to solve the inverse problem; this permits us to reconstruct the location and shape of targets. These results demonstrate the feasibility of the THz system as a test-bed for the exploration of new seismic processing methods involving complex model systems.

  9. Adaptive Neural Networks for Automatic Negotiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sakas, D. P.; Vlachos, D. S.; Simos, T. E.

    The use of fuzzy logic and fuzzy neural networks has been found effective for modelling the uncertain relations between the parameters of a negotiation procedure. The problem with these configurations is that they are static; that is, any new knowledge from theory or experiment leads to the construction of entirely new models. To overcome this difficulty, we apply in this work an adaptive neural topology to model the negotiation process. Finally, a simple simulation is carried out in order to test the new method.

  10. Issues in Benchmark Metric Selection

    NASA Astrophysics Data System (ADS)

    Crolotte, Alain

    It is true that a metric can influence a benchmark, but will esoteric metrics create more problems than they solve? We answer this question affirmatively by examining the case of the TPC-D metric, which used the much-debated geometric mean for the single-stream test. We will show how a simple choice influenced the benchmark and its conduct and, to some extent, DBMS development. After examining other alternatives, our conclusion is that the “real” measure for a decision-support benchmark is the arithmetic mean.
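    To see why the choice of mean matters, here is a worked comparison on hypothetical per-query times (not TPC-D data):

```python
import math

# Hypothetical per-query times (seconds) for two systems on a 5-query stream.
# System B is much faster on one short query and slightly slower elsewhere.
system_a = [10.0, 20.0, 30.0, 40.0, 50.0]
system_b = [0.5, 22.0, 33.0, 44.0, 55.0]

def arithmetic_mean(xs):
    return sum(xs) / len(xs)

def geometric_mean(xs):
    return math.exp(sum(math.log(x) for x in xs) / len(xs))

for name, xs in [("A", system_a), ("B", system_b)]:
    print(name, "arithmetic:", round(arithmetic_mean(xs), 2),
          "geometric:", round(geometric_mean(xs), 2))
# The geometric mean rewards the large relative improvement on the short query,
# while the arithmetic mean tracks total elapsed time -- the tension the paper discusses.
```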

  11. The Cyclic Life-Test of a T5 Ion Thruster Hollow Cathode to 4200 Hours.

    DTIC Science & Technology

    1981-05-01

    Conference on Fluid Mechanics in Energy Conversion, Alta (Utah) 1979, p. 263. (Published by SIAM, 1980.) Invited paper. 101. G.S.S. Ludford & Asok K. Sen...Göttingen 1979. Progress in Astronautics and Aeronautics, 76 (1981). p. 427. (Combustion in Reactive Systems, ed. by J. Ray Bowen, N. Manson, Antoni...steady detonation waves in a simple model problem. To appear in Studies in Applied Mathematics. 106. Asok K. Sen & G.S.S. Ludford: Effects of mass

  12. Numerical approach for ECT by using boundary element method with Laplace transform

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Enokizono, M.; Todaka, T.; Shibao, K.

    1997-03-01

    This paper presents an inverse analysis using the BEM with a Laplace transform. The method is applied to a simple problem in eddy current testing (ECT). Some crack shapes in a conductive specimen are estimated from distributions of the transient eddy current on its sensing surface and of the magnetic flux density in the liftoff space. Because the transient behavior includes information on various frequency components, the method is applicable to the shape estimation of comparatively small cracks.

  13. Fast algorithms for transforming back and forth between a signed permutation and its equivalent simple permutation.

    PubMed

    Gog, Simon; Bader, Martin

    2008-10-01

    The problem of sorting signed permutations by reversals is a well-studied problem in computational biology. The first polynomial time algorithm was presented by Hannenhalli and Pevzner in 1995. The algorithm was improved several times, and nowadays the most efficient algorithm has a subquadratic running time. Simple permutations played an important role in the development of these algorithms. Although the latest result of Tannier et al. does not require simple permutations, the preliminary version of their algorithm as well as the first polynomial time algorithm of Hannenhalli and Pevzner use the structure of simple permutations. More precisely, the latter algorithms require a precomputation that transforms a permutation into an equivalent simple permutation. To the best of our knowledge, all published algorithms for this transformation have at least a quadratic running time. For further investigations on genome rearrangement problems, the existence of a fast algorithm for the transformation could be crucial. Another important task is the back transformation, i.e. if we have a sorting on the simple permutation, transform it into a sorting on the original permutation. Again, the naive approach results in an algorithm with quadratic running time. In this paper, we present a linear time algorithm for transforming a permutation into an equivalent simple permutation, and an O(n log n) algorithm for the back transformation of the sorting sequence.

  14. Rule Systems for Runtime Verification: A Short Tutorial

    NASA Astrophysics Data System (ADS)

    Barringer, Howard; Havelund, Klaus; Rydeheard, David; Groce, Alex

    In this tutorial, we introduce two rule-based systems for on-line and off-line trace analysis, RuleR and LogScope. RuleR is a conditional rule-based system, which has a simple and easily implemented algorithm for effective runtime verification, and into which one can compile a wide range of temporal logics and other specification formalisms used for runtime verification. Specifications can be parameterized with data, or even with specifications, allowing for temporal logic combinators to be defined. We outline a number of simple syntactic extensions of core RuleR that can lead to further conciseness of specification while still enabling easy and efficient implementation. RuleR is implemented in Java and we will demonstrate its ease of use in monitoring Java programs. LogScope is a derivative of RuleR adding a simple, very user-friendly temporal logic. It was developed in Python, specifically for supporting testing of spacecraft flight software for NASA's 2011 Mars mission MSL (Mars Science Laboratory). The system has been applied by test engineers to the analysis of log files generated by running the flight software. Detailed logging is already part of the system design approach, and hence there is no added instrumentation overhead caused by this approach. While post-mortem log analysis prevents the autonomous reaction to problems possible with traditional runtime verification, it provides a powerful tool for test automation. A new system is being developed that integrates features from both RuleR and LogScope.

  15. A Powerful Test for Comparing Multiple Regression Functions.

    PubMed

    Maity, Arnab

    2012-09-01

    In this article, we address the important problem of comparison of two or more population regression functions. Recently, Pardo-Fernández, Van Keilegom and González-Manteiga (2007) developed test statistics for simple nonparametric regression models, Y_ij = θ_j(Z_ij) + σ_j(Z_ij) ε_ij, based on empirical distributions of the errors in each population j = 1, … , J. In this paper, we propose a test for equality of the θ_j(·) based on the concept of generalized likelihood ratio type statistics. We also generalize our test to other nonparametric regression setups, e.g., nonparametric logistic regression, where the log-likelihood for population j is any general smooth function [Formula: see text]. We describe a resampling procedure to obtain the critical values of the test. In addition, we present a simulation study to evaluate the performance of the proposed test and compare our results to those in Pardo-Fernández et al. (2007).
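    As a loose illustration of the resampling idea, the sketch below runs a generic permutation test on a kernel-smoother discrepancy statistic; it is not the generalized likelihood ratio statistic of the paper, and the simulated data and bandwidth are assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

def kernel_smooth(z_train, y_train, z_eval, bandwidth=0.2):
    """Nadaraya-Watson estimate of E[Y | Z = z] at the points z_eval."""
    w = np.exp(-0.5 * ((z_eval[:, None] - z_train[None, :]) / bandwidth) ** 2)
    return (w @ y_train) / w.sum(axis=1)

def discrepancy(z1, y1, z2, y2, grid):
    """Mean squared difference between the two fitted regression curves on a grid."""
    return np.mean((kernel_smooth(z1, y1, grid) - kernel_smooth(z2, y2, grid)) ** 2)

# Simulated two-population data (assumed): same regression function in both groups.
n = 100
z1, z2 = rng.uniform(0, 1, n), rng.uniform(0, 1, n)
y1 = np.sin(2 * np.pi * z1) + 0.3 * rng.standard_normal(n)
y2 = np.sin(2 * np.pi * z2) + 0.3 * rng.standard_normal(n)

grid = np.linspace(0.05, 0.95, 50)
observed = discrepancy(z1, y1, z2, y2, grid)

# Permutation resampling: reshuffle group labels to approximate the null distribution.
z_all, y_all = np.concatenate([z1, z2]), np.concatenate([y1, y2])
perm_stats = []
for _ in range(500):
    idx = rng.permutation(2 * n)
    a, b = idx[:n], idx[n:]
    perm_stats.append(discrepancy(z_all[a], y_all[a], z_all[b], y_all[b], grid))
p_value = np.mean(np.array(perm_stats) >= observed)
print("observed statistic:", round(observed, 4), "permutation p-value:", round(p_value, 3))
```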

  16. Benchmarking the Multidimensional Stellar Implicit Code MUSIC

    NASA Astrophysics Data System (ADS)

    Goffrey, T.; Pratt, J.; Viallet, M.; Baraffe, I.; Popov, M. V.; Walder, R.; Folini, D.; Geroux, C.; Constantino, T.

    2017-04-01

    We present the results of a numerical benchmark study for the MUltidimensional Stellar Implicit Code (MUSIC) based on widely applicable two- and three-dimensional compressible hydrodynamics problems relevant to stellar interiors. MUSIC is an implicit large eddy simulation code that uses implicit time integration, implemented as a Jacobian-free Newton-Krylov method. A physics-based preconditioning technique, which can be adjusted to target varying physics, is used to improve the performance of the solver. The problems used for this benchmark study include the Rayleigh-Taylor and Kelvin-Helmholtz instabilities and the decay of the Taylor-Green vortex. Additionally, we show a test of hydrostatic equilibrium in a stellar environment that is dominated by radiative effects; in this setting the flexibility of the preconditioning technique is demonstrated. This work aims to bridge the gap between the hydrodynamic test problems typically used during development of numerical methods and the complex flows of stellar interiors. A series of multidimensional tests were performed and analysed. Each of these test cases was analysed with a simple, scalar diagnostic, with the aim of enabling direct code comparisons. As the tests performed do not have analytic solutions, we verify MUSIC by comparing it to established codes, including ATHENA and the PENCIL code. MUSIC is able both to reproduce the behaviour of established and widely used codes and to produce results expected from theoretical predictions. This benchmarking study concludes a series of papers describing the development of the MUSIC code and provides confidence in future applications.

  17. A Radiation Transfer Solver for Athena Using Short Characteristics

    NASA Astrophysics Data System (ADS)

    Davis, Shane W.; Stone, James M.; Jiang, Yan-Fei

    2012-03-01

    We describe the implementation of a module for the Athena magnetohydrodynamics (MHD) code that solves the time-independent, multi-frequency radiative transfer (RT) equation on multidimensional Cartesian simulation domains, including scattering and non-local thermodynamic equilibrium (non-LTE) effects. The module is based on well-known and well-tested algorithms developed for modeling stellar atmospheres, including the method of short characteristics to solve the RT equation, accelerated Lambda iteration to handle scattering and non-LTE effects, and parallelization via domain decomposition. The module serves several purposes: it can be used to generate spectra and images, to compute a variable Eddington tensor (VET) for full radiation MHD simulations, and to calculate the heating and cooling source terms in the MHD equations in flows where radiation pressure is small compared with gas pressure. For the latter case, the module is combined with the standard MHD integrators using operator splitting: we describe this approach in detail, including a new constraint on the time step for stability due to radiation diffusion modes. Implementation of the VET method for radiation-pressure-dominated flows is described in a companion paper. We present results from a suite of test problems for both the RT solver itself and for dynamical problems that include radiative heating and cooling. These tests demonstrate that the radiative transfer solution is accurate and confirm that the operator split method is stable, convergent, and efficient for problems of interest. We demonstrate that there is no need to adopt ad hoc assumptions of questionable accuracy to solve RT problems in concert with MHD: the computational cost of our general-purpose module for simple (e.g., LTE gray) problems can be comparable to or less than that of a single time step of Athena's MHD integrators, and only a few times higher for more general (non-LTE) problems.

  18. A low-rank matrix recovery approach for energy efficient EEG acquisition for a wireless body area network.

    PubMed

    Majumdar, Angshul; Gogna, Anupriya; Ward, Rabab

    2014-08-25

    We address the problem of acquiring and transmitting EEG signals in Wireless Body Area Networks (WBANs) in an energy efficient fashion. In WBANs, the energy is consumed by three operations: sensing (sampling), processing, and transmission. Previous studies only addressed the problem of reducing the transmission energy. For the first time, in this work, we propose a technique to reduce sensing and processing energy as well: this is achieved by randomly under-sampling the EEG signal. We depart from previous Compressed Sensing based approaches and formulate signal recovery (from under-sampled measurements) as a matrix completion problem. A new algorithm to solve the matrix completion problem is derived here. We test our proposed method and find that its reconstruction accuracy is significantly better than that of state-of-the-art techniques, and we achieve this while saving sensing, processing, and transmission energy. Simple power analysis shows that our proposed methodology consumes considerably less power compared to previous CS-based techniques.
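    The matrix-completion step can be sketched with a textbook singular-value-thresholding iteration, standing in for the new algorithm derived in the paper; the low-rank test matrix, sampling rate, and threshold below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed low-rank "signal" matrix and a random sampling mask (60% observed).
m, n, rank = 40, 30, 3
M = rng.standard_normal((m, rank)) @ rng.standard_normal((rank, n))
mask = rng.random((m, n)) < 0.6
Y = np.where(mask, M, 0.0)

def complete(Y, mask, tau=5.0, n_iter=300):
    """Iterative singular-value soft-thresholding with data consistency on observed entries."""
    X = Y.copy()
    for _ in range(n_iter):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U * np.maximum(s - tau, 0.0)) @ Vt      # shrink singular values
        X[mask] = Y[mask]                            # keep observed samples exact
    return X

X_hat = complete(Y, mask)
err = np.linalg.norm((X_hat - M)[~mask]) / np.linalg.norm(M[~mask])
print("relative error on the unobserved entries:", round(err, 3))
```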

  19. Probabilistic Component Mode Synthesis of Nondeterministic Substructures

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ferri, Aldo A.

    1996-01-01

    Standard methods of structural dynamic analysis assume that the structural characteristics are deterministic. Recognizing that these characteristics are actually statistical in nature, researchers have recently developed a variety of methods that use this information to determine probabilities of a desired response characteristic, such as natural frequency, without using expensive Monte Carlo simulations. One of the problems with these methods is correctly identifying the statistical properties of primitive variables such as geometry, stiffness, and mass. We present a method where the measured dynamic properties of substructures are used instead as the random variables. The residual flexibility method of component mode synthesis is combined with the probabilistic methods to determine the cumulative distribution function of the system eigenvalues. A simple cantilever beam test problem is presented that illustrates the theory.

  20. A functional renormalization method for wave propagation in random media

    NASA Astrophysics Data System (ADS)

    Lamagna, Federico; Calzetta, Esteban

    2017-08-01

    We develop the exact renormalization group approach as a way to evaluate the effective speed of propagation of a scalar wave in a medium with random inhomogeneities. We use the Martin-Siggia-Rose formalism to translate the problem into a nonequilibrium field theory one, and then consider a sequence of models with a progressively lower infrared cutoff; in the limit where the cutoff is removed we recover the problem of interest. As a test of the formalism, we compute the effective dielectric constant of a homogeneous medium interspersed with randomly located, interpenetrating bubbles. A simple approximation to the renormalization group equations turns out to be equivalent to a self-consistent two-loop evaluation of the effective dielectric constant.

  1. Comparison of iterative inverse coarse-graining methods

    NASA Astrophysics Data System (ADS)

    Rosenberger, David; Hanke, Martin; van der Vegt, Nico F. A.

    2016-10-01

    Deriving potentials for coarse-grained Molecular Dynamics (MD) simulations is frequently done by solving an inverse problem. Methods like Iterative Boltzmann Inversion (IBI) or Inverse Monte Carlo (IMC) have been widely used to solve this problem. The solution obtained by application of these methods guarantees a match in the radial distribution function (RDF) between the underlying fine-grained system and the derived coarse-grained system. However, these methods often fail in reproducing thermodynamic properties. To overcome this deficiency, additional thermodynamic constraints such as pressure or Kirkwood-Buff integrals (KBI) may be added to these methods. In this communication we test the ability of these methods to converge to a known solution of the inverse problem. With this goal in mind we have studied a binary mixture of two simple Lennard-Jones (LJ) fluids, in which no actual coarse-graining is performed. We further discuss whether full convergence is actually needed to achieve thermodynamic representability.
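    The Iterative Boltzmann Inversion step referred to above updates the pair potential from the mismatch between the current and target RDFs. A minimal sketch of one update (the generic formula with assumed toy RDFs, not the authors' implementation):

```python
import numpy as np

kB_T = 2.494  # kJ/mol at 300 K (assumed units)

def ibi_update(U_current, g_current, g_target, damping=0.2, eps=1e-12):
    """One Iterative Boltzmann Inversion step:
    U_{i+1}(r) = U_i(r) + damping * kB*T * ln(g_i(r) / g_target(r))."""
    correction = kB_T * np.log((g_current + eps) / (g_target + eps))
    return U_current + damping * correction

# Assumed toy RDFs on a radial grid, just to exercise the update.
r = np.linspace(0.3, 1.5, 120)
g_target = 1.0 + 0.4 * np.exp(-((r - 0.5) / 0.08) ** 2)
g_current = 1.0 + 0.3 * np.exp(-((r - 0.55) / 0.10) ** 2)
U0 = -kB_T * np.log(g_target + 1e-12)          # potential of mean force as initial guess
U1 = ibi_update(U0, g_current, g_target)
print("max potential change (kJ/mol):", round(np.max(np.abs(U1 - U0)), 4))
```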

  2. Driving Parameters for Distributed and Centralized Air Transportation Architectures

    NASA Technical Reports Server (NTRS)

    Feron, Eric

    2001-01-01

    This report considers the problem of intersecting aircraft flows under decentralized conflict avoidance rules. Using an Eulerian standpoint (aircraft flow through a fixed control volume), new air traffic control models and scenarios are defined that enable the study of long-term airspace stability problems. Considering a class of two intersecting aircraft flows, it is shown that airspace stability, defined both in terms of safety and performance, is preserved under decentralized conflict resolution algorithms. Performance bounds are derived for the aircraft flow problem under different maneuver models. Besides analytical approaches, numerical examples are presented to test the theoretical results, as well as to generate some insight about the structure of the traffic flow after resolution. Considering more than two intersecting aircraft flows, simulations indicate that flow stability may not be guaranteed under simple conflict avoidance rules. Finally, a comparison is made with centralized strategies to conflict resolution.

  3. The contact sport of rough surfaces

    NASA Astrophysics Data System (ADS)

    Carpick, Robert W.

    2018-01-01

    Describing the way two surfaces touch and make contact may seem simple, but it is not. Fully describing the elastic deformation of ideally smooth contacting bodies, under even low applied pressure, involves second-order partial differential equations and fourth-rank elastic constant tensors. For more realistic rough surfaces, the problem becomes a multiscale exercise in surface-height statistics, even before including complex phenomena such as adhesion, plasticity, and fracture. A recent research competition, the “Contact Mechanics Challenge” (1), was designed to test various approximate methods for solving this problem. A hypothetical rough surface was generated, and the community was invited to model contact with this surface with competing theories for the calculation of properties, including contact area and pressure. A supercomputer-generated numerical solution was kept secret until competition entries were received. The comparison of results (2) provides insights into the relative merits of competing models and even experimental approaches to the problem.

  4. The Lippmann-Dewey "Debate" Revisited: The Problem of Knowledge and the Role of Experts in Modern Democratic Theory

    ERIC Educational Resources Information Center

    DeCesare, Tony

    2012-01-01

    With only some fear of oversimplification, the fundamental differences between Walter Lippmann and John Dewey that are of concern here can be introduced by giving attention to Lippmann's deceptively simple formulation of a central problem in democratic theory: "The environment is complex. Man's political capacity is simple. Can a bridge be built…

  5. Interference and problem size effect in multiplication fact solving: Individual differences in brain activations and arithmetic performance.

    PubMed

    De Visscher, Alice; Vogel, Stephan E; Reishofer, Gernot; Hassler, Eva; Koschutnig, Karl; De Smedt, Bert; Grabner, Roland H

    2018-05-15

    In the development of math ability, a large variability in performance on simple arithmetic problems is observed, and it has not yet found a compelling explanation. One robust effect in simple multiplication facts is the problem size effect, indicating better performance for small problems compared to large ones. Recently, behavioral studies have brought to light another effect in multiplication facts, the interference effect: high interfering problems (receiving more proactive interference from previously learned problems, in terms of physical feature overlap, namely the digits; De Visscher and Noël, 2014) are more difficult to retrieve than low interfering problems. At the behavioral level, sensitivity to the interference effect has been shown to explain individual differences in multiplication performance in children as well as in adults. The aim of the present study was to investigate individual differences in multiplication ability in relation to the neural interference effect and the neural problem size effect. To that end, we used a paradigm developed by De Visscher, Berens, et al. (2015) that contrasts the interference effect and the problem size effect in a multiplication verification task during functional magnetic resonance imaging (fMRI) acquisition. Forty-two healthy adults, who showed high variability in an arithmetic fluency test, participated in our fMRI study. In order to control for general reasoning level, IQ was taken into account in the individual differences analyses. Our findings revealed a neural interference effect linked to individual differences in multiplication in the left inferior frontal gyrus, while controlling for IQ. This interference effect in the left inferior frontal gyrus showed a negative relation with individual differences in arithmetic fluency, indicating a higher interference effect for low performers compared to high performers. This region is suggested in the literature to be involved in the resolution of proactive interference. In addition, no correlation between the neural problem size effect and multiplication performance was found. This study supports the idea that interference due to similarities/overlap of physical traits (the digits) is crucial in memorizing arithmetic facts and in determining individual differences in arithmetic. Copyright © 2018 Elsevier Inc. All rights reserved.

  6. Development of small scale cluster computer for numerical analysis

    NASA Astrophysics Data System (ADS)

    Zulkifli, N. H. N.; Sapit, A.; Mohammed, A. N.

    2017-09-01

    In this study, two personal computers were successfully networked together to form a small-scale cluster. Each of the processors involved is a multicore processor with four cores, giving the cluster eight processor cores in total. The cluster runs an Ubuntu 14.04 Linux environment with an MPI implementation (MPICH2). Two main tests were conducted on the cluster: a communication test and a performance test. The communication test verified that the computers are able to pass the required information without any problem and was done using a simple MPI "Hello" program written in C. The performance test was done to show that the cluster's computational performance is better than that of a single-CPU computer. In this performance test, the same code was run four times, on a single processor, 2 processors, 4 processors, and 8 processors. The results show that with additional processors, the time required to solve the problem decreases; the time required for the calculation is roughly halved when the number of processors is doubled. To conclude, we successfully developed a small-scale cluster computer using common hardware that is capable of higher computing power than a single-CPU machine, which can be beneficial for research that requires high computing power, especially numerical analysis such as finite element analysis, computational fluid dynamics, and computational physics.
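    The communication check described above uses an MPI "Hello" program written in C; a rough Python analogue using mpi4py (the hostname printout and the small reduction are illustrative assumptions, not the original program) might look like:

```python
# Run with, e.g.:  mpiexec -n 8 python hello_cluster.py
from mpi4py import MPI
import socket

comm = MPI.COMM_WORLD
rank = comm.Get_rank()
size = comm.Get_size()

# Each process reports where it is running -- the basic "hello" communication check.
print(f"rank {rank} of {size} on host {socket.gethostname()}")

# A tiny collective operation to confirm that data moves between the nodes.
total = comm.reduce(rank, op=MPI.SUM, root=0)
if rank == 0:
    print("sum of ranks gathered at root:", total)
```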

  7. Study on shear properties of coral sand under cyclic simple shear condition

    NASA Astrophysics Data System (ADS)

    Ji, Wendong; Zhang, Yuting; Jin, Yafei

    2018-05-01

    In recent years, ocean development in our country has urgently needed to be accelerated, and the construction of artificial coral reefs has become an important development direction. In this paper, experimental studies of simple shear and cyclic simple shear of coral sand are carried out, and the shear properties and particle breakage of coral sand are analyzed. The results show that the coral sand samples exhibit an overall shear failure in the simple shear test, which makes this test more accurate and effective for studying particle breakage. The shear displacement corresponding to the peak shear stress in the simple shear test is significantly larger than that in the direct shear test. The degree of particle breakage caused by the simple shear test is significantly related to the normal stress level. The particle breakage of coral sand after the cyclic simple shear test increases markedly compared with that in the simple shear test, and particle breakage occurs across the whole particle size range. Increasing the number of cycles in the cyclic simple shear test results in continuous compaction of the sample, so that the envelope curve of peak shear force increases with the accumulated shear displacement.

  8. A survey of methods of feasible directions for the solution of optimal control problems

    NASA Technical Reports Server (NTRS)

    Polak, E.

    1972-01-01

    Three methods of feasible directions for optimal control are reviewed. These methods are an extension of the Frank-Wolfe method, a dual method devised by Pironneau and Polak, and a method of the Zoutendijk type. The categories of continuous optimal control problems considered are: (1) fixed-time problems with fixed initial state, free terminal state, and simple constraints on the control; (2) fixed-time problems with inequality constraints on both the initial and the terminal state and no control constraints; (3) free-time problems with inequality constraints on the initial and terminal states and simple constraints on the control; and (4) fixed-time problems with inequality state-space constraints and constraints on the control. The nonlinear programming algorithms are derived for each of the methods in its associated category.

  9. Laser absorption phenomena in flowing gas devices

    NASA Technical Reports Server (NTRS)

    Chapman, P. K.; Otis, J. H.

    1976-01-01

    A theoretical and experimental investigation is presented of inverse Bremsstrahlung absorption of CW CO2 laser radiation in flowing gases seeded with alkali metals. In order to motivate this development, simple models of several space missions which could use laser-powered rocket vehicles are described. Design considerations are given for a test cell to be used with a welding laser, using a diamond window for admission of laser radiation at power levels in excess of 10 kW. A detailed analysis of absorption conditions in the test cell is included. The experimental apparatus and test setup are described and the results of experiments presented. Injection of alkali seedant and steady-state absorption of the laser radiation were successfully demonstrated, but problems with the durability of the diamond windows at higher powers prevented operation of the test cell as an effective laser-powered thruster.

  10. Null but not void: considerations for hypothesis testing.

    PubMed

    Shaw, Pamela A; Proschan, Michael A

    2013-01-30

    Standard statistical theory teaches us that once the null and alternative hypotheses have been defined for a parameter, the choice of the statistical test is clear. Standard theory does not teach us how to choose the null or alternative hypothesis appropriate to the scientific question of interest. Neither does it tell us that in some cases, depending on which alternatives are realistic, we may want to define our null hypothesis differently. Problems in statistical practice are frequently not as pristinely summarized as the classic theory in our textbooks. In this article, we present examples in statistical hypothesis testing in which seemingly simple choices are in fact rich with nuance that, when given full consideration, make the choice of the right hypothesis test much less straightforward. Published 2012. This article is a US Government work and is in the public domain in the USA.

  11. Orthodontics for the dog. Treatment methods.

    PubMed

    Ross, D L

    1986-09-01

    This article considers the prevention of orthodontic problems, occlusal adjustments, simple tooth movements, rotational techniques, tipping problems, adjustment of crown height, descriptions of common orthodontic appliances, and problems associated with therapy.

  12. Contribution of a new electrophysiologic test to Morton's neuroma diagnosis.

    PubMed

    Pardal-Fernández, José Manuel; Palazón-García, Elena; Hernández-Fernández, Francisco; de Cabo, Carlos

    2014-06-01

    Morton's neuroma causes metatarsalgia due to interdigital neuropathy. The small diameter of the nerve compromises its evaluation in imaging studies. To overcome this problem we propose a new electrophysiological test. We conducted a prospective case-control study performing orthodromic electroneurography (ENG) using subdermal electrodes in controls and patients to assess the validity of the test. Additionally, all patients were examined with magnetic resonance imaging. Some patients required surgery and subsequent histological evaluation. The new ENG procedure showed higher sensitivity and specificity. Methodological standardization was easy and the test was well tolerated by the subjects. Our test demonstrated remarkable diagnostic efficiency, and it was also able to identify symptomatic patients undetected by magnetic resonance imaging, which underlines the lack of correlation between the size and the intensity of the lesion. This new electrophysiological method appears to be a highly sensitive, well-tolerated, simple, and low-cost tool for Morton's neuroma diagnosis. Copyright © 2014 European Foot and Ankle Society. Published by Elsevier Ltd. All rights reserved.

  13. Generating compact classifier systems using a simple artificial immune system.

    PubMed

    Leung, Kevin; Cheong, France; Cheong, Christopher

    2007-10-01

    Current artificial immune system (AIS) classifiers have two major problems: 1) their populations of B-cells can grow to huge proportions, and 2) optimizing one B-cell (part of the classifier) at a time does not necessarily guarantee that the B-cell pool (the whole classifier) will be optimized. In this paper, the design of a new AIS algorithm and classifier system called simple AIS is described. It is different from traditional AIS classifiers in that it takes only one B-cell, instead of a B-cell pool, to represent the classifier. This approach ensures global optimization of the whole system, and in addition, no population control mechanism is needed. The classifier was tested on seven benchmark data sets using different classification techniques and was found to be very competitive when compared to other classifiers.

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maestas, J.H.

    The Loopback Tester is an Intel SBC 86/12A Single Board Computer and an Intel SBC 534 Communications Expansion Board configured and programmed to perform various basic tests. These tests include: (1) Data Communications Equipment (DCE) transmit timing detection, (2) data rate measurement, (3) instantaneous loopback indication, and (4) bit error rate testing. It requires no initial setup after plug-in, and can be used to locate the source of communications loss in a circuit. It can also be used to determine when crypto variable mismatch problems are the source of communications loss. This report discusses the functionality of the Loopback Tester as a diagnostic device. It also discusses the hardware and software which implement this simple yet reliable device.

  15. Method for evaluating wind turbine wake effects on wind farm performance

    NASA Technical Reports Server (NTRS)

    Neustadter, H. E.; Spera, D. A.

    1985-01-01

    A method of testing the performance of a cluster of wind turbine units and data analysis equations are presented which together form a simple and direct procedure for determining the reduction in energy output caused by the wake of an upwind turbine. This method appears to solve the problems presented by data scatter and wind variability. Test data from the three-unit Mod-2 wind turbine cluster at Goldendale, Washington, are analyzed to illustrate the application of the proposed method. In this sample case the reduction in energy was found to be about 10 percent when the Mod-2 units were separated by a distance equal to seven diameters and winds were below rated.

  16. Syncope: causes, clinical evaluation, and current therapy.

    PubMed

    Benditt, D G; Remole, S; Milstein, S; Bailin, S

    1992-01-01

    Syncope is a common clinical problem comprising the sudden loss of both consciousness and postural tone, with a subsequent spontaneous and relatively prompt recovery. Often it is difficult to differentiate a true syncopal spell from other conditions, such as seizure disorders, or from some simple accidents. Even more difficult is the identification of the cause of syncopal episodes. Nonetheless, establishing a definitive diagnosis is an important task given the high risk of recurrent symptoms. Careful use of noninvasive and invasive cardiovascular studies (including electrophysiologic testing and tilt-table testing) along with selected hematologic, biochemical, and neurologic studies provides, in the majority of cases, the most effective strategy for obtaining a specific diagnosis and for directing therapy.

  17. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wollaber, Allan Benton; Park, HyeongKae; Lowrie, Robert Byron

    Recent efforts at Los Alamos National Laboratory to develop a moment-based, scale-bridging [or high-order (HO)–low-order (LO)] algorithm for solving large varieties of the transport (kinetic) systems have shown promising results. A part of our ongoing effort is incorporating this methodology into the framework of the Eulerian Applications Project to achieve algorithmic acceleration of radiation-hydrodynamics simulations in production software. By starting from the thermal radiative transfer equations with a simple material-motion correction, we derive a discretely consistent energy balance equation (LO equation). We demonstrate that the corresponding LO system for the Monte Carlo HO solver is closely related to the original LO system without material-motion corrections. We test the implementation on a radiative shock problem and show consistency between the energy densities and temperatures in the HO and LO solutions as well as agreement with the semianalytic solution. We also test the approach on a more challenging two-dimensional problem and demonstrate accuracy enhancements and algorithmic speedups. This paper extends a recent conference paper by including multigroup effects.

  18. Year 2000 compliance issues.

    PubMed

    1999-03-01

    This month, we continue our coverage of the year 2000 (Y2K) problem as it affects healthcare facilities and the professionals who work in them. We present the following articles: "Checking PCs for Y2K Compliance"--In this article, we describe the probable sources of Y2K-related errors in PCs and present simple procedures for testing the Y2K compliance of PCs and application software. "Y2K Assessment Equipment Expectations"--In this article, we review the Y2K compliance data from a small sampling of hospitals to help answer the question "What percentage of medical equipment will likely be susceptible to Y2K problems?" "Y2K Labeling of Medical Devices"--In this article, we discuss the pros and cons of instituting a program to label each medical device with its Y2K status. Also in this section, we present an updated list of organizations that support ECRI's Position Statement on the testing of medical devices for Y2K compliance, which we published in the December 1998 issue of Health Devices (27[12]). And we remind readers of the services ECRI can offer to help healthcare institutions cope with the Y2K problem.

  19. Hardware problems encountered in solar heating and cooling systems

    NASA Technical Reports Server (NTRS)

    Cash, M.

    1978-01-01

    Numerous problems in the design, production, installation, and operation of solar energy systems are discussed. The hardware problems described range from the simple to the obscure and complex, and their resolution is presented.

  20. Numerical Modeling in Geodynamics: Success, Failure and Perspective

    NASA Astrophysics Data System (ADS)

    Ismail-Zadeh, A.

    2005-12-01

    Real success in the numerical modeling of the Earth's dynamics can be achieved only by multidisciplinary research teams of experts in geodynamics, applied and pure mathematics, and computer science. Success in numerical modeling rests on the following basic, but simple, rules. (i) People need simplicity most, but they understand intricacies best (B. Pasternak, writer). Start from a simple numerical model, which describes basic physical laws by a set of mathematical equations, and only then move to a complex model. Never start from a complex model, because you cannot understand the contribution of each term of the equations to the modeled geophysical phenomenon. (ii) Study the numerical methods behind your computer code. Otherwise it becomes difficult to distinguish true from erroneous solutions to the geodynamic problem, especially when the problem is complex. (iii) Test your model against analytical and asymptotic solutions and against simple 2D and 3D model examples. Develop benchmark analyses of different numerical codes and compare numerical results with laboratory experiments. Remember that the numerical tool you employ is not perfect, and there are small bugs in every computer code; therefore testing is the most important part of your numerical modeling. (iv) Prove (if possible) or learn relevant statements concerning the existence, uniqueness, and stability of the solution to the mathematical and discrete problems. Otherwise you may solve an improperly posed problem, and the results of the modeling will be far from the true solution of your model problem. (v) Try to analyze numerical models of a geological phenomenon using as few tuning variables as possible. Even two tuning variables give enough freedom to constrain your model reasonably well with respect to observations. Data fitting is sometimes quite attractive and can take you far from the principal aim of your numerical modeling: to understand geophysical phenomena. (vi) If the number of tuning model variables is greater than two, test carefully the effect of each variable on the modeled phenomenon. Remember: with four exponents I can fit an elephant (E. Fermi, physicist). (vii) Make your numerical model as accurate as possible, but never make extreme accuracy the aim: undue precision of computations is the first symptom of mathematical illiteracy (N. Krylov, mathematician). How complex should a numerical model be? A model which images every detail of reality is as useful as a map of scale 1:1 (J. Robinson, economist). This message is quite important for geoscientists who study numerical models of complex geodynamical processes. I believe that geoscientists will never create a model of the real dynamics of the Earth, but we should try to model the dynamics in such a way as to simulate the basic geophysical processes and phenomena. Does a particular model have predictive power? Every numerical model has some predictive power; otherwise the model is useless. The predictability of the model varies with its complexity. Remember that a solution of the numerical model is an approximate solution of the equations, which have been chosen in the belief that they describe the dynamic processes of the Earth. Hence a numerical model predicts the dynamics of the Earth only as well as the mathematical equations describe that dynamics. What methodological advances are still needed for testable geodynamic modeling? Inverse (time-reverse) numerical modeling and data assimilation are new methodologies in geodynamics. Inverse modeling makes it possible to test geodynamic models forward in time using initial conditions restored from present-day observations instead of unknown initial conditions.

  1. A sequential solution for anisotropic total variation image denoising with interval constraints

    NASA Astrophysics Data System (ADS)

    Xu, Jingyan; Noo, Frédéric

    2017-09-01

    We show that two problems involving the anisotropic total variation (TV) and interval constraints on the unknown variables admit, under some conditions, a simple sequential solution. Problem 1 is a constrained TV penalized image denoising problem; problem 2 is a constrained fused lasso signal approximator. The sequential solution entails finding first the solution to the unconstrained problem, and then applying a thresholding to satisfy the constraints. If the interval constraints are uniform, this sequential solution solves problem 1. If the interval constraints furthermore contain zero, the sequential solution solves problem 2. Here uniform interval constraints refer to all unknowns being constrained to the same interval. A typical example of application is image denoising in x-ray CT, where the image intensities are non-negative as they physically represent linear attenuation coefficient in the patient body. Our results are simple yet seem unknown; we establish them using the Karush-Kuhn-Tucker conditions for constrained convex optimization.
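
    The sequential structure described above (solve the unconstrained problem, then threshold to satisfy the constraints) can be sketched in a few lines of Python. This is a minimal illustration, not the authors' code: skimage's denoise_tv_chambolle (an isotropic TV solver) stands in for the anisotropic solver discussed in the abstract, and the uniform interval [lo, hi] and all numbers are assumed for the example.

      # Minimal sketch of the sequential solution (assumptions noted above):
      # step 1 solves an unconstrained TV denoising problem; step 2 projects the
      # result onto the uniform interval constraints [lo, hi] by simple clipping.
      import numpy as np
      from skimage.restoration import denoise_tv_chambolle

      def sequential_tv_denoise(y, weight=0.1, lo=0.0, hi=1.0):
          """Unconstrained TV denoising followed by projection onto [lo, hi]."""
          x_unconstrained = denoise_tv_chambolle(y, weight=weight)  # step 1
          return np.clip(x_unconstrained, lo, hi)                   # step 2

      # toy usage: a noisy non-negative "attenuation" image
      rng = np.random.default_rng(0)
      truth = np.zeros((64, 64)); truth[16:48, 16:48] = 0.8
      noisy = truth + 0.15 * rng.standard_normal(truth.shape)
      denoised = sequential_tv_denoise(noisy, weight=0.1, lo=0.0, hi=1.0)
      print(denoised.min(), denoised.max())  # the result respects the interval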

  2. Individualized Math Problems in Simple Equations. Oregon Vo-Tech Mathematics Problem Sets.

    ERIC Educational Resources Information Center

    Cosler, Norma, Ed.

    This is one of eighteen sets of individualized mathematics problems developed by the Oregon Vo-Tech Math Project. Each of these problem packages is organized around a mathematical topic and contains problems related to diverse vocations. Solutions are provided for all problems. Problems in this volume require solution of linear equations, systems…

  3. Statistical methodologies for the control of dynamic remapping

    NASA Technical Reports Server (NTRS)

    Saltz, J. H.; Nicol, D. M.

    1986-01-01

    Following an initial mapping of a problem onto a multiprocessor machine or computer network, system performance often deteriorates with time. In order to maintain high performance, it may be necessary to remap the problem. The decision to remap must take into account measurements of performance deterioration, the cost of remapping, and the estimated benefits achieved by remapping. We examine the tradeoff between the costs and the benefits of remapping two qualitatively different kinds of problems. One problem assumes that performance deteriorates gradually, the other assumes that performance deteriorates suddenly. We consider a variety of policies for governing when to remap. In order to evaluate these policies, statistical models of problem behaviors are developed. Simulation results are presented which compare simple policies with computationally expensive optimal decision policies; these results demonstrate that for each problem type, the proposed simple policies are effective and robust.
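
    As an illustration of the kind of policy being compared, the sketch below implements a naive threshold rule for the gradual-deterioration case: remap whenever the performance lost since the last remapping exceeds the fixed remapping cost. The linear degradation model and all constants are assumptions for illustration only, not the statistical models developed in the report.

      # Hypothetical illustration of a simple remapping policy for gradually
      # degrading performance: remap when the accumulated extra cost since the
      # last remap exceeds the (fixed) cost of remapping.
      def simulate(remap_cost=50.0, drift=0.5, steps=200):
          total_cost, extra_since_remap, degradation = 0.0, 0.0, 0.0
          remaps = 0
          for _ in range(steps):
              total_cost += 1.0 + degradation      # base step cost plus degradation
              extra_since_remap += degradation
              degradation += drift                 # performance deteriorates gradually
              if extra_since_remap > remap_cost:   # simple threshold policy
                  total_cost += remap_cost
                  degradation, extra_since_remap = 0.0, 0.0
                  remaps += 1
          return total_cost, remaps

      print(simulate())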

  4. Simple and Effective Algorithms: Computer-Adaptive Testing.

    ERIC Educational Resources Information Center

    Linacre, John Michael

    Computer-adaptive testing (CAT) allows improved security, greater scoring accuracy, shorter testing periods, quicker availability of results, and reduced guessing and other undesirable test behavior. Simple approaches can be applied by the classroom teacher, or other content specialist, who possesses simple computer equipment and elementary…

  5. Fracture mechanics life analytical methods verification testing

    NASA Technical Reports Server (NTRS)

    Favenesi, J. A.; Clemmons, T. G.; Lambert, T. J.

    1994-01-01

    Verification and validation of the basic information capabilities in NASCRAC has been completed. The basic information includes computation of K versus a, J versus a, and crack opening area versus a. These quantities represent building blocks which NASCRAC uses in its other computations such as fatigue crack life and tearing instability. Several methods were used to verify and validate the basic information capabilities. The simple configurations such as the compact tension specimen and a crack in a finite plate were verified and validated versus handbook solutions for simple loads. For general loads using weight functions, offline integration using standard FORTRAN routines was performed. For more complicated configurations such as corner cracks and semielliptical cracks, NASCRAC solutions were verified and validated versus published results and finite element analyses. A few minor problems were identified in the basic information capabilities of the simple configurations. In the more complicated configurations, significant differences between NASCRAC and reference solutions were observed because NASCRAC calculates its solutions as averaged values across the entire crack front whereas the reference solutions were computed for a single point.
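
    For readers unfamiliar with the "K versus a" building block mentioned above, the sketch below evaluates a generic textbook stress intensity factor: a center crack in a finite-width plate under remote tension, with the standard secant finite-width correction. It only illustrates the kind of handbook solution used in such verification; it is not one of NASCRAC's own solutions, and the stress and geometry values are assumed.

      # Generic textbook K-versus-a calculation: center crack of half-length a in
      # a plate of width W under remote stress sigma, with the secant correction.
      import math

      def stress_intensity(sigma, a, W):
          """Mode-I stress intensity factor for a center-cracked finite-width plate."""
          k_infinite = sigma * math.sqrt(math.pi * a)
          correction = math.sqrt(1.0 / math.cos(math.pi * a / W))   # secant correction
          return k_infinite * correction

      sigma, W = 100.0, 0.2            # MPa, plate width in m (illustrative values)
      for a in (0.01, 0.02, 0.04, 0.06):
          print(f"a = {a:.2f} m   K = {stress_intensity(sigma, a, W):.1f} MPa*sqrt(m)")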

  6. Use of refractometry and colorimetry as field methods to rapidly assess antimalarial drug quality.

    PubMed

    Green, Michael D; Nettey, Henry; Villalva Rojas, Ofelia; Pamanivong, Chansapha; Khounsaknalath, Lamphet; Grande Ortiz, Miguel; Newton, Paul N; Fernández, Facundo M; Vongsack, Latsamy; Manolin, Ot

    2007-01-04

    The proliferation of counterfeit and poor-quality drugs is a major public health problem, especially in developing countries lacking adequate resources to effectively monitor their prevalence. Simple and affordable field methods provide a practical means of rapidly monitoring drug quality in circumstances where more advanced techniques are not available. Therefore, we have evaluated refractometry, colorimetry and a technique combining both processes as simple and accurate field assays to rapidly test the quality of the commonly available antimalarial drugs: artesunate, chloroquine, quinine, and sulfadoxine. Method bias, sensitivity, specificity and accuracy relative to high-performance liquid chromatographic (HPLC) analysis of drugs collected in the Lao PDR were assessed for each technique. The HPLC method for each drug was evaluated in terms of assay variability and accuracy. The accuracy of the combined method ranged from 0.96 to 1.00 for artesunate tablets, chloroquine injectables, quinine capsules, and sulfadoxine tablets while the accuracy was 0.78 for enterically coated chloroquine tablets. These techniques provide a generally accurate, yet simple and affordable means to assess drug quality in resource-poor settings.

  7. Simple adaptive sparse representation based classification schemes for EEG based brain-computer interface applications.

    PubMed

    Shin, Younghak; Lee, Seungchan; Ahn, Minkyu; Cho, Hohyun; Jun, Sung Chan; Lee, Heung-No

    2015-11-01

    One of the main problems related to electroencephalogram (EEG) based brain-computer interface (BCI) systems is the non-stationarity of the underlying EEG signals. This results in the deterioration of the classification performance during experimental sessions. Therefore, adaptive classification techniques are required for EEG based BCI applications. In this paper, we propose simple adaptive sparse representation based classification (SRC) schemes. Supervised and unsupervised dictionary update techniques for new test data and a dictionary modification method by using the incoherence measure of the training data are investigated. The proposed methods are very simple and additional computation for the re-training of the classifier is not needed. The proposed adaptive SRC schemes are evaluated using two BCI experimental datasets. The proposed methods are assessed by comparing classification results with the conventional SRC and other adaptive classification methods. On the basis of the results, we find that the proposed adaptive schemes show relatively improved classification accuracy as compared to conventional methods without requiring additional computation. Copyright © 2015 Elsevier Ltd. All rights reserved.
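
    The sketch below shows a generic sparse representation based classification step of the kind the abstract builds on: a test trial is sparse-coded over a dictionary of training trials (orthogonal matching pursuit here), and the class with the smallest class-wise reconstruction residual wins. It is a simplified stand-in, not the authors' adaptive dictionary-update schemes, and the random "EEG feature" vectors are purely illustrative.

      # Generic SRC sketch: sparse-code a test vector over a dictionary of
      # training trials, then classify by the smallest class-wise residual.
      import numpy as np
      from sklearn.linear_model import OrthogonalMatchingPursuit

      def src_classify(D, labels, y, n_nonzero=10):
          omp = OrthogonalMatchingPursuit(n_nonzero_coefs=n_nonzero,
                                          fit_intercept=False).fit(D, y)
          x = omp.coef_
          residuals = {}
          for c in np.unique(labels):
              xc = np.where(labels == c, x, 0.0)       # keep only class-c coefficients
              residuals[c] = np.linalg.norm(y - D @ xc)
          return min(residuals, key=residuals.get), residuals

      # toy usage with random "EEG feature" vectors (illustrative only)
      rng = np.random.default_rng(1)
      D = rng.standard_normal((64, 40))                # 40 training trials, 64 features
      labels = np.repeat([0, 1], 20)
      y = D[:, 3] + 0.1 * rng.standard_normal(64)      # noisy copy of a class-0 trial
      pred, res = src_classify(D, labels, y)
      print("predicted class:", pred)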

  8. A simple model of hysteresis behavior using spreadsheet analysis

    NASA Astrophysics Data System (ADS)

    Ehrmann, A.; Blachowicz, T.

    2015-01-01

    Hysteresis loops occur in many scientific and technical problems, especially as the field-dependent magnetization of ferromagnetic materials, but also in stress-strain curves measured in tensile tests, in thermal effects, in liquid-solid phase transitions, in cell biology, and in economics. While several mathematical models exist which aim to calculate hysteresis energies and other parameters, here we offer a simple model for a general hysteretic system, showing different hysteresis loops depending on the defined parameters. The calculation, which is based on basic spreadsheet analysis plus a simple macro code, can be used by students to understand how these systems work and how the parameters influence the response of the system to an external field. Importantly, in the step-by-step mode, each change of the system state relative to the last step becomes visible. The simple program can be developed further through several changes and additions, enabling the building of a tool capable of answering real physical questions in the broad field of magnetism as well as in other scientific areas in which similar hysteresis loops occur.
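
    A minimal Python analogue of such a spreadsheet exercise is sketched below: an ensemble of elementary bistable units with distributed switching fields is driven by a triangular field sweep, and the ensemble average traces out a hysteresis loop. The coercive-field distribution and all parameters are assumptions chosen only to produce a recognisable loop; they are not the article's exact model.

      # Minimal hysteresis sketch: an ensemble of bistable units ("hysterons"),
      # each switching up at +hc and down at -hc, driven by a triangular sweep.
      import numpy as np

      rng = np.random.default_rng(0)
      hc = np.abs(rng.normal(1.0, 0.3, size=500))     # switching fields of the units
      state = -np.ones_like(hc)                       # all units start "down"

      H = np.concatenate([np.linspace(-3, 3, 200), np.linspace(3, -3, 200)])
      M = []
      for h in H:
          state[h >= hc] = 1.0                        # switch up when field exceeds +hc
          state[h <= -hc] = -1.0                      # switch down below -hc
          M.append(state.mean())                      # net magnetisation of the ensemble

      for h, m in zip(H[::50], M[::50]):              # print a coarse trace of the loop
          print(f"H={h:+.2f}  M={m:+.2f}")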

  9. Counterfeit anti-infective drugs.

    PubMed

    Newton, Paul N; Green, Michael D; Fernández, Facundo M; Day, Nicholas P J; White, Nicholas J

    2006-09-01

    The production of counterfeit or substandard anti-infective drugs is a widespread and under-recognised problem that contributes to morbidity, mortality, and drug resistance, and leads to spurious reporting of resistance and toxicity and loss of confidence in health-care systems. Counterfeit drugs particularly affect the most disadvantaged people in poor countries. Although advances in forensic chemical analysis and simple field tests will enhance drug quality monitoring, improved access to inexpensive genuine medicines, support of drug regulatory authorities, more open reporting, vigorous law enforcement, and more international cooperation with determined political leadership will be essential to counter this threat.

  10. A Novel Attitude Determination Algorithm for Spinning Spacecraft

    NASA Technical Reports Server (NTRS)

    Bar-Itzhack, Itzhack Y.; Harman, Richard R.

    2007-01-01

    This paper presents a single frame algorithm for the spin-axis orientation determination of spinning spacecraft that encounters no ambiguity problems, as well as a simple Kalman filter for continuously estimating the full attitude of a spinning spacecraft. The latter algorithm comprises two low-order decoupled Kalman filters; one estimates the spin axis orientation, and the other estimates the spin rate and the spin (phase) angle. The filters are ambiguity free and do not rely on the spacecraft dynamics. They were successfully tested using data obtained from one of the ST5 satellites.

  11. Preliminary study of the use of the STAR-100 computer for transonic flow calculations

    NASA Technical Reports Server (NTRS)

    Keller, J. D.; Jameson, A.

    1977-01-01

    An explicit method for solving the transonic small-disturbance potential equation is presented. This algorithm, which is suitable for the new vector-processor computers such as the CDC STAR-100, is compared to successive line over-relaxation (SLOR) on a simple test problem. The convergence rate of the explicit scheme is slower than that of SLOR; however, the efficiency of the explicit scheme on the STAR-100 computer is sufficient to overcome the slower convergence rate and allow an overall speedup compared to SLOR on the CYBER 175 computer.

  12. High order filtering methods for approximating hyperbolic systems of conservation laws

    NASA Technical Reports Server (NTRS)

    Lafon, F.; Osher, S.

    1991-01-01

    The essentially nonoscillatory (ENO) schemes, while potentially useful in the computation of discontinuous solutions of hyperbolic conservation-law systems, are computationally costly relative to simple central-difference methods. A filtering technique is presented which employs central differencing of arbitrarily high-order accuracy except where a local test detects the presence of spurious oscillations and calls upon the full ENO apparatus to remove them. A factor-of-three speedup is thus obtained over the full-ENO method for a wide range of problems, with high-order accuracy in regions of smooth flow.

  13. Analysis of whisker-toughened CMC structural components using an interactive reliability model

    NASA Technical Reports Server (NTRS)

    Duffy, Stephen F.; Palko, Joseph L.

    1992-01-01

    Realizing wider utilization of ceramic matrix composites (CMC) requires the development of advanced structural analysis technologies. This article focuses on the use of interactive reliability models to predict component probability of failure. The deterministic William-Warnke failure criterion serves as theoretical basis for the reliability model presented here. The model has been implemented into a test-bed software program. This computer program has been coupled to a general-purpose finite element program. A simple structural problem is presented to illustrate the reliability model and the computer algorithm.

  14. Single Axis Attitude Control and DC Bus Regulation with Two Flywheels

    NASA Technical Reports Server (NTRS)

    Kascak, Peter E.; Jansen, Ralph H.; Kenny, Barbara; Dever, Timothy P.

    2002-01-01

    A computer simulation of a flywheel energy storage single axis attitude control system is described. The simulation models hardware which will be experimentally tested in the future. This hardware consists of two counter rotating flywheels mounted to an air table. The air table allows one axis of rotational motion. An inertia DC bus coordinator is set forth that allows the two control problems, bus regulation and attitude control, to be separated. Simulation results are presented with a previously derived flywheel bus regulator and a simple PID attitude controller.

  15. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Viklund, H.I.; Kennedy, R.H.

    Uranium precipitates obtained from Congo leach liquors by an ion exchange process contained more than 0.1% chloride. Attempts were made to reduce the chloride content of typical precipitates by calcination of dried precipitate, releaching of dried precipitate with water, and washing of wet precipitate with water. Washing of wet precipitate with an aqueous solution of 0.25% Na2SO4, to prevent peptization, provided a simple solution to the problem. Precipitation tests on Congo ion exchange eluates showed a marked advantage in subsequent thickening and filtration operations for precipitation from hot solution. (auth)

  16. Optimal Limited Contingency Planning

    NASA Technical Reports Server (NTRS)

    Meuleau, Nicolas; Smith, David E.

    2003-01-01

    For a given problem, the optimal Markov policy over a finite horizon is a conditional plan containing a potentially large number of branches. However, there are applications where it is desirable to strictly limit the number of decision points and branches in a plan. This raises the question of how one goes about finding optimal plans containing only a limited number of branches. In this paper, we present an any-time algorithm for optimal k-contingency planning. It is the first optimal algorithm for limited contingency planning that is not an explicit enumeration of possible contingent plans. By modelling the problem as a partially observable Markov decision process, it implements the Bellman optimality principle and prunes the solution space. We present experimental results of applying this algorithm to some simple test cases.

  17. A simple and fast heuristic for protein structure comparison.

    PubMed

    Pelta, David A; González, Juan R; Moreno Vega, Marcos

    2008-03-25

    Protein structure comparison is a key problem in bioinformatics. Several methods exist for protein comparison; solving the Maximum Contact Map Overlap problem (MAX-CMO) is one of the available alternatives. Although this problem may be solved using exact algorithms, researchers require approximate algorithms that obtain good quality solutions using fewer computational resources than the former. We propose a variable neighborhood search metaheuristic for solving MAX-CMO. We analyze this strategy in two aspects: 1) from an optimization point of view the strategy is tested on two different datasets, obtaining errors of 3.5% (over 2702 pairs) and 1.7% (over 161 pairs) with respect to optimal values, thus leading to highly accurate solutions in a simpler and less expensive way than exact algorithms; 2) in terms of protein structure classification, we conduct experiments on three datasets and show that it is feasible to detect structural similarities at SCOP's family and CATH's architecture levels using normalized overlap values. Some limitations and the role of normalization are outlined for doing classification at SCOP's fold level. We designed, implemented, and tested a new tool for solving MAX-CMO, based on a well-known metaheuristic technique. The good balance between solution quality and computational effort makes it a valuable tool. Moreover, to the best of our knowledge, this is the first time the MAX-CMO measure is tested at SCOP's fold and CATH's architecture levels with encouraging results.
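
    The control flow of a variable neighborhood search is simple enough to sketch on a toy problem. The skeleton below shakes the incumbent in a neighborhood of size k, applies a local search, and either moves (resetting k) or enlarges the neighborhood. Only the control flow mirrors the metaheuristic; the bit-string objective and flip neighborhoods are stand-ins, not the MAX-CMO formulation used in the paper.

      # Generic VNS skeleton on a toy problem (maximise a simple bit-string score).
      import random

      def score(x):                       # toy objective: count of 1-bits
          return sum(x)

      def shake(x, k):                    # flip k random positions
          y = x[:]
          for i in random.sample(range(len(y)), k):
              y[i] ^= 1
          return y

      def local_search(x):                # first-improvement bit-flip local search
          improved = True
          while improved:
              improved = False
              for i in range(len(x)):
                  y = x[:]
                  y[i] ^= 1
                  if score(y) > score(x):
                      x, improved = y, True
          return x

      def vns(n=30, k_max=5, iters=50, seed=0):
          random.seed(seed)
          x = local_search([random.randint(0, 1) for _ in range(n)])
          for _ in range(iters):
              k = 1
              while k <= k_max:
                  y = local_search(shake(x, k))
                  if score(y) > score(x):
                      x, k = y, 1          # move and restart the neighborhood counter
                  else:
                      k += 1               # try a larger neighborhood
          return x

      best = vns()
      print(score(best), "of", len(best))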

  18. The Hole-Count Test Revisited: Effects of Test Specimen Thickness

    NASA Technical Reports Server (NTRS)

    Lyman, C. E.; Ackland, D. W.; Williams, D. B.; Goldstein, J. I.

    1989-01-01

    For historical reasons the hole count, an important performance test for the Analytical Electron Microscope (AEM), is somewhat arbitrary, yielding different numbers for different investigators. This was not a problem a decade ago when AEM specimens were often bathed with large fluxes of stray electrons and hard x rays. At that time the presence or absence of a thick Pt second condenser (C2) aperture could be detected by a simple comparison of the x-ray spectrum taken 'somewhere in the hole' with a spectrum collected on a 'typical thickness' of Mo or Ag foil. A high hole count of about 10-20% indicated that the electron column needed modifications, whereas a hole count of 1-2% was accepted for most AEM work. The absolute level of the hole count is a function of test specimen atomic number, overall specimen shape, and thin-foil thickness. In order that equivalent results may be obtained for any AEM in any laboratory in the world, this test must become standardized. The hole-count test we seek must be as simple and as nonsubjective as the graphite 0.344 nm lattice-line-resolution test. This lattice-resolution test spurred manufacturers to improve the image resolution of the TEM significantly in the 1970s and led to the even more stringent resolution tests of today. A similar phenomenon for AEM instruments would be welcome. The hole-count test can also indicate whether the spurious x-ray signal is generated by high-energy continuum x rays (bremsstrahlung) generated in the electron column (high K-line to L-line ratio) or uncollimated electrons passing through or around the C2 aperture (low K/L ratio).

  19. Measuring CAMD technique performance. 2. How "druglike" are drugs? Implications of Random test set selection exemplified using druglikeness classification models.

    PubMed

    Good, Andrew C; Hermsmeier, Mark A

    2007-01-01

    Research into the advancement of computer-aided molecular design (CAMD) has a tendency to focus on the discipline of algorithm development. Such efforts are often wrought to the detriment of the data set selection and analysis used in said algorithm validation. Here we highlight the potential problems this can cause in the context of druglikeness classification. More rigorous efforts are applied to the selection of decoy (nondruglike) molecules from the ACD. Comparisons are made between model performance using the standard technique of random test set creation with test sets derived from explicit ontological separation by drug class. The dangers of viewing druglike space as sufficiently coherent to permit simple classification are highlighted. In addition the issues inherent in applying unfiltered data and random test set selection to (Q)SAR models utilizing large and supposedly heterogeneous databases are discussed.

  20. Effect of load eccentricity on the buckling of thin-walled laminated C-columns

    NASA Astrophysics Data System (ADS)

    Wysmulski, Pawel; Teter, Andrzej; Debski, Hubert

    2018-01-01

    The study investigates the behaviour of short, thin-walled laminated C-columns under eccentric compression. The tested columns are simply supported. The effect of load inaccuracy on the critical and post-critical (local buckling) states is examined. A numerical analysis by the finite element method and experimental tests on a test stand are performed. The samples were produced from a carbon-epoxy prepreg by the autoclave technique. The experimental tests rest on the assumption that compressive loads are 1.5 times higher than the theoretical critical force. Numerical modelling is performed using the commercial software package ABAQUS®. The critical load is determined by solving an eigenvalue problem using the Subspace algorithm. The experimental critical loads are determined based on post-buckling paths. The numerical and experimental results show high agreement, thus demonstrating a significant effect of load inaccuracy on the critical load corresponding to the column's local buckling.

  1. Development of test methods for textile composites

    NASA Technical Reports Server (NTRS)

    Masters, John E.; Ifju, Peter G.; Fedro, Mark J.

    1993-01-01

    NASA's Advanced Composite Technology (ACT) Program was initiated in 1990 with the purpose of developing less costly composite aircraft structures. A number of innovative materials and processes were evaluated as a part of this effort. Chief among them are composite materials reinforced with textile preforms. These new forms of composite materials bring with them potential testing problems. Methods currently in practice were developed over the years for composite materials made from prepreg tape or simple 2-D woven fabrics. A wide variety of 2-D and 3-D braided, woven, stitched, and knit preforms were suggested for application in the ACT program. The applicability of existing test methods to the wide range of emerging materials bears investigation. The overriding concern is that the values measured are accurate representations of the true material response. The ultimate objective of this work is to establish a set of test methods to evaluate the textile composites developed for the ACT Program.

  2. Comparison of Refractory Performance in Black Liquor Gasifiers and a Smelt Test System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Peascoe, RA

    2001-09-25

    Prior laboratory corrosion studies along with experience at the black liquor gasifier in New Bern, North Carolina, clearly demonstrate that serious material problems exist with the gasifier's refractory lining. Mullite-based and alumina-based refractories used at the New Bern facility suffered significant degradation even though they reportedly performed adequately in smaller scale systems. Oak Ridge National Laboratory's involvement in the failure analysis, and the initial exploration of suitable replacement materials, led to the realization that a simple and reliable, complementary method for refractory screening was needed. The development of a laboratory test system and its suitability for simulating the environment of black liquor gasifiers was undertaken. Identification and characterization of corrosion products were used to evaluate the test system as a rapid screening tool for refractory performance and as a predictor of refractory lifetime. Results from the test systems and plants were qualitatively similar.

  3. A Simple Metallothionein-Based Biosensor for Enhanced Detection of Arsenic and Mercury

    PubMed Central

    Irvine, Gordon W.; Tan, Swee Ngin; Stillman, Martin J.

    2017-01-01

    Metallothioneins (MTs) are a family of cysteine-rich proteins whose biological roles include the regulation of essential metal ions and protection against the harmful effects of toxic metals. Due to its high affinity for many toxic, soft metals, recombinant human MT isoform 1a was incorporated into an electrochemical-based biosensor for the detection of As3+ and Hg2+. A simple design was chosen to maximize its potential in environmental monitoring and MT was physically adsorbed onto paper discs placed on screen-printed carbon electrodes (SPCEs). This system was tested with concentrations of arsenic and mercury typical of contaminated water sources ranging from 5 to 1000 ppb. The analytical performance of the MT-adsorbed paper discs on SPCEs demonstrated a greater than three-fold signal enhancement and a lower detection limit compared to blank SPCEs, 13 ppb for As3+ and 45 ppb for Hg2+. While not being as low as some of the recommended drinking water limits, the sensitivity of the simple MT-biosensor would be potentially useful in monitoring of areas of concern with a known contamination problem. This paper describes the ability of the metal binding protein metallothionein to enhance the effectiveness of a simple, low-cost electrochemical sensor. PMID:28335390

  4. A problem-solving task specialized for functional neuroimaging: validation of the Scarborough adaptation of the Tower of London (S-TOL) using near-infrared spectroscopy

    PubMed Central

    Ruocco, Anthony C.; Rodrigo, Achala H.; Lam, Jaeger; Di Domenico, Stefano I.; Graves, Bryanna; Ayaz, Hasan

    2014-01-01

    Problem-solving is an executive function subserved by a network of neural structures of which the dorsolateral prefrontal cortex (DLPFC) is central. Whereas several studies have evaluated the role of the DLPFC in problem-solving, few standardized tasks have been developed specifically for use with functional neuroimaging. The current study adapted a measure with established validity for the assessment of problem-solving abilities to design a test more suitable for functional neuroimaging protocols. The Scarborough adaptation of the Tower of London (S-TOL) was administered to 38 healthy adults while hemodynamic oxygenation of the PFC was measured using 16-channel continuous-wave functional near-infrared spectroscopy (fNIRS). Compared to a baseline condition, problems that required two or three steps to achieve a goal configuration were associated with higher activation in the left DLPFC and deactivation in the medial PFC. Individuals scoring higher in trait deliberation showed consistently higher activation in the left DLPFC regardless of task difficulty, whereas individuals lower in this trait displayed less activation when solving simple problems. Based on these results, the S-TOL may serve as a standardized task to evaluate problem-solving abilities in functional neuroimaging studies. PMID:24734017

  5. Type of motion and lubricant in wear simulation of polyethylene acetabular cup.

    PubMed

    Saikko, V; Ahlroos, T

    1999-01-01

    The wear of ultra-high molecular weight polyethylene, the most commonly used bearing material in prosthetic joints, is often substantial, posing a significant clinical problem. For a long time, there has been a need for simple but still realistic wear test devices for prosthetic joint materials. The wear factors produced by earlier reciprocating and unidirectionally rotating wear test devices for polyethylene are typically two orders of magnitude too low, both in water and in serum lubrication. Wear is negligible even under multidirectional motion in water. A twelve-station, circularly translating pin-on-disc (CTPOD) device and a modification of the established biaxial rocking motion hip joint simulator were built. With these simple and inexpensive devices, and with the established three-axis hip joint simulator, realistic wear simulation was achieved. This was due to serum lubrication and to the fact that the direction of sliding constantly changed relative to the polyethylene specimen. The type and magnitude of load was found to be less important. The CTPOD tests showed that the subsurface brittle region, which results from gamma irradiation sterilization of polyethylene in air, has poor wear resistance. Phospholipid and soy protein lubrication resulted in unrealistic wear. The introduction of devices like CTPOD may boost wear studies, rendering them feasible without heavy investment.

  6. Problem Solving with the Elementary Youngster.

    ERIC Educational Resources Information Center

    Swartz, Vicki

    This paper explores research on problem solving and suggests a problem-solving approach to elementary school social studies, using a culture study of the ancient Egyptians and King Tut as a sample unit. The premise is that problem solving is particularly effective in dealing with problems which do not have one simple and correct answer but rather…

  7. Conscious thought beats deliberation without attention in diagnostic decision-making: at least when you are an expert

    PubMed Central

    Schmidt, Henk G.; Rikers, Remy M. J. P.; Custers, Eugene J. F. M.; Splinter, Ted A. W.; van Saase, Jan L. C. M.

    2010-01-01

    Contrary to what common sense makes us believe, deliberation without attention has recently been suggested to produce better decisions in complex situations than deliberation with attention. Based on differences between cognitive processes of experts and novices, we hypothesized that experts make in fact better decisions after consciously thinking about complex problems whereas novices may benefit from deliberation-without-attention. These hypotheses were confirmed in a study among doctors and medical students. They diagnosed complex and routine problems under three conditions, an immediate-decision condition and two delayed conditions: conscious thought and deliberation-without-attention. Doctors did better with conscious deliberation when problems were complex, whereas reasoning mode did not matter in simple problems. In contrast, deliberation-without-attention improved novices’ decisions, but only in simple problems. Experts benefit from consciously thinking about complex problems; for novices thinking does not help in those cases. PMID:20354726

  8. Conscious thought beats deliberation without attention in diagnostic decision-making: at least when you are an expert.

    PubMed

    Mamede, Sílvia; Schmidt, Henk G; Rikers, Remy M J P; Custers, Eugene J F M; Splinter, Ted A W; van Saase, Jan L C M

    2010-11-01

    Contrary to what common sense makes us believe, deliberation without attention has recently been suggested to produce better decisions in complex situations than deliberation with attention. Based on differences between cognitive processes of experts and novices, we hypothesized that experts make in fact better decisions after consciously thinking about complex problems whereas novices may benefit from deliberation-without-attention. These hypotheses were confirmed in a study among doctors and medical students. They diagnosed complex and routine problems under three conditions, an immediate-decision condition and two delayed conditions: conscious thought and deliberation-without-attention. Doctors did better with conscious deliberation when problems were complex, whereas reasoning mode did not matter in simple problems. In contrast, deliberation-without-attention improved novices' decisions, but only in simple problems. Experts benefit from consciously thinking about complex problems; for novices thinking does not help in those cases.

  9. Secular Orbit Evolution in Systems with a Strong External Perturber—A Simple and Accurate Model

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrade-Ines, Eduardo; Eggl, Siegfried, E-mail: eandrade.ines@gmail.com, E-mail: siegfried.eggl@jpl.nasa.gov

    We present a semi-analytical correction to the seminal solution for the secular motion of a planet’s orbit under gravitational influence of an external perturber derived by Heppenheimer. A comparison between analytical predictions and numerical simulations allows us to determine corrective factors for the secular frequency and forced eccentricity in the coplanar restricted three-body problem. The correction is given in the form of a polynomial function of the system’s parameters that can be applied to first-order forced eccentricity and secular frequency estimates. The resulting secular equations are simple, straightforward to use, and improve the fidelity of Heppenheimer’s solution well beyond higher-order models. The quality and convergence of the corrected secular equations are tested for a wide range of parameters and the limits of its applicability are given.

  10. Charge carrier mobility in thin films of organic semiconductors by the gated van der Pauw method

    PubMed Central

    Rolin, Cedric; Kang, Enpu; Lee, Jeong-Hwan; Borghs, Gustaaf; Heremans, Paul; Genoe, Jan

    2017-01-01

    Thin film transistors based on high-mobility organic semiconductors are prone to contact problems that complicate the interpretation of their electrical characteristics and the extraction of important material parameters such as the charge carrier mobility. Here we report on the gated van der Pauw method for the simple and accurate determination of the electrical characteristics of thin semiconducting films, independently from contact effects. We test our method on thin films of seven high-mobility organic semiconductors of both polarities: device fabrication is fully compatible with common transistor process flows and device measurements deliver consistent and precise values for the charge carrier mobility and threshold voltage in the high-charge carrier density regime that is representative of transistor operation. The gated van der Pauw method is broadly applicable to thin films of semiconductors and enables a simple and clean parameter extraction independent from contact effects. PMID:28397852
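
    The two textbook calculations behind a gated van der Pauw measurement can be sketched as follows: solve the van der Pauw equation for the sheet resistance from two four-point resistances, and, in the linear regime, take the field-effect mobility from the slope of sheet conductance versus gate voltage divided by the gate capacitance per unit area. The resistances, capacitance, and voltages below are assumed illustrative values, not the paper's data.

      # Sketch of the standard calculations (illustrative values only):
      #  (1) solve exp(-pi*Ra/Rs) + exp(-pi*Rb/Rs) = 1 for the sheet resistance Rs;
      #  (2) in the linear regime, sheet conductance ~ mu*Ci*(Vg - Vth), so the
      #      mobility follows from the slope of conductance vs. gate voltage.
      import numpy as np
      from scipy.optimize import brentq

      def sheet_resistance(Ra, Rb):
          f = lambda Rs: np.exp(-np.pi * Ra / Rs) + np.exp(-np.pi * Rb / Rs) - 1.0
          return brentq(f, 1e-3, 1e12)     # Rs in ohms per square

      # hypothetical gate sweep: pairs of measured four-point resistances per Vg
      Ci = 1.2e-8                          # gate capacitance per area, F/cm^2 (assumed)
      Vg = np.array([-10.0, -20.0, -30.0, -40.0])
      Ra = np.array([2.0e6, 9.5e5, 6.3e5, 4.7e5])
      Rb = np.array([2.2e6, 1.0e6, 6.6e5, 4.9e5])
      sigma = np.array([1.0 / sheet_resistance(a, b) for a, b in zip(Ra, Rb)])

      slope = np.polyfit(Vg, sigma, 1)[0]  # d(sigma_sheet)/dVg, S per volt
      mobility = abs(slope) / Ci           # cm^2/(V*s) for this p-type sweep
      print(f"mobility ~ {mobility:.2f} cm^2/Vs")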

  11. Eye movements show similar adaptations in temporal coordination to movement planning conditions in both people with and without cerebral palsy.

    PubMed

    Payne, Alexander R; Plimmer, Beryl; McDaid, Andrew; Davies, T Claire

    2017-05-01

    The effects of cerebral palsy on movement planning for simple reaching tasks are not well understood. Movement planning is complex and entails many processes which could be affected. This study specifically sought to evaluate integrating task information, decoupling movements, and adjusting to altered mapping. For a reaching task, the asynchrony between the eye onset and the hand onset was measured across different movement planning conditions for participants with and without cerebral palsy. Previous research shows people without cerebral palsy vary this temporal coordination for different planning conditions. Our measurements show similar adaptations in temporal coordination for groups with and without cerebral palsy, to three of the four variations in planning condition tested. However, movement durations were still longer for the participants with cerebral palsy. Hence for simple goal-directed reaching, movement execution problems appear to limit activity more than movement planning deficits.

  12. A remote sensing based vegetation classification logic for global land cover analysis

    USGS Publications Warehouse

    Running, Steven W.; Loveland, Thomas R.; Pierce, Lars L.; Nemani, R.R.; Hunt, E. Raymond

    1995-01-01

    This article proposes a simple new logic for classifying global vegetation. The critical features of this classification are that 1) it is based on simple, observable, unambiguous characteristics of vegetation structure that are important to ecosystem biogeochemistry and can be measured in the field for validation, 2) the structural characteristics are remotely sensible so that repeatable and efficient global reclassifications of existing vegetation will be possible, and 3) the defined vegetation classes directly translate into the biophysical parameters of interest by global climate and biogeochemical models. A first test of this logic for the continental United States is presented based on an existing 1 km AVHRR normalized difference vegetation index database. Procedures for solving critical remote sensing problems needed to implement the classification are discussed. Also, some inferences from this classification to advanced vegetation biophysical variables such as specific leaf area and photosynthetic capacity useful to global biogeochemical modeling are suggested.

  13. Evaluation of Pediatric Questions on the Orthopaedic In-Training Examination-An Update.

    PubMed

    Murphy, Robert F; Nunez, Leah; Barfield, William R; Mooney, James F

    2017-09-01

    Pediatric orthopaedics is tested frequently on the Orthopaedic In-Training Examination (OITE). The most recent data on the pediatrics section of the OITE were generated from content 10 years old. The purpose of this study is to assess the pediatric orthopaedic questions on the 2011 to 2014 OITE, and to compare question categories and cognitive taxonomy with previous data. Four years (2011 to 2014) of OITE questions, answers, and references were reviewed. The number of pediatric questions per year was recorded, as well as presence of a clinical photo or imaging modality. Each question was categorized and assigned a cognitive taxonomy level. Categories included: knowledge; knowledge-treatment modalities; diagnosis; diagnosis/recognition of associated conditions; diagnosis/further studies; and diagnosis/treatment. Cognitive taxonomy levels included: simple recall, interpretation of data, and advanced problem-solving. The 3 most commonly covered topics were upper extremity trauma (17.4%), scoliosis (10.1%), and developmental dysplasia of the hip (5.7%). Compared with previous data, the percentage of pediatric questions was constant (13% vs. 14%). Categorically, the more recent OITE examinations contained significantly fewer questions testing simple knowledge (19% vs. 39%, P=0.0047), and significantly more questions testing knowledge of treatment modalities (17% vs. 9%, P=0.016) and diagnosis with associated conditions (19% vs. 9%, P=0.0034). Regarding cognitive taxonomy, there was a significant increase in the average number of questions that required advanced problem-solving (57% vs. 46%, P=0.048). Significantly more questions utilized clinical photographs and imaging studies (62% vs. 48%, P=0.012). The most common reference materials provided to support correct responses included Lovell and Winter's Pediatric Orthopaedics (25.7%) and the Journal of Pediatric Orthopaedics (23.4%). Although the percentage of pediatric questions on the OITE has remained essentially constant, the percentage of questions requiring advanced problem-solving or interpretation of images has increased significantly in the past 10 years. Knowledge of question type and content may be helpful for those involved in resident education and in the development of didactic pediatric orthopaedic curricula. Level IV.

  14. The Synthesis of Proteins-A Simple Experiment To Show the Procedures and Problems of Using Radioisotopes in Biochemical Studies

    NASA Astrophysics Data System (ADS)

    Hawcroft, David M.

    1996-11-01

    Courses of organic chemistry frequently include studies of biochemistry and hence of biochemical techniques. Radioisotopes have played a major role in the understanding of metabolic pathways, transport, enzyme activity and other processes. The experiment described in this paper uses simple techniques to illustrate the procedures involved in working with radioisotopes when following a simplified metabolic pathway. Safety considerations are discussed and a list of safety rules is provided, but the experiment itself uses very low levels of a weak beta-emitting isotope (tritium). Plant material is suggested to reduce legal, financial and emotive problems, but the techniques are applicable to all soft-tissued material. The problems involved in data interpretation in radioisotope experiments resulting from radiation quenching are resolved by simple correction calculations, and the merits of using radioisotopes shown by a calculation of the low mass of material being measured. Suggestions for further experiments are given.

  15. Visual Recognition Software for Binary Classification and Its Application to Spruce Pollen Identification

    PubMed Central

    Tcheng, David K.; Nayak, Ashwin K.; Fowlkes, Charless C.; Punyasena, Surangi W.

    2016-01-01

    Discriminating between black and white spruce (Picea mariana and Picea glauca) is a difficult palynological classification problem that, if solved, would provide valuable data for paleoclimate reconstructions. We developed an open-source visual recognition software (ARLO, Automated Recognition with Layered Optimization) capable of differentiating between these two species at an accuracy on par with human experts. The system applies pattern recognition and machine learning to the analysis of pollen images and discovers general-purpose image features, defined by simple features of lines and grids of pixels taken at different dimensions, size, spacing, and resolution. It adapts to a given problem by searching for the most effective combination of both feature representation and learning strategy. This results in a powerful and flexible framework for image classification. We worked with images acquired using an automated slide scanner. We first applied a hash-based “pollen spotting” model to segment pollen grains from the slide background. We next tested ARLO’s ability to reconstruct black to white spruce pollen ratios using artificially constructed slides of known ratios. We then developed a more scalable hash-based method of image analysis that was able to distinguish between the pollen of black and white spruce with an estimated accuracy of 83.61%, comparable to human expert performance. Our results demonstrate the capability of machine learning systems to automate challenging taxonomic classifications in pollen analysis, and our success with simple image representations suggests that our approach is generalizable to many other object recognition problems. PMID:26867017

  16. Estimating Model Probabilities using Thermodynamic Markov Chain Monte Carlo Methods

    NASA Astrophysics Data System (ADS)

    Ye, M.; Liu, P.; Beerli, P.; Lu, D.; Hill, M. C.

    2014-12-01

    Markov chain Monte Carlo (MCMC) methods are widely used to evaluate model probability for quantifying model uncertainty. In a general procedure, MCMC simulations are first conducted for each individual model, and MCMC parameter samples are then used to approximate the marginal likelihood of the model by calculating the geometric mean of the joint likelihood of the model and its parameters. It has been found that the method of evaluating the geometric mean suffers from the numerical problem of a low convergence rate. A simple test case shows that even millions of MCMC samples are insufficient to yield accurate estimation of the marginal likelihood. To resolve this problem, a thermodynamic method is used in which multiple MCMC runs are performed with different values of a heating coefficient between zero and one. When the heating coefficient is zero, the MCMC run is equivalent to a random walk MC in the prior parameter space; when the heating coefficient is one, the MCMC run is the conventional one. For a simple case with an analytical form of the marginal likelihood, the thermodynamic method yields a more accurate estimate than the method using the geometric mean. This is also demonstrated for a case of groundwater modeling with consideration of four alternative models postulated based on different conceptualizations of a confining layer. This groundwater example shows that model probabilities estimated using the thermodynamic method are more reasonable than those obtained using the geometric method. The thermodynamic method is general, and can be used for a wide range of environmental problems for model uncertainty quantification.
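
    A minimal sketch of the thermodynamic (power-posterior) estimate is given below for a toy conjugate Gaussian model whose marginal likelihood is available in closed form for comparison. A Metropolis sampler targets prior × likelihood^beta on a ladder of heating coefficients, and the expected log-likelihood is integrated over beta by the trapezoidal rule. The model, ladder, and MCMC settings are illustrative assumptions, not the groundwater application.

      # log Z = integral over beta in [0,1] of E_beta[log L(theta)], where beta is
      # the heating coefficient: beta=0 samples the prior, beta=1 the posterior.
      import numpy as np
      from scipy.stats import multivariate_normal, norm
      from scipy.integrate import trapezoid

      rng = np.random.default_rng(0)
      sigma, tau, n = 1.0, 10.0, 20              # likelihood sd, prior sd, data size
      y = rng.normal(1.5, sigma, size=n)         # synthetic data

      def loglik(theta):
          return norm.logpdf(y, loc=theta, scale=sigma).sum()

      def logprior(theta):
          return norm.logpdf(theta, loc=0.0, scale=tau)

      def mean_loglik_at(beta, n_samples=5000):
          """Random-walk Metropolis targeting prior(theta) * likelihood(theta)**beta."""
          step = 2.0 / np.sqrt(1.0 + beta * n)   # wider steps when the target is wide
          theta, ll = 0.0, loglik(0.0)
          kept = []
          for i in range(n_samples):
              prop = theta + step * rng.standard_normal()
              ll_prop = loglik(prop)
              log_alpha = beta * (ll_prop - ll) + logprior(prop) - logprior(theta)
              if np.log(rng.random()) < log_alpha:
                  theta, ll = prop, ll_prop
              if i >= n_samples // 2:            # discard burn-in
                  kept.append(ll)
          return np.mean(kept)

      betas = np.linspace(0.0, 1.0, 11) ** 3      # ladder concentrated near beta=0
      means = [mean_loglik_at(b) for b in betas]
      logZ_ti = trapezoid(means, betas)           # thermodynamic-integration estimate

      cov = sigma**2 * np.eye(n) + tau**2 * np.ones((n, n))
      logZ_exact = multivariate_normal(mean=np.zeros(n), cov=cov).logpdf(y)
      print(f"thermodynamic estimate: {logZ_ti:.2f}   exact: {logZ_exact:.2f}")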

  17. Bio-charcoal production from municipal organic solid wastes

    NASA Astrophysics Data System (ADS)

    AlKhayat, Z. Q.

    2017-08-01

    The economic and environmental problems of handling the increasingly large amounts of urban and suburban organic municipal solid waste (MSW), from collection to final disposal, together with large fluctuations in the cost of power and other forms of energy for various civilian needs, are studied for Baghdad, the ancient and glamorous capital of Iraq, and a simple device is suggested, built, and tested that carbonizes the dried organic wastes in a simple, environmentally friendly bio-reactor in order to produce low-pollution, economical, locally made charcoal capsules that might be useful for heating, cooking, and other municipal uses. This also addresses the solid waste management problem, which consumes huge human and financial resources and causes many serious health and environmental problems. Leftovers from residential complexes of different social levels were collected, sorted for organic materials, and then dried before being fed into the bio-reactor, where they are charred and then mixed with small amounts of sucrose extracted from Iraqi-grown sugar cane to produce well-shaped charcoal capsules. The burning process is smoke-free, as the closed burner's exhaust pipe is buried in a 1 m deep hole so that the subsurface soil acts as a natural gas filter. This process has demonstrated excellent performance, handling about 120 kg/day of sorted MSW and producing about 80-100 kg of charcoal capsules using a 200 l reactor volume.

  18. Learning to reach by reinforcement learning using a receptive field based function approximation approach with continuous actions.

    PubMed

    Tamosiunaite, Minija; Asfour, Tamim; Wörgötter, Florentin

    2009-03-01

    Reinforcement learning methods can be used in robotics applications especially for specific target-oriented problems, for example the reward-based recalibration of goal-directed actions. To this end still relatively large and continuous state-action spaces need to be efficiently handled. The goal of this paper is, thus, to develop a novel, rather simple method which uses reinforcement learning with function approximation in conjunction with different reward-strategies for solving such problems. For the testing of our method, we use a four degree-of-freedom reaching problem in 3D-space simulated by a two-joint robot arm system with two DOF each. Function approximation is based on 4D, overlapping kernels (receptive fields) and the state-action space contains about 10,000 of these. Different types of reward structures are being compared, for example, reward-on-touching-only against reward-on-approach. Furthermore, forbidden joint configurations are punished. A continuous action space is used. In spite of a rather large number of states and the continuous action space, these reward/punishment strategies allow the system to find a good solution usually within about 20 trials. The efficiency of our method demonstrated in this test scenario suggests that it might be possible to use it on a real robot for problems where mixed rewards can be defined in situations where other types of learning might be difficult.

  19. Technology in rural transportation: "Simple Solutions"

    DOT National Transportation Integrated Search

    1997-10-01

    The Rural Outreach Project: Simple Solutions Report contains the findings of a research effort aimed at identifying and describing proven, cost-effective, low-tech solutions for rural transportation-related problems or needs. Through a process ...

  20. Neural net diagnostics for VLSI test

    NASA Technical Reports Server (NTRS)

    Lin, T.; Tseng, H.; Wu, A.; Dogan, N.; Meador, J.

    1990-01-01

    This paper discusses the application of neural network pattern analysis algorithms to the IC fault diagnosis problem. A fault diagnostic is a decision rule combining what is known about an ideal circuit test response with information about how it is distorted by fabrication variations and measurement noise. The rule is used to detect fault existence in fabricated circuits using real test equipment. Traditional statistical techniques may be used to achieve this goal, but they can employ unrealistic a priori assumptions about measurement data. Our approach to this problem employs an adaptive pattern analysis technique based on feedforward neural networks. During training, a feedforward network automatically captures unknown sample distributions. This is important because distributions arising from the nonlinear effects of process variation can be more complex than is typically assumed. A feedforward network is also able to extract measurement features which contribute significantly to making a correct decision. Traditional feature extraction techniques employ matrix manipulations which can be particularly costly for large measurement vectors. In this paper we discuss a software system which we are developing that uses this approach. We also provide a simple example illustrating the use of the technique for fault detection in an operational amplifier.
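
    The general idea, decoupled from the paper's operational-amplifier data, can be sketched with scikit-learn: simulate test-response vectors for "good" and "faulty" circuit populations, with process variation modelled as Gaussian spread around different nominal responses, and train a small feedforward network to separate them. The distributions, fault signature, and network size are assumptions for illustration.

      # Sketch of the general idea only: simulated measurement vectors for good
      # and faulty circuits, separated by a small feedforward network.
      import numpy as np
      from sklearn.neural_network import MLPClassifier
      from sklearn.model_selection import train_test_split

      rng = np.random.default_rng(0)
      n, d = 400, 8                                    # circuits per class, measurements each
      good  = rng.normal(loc=1.0, scale=0.05, size=(n, d))
      fault = rng.normal(loc=1.0, scale=0.05, size=(n, d))
      fault[:, 2] += 0.3                               # a fault shifts one measurement

      X = np.vstack([good, fault])
      y = np.r_[np.zeros(n), np.ones(n)]
      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

      clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      clf.fit(X_tr, y_tr)
      print("fault-detection accuracy:", clf.score(X_te, y_te))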

  1. The epistemological status of general circulation models

    NASA Astrophysics Data System (ADS)

    Loehle, Craig

    2018-03-01

    Forecasts of both likely anthropogenic effects on climate and consequent effects on nature and society are based on large, complex software tools called general circulation models (GCMs). Forecasts generated by GCMs have been used extensively in policy decisions related to climate change. However, the relation between underlying physical theories and results produced by GCMs is unclear. In the case of GCMs, many discretizations and approximations are made, and simulating Earth system processes is far from simple and currently leads to some results with unknown energy balance implications. Statistical testing of GCM forecasts for degree of agreement with data would facilitate assessment of fitness for use. If model results need to be put on an anomaly basis due to model bias, then both visual and quantitative measures of model fit depend strongly on the reference period used for normalization, making testing problematic. Epistemology is here applied to problems of statistical inference during testing, the relationship between the underlying physics and the models, the epistemic meaning of ensemble statistics, problems of spatial and temporal scale, the existence or not of an unforced null for climate fluctuations, the meaning of existing uncertainty estimates, and other issues. Rigorous reasoning entails carefully quantifying levels of uncertainty.

  2. Computationally efficient stochastic optimization using multiple realizations

    NASA Astrophysics Data System (ADS)

    Bayer, P.; Bürger, C. M.; Finkel, M.

    2008-02-01

    The presented study is concerned with computationally efficient methods for solving stochastic optimization problems involving multiple equally probable realizations of uncertain parameters. A new and straightforward technique is introduced that is based on dynamically ordering the stack of realizations during the search procedure. The rationale is that a small number of critical realizations govern the output of a reliability-based objective function. By utilizing a problem, which is typical to designing a water supply well field, several variants of this "stack ordering" approach are tested. The results are statistically assessed, in terms of optimality and nominal reliability. This study demonstrates that the simple ordering of a given number of 500 realizations while applying an evolutionary search algorithm can save about half of the model runs without compromising the optimization procedure. More advanced variants of stack ordering can, if properly configured, save up to more than 97% of the computational effort that would be required if the entire number of realizations were considered. The findings herein are promising for similar problems of water management and reliability-based design in general, and particularly for non-convex problems that require heuristic search techniques.
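
    A toy illustration of the stack-ordering idea is sketched below: a candidate design must satisfy a constraint under every realization, realizations that have rejected designs before are moved to the front of the stack, and the feasibility check stops at the first violation. The scalar constraint and random-search loop are stand-ins, not the well-field design model, and all numbers are assumed.

      # Toy illustration of "stack ordering": critical realizations are checked
      # first, so infeasible candidates are rejected after few model runs.
      import numpy as np

      rng = np.random.default_rng(0)
      realizations = rng.normal(1.0, 0.2, size=500)      # e.g. uncertain demands
      fail_count = np.zeros(realizations.size)           # how often each one was critical

      def feasible(design):
          """Check all realizations, hardest-first, counting the model runs used."""
          order = np.argsort(-fail_count)                 # most critical first
          for runs, idx in enumerate(order, start=1):
              if design < realizations[idx]:              # constraint violated
                  fail_count[idx] += 1
                  return False, runs
          return True, realizations.size

      total_runs = 0
      for design in rng.uniform(0.5, 2.0, size=200):      # crude random search
          ok, runs = feasible(design)
          total_runs += runs
      print("model runs used:", total_runs, "of", 200 * realizations.size)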

  3. A deterministic Lagrangian particle separation-based method for advective-diffusion problems

    NASA Astrophysics Data System (ADS)

    Wong, Ken T. M.; Lee, Joseph H. W.; Choi, K. W.

    2008-12-01

    A simple and robust Lagrangian particle scheme is proposed to solve the advective-diffusion transport problem. The scheme is based on relative diffusion concepts and simulates diffusion by regulating particle separation. This new approach generates a deterministic result and requires far less number of particles than the random walk method. For the advection process, particles are simply moved according to their velocity. The general scheme is mass conservative and is free from numerical diffusion. It can be applied to a wide variety of advective-diffusion problems, but is particularly suited for ecological and water quality modelling when definition of particle attributes (e.g., cell status for modelling algal blooms or red tides) is a necessity. The basic derivation, numerical stability and practical implementation of the NEighborhood Separation Technique (NEST) are presented. The accuracy of the method is demonstrated through a series of test cases which embrace realistic features of coastal environmental transport problems. Two field application examples on the tidal flushing of a fish farm and the dynamics of vertically migrating marine algae are also presented.

  4. Bladder Control Problems in Women: Lifestyle Strategies for Relief

    MedlinePlus

    Bladder control: lifestyle strategies ease problems. Simple lifestyle changes may improve bladder control or enhance response to medication. Find out what you can do to help with your bladder control problem. (Mayo Clinic Staff)

  5. The beneficial effects of cognitive training with simple calculation and reading aloud in an elderly postsurgical population: study protocol for a randomized controlled trial.

    PubMed

    Kulason, Kay; Nouchi, Rui; Hoshikawa, Yasushi; Noda, Masafumi; Okada, Yoshinori; Kawashima, Ryuta

    2016-07-22

    This project proposes a pilot study to investigate the positive healing effects of cognitive training with simple arithmetic and reading aloud on elderly postsurgical patients. Elderly patients undergoing surgery have an increased risk of Postoperative Cognitive Decline (POCD), a condition in which learning, memory, and processing speed is greatly reduced after surgery. Since elderly patients are more likely to exhibit symptoms of POCD, the incidence is increasing as the population receiving surgery has aged. Little effort has been expended, however, to find treatments for POCD. Learning therapy, which consists of a combination of reading aloud and solving simple arithmetic problems, was developed in Japan as a treatment for Alzheimer's Disease to improve cognitive functions. Because patients with Alzheimer's Disease experience similar issues as those with POCD in learning, memory, and processing speed, a cognitive intervention based on the learning-therapy treatments used for Alzheimer's Disease could show advantageous outcomes for those at risk of POCD. Cognitive function will be measured before and after surgery using three different tests (Mini-Mental Status Exam, Frontal Assessment Battery, and Cogstate computerized tests). Subjects will be randomly divided into two groups-one that receives a Simple Calculation and Reading Aloud intervention (SCRA) and a waitlisted control group that does not receive SCRA. To measure cognition before and after the intervention, the previously mentioned three tests will be used. The obtained data will be analyzed using statistical tests such as ANCOVA to indicate whether the cognitive intervention group has made improvements in their cognitive functions. In addition, questionnaires will also be administered to collect data on mental and emotional statuses. This report will be the first pilot study to investigate the beneficial effects of SCRA on elderly surgical patients. Previous studies have shown sufficient evidence on the effectiveness of learning therapy in healthy elderly people and in those with Dementia. Therefore, this study will clarify whether SCRA can improve cognitive function in the more specialized group of elderly surgical patients. University Hospital Medical Information Network Clinical Trial Registry, UMIN000019832 . Registered on 18 November 2015.

  6. Clinical approaches to infertility in the bitch.

    PubMed

    Wilborn, Robyn R; Maxwell, Herris S

    2012-05-01

    When presented with the apparently infertile bitch, the practitioner must sort through a myriad of facts, historical events, and diagnostic tests to uncover the etiology of the problem. Many bitches that present for infertility are reproductively normal and are able to conceive with appropriate intervention and breeding management. An algorithmic approach is helpful in cases of infertility, where simple questions lead to the next appropriate step. Most bitches can be categorized as either cyclic or acyclic, and then further classified based on historical data and diagnostic testing. Each female has a unique set of circumstances that can affect her reproductive potential. By utilizing all available information and a logical approach, the clinician can narrow the list of differentials and reach a diagnosis more quickly.

  7. Loopback Tester: a synchronous communications circuit diagnostic device

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Maestas, J.H.

    1986-07-01

    The Loopback Tester is an Intel SBC 86/12A Single Board Computer and an Intel SBC 534 Communications Expansion Board configured and programmed to perform various basic tests. These tests include: (1) Data Communications Equipment (DCE) transmit timing detection, (2) data rate measurement, (3) instantaneous loopback indication, and (4) bit error rate testing. It requires no initial setup after plug in, and can be used to locate the source of communications loss in a circuit. It can also be used to determine when crypto variable mismatch problems are the source of communications loss. This report discusses the functionality of the Loopback Tester as a diagnostic device. It also discusses the hardware and software which implement this simple yet reliable device.

  8. Detection of osmotic damages in GRP boat hulls

    NASA Astrophysics Data System (ADS)

    Krstulović-Opara, L.; Domazet, Ž.; Garafulić, E.

    2013-09-01

    Infrared thermography, as a tool of non-destructive testing, is a method enabling visualization and estimation of structural anomalies and differences in a structure's topography. In the presented paper, the problem of osmotic damage in submerged glass reinforced polymer structures is addressed. The osmotic damage can be detected by simple humidity gauging, but testing methods suitable for proper evaluation and estimation are restricted and hardly applicable. In this paper it is demonstrated that infrared thermography, based on estimation of heat wave propagation, can be used. Three methods are addressed: pulsed thermography, Fast Fourier Transform, and Continuous Morlet Wavelet. An additional image processing step based on a gradient approach is applied to all addressed methods. It is shown that the Continuous Morlet Wavelet is the most appropriate method for detection of osmotic damage.

  9. Six simple questions to detect malnutrition or malnutrition risk in elderly women

    PubMed Central

    Gutiérrez-Gómez, Tranquilina; Cortés, Ernesto; Peñarrieta-de Córdova, Isabel; Gil-Guillén, Vicente Francisco; Ferrer-Diego, Rosa María

    2015-01-01

    Of the numerous instruments available to detect nutritional risk, the most widely used is the Mini Nutritional Assessment (MNA), but it takes 15–20 min to complete and its systematic administration in primary care units is not feasible in practice. We developed a tool to evaluate malnutrition risk that can be completed more rapidly using just clinical variables. Between 2008 and 2013, we conducted a cross-sectional study of 418 women aged ≥60 years from Mexico. Our outcome was a positive MNA, and our secondary variables were: physical activity, diabetes mellitus, hypertension, educational level, dentition, psychological problems, living arrangements, history of falls, age and the number of tablets taken daily. The sample was divided randomly into two groups: construction and validation. Construction: a risk table was constructed to estimate the likelihood of the outcome, and risk groups were formed. Validation: the area under the ROC curve (AUC) was calculated and we compared the expected and the observed outcomes. The following risk factors were identified: physical activity, hypertension, diabetes, dentition, psychological problems and living with the family. The AUC was 0.77 (95% CI [0.68–0.86], p < 0.001). No differences were found between the expected and the observed outcomes (p = 0.902). This study presents a new malnutrition screening test for use in elderly women. The test is based on six very simple, quick and easy-to-evaluate questions, enabling the MNA to be reserved for confirmation. However, it should be used with caution until validation studies have been performed in other geographical areas. PMID:26500824

  10. Clinical history for diagnosis of dementia in men: Caerphilly Prospective Study

    PubMed Central

    Creavin, Sam; Fish, Mark; Gallacher, John; Bayer, Antony; Ben-Shlomo, Yoav

    2015-01-01

    Background Diagnosis of dementia often requires specialist referral and detailed, time-consuming assessments. Aim To investigate the utility of simple clinical items that non-specialist clinicians could use, in addition to routine practice, to diagnose all-cause dementia syndrome. Design and setting Cross-sectional diagnostic test accuracy study. Participants were identified from the electoral roll and general practice lists in Caerphilly and adjoining villages in South Wales, UK. Method Participants (1225 men aged 45–59 years) were screened for cognitive impairment using the Cambridge Cognitive Examination, CAMCOG, at phase 5 of the Caerphilly Prospective Study (CaPS). Index tests were a standardised clinical evaluation, neurological examination, and individual items on the Informant Questionnaire for Cognitive Disorders in the Elderly (IQCODE). Results Two-hundred and five men who screened positive (68%) and 45 (4.8%) who screened negative were seen, with 59 diagnosed with dementia. The model comprising problems with personal finance and planning had an area under the curve (AUC) of 0.92 (95% confidence interval [CI] = 0.86 to 0.97), positive likelihood ratio (LR+) of 23.7 (95% CI = 5.88 to 95.6), negative likelihood ratio (LR−) of 0.41 (95% CI = 0.27 to 0.62). The best single item for ruling out was no problems learning to use new gadgets (LR− of 0.22, 95% CI = 0.11 to 0.43). Conclusion This study found that three simple questions have high utility for diagnosing dementia in men who are cognitively screened. If confirmed, this could lead to less burdensome assessment where clinical assessment suggests possible dementia. PMID:26212844
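
    As a worked illustration of how the quoted likelihood ratios translate into post-test probabilities (the 20% pre-test probability below is an assumed value for a screen-positive man, not a figure from the study):

        def post_test_probability(pre_test_prob, likelihood_ratio):
            # Convert probability to odds, scale by the likelihood ratio,
            # then convert back to a probability.
            pre_odds = pre_test_prob / (1.0 - pre_test_prob)
            post_odds = pre_odds * likelihood_ratio
            return post_odds / (1.0 + post_odds)

        print(post_test_probability(0.20, 23.7))   # LR+ of the two-item model -> ~0.86
        print(post_test_probability(0.20, 0.22))   # LR- for "no problems with new gadgets" -> ~0.05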

  11. BOOK REVIEW: The Quantum Mechanics Solver: How to Apply Quantum Theory to Modern Physics, 2nd edition

    NASA Astrophysics Data System (ADS)

    Robbin, J. M.

    2007-07-01

    The hallmark of a good book of problems is that it allows you to become acquainted with an unfamiliar topic quickly and efficiently. The Quantum Mechanics Solver fits this description admirably. The book contains 27 problems based mainly on recent experimental developments, including neutrino oscillations, tests of Bell's inequality, Bose-Einstein condensates, and laser cooling and trapping of atoms, to name a few. Unlike many collections, in which problems are designed around a particular mathematical method, here each problem is devoted to a small group of phenomena or experiments. Most problems contain experimental data from the literature, and readers are asked to estimate parameters from the data, or compare theory to experiment, or both. Standard techniques (e.g., degenerate perturbation theory, addition of angular momentum, asymptotics of special functions) are introduced only as they are needed. The style is closer to a non-specialist seminar than to an undergraduate lecture. The physical models are kept simple; the emphasis is on cultivating conceptual and qualitative understanding (although in many of the problems, the simple models fit the data quite well). Some less familiar theoretical techniques are introduced, e.g. a variational method for lower (not upper) bounds on ground-state energies for many-body systems with two-body interactions, which is then used to derive a surprisingly accurate relation between baryon and meson masses. The exposition is succinct but clear; the solutions can be read as worked examples if you don't want to do the problems yourself. Many problems have additional discussion on limitations and extensions of the theory, or further applications outside physics (e.g., the accuracy of GPS positioning in connection with atomic clocks; proton and ion tumor therapies in connection with the Bethe-Bloch formula for charged particles in solids). The problems use mainly non-relativistic quantum mechanics and are organised into three sections: Elementary Particles, Nuclei and Atoms; Quantum Entanglement and Measurement; and Complex Systems. The coverage is not comprehensive; there is little on scattering theory, for example, and some areas of recent interest, such as topological aspects of quantum mechanics and semiclassics, are not included. The problems are based on examination questions given at the École Polytechnique in the last 15 years. The book is accessible to undergraduates, but working physicists should find it a delight.

  12. Testlet-Based Multidimensional Adaptive Testing.

    PubMed

    Frey, Andreas; Seitz, Nicki-Nils; Brandt, Steffen

    2016-01-01

    Multidimensional adaptive testing (MAT) is a highly efficient method for the simultaneous measurement of several latent traits. Currently, no psychometrically sound approach is available for the use of MAT in testlet-based tests. Testlets are sets of items sharing a common stimulus such as a graph or a text. They are frequently used in large operational testing programs like TOEFL, PISA, PIRLS, or NAEP. To make MAT accessible for such testing programs, we present a novel combination of MAT with a multidimensional generalization of the random effects testlet model (MAT-MTIRT). MAT-MTIRT compared to non-adaptive testing is examined for several combinations of testlet effect variances (0.0, 0.5, 1.0, and 1.5) and testlet sizes (3, 6, and 9 items) with a simulation study considering three ability dimensions with simple loading structure. MAT-MTIRT outperformed non-adaptive testing regarding the measurement precision of the ability estimates. Further, the measurement precision decreased when testlet effect variances and testlet sizes increased. The suggested combination of the MTIRT model therefore provides a solution to the substantial problems of testlet-based tests while keeping the length of the test within an acceptable range.

  13. Integrated testing strategies can be optimal for chemical risk classification.

    PubMed

    Raseta, Marko; Pitchford, Jon; Cussens, James; Doe, John

    2017-08-01

    There is an urgent need to refine strategies for testing the safety of chemical compounds. This need arises both from the financial and ethical costs of animal tests, but also from the opportunities presented by new in-vitro and in-silico alternatives. Here we explore the mathematical theory underpinning the formulation of optimal testing strategies in toxicology. We show how the costs and imprecisions of the various tests, and the variability in exposures and responses of individuals, can be assembled rationally to form a Markov Decision Problem. We compute the corresponding optimal policies using well developed theory based on Dynamic Programming, thereby identifying and overcoming some methodological and logical inconsistencies which may exist in the current toxicological testing. By illustrating our methods for two simple but readily generalisable examples we show how so-called integrated testing strategies, where information of different precisions from different sources is combined and where different initial test outcomes lead to different sets of future tests, can arise naturally as optimal policies. Copyright © 2017 Elsevier Inc. All rights reserved.
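
    A minimal sketch of the dynamic-programming idea the abstract refers to, applied to a toy testing decision problem; the states, actions, costs and probabilities below are invented for illustration and are not taken from the paper.

        # Value iteration for a tiny Markov Decision Problem: decide whether to
        # run a cheap in-vitro test before classifying a compound, minimising
        # expected cost (test costs plus an assumed misclassification penalty).
        states = ["untested", "cheap_pos", "cheap_neg", "classified"]
        actions = {"untested": ["cheap_test", "classify"],
                   "cheap_pos": ["animal_test", "classify"],
                   "cheap_neg": ["animal_test", "classify"],
                   "classified": []}
        transition = {"untested": {"cheap_test": [("cheap_pos", 0.3), ("cheap_neg", 0.7)],
                                   "classify": [("classified", 1.0)]},
                      "cheap_pos": {"animal_test": [("classified", 1.0)],
                                    "classify": [("classified", 1.0)]},
                      "cheap_neg": {"animal_test": [("classified", 1.0)],
                                    "classify": [("classified", 1.0)]}}
        cost = {"untested": {"cheap_test": 1.0, "classify": 8.0},
                "cheap_pos": {"animal_test": 5.0, "classify": 3.0},
                "cheap_neg": {"animal_test": 5.0, "classify": 1.0}}

        value = {s: 0.0 for s in states}
        policy = {}
        for _ in range(50):   # iterate until the values stop changing
            for s in states:
                if not actions[s]:
                    continue
                value[s], policy[s] = min(
                    (cost[s][a] + sum(p * value[ns] for ns, p in transition[s][a]), a)
                    for a in actions[s])
        print(policy)   # the cost-optimal strategy for this toy problem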

  14. Evaluating the role of social marketing campaigns to prevent youth gambling problems: a qualitative study.

    PubMed

    Messerlian, Carmen; Derevensky, Jeffrey

    2007-01-01

    Gambling among adolescents is a growing public health concern. To date, social marketing as a strategy to address problem gambling among youth has not been widely used. A qualitative study through the use of focus groups was conducted to explore adolescents' exposure to existing prevention campaigns and their message content and communication strategy preferences for a youth gambling social marketing campaign. Participants prefer that youth gambling ads depict real-life stories, use an emotional appeal and portray the negative consequences associated with gambling problems. They further recommend illustrating the basic facts of gambling using simple messages that raise awareness without making a judgement. Participants caution against the "don't do it" approach, suggesting it does not reflect the current youth gambling culture. This study should serve as a starting point for the development of a gambling prevention social marketing campaign. Targeting variables and campaign strategies highlighted should be considered in the early stages of development and tested along the way.

  15. libSRES: a C library for stochastic ranking evolution strategy for parameter estimation.

    PubMed

    Ji, Xinglai; Xu, Ying

    2006-01-01

    Estimation of kinetic parameters in a biochemical pathway or network represents a common problem in systems studies of biological processes. We have implemented a C library, named libSRES, to facilitate a fast implementation of computer software for study of non-linear biochemical pathways. This library implements a (mu, lambda)-ES evolutionary optimization algorithm that uses stochastic ranking as the constraint handling technique. Considering the amount of computing time it might require to solve a parameter-estimation problem, an MPI version of libSRES is provided for parallel implementation, as well as a simple user interface. libSRES is freely available and could be used directly in any C program as a library function. We have extensively tested the performance of libSRES on various pathway parameter-estimation problems and found its performance to be satisfactory. The source code (in C) is free for academic users at http://csbl.bmb.uga.edu/~jix/science/libSRES/
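
    The sketch below shows the bare (mu, lambda) evolution-strategy loop that libraries of this kind are built around; it is an unconstrained toy version that omits the stochastic-ranking constraint handling and the self-adaptive step sizes used by libSRES, and all parameter values are illustrative.

        import numpy as np

        def es_mu_lambda(objective, dim, mu=15, lam=100, sigma=0.3, generations=200, seed=0):
            # Minimal (mu, lambda)-ES for minimising `objective`:
            # lam offspring are mutated copies of randomly chosen parents,
            # and the mu best offspring become the next parent population.
            rng = np.random.default_rng(seed)
            parents = rng.uniform(-1.0, 1.0, size=(mu, dim))
            for _ in range(generations):
                idx = rng.integers(0, mu, size=lam)
                offspring = parents[idx] + sigma * rng.normal(size=(lam, dim))
                fitness = np.apply_along_axis(objective, 1, offspring)
                parents = offspring[np.argsort(fitness)[:mu]]   # comma selection
                sigma *= 0.99                                   # crude step-size annealing
            return parents[0]

        # Illustrative use: fit the parameters of y = a*exp(-b*t) to synthetic data.
        t = np.linspace(0.0, 2.0, 20)
        y = 2.0 * np.exp(-1.5 * t)
        best = es_mu_lambda(lambda p: np.sum((p[0] * np.exp(-p[1] * t) - y) ** 2), dim=2)
        print(best)   # should end up close to [2.0, 1.5]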

  16. A comparative study of history-based versus vectorized Monte Carlo methods in the GPU/CUDA environment for a simple neutron eigenvalue problem

    NASA Astrophysics Data System (ADS)

    Liu, Tianyu; Du, Xining; Ji, Wei; Xu, X. George; Brown, Forrest B.

    2014-06-01

    For nuclear reactor analysis such as the neutron eigenvalue calculations, the time consuming Monte Carlo (MC) simulations can be accelerated by using graphics processing units (GPUs). However, traditional MC methods are often history-based, and their performance on GPUs is affected significantly by the thread divergence problem. In this paper we describe the development of a newly designed event-based vectorized MC algorithm for solving the neutron eigenvalue problem. The code was implemented using NVIDIA's Compute Unified Device Architecture (CUDA), and tested on a NVIDIA Tesla M2090 GPU card. We found that although the vectorized MC algorithm greatly reduces the occurrence of thread divergence thus enhancing the warp execution efficiency, the overall simulation speed is roughly ten times slower than the history-based MC code on GPUs. Profiling results suggest that the slow speed is probably due to the memory access latency caused by the large amount of global memory transactions. Possible solutions to improve the code efficiency are discussed.

  17. A RADIATION TRANSFER SOLVER FOR ATHENA USING SHORT CHARACTERISTICS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davis, Shane W.; Stone, James M.; Jiang Yanfei

    2012-03-01

    We describe the implementation of a module for the Athena magnetohydrodynamics (MHD) code that solves the time-independent, multi-frequency radiative transfer (RT) equation on multidimensional Cartesian simulation domains, including scattering and non-local thermodynamic equilibrium (LTE) effects. The module is based on well known and well tested algorithms developed for modeling stellar atmospheres, including the method of short characteristics to solve the RT equation, accelerated Lambda iteration to handle scattering and non-LTE effects, and parallelization via domain decomposition. The module serves several purposes: it can be used to generate spectra and images, to compute a variable Eddington tensor (VET) for full radiation MHD simulations, and to calculate the heating and cooling source terms in the MHD equations in flows where radiation pressure is small compared with gas pressure. For the latter case, the module is combined with the standard MHD integrators using operator splitting: we describe this approach in detail, including a new constraint on the time step for stability due to radiation diffusion modes. Implementation of the VET method for radiation pressure dominated flows is described in a companion paper. We present results from a suite of test problems for both the RT solver itself and for dynamical problems that include radiative heating and cooling. These tests demonstrate that the radiative transfer solution is accurate and confirm that the operator split method is stable, convergent, and efficient for problems of interest. We demonstrate there is no need to adopt ad hoc assumptions of questionable accuracy to solve RT problems in concert with MHD: the computational cost of our general-purpose module for simple (e.g., LTE gray) problems can be comparable to or less than a single time step of Athena's MHD integrators, and only a few times more expensive than that for more general (non-LTE) problems.

  18. Non-alcoholic Fatty Liver Disease (NAFLD)--A Review.

    PubMed

    Karim, M F; Al-Mahtab, M; Rahman, S; Debnath, C R

    2015-10-01

    Non-alcoholic fatty liver disease (NAFLD) is an emerging problem in Hepatology clinics. It is closely related to the increased frequency of overweight or obesity, and it has a recognised association with metabolic syndrome. Central obesity, diabetes mellitus and dyslipidemia are the commonest risk factors. An association with hepatitis C genotype 3 is also recognised. NAFLD is an important cause of cryptogenic cirrhosis of the liver. It affects all populations and all age groups. Most patients with NAFLD are asymptomatic or have vague upper abdominal pain. Liver function tests are mostly normal or show mild elevation of aminotransferases. Histological features are almost identical to those of alcohol-induced liver damage and can range from mild steatosis to cirrhosis. The two-hit hypothesis is the prevailing theory for the development of NAFLD. Diagnosis is usually made by imaging tools such as ultrasonography, which reveals a bright liver, while liver biopsy is the gold standard for diagnosis as well as for differentiating simple fatty liver from non-alcoholic steatohepatitis (NASH). Prognosis is variable. Simple hepatic steatosis generally has a benign long-term prognosis. However, one- to two-thirds of NASH cases progress to fibrosis or cirrhosis and may have a prognosis similar to cirrhosis from other liver diseases. Treatment consists mostly of control of underlying disorders together with dietary advice, exercise, insulin sensitizers, antioxidants, or cytoprotective agents. The prevalence of NAFLD is increasing, so more research is needed to address this problem.

  19. Large-scale computation of incompressible viscous flow by least-squares finite element method

    NASA Technical Reports Server (NTRS)

    Jiang, Bo-Nan; Lin, T. L.; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to large-scale/three-dimensional steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive definite algebraic system which can be solved effectively by simple iterative methods. The first-order velocity-Bernoulli function-vorticity formulation for incompressible viscous flows is also tested. For three-dimensional cases, an additional compatibility equation, i.e., that the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. Simple substitution of Newton's method is employed to linearize the partial differential equations, the LSFEM is used to obtain discretized equations, and the system of algebraic equations is solved using the Jacobi preconditioned conjugate gradient method, which avoids formation of either element or global matrices (matrix-free) to achieve high efficiency. To show the validity of this scheme for large-scale computation, we give numerical results for the 2D driven cavity problem at Re = 10000 with 408 x 400 bilinear elements. The flow in a 3D cavity is calculated at Re = 100, 400, and 1,000 with 50 x 50 x 50 trilinear elements. The Taylor-Goertler-like vortices are observed for Re = 1,000.
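
    The matrix-free Jacobi-preconditioned conjugate gradient idea mentioned above can be illustrated in a few lines; the sketch below applies it to a simple 1-D Poisson operator rather than to the LSFEM system of the paper, and all names and sizes are illustrative.

        import numpy as np

        def jacobi_pcg(apply_A, diag_A, b, tol=1e-10, max_iter=500):
            # Preconditioned conjugate gradients where A is only available
            # through the action apply_A(x); the preconditioner is diag(A).
            x = np.zeros_like(b)
            r = b - apply_A(x)
            z = r / diag_A
            p = z.copy()
            rz = r @ z
            for _ in range(max_iter):
                Ap = apply_A(p)
                alpha = rz / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                if np.linalg.norm(r) < tol:
                    break
                z = r / diag_A
                rz_new = r @ z
                p = z + (rz_new / rz) * p
                rz = rz_new
            return x

        # Matrix-free action of the 1-D Poisson (tridiagonal) operator.
        n = 200
        def apply_A(v):
            out = 2.0 * v
            out[1:] -= v[:-1]
            out[:-1] -= v[1:]
            return out

        x = jacobi_pcg(apply_A, diag_A=np.full(n, 2.0), b=np.ones(n))
        print(np.linalg.norm(apply_A(x) - np.ones(n)))   # residual ~ 1e-10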

  20. A displacement-based finite element formulation for incompressible and nearly-incompressible cardiac mechanics

    PubMed Central

    Hadjicharalambous, Myrianthi; Lee, Jack; Smith, Nicolas P.; Nordsletten, David A.

    2014-01-01

    The Lagrange Multiplier (LM) and penalty methods are commonly used to enforce incompressibility and compressibility in models of cardiac mechanics. In this paper we show how both formulations may be equivalently thought of as a weakly penalized system derived from the statically condensed Perturbed Lagrangian formulation, which may be directly discretized maintaining the simplicity of penalty formulations with the convergence characteristics of LM techniques. A modified Shamanskii–Newton–Raphson scheme is introduced to enhance the nonlinear convergence of the weakly penalized system and, exploiting its equivalence, modifications are developed for the penalty form. Focusing on accuracy, we proceed to study the convergence behavior of these approaches using different interpolation schemes for both a simple test problem and more complex models of cardiac mechanics. Our results illustrate the well-known influence of locking phenomena on the penalty approach (particularly for lower order schemes) and its effect on accuracy for whole-cycle mechanics. Additionally, we verify that direct discretization of the weakly penalized form produces similar convergence behavior to mixed formulations while avoiding the use of an additional variable. Combining a simple structure which allows the solution of computationally challenging problems with good convergence characteristics, the weakly penalized form provides an accurate and efficient alternative to incompressibility and compressibility in cardiac mechanics. PMID:25187672

  1. A displacement-based finite element formulation for incompressible and nearly-incompressible cardiac mechanics.

    PubMed

    Hadjicharalambous, Myrianthi; Lee, Jack; Smith, Nicolas P; Nordsletten, David A

    2014-06-01

    The Lagrange Multiplier (LM) and penalty methods are commonly used to enforce incompressibility and compressibility in models of cardiac mechanics. In this paper we show how both formulations may be equivalently thought of as a weakly penalized system derived from the statically condensed Perturbed Lagrangian formulation, which may be directly discretized maintaining the simplicity of penalty formulations with the convergence characteristics of LM techniques. A modified Shamanskii-Newton-Raphson scheme is introduced to enhance the nonlinear convergence of the weakly penalized system and, exploiting its equivalence, modifications are developed for the penalty form. Focusing on accuracy, we proceed to study the convergence behavior of these approaches using different interpolation schemes for both a simple test problem and more complex models of cardiac mechanics. Our results illustrate the well-known influence of locking phenomena on the penalty approach (particularly for lower order schemes) and its effect on accuracy for whole-cycle mechanics. Additionally, we verify that direct discretization of the weakly penalized form produces similar convergence behavior to mixed formulations while avoiding the use of an additional variable. Combining a simple structure which allows the solution of computationally challenging problems with good convergence characteristics, the weakly penalized form provides an accurate and efficient alternative to incompressibility and compressibility in cardiac mechanics.

  2. Optimal shielding design for minimum materials cost or mass

    DOE PAGES

    Woolley, Robert D.

    2015-12-02

    The mathematical underpinnings of cost optimal radiation shielding designs based on an extension of optimal control theory are presented, a heuristic algorithm to iteratively solve the resulting optimal design equations is suggested, and computational results for a simple test case are discussed. A typical radiation shielding design problem can have infinitely many solutions, all satisfying the problem's specified set of radiation attenuation requirements. Each such design has its own total materials cost. For a design to be optimal, no admissible change in its deployment of shielding materials can result in a lower cost. This applies in particular to very small changes, which can be restated using the calculus of variations as the Euler-Lagrange equations. Furthermore, the associated Hamiltonian function and application of Pontryagin's theorem lead to conditions for a shield to be optimal.

  3. Height Measuring System On Video Using Otsu Method

    NASA Astrophysics Data System (ADS)

    Sandy, C. L. M.; Meiyanti, R.

    2017-01-01

    A measurement of height compares the magnitude of an object with a standard measuring tool. A problem with existing measurements is the continued use of simple apparatus, such as a meter, a method that requires a relatively long time. To overcome this problem, this research aims to create software with image processing for the measurement of height. The captured image is then tested, so that an object recorded by the video camera can be recognized and its height measured using the Otsu method. The system was built using Delphi 7 with the Vision Lab VCL 4.5 component. To increase the quality of the system in future research, the developed system can be combined with other methods.
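
    For reference, Otsu's method itself reduces to choosing the grey level that maximises the between-class variance of the image histogram; the sketch below is a generic implementation of that rule (it is not the authors' Delphi/VCL system, and the synthetic test image is an assumption).

        import numpy as np

        def otsu_threshold(gray):
            # Classic Otsu threshold for an 8-bit image: pick the level that
            # maximises the between-class variance of background vs foreground.
            hist, _ = np.histogram(gray, bins=256, range=(0, 256))
            prob = hist / hist.sum()
            best_t, best_var = 0, 0.0
            for t in range(1, 256):
                w0, w1 = prob[:t].sum(), prob[t:].sum()
                if w0 == 0 or w1 == 0:
                    continue
                mu0 = (np.arange(t) * prob[:t]).sum() / w0
                mu1 = (np.arange(t, 256) * prob[t:]).sum() / w1
                between = w0 * w1 * (mu0 - mu1) ** 2
                if between > best_var:
                    best_var, best_t = between, t
            return best_t

        # Synthetic image: dark background plus a brighter object.
        rng = np.random.default_rng(0)
        image = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 10, 2000)])
        image = np.clip(image, 0, 255).astype(np.uint8)
        print(otsu_threshold(image))   # lands roughly between the two modes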

  4. Parallel-In-Time For Moving Meshes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Falgout, R. D.; Manteuffel, T. A.; Southworth, B.

    2016-02-04

    With steadily growing computational resources available, scientists must develop effective ways to utilize the increased resources. High performance, highly parallel software has become a standard. However, until recent years parallelism has focused primarily on the spatial domain. When solving a space-time partial differential equation (PDE), this leads to a sequential bottleneck in the temporal dimension, particularly when taking a large number of time steps. The XBraid parallel-in-time library was developed as a practical way to add temporal parallelism to existing sequential codes with only minor modifications. In this work, a rezoning-type moving mesh is applied to a diffusion problem and formulated in a parallel-in-time framework. Tests and scaling studies are run using XBraid and demonstrate excellent results for the simple model problem considered herein.

  5. Adaptive Discrete Hypergraph Matching.

    PubMed

    Yan, Junchi; Li, Changsheng; Li, Yin; Cao, Guitao

    2018-02-01

    This paper addresses the problem of hypergraph matching using higher-order affinity information. We propose a solver that iteratively updates the solution in the discrete domain by linear assignment approximation. The proposed method is guaranteed to converge to a stationary discrete solution and avoids the annealing procedure and ad-hoc post binarization step that are required in several previous methods. Specifically, we start with a simple iterative discrete gradient assignment solver. This solver can be trapped in an m-circle sequence under moderate conditions, where m is the order of the graph matching problem. We then devise an adaptive relaxation mechanism to jump out of this degenerating case and show that the resulting new path will converge to a fixed solution in the discrete domain. The proposed method is tested on both synthetic and real-world benchmarks. The experimental results corroborate the efficacy of our method.

  6. Capacity-constrained traffic assignment in networks with residual queues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lam, W.H.K.; Zhang, Y.

    2000-04-01

    This paper proposes a capacity-constrained traffic assignment model for strategic transport planning in which the steady-state user equilibrium principle is extended for road networks with residual queues. Therefore, the road-exit capacity and the queuing effects can be incorporated into the strategic transport model for traffic forecasting. The proposed model is applicable to the congested network, particularly when the traffic demand exceeds the capacity of the network during the peak period. An efficient solution method is proposed for solving the steady-state traffic assignment problem with residual queues. A simple numerical example is then employed to demonstrate the application of the proposed model and solution method, while an example of a medium-sized arterial highway network in Sioux Falls, South Dakota, is used to test the applicability of the proposed solution to real problems.

  7. Evolution of cellular automata with memory: The Density Classification Task.

    PubMed

    Stone, Christopher; Bull, Larry

    2009-08-01

    The Density Classification Task is a well known test problem for two-state discrete dynamical systems. For many years researchers have used a variety of evolutionary computation approaches to evolve solutions to this problem. In this paper, we investigate the evolvability of solutions when the underlying cellular automaton is augmented with a type of memory based on the Least Mean Square algorithm. To obtain high performance solutions using a simple non-hybrid genetic algorithm, we design a novel representation based on the ternary representation used for Learning Classifier Systems. The new representation is found to produce superior performance to the bit string traditionally used for representing cellular automata. Moreover, memory is shown to improve the evolvability of solutions, and appropriate memory settings can be evolved as a component part of these solutions.
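
    The evolved rules themselves are not reproduced in the abstract, but the task setup is easy to sketch; the code below runs the naive local-majority baseline rule on a ring of 149 cells (a baseline only, known to fail on many initial configurations, and not one of the memory-augmented rules discussed in the paper).

        import numpy as np

        def majority_rule_step(state, radius=3):
            # One synchronous update of a binary ring CA: each cell takes the
            # majority value in its radius-3 (7-cell) neighbourhood.
            counts = sum(np.roll(state, k) for k in range(-radius, radius + 1))
            return (counts > radius).astype(int)

        rng = np.random.default_rng(1)
        state = (rng.random(149) < 0.6).astype(int)   # initial density 0.6, so "all ones" is the correct answer
        for _ in range(149):
            state = majority_rule_step(state)
        print(state.mean())   # 1.0 only if the baseline rule classifies this configuration correctly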

  8. Development of a Probabilistic Component Mode Synthesis Method for the Analysis of Non-Deterministic Substructures

    NASA Technical Reports Server (NTRS)

    Brown, Andrew M.; Ferri, Aldo A.

    1995-01-01

    Standard methods of structural dynamic analysis assume that the structural characteristics are deterministic. Recognizing that these characteristics are actually statistical in nature, researchers have recently developed a variety of methods that use this information to determine probabilities of a desired response characteristic, such as natural frequency, without using expensive Monte Carlo simulations. One of the problems in these methods is correctly identifying the statistical properties of primitive variables such as geometry, stiffness, and mass. This paper presents a method where the measured dynamic properties of substructures are used instead as the random variables. The residual flexibility method of component mode synthesis is combined with the probabilistic methods to determine the cumulative distribution function of the system eigenvalues. A simple cantilever beam test problem is presented that illustrates the theory.

  9. Evaluation of mechanical properties of hybrid fiber (hemp, jute, kevlar) reinforced composites

    NASA Astrophysics Data System (ADS)

    Suresha, K. V.; Shivanand, H. K.; Amith, A.; Vidyasagar, H. N.

    2018-04-01

    In today's world, composites play a wide role in all engineering fields. The reinforcement of a composite decides the properties of the material. Natural fiber composites possess poorer mechanical properties than synthetic fiber composites. A solution to this problem is to use a combination of natural and synthetic fibers; hybridization helps to improve the overall mechanical properties of the material. In this study, hybrid reinforced composites of Hemp fabric/Kevlar fabric/Epoxy and Jute fabric/Kevlar fabric/Epoxy are fabricated using a simple hand layup technique followed by a vacuum bagging process. Appropriate test methods as per standards and guidelines are followed to analyze the mechanical behavior of the composites. The mechanical characteristics, such as tensile, compression and flexural properties, of the hybrid reinforced composites are evaluated as per ASTM standards through a series of tensile, compression and three-point bending tests conducted on the hybrid composites. A quantitative relationship between the Hemp fabric/Kevlar fabric/Epoxy and Jute fabric/Kevlar fabric/Epoxy composites has been established at constant thickness.

  10. The X-43 Fin Actuation System Problem - Reliability in Shades of Gray

    NASA Technical Reports Server (NTRS)

    Peebles, Curtis

    2006-01-01

    Following the loss of the first X-43 during launch, the mishap investigation board indicated the Fin Actuator System (FAS) needed to have a larger torque margin. To supply this added torque, a second actuator was added. The consequences of what seemed to be a simple modification would trouble the X-43 program. Because of the second actuator, a new computer board was required. This proved to be subject to electronic noise. This resulted in the actuator latch up in ground tests of the FAS for the second launch. Such a latch up would cause the Pegasus booster to fail, as the FAS was a single string system. The problem was corrected and the second flight was successful. The same modifications were added to the FAS for flight three. When the FAS underwent ground tests, it also latched up. The failure indicated that each computer board had a different tolerance to electronic noise. The problem with the FAS was corrected. Subsequently, another failure occurred, raising questions about the design, and the probability of failure for the X-43 Mach 10 flight. This was not simply a technical issue, but illuminated the difficulties facing both managers and engineers in assessing risk, design requirements, and probabilities in cutting edge aerospace projects.

  11. The mGA1.0: A common LISP implementation of a messy genetic algorithm

    NASA Technical Reports Server (NTRS)

    Goldberg, David E.; Kerzic, Travis

    1990-01-01

    Genetic algorithms (GAs) are finding increased application in difficult search, optimization, and machine learning problems in science and engineering. Increasing demands are being placed on algorithm performance, and the remaining challenges of genetic algorithm theory and practice are becoming increasingly unavoidable. Perhaps the most difficult of these challenges is the so-called linkage problem. Messy GAs were created to overcome the linkage problem of simple genetic algorithms by combining variable-length strings, gene expression, messy operators, and a nonhomogeneous phasing of evolutionary processing. Results on a number of difficult deceptive test functions are encouraging, with the mGA always finding global optima in a polynomial number of function evaluations. Theoretical and empirical studies are continuing, and a first version of a messy GA is ready for testing by others. A Common LISP implementation called mGA1.0 is documented and related to the basic principles and operators developed by Goldberg et al. (1989, 1990). Although the code was prepared with care, it is not a general-purpose code, only a research version. Important data structures and global variables are described. Thereafter, brief function descriptions are given, and sample input data are presented together with sample program output. A source listing with comments is also included.

  12. The Physics Workbook: A Needed Instructional Device.

    ERIC Educational Resources Information Center

    Brekke, Stewart E.

    2003-01-01

    Points out the importance of problem solving as a fundamental skill and how students struggle with problem solving in physics courses. Describes a workbook developed as a solution to students' struggles that features simple exercises and advanced problem solving. (Contains 12 references.) (Author/YDS)

  13. Evaluation of an online, case-based interactive approach to teaching pathophysiology.

    PubMed

    Van Dijken, Pieter Canham; Thévoz, Sara; Jucker-Kupper, Patrick; Feihl, François; Bonvin, Raphaël; Waeber, Bernard

    2008-06-01

    The aim of this study was to evaluate a new pedagogical approach in teaching fluid, electrolyte and acid-base pathophysiology in undergraduate students. This approach comprises traditional lectures, the study of clinical cases on the web and a final interactive discussion of these cases in the classroom. When on the web, the students are asked to select laboratory tests that seem most appropriate to understand the pathophysiological condition underlying the clinical case. The percentage of students having chosen a given test is made available to the teacher who uses it in an interactive session to stimulate discussion with the whole class of students. The same teacher used the same case studies during 2 consecutive years during the third year of the curriculum. The majority of students answered the questions on the web as requested and evaluated positively their experience with this form of teaching and learning. Complementing traditional lectures with online case-based studies and interactive group discussions represents, therefore, a simple means to promote the learning and the understanding of complex pathophysiological mechanisms. This simple problem-based approach to teaching and learning may be implemented to cover all fields of medicine.

  14. The Development of a Novel, Validated, Rapid and Simple Method for the Detection of Sarcocystis fayeri in Horse Meat in the Sanitary Control Setting.

    PubMed

    Furukawa, Masato; Minegishi, Yasutaka; Izumiyama, Shinji; Yagita, Kenji; Mori, Hideto; Uemura, Taku; Etoh, Yoshiki; Maeda, Eriko; Sasaki, Mari; Ichinose, Kazuya; Harada, Seiya; Kamata, Yoichi; Otagiri, Masaki; Sugita-Konishi, Yoshiko; Ohnishi, Takahiro

    2016-01-01

    Sarcocystis fayeri (S. fayeri) is a newly identified causative agent of foodborne disease that is associated with the consumption of raw horse meat. The testing methods prescribed by the Ministry of Health, Labour and Welfare of Japan are time consuming and require the use of expensive equipment and a high level of technical expertise. Accordingly, these methods are not suitable for use in the routine sanitary control setting to prevent outbreaks of foodborne disease. In order to solve these problems, we have developed a new, rapid and simple testing method using LAMP, which takes only 1 hour to perform and which does not involve the use of any expensive equipment or expert techniques. For the validation of this method, an inter-laboratory study was performed among 5 institutes using 10 samples infected with various concentrations of S. fayeri. The results of the inter-laboratory study demonstrated that our LAMP method could detect S. fayeri at concentrations greater than 10(4) copies/g. Thus, this new method could be useful in screening for S. fayeri as a routine sanitary control procedure.

  15. Simple prostatectomy

    MedlinePlus

    ... if you have: Problems emptying your bladder (urinary retention) Frequent urinary tract infections Frequent bleeding from the ... to internal organs Erection problems (impotence) Loss of sperm fertility ( infertility ) Passing semen back up into the ...

  16. Using a Modified Simple Pendulum to Find the Variations in the Value of “g”

    NASA Astrophysics Data System (ADS)

    Arnold, Jonathan P.; Efthimiou, C.

    2007-05-01

    The simple pendulum is one of the best known and most studied systems in Newtonian mechanics. It also provides one of the most elegant and simple devices for measuring the acceleration of gravity at any location. In this presentation we revisit the problem of measuring the acceleration of gravity using a simple pendulum and present a modification to the standard technique that increases the accuracy of the measurement.
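
    For reference, the relation behind such a measurement follows from the small-angle period of a simple pendulum (a standard result, not specific to the modification described above):

        T = 2\pi\sqrt{L/g}
        \quad\Longrightarrow\quad
        g = \frac{4\pi^{2} L}{T^{2}},
        \qquad
        \frac{\Delta g}{g} \approx \frac{\Delta L}{L} + 2\,\frac{\Delta T}{T},

    so timing many oscillations (reducing the relative error in T) and measuring the length carefully are the usual routes to higher accuracy.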

  17. Prediction of Sound Waves Propagating Through a Nozzle Without/With a Shock Wave Using the Space-Time CE/SE Method

    NASA Technical Reports Server (NTRS)

    Wang, Xiao-Yen; Chang, Sin-Chung; Jorgenson, Philip C. E.

    2000-01-01

    The benchmark problems in Category 1 (Internal Propagation) of the third Computational Aeroacoustics (CAA) Workshop sponsored by NASA Glenn Research Center are solved using the space-time conservation element and solution element (CE/SE) method. The first problem addresses the propagation of sound waves through a nearly choked transonic nozzle. The second one concerns shock-sound interaction in a supersonic nozzle. A quasi-one-dimensional CE/SE Euler solver for a nonuniform mesh is developed and employed to solve both problems. Numerical solutions are compared with the analytical solution for both problems. It is demonstrated that the CE/SE method is capable of solving aeroacoustic problems with or without shock waves in a simple way. Furthermore, the simple nonreflecting boundary condition used in the CE/SE method, which is not based on the characteristic theory, works very well.

  18. Rebecca Erikson – Solving Problems with Love for Science

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erikson, Rebecca

    Rebecca Erikson’s love for science began at a young age. Today, she’s a senior scientist at PNNL trying to solve problems that address national security concerns. Through one project, she developed a sleek, simple and inexpensive way to turn a cellphone into a high-powered, high-quality microscope that helps authorities determine if white powder that falls from an envelope is anthrax or something simple like baby powder. Listen as Rebecca describes her work in this Energy Department video.

  19. Multigroup Radiation-Hydrodynamics with a High-Order, Low-Order Method

    DOE PAGES

    Wollaber, Allan Benton; Park, HyeongKae; Lowrie, Robert Byron; ...

    2016-12-09

    Recent efforts at Los Alamos National Laboratory to develop a moment-based, scale-bridging [or high-order (HO)-low-order (LO)] algorithm for solving large varieties of the transport (kinetic) systems have shown promising results. A part of our ongoing effort is incorporating this methodology into the framework of the Eulerian Applications Project to achieve algorithmic acceleration of radiation-hydrodynamics simulations in production software. By starting from the thermal radiative transfer equations with a simple material-motion correction, we derive a discretely consistent energy balance equation (LO equation). We demonstrate that the corresponding LO system for the Monte Carlo HO solver is closely related to the original LO system without material-motion corrections. We test the implementation on a radiative shock problem and show consistency between the energy densities and temperatures in the HO and LO solutions as well as agreement with the semianalytic solution. We also test the approach on a more challenging two-dimensional problem and demonstrate accuracy enhancements and algorithmic speedups. This paper extends a recent conference paper by including multigroup effects.

  20. Conceptual and statistical problems associated with the use of diversity indices in ecology.

    PubMed

    Barrantes, Gilbert; Sandoval, Luis

    2009-09-01

    Diversity indices, particularly the Shannon-Wiener index, have been used extensively in analyzing patterns of diversity at different geographic and ecological scales. These indices have serious conceptual and statistical problems which make comparisons of species richness or species abundances across communities nearly impossible. There is often no single statistical method that retains all the information needed to answer even a simple question. However, multivariate analyses, such as cluster analyses or multiple regressions, could be used instead of diversity indices. More complex multivariate analyses, such as Canonical Correspondence Analysis, provide very valuable information on the environmental variables associated with the presence and abundance of the species in a community. In addition, particular hypotheses associated with changes in species richness across localities, or changes in the abundance of one or a group of species, can be tested using univariate, bivariate, and/or rarefaction statistical tests. The rarefaction method has proved to be robust for standardizing all samples to a common size. Even the simplest method, such as reporting the number of species per taxonomic category, possibly provides more information than a diversity index value.
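
    A small numerical example makes the objection concrete: the Shannon-Wiener index H' = -Σ p_i ln p_i discards species identity entirely, so two communities with different compositions can share the same value (the abundance vectors below are illustrative).

        import numpy as np

        def shannon_wiener(counts):
            # H' = -sum(p_i * ln p_i) over the observed abundances (zeros ignored).
            counts = np.asarray(counts, dtype=float)
            p = counts[counts > 0] / counts.sum()
            return float(-(p * np.log(p)).sum())

        print(shannon_wiener([50, 30, 15, 5]))   # ~1.14
        print(shannon_wiener([5, 15, 30, 50]))   # ~1.14 again, although the dominant species differ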

  1. Building Automatic Grading Tools for Basic of Programming Lab in an Academic Institution

    NASA Astrophysics Data System (ADS)

    Harimurti, Rina; Iwan Nurhidayat, Andi; Asmunin

    2018-04-01

    The skill of computer programming is a core competency that must be mastered by students majoring in computer sciences. The best way to improve this skill is through the practice of writing many programs to solve various problems, from simple to complex. It takes hard work and a long time to check and evaluate the results of student labs one by one, especially if the number of students is large. Based on these constraints, we propose Automatic Grading Tools (AGT), an application that can evaluate and deeply check source code in C and C++. The application architecture consists of students, a web-based application, compilers, and the operating system. Automatic Grading Tools (AGT) is implemented with an MVC architecture using open source software, such as the Laravel framework version 5.4, PostgreSQL 9.6, Bootstrap 3.3.7, and the jQuery library. Automatic Grading Tools has also been tested on real problems by submitting source code in C/C++ and then compiling it. The test results show that the AGT application runs well.
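
    A minimal illustration of the compile-run-compare step at the core of such graders, using only the Python standard library (this is not the AGT Laravel/PostgreSQL application described above, and the file names and test data are hypothetical):

        import os
        import subprocess
        import tempfile

        def grade_c_submission(source_path, stdin_text, expected_stdout, timeout=5):
            # Compile a C submission with gcc, run it on the test input and
            # compare its output with the expected output.
            with tempfile.TemporaryDirectory() as tmp:
                exe = os.path.join(tmp, "submission")
                build = subprocess.run(["gcc", source_path, "-o", exe],
                                       capture_output=True, text=True)
                if build.returncode != 0:
                    return "compile error: " + build.stderr
                try:
                    run = subprocess.run([exe], input=stdin_text, text=True,
                                         capture_output=True, timeout=timeout)
                except subprocess.TimeoutExpired:
                    return "time limit exceeded"
                return "accepted" if run.stdout.strip() == expected_stdout.strip() else "wrong answer"

        # Hypothetical usage:
        # print(grade_c_submission("sum_two_numbers.c", "3 4\n", "7\n"))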

  2. Sterilization of instruments in solar ovens.

    PubMed

    Jørgensen, A F; Nøhr, K; Boisen, F; Nøhr, J

    2002-01-01

    The sterilization of instruments in rural health clinics in less developed countries is an increasing problem as chemical methods can no longer be recommended and fuel wood is becoming increasingly scarce. It seems obvious, therefore, to utilize solar energy for sterilization purposes. A solar oven was designed and manufactured using local materials and simple tools. It was tested by physical, chemical and microbiological methods and, after successful testing, installed in a rural clinic. The oven was able to generate temperatures above 180 degrees C. On days with direct sunlight the oven fulfilled the international recommendations for hot air sterilization. The chemical indicators, Browne's tubes type 3 and 5, also changed colour. It was difficult to reach the right value for the sterilization effect during months with a low sun position. A moveable oven, or two ovens, must be installed to solve this problem. The solar oven has proven to be a realistic method for the sterilization of instruments. The solar oven is easy to make and use. It saves fuel and can be used in most tropical areas.

  3. The Buffer Diagnostic Prototype: A fault isolation application using CLIPS

    NASA Technical Reports Server (NTRS)

    Porter, Ken

    1994-01-01

    This paper describes problem domain characteristics and development experiences from using CLIPS 6.0 in a proof-of-concept troubleshooting application called the Buffer Diagnostic Prototype. The problem domain is a large digital communications subsystem called the real-time network (RTN), which was designed to upgrade the launch processing system used for shuttle support at KSC. The RTN enables up to 255 computers to share 50,000 data points with millisecond response times. The RTN's extensive built-in test capability, but lack of any automatic fault isolation capability, presents a unique opportunity for a diagnostic expert system application. The Buffer Diagnostic Prototype addresses RTN diagnosis with a multiple strategy approach. A novel technique called 'faulty causality' employs inexact qualitative models to process test results. Experimental knowledge provides a capability to recognize symptom-fault associations. The implementation utilizes rule-based and procedural programming techniques, including a goal-directed control structure and a simple text-based generic user interface that may be reusable for other rapid prototyping applications. Although limited in scope, this project demonstrates a diagnostic approach that may be adapted to troubleshoot a broad range of equipment.

  4. Domain Regeneration for Cross-Database Micro-Expression Recognition

    NASA Astrophysics Data System (ADS)

    Zong, Yuan; Zheng, Wenming; Huang, Xiaohua; Shi, Jingang; Cui, Zhen; Zhao, Guoying

    2018-05-01

    In this paper, we investigate the cross-database micro-expression recognition problem, where the training and testing samples are from two different micro-expression databases. Under this setting, the training and testing samples would have different feature distributions and hence the performance of most existing micro-expression recognition methods may decrease greatly. To solve this problem, we propose a simple yet effective method called Target Sample Re-Generator (TSRG) in this paper. By using TSRG, we are able to re-generate the samples from target micro-expression database and the re-generated target samples would share same or similar feature distributions with the original source samples. For this reason, we can then use the classifier learned based on the labeled source samples to accurately predict the micro-expression categories of the unlabeled target samples. To evaluate the performance of the proposed TSRG method, extensive cross-database micro-expression recognition experiments designed based on SMIC and CASME II databases are conducted. Compared with recent state-of-the-art cross-database emotion recognition methods, the proposed TSRG achieves more promising results.

  5. Advance Resource Provisioning in Bulk Data Scheduling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Balman, Mehmet

    2012-10-01

    Today's scientific and business applications generate massive data sets that need to be transferred to remote sites for sharing, processing, and long term storage. Because of increasing data volumes and enhancements in current network technology that provide on-demand high-speed data access between collaborating institutions, data handling and scheduling problems have reached a new scale. In this paper, we present a new data scheduling model with advance resource provisioning, in which data movement operations are defined with earliest start and latest completion times. We analyze the time-dependent resource assignment problem, and propose a new methodology to improve current systems by allowing researchers and higher-level meta-schedulers to use data-placement as-a-service, so they can plan ahead and submit transfer requests in advance. In general, scheduling with time and resource conflicts is NP-hard. We introduce an efficient algorithm to organize multiple requests on the fly, while satisfying users' time and resource constraints. We successfully tested our algorithm in a simple benchmark simulator that we have developed, and demonstrated its performance with initial test results.

  6. A new multistage groundwater transport inverse method: presentation, evaluation, and implications

    USGS Publications Warehouse

    Anderman, Evan R.; Hill, Mary C.

    1999-01-01

    More computationally efficient methods of using concentration data are needed to estimate groundwater flow and transport parameters. This work introduces and evaluates a three‐stage nonlinear‐regression‐based iterative procedure in which trial advective‐front locations link decoupled flow and transport models. Method accuracy and efficiency are evaluated by comparing results to those obtained when flow‐ and transport‐model parameters are estimated simultaneously. The new method is evaluated as conclusively as possible by using a simple test case that includes distinct flow and transport parameters, but does not include any approximations that are problem dependent. The test case is analytical; the only flow parameter is a constant velocity, and the transport parameters are longitudinal and transverse dispersivity. Any difficulties detected using the new method in this ideal situation are likely to be exacerbated in practical problems. Monte‐Carlo analysis of observation error ensures that no specific error realization obscures the results. Results indicate that, while this, and probably other, multistage methods do not always produce optimal parameter estimates, the computational advantage may make them useful in some circumstances, perhaps as a precursor to using a simultaneous method.

  7. Expression and Purification of Recombinant Proteins in Escherichia coli with a His6 or Dual His6-MBP Tag.

    PubMed

    Raran-Kurussi, Sreejith; Waugh, David S

    2017-01-01

    Rapid advances in bioengineering and biotechnology over the past three decades have greatly facilitated the production of recombinant proteins in Escherichia coli. Affinity-based methods that employ protein or peptide based tags for protein purification have been instrumental in this progress. Yet insolubility of recombinant proteins in E. coli remains a persistent problem. One way around this problem is to fuse an aggregation-prone protein to a highly soluble partner. E. coli maltose-binding protein (MBP) is widely acknowledged as a highly effective solubilizing agent. In this chapter, we describe how to construct either a His6- or a dual His6-MBP-tagged fusion protein by Gateway® recombinational cloning and how to evaluate their yield and solubility. We also describe a simple and rapid procedure to test the solubility of proteins after removing their N-terminal fusion tags by tobacco etch virus (TEV) protease digestion. The choice of whether to use a His6 tag or a His6-MBP tag can be made on the basis of this solubility test.

  8. The Academic Diligence Task (ADT): Assessing Individual Differences in Effort on Tedious but Important Schoolwork

    PubMed Central

    Galla, Brian M.; Plummer, Benjamin D.; White, Rachel E.; Meketon, David; D’Mello, Sidney K.; Duckworth, Angela L.

    2014-01-01

    The current study reports on the development and validation of the Academic Diligence Task (ADT), designed to assess the tendency to expend effort on academic tasks which are tedious in the moment but valued in the long-term. In this novel online task, students allocate their time between solving simple math problems (framed as beneficial for problem solving skills) and, alternatively, playing Tetris or watching entertaining videos. Using a large sample of high school seniors (N = 921), the ADT demonstrated convergent validity with self-report ratings of Big Five conscientiousness and its facets, self-control and grit, as well as discriminant validity from theoretically unrelated constructs, such as Big Five extraversion, openness, and emotional stability, test anxiety, life satisfaction, and positive and negative affect. The ADT also demonstrated incremental predictive validity for objectively measured GPA, standardized math and reading achievement test scores, high school graduation, and college enrollment, over and beyond demographics and intelligence. Collectively, findings suggest the feasibility of online behavioral measures to assess noncognitive individual differences that predict academic outcomes. PMID:25258470

  9. The Academic Diligence Task (ADT): Assessing Individual Differences in Effort on Tedious but Important Schoolwork.

    PubMed

    Galla, Brian M; Plummer, Benjamin D; White, Rachel E; Meketon, David; D'Mello, Sidney K; Duckworth, Angela L

    2014-10-01

    The current study reports on the development and validation of the Academic Diligence Task (ADT), designed to assess the tendency to expend effort on academic tasks which are tedious in the moment but valued in the long-term. In this novel online task, students allocate their time between solving simple math problems (framed as beneficial for problem solving skills) and, alternatively, playing Tetris or watching entertaining videos. Using a large sample of high school seniors ( N = 921), the ADT demonstrated convergent validity with self-report ratings of Big Five conscientiousness and its facets, self-control and grit, as well as discriminant validity from theoretically unrelated constructs, such as Big Five extraversion, openness, and emotional stability, test anxiety, life satisfaction, and positive and negative affect. The ADT also demonstrated incremental predictive validity for objectively measured GPA, standardized math and reading achievement test scores, high school graduation, and college enrollment, over and beyond demographics and intelligence. Collectively, findings suggest the feasibility of online behavioral measures to assess noncognitive individual differences that predict academic outcomes.

  10. Vortical Flow Prediction Using an Adaptive Unstructured Grid Method

    NASA Technical Reports Server (NTRS)

    Pirzadeh, Shahyar Z.

    2001-01-01

    A computational fluid dynamics (CFD) method has been employed to compute vortical flows around slender wing/body configurations. The emphasis of the paper is on the effectiveness of an adaptive grid procedure in "capturing" concentrated vortices generated at sharp edges or flow separation lines of lifting surfaces flying at high angles of attack. The method is based on a tetrahedral unstructured grid technology developed at the NASA Langley Research Center. Two steady-state, subsonic, inviscid and Navier-Stokes flow test cases are presented to demonstrate the applicability of the method for solving practical vortical flow problems. The first test case concerns vortex flow over a simple 65° delta wing with different values of leading-edge bluntness, and the second case is that of a more complex fighter configuration. The superiority of the adapted solutions in capturing the vortex flow structure over the conventional unadapted results is demonstrated by comparisons with the wind-tunnel experimental data. The study shows that numerical prediction of vortical flows is highly sensitive to the local grid resolution and that the implementation of grid adaptation is essential when applying CFD methods to such complicated flow problems.

  11. Quality Assessment of Mixed and Ceramic Recycled Aggregates from Construction and Demolition Wastes in the Concrete Manufacture According to the Spanish Standard.

    PubMed

    Rodríguez-Robles, Desirée; García-González, Julia; Juan-Valdés, Andrés; Morán-Del Pozo, Julia Mª; Guerra-Romero, Manuel I

    2014-08-13

    Construction and demolition waste (CDW) constitutes an increasingly significant problem in society due to the volume generated, rendering sustainable management and disposal problematic. The aim of this study is to identify a possible reuse option in concrete manufacturing for recycled aggregates with a significant ceramic content: mixed recycled aggregates (MixRA) and ceramic recycled aggregates (CerRA). In order to do so, several tests are conducted in accordance with the Spanish Code on Structural Concrete (EHE-08) to determine the composition by weight and the physico-mechanical characteristics (particle size distributions, fine content, sand equivalent, density, water absorption, flakiness index, and resistance to fragmentation) of the samples for the partial inclusion of the recycled aggregates in concrete mixes. The results of these tests clearly support the hypothesis that this type of material may be suitable for such partial replacements if a simple pretreatment is carried out. Furthermore, this measure of reuse is in line with European, national, and regional policies on sustainable development, and presents a solution to the environmental problem caused by the generation of CDW.

  12. Quantum teleportation of nonclassical wave packets: An effective multimode theory

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Benichi, Hugo; Takeda, Shuntaro; Lee, Noriyuki

    2011-07-15

    We develop a simple and efficient theoretical model to understand the quantum properties of broadband continuous variable quantum teleportation. We show that, if stated properly, the problem of multimode teleportation can be simplified to teleportation of a single effective mode that describes the input state temporal characteristic. Using that model, we show how the finite bandwidth of squeezing and external noise in the classical channel affect the output teleported quantum field. We choose an approach that is especially relevant for the case of non-Gaussian nonclassical quantum states and we finally back-test our model with recent experimental results.

  13. Radiometric calibration of Landsat Thematic Mapper multispectral images

    USGS Publications Warehouse

    Chavez, P.S.

    1989-01-01

    A main problem encountered in radiometric calibration of satellite image data is correcting for atmospheric effects. Without this correction, an image digital number (DN) cannot be converted to a surface reflectance value. In this paper the accuracy of a calibration procedure, which includes a correction for atmospheric scattering, is tested. Two simple methods, a stand-alone and an in situ sky radiance measurement technique, were used to derive the HAZE DN values for each of the six reflectance Thematic Mapper (TM) bands. The DNs of two Landsat TM images of Phoenix, Arizona were converted to surface reflectances. -from Author
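
    A hedged sketch of the kind of conversion described above: subtract the radiance corresponding to a HAZE DN estimate, then scale to apparent reflectance using the solar geometry. The gain, offset, and exoatmospheric irradiance values below are placeholders, not the calibration constants used in the paper.

```python
import numpy as np

def dn_to_reflectance(dn, haze_dn, gain, offset, esun, sun_elev_deg, d_au=1.0):
    """Convert a TM band DN to apparent surface reflectance after removing the
    path-scattering contribution estimated by a HAZE DN (dark-object / sky-radiance
    estimate). All calibration constants here are placeholders."""
    radiance = gain * dn + offset                 # at-sensor radiance
    haze_radiance = gain * haze_dn + offset       # atmospheric scattering (path) term
    theta = np.deg2rad(90.0 - sun_elev_deg)       # solar zenith angle
    return np.pi * (radiance - haze_radiance) * d_au**2 / (esun * np.cos(theta))

# Example with hypothetical band constants and geometry
print(dn_to_reflectance(dn=85, haze_dn=40, gain=0.6, offset=0.0,
                        esun=1957.0, sun_elev_deg=45.0))
```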

  14. Periodic Pulay method for robust and efficient convergence acceleration of self-consistent field iterations

    DOE PAGES

    Banerjee, Amartya S.; Suryanarayana, Phanish; Pask, John E.

    2016-01-21

    Pulay's Direct Inversion in the Iterative Subspace (DIIS) method is one of the most widely used mixing schemes for accelerating the self-consistent solution of electronic structure problems. In this work, we propose a simple generalization of DIIS in which Pulay extrapolation is performed at periodic intervals rather than on every self-consistent field iteration, and linear mixing is performed on all other iterations. Lastly, we demonstrate through numerical tests on a wide variety of materials systems in the framework of density functional theory that the proposed generalization of Pulay's method significantly improves its robustness and efficiency.
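
    A minimal sketch of the idea (not the authors' production code): a generic fixed-point solver that applies Pulay (DIIS) extrapolation only every few iterations and plain linear mixing otherwise. The mixing parameter, period, and history length are illustrative choices.

```python
import numpy as np

def periodic_pulay(g, x0, beta=0.3, period=3, hist=5, tol=1e-10, maxiter=200):
    """Solve x = g(x): Pulay (DIIS) extrapolation every `period` iterations,
    linear mixing on all other iterations."""
    x = np.asarray(x0, dtype=float)
    X, F = [], []                            # iterate and residual history
    for k in range(1, maxiter + 1):
        f = g(x) - x                         # residual
        if np.linalg.norm(f) < tol:
            return x, k
        X.append(x.copy()); F.append(f.copy())
        X, F = X[-hist:], F[-hist:]
        if k % period == 0 and len(F) > 1:   # Pulay extrapolation step
            m = len(F)
            A = np.zeros((m + 1, m + 1))
            A[:m, :m] = np.array(F) @ np.array(F).T
            A[:m, :m] += 1e-12 * np.eye(m)   # tiny regularization for near-singular histories
            A[m, :m] = A[:m, m] = 1.0
            b = np.zeros(m + 1); b[m] = 1.0
            c = np.linalg.solve(A, b)[:m]    # extrapolation coefficients, constrained to sum to 1
            x = sum(ci * (xi + beta * fi) for ci, xi, fi in zip(c, X, F))
        else:                                # linear mixing step
            x = x + beta * f
    return x, maxiter

# Toy example: solve x = cos(x) component-wise
x_star, iters = periodic_pulay(np.cos, np.array([0.5, 1.0]))
print(x_star, iters)
```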

  15. Comparison of 1 mg and 2 mg overnight dexamethasone suppression tests for the screening of Cushing's syndrome in obese patients.

    PubMed

    Sahin, Mustafa; Kebapcilar, Levent; Taslipinar, Abdullah; Azal, Omer; Ozgurtas, Taner; Corakci, Ahmet; Akgul, Emin Ozgur; Taslipinar, Mine Yavuz; Yazici, Mahmut; Kutlu, Mustafa

    2009-01-01

    Obesity is currently a major public health problem, and one of the potential underlying causes of obesity in a minority of patients is Cushing's syndrome (CS). Traditionally, the gold-standard screening test for CS is the 1 mg overnight dexamethasone suppression test (ODST). However, it is known that obese subjects have a high rate of false-positive results with this test. We have therefore compared the 1 mg and 2 mg overnight dexamethasone suppression tests in obese subjects. Patients whose serum cortisol after the ODST was >50 nM underwent a low-dose dexamethasone suppression test (LDDST); 24-hour urine was collected for basal urinary free cortisol (UFC). For positive results after the overnight 1 mg dexamethasone suppression test we also performed the overnight 2 mg dexamethasone suppression test. We prospectively evaluated 100 patients (22 men and 78 women, ranging in age from 17 to 73 years, with a body mass index (BMI) >30 kg/m2) who had been referred to our hospital-affiliated endocrine clinic because of simple obesity. Suppression of serum cortisol to <50 nM (1.8 microg/dL) after dexamethasone administration was chosen as the cut-off point for normal suppression. Thyroid function tests, lipid profiles, homocysteine, antithyroglobulin, anti-thyroid peroxidase antibody levels, vitamin B12, folate levels, insulin resistance [by homeostasis model assessment (HOMA)] and 1.0 mg postdexamethasone (postdex) suppression cortisol levels were measured. We found an 8% false-positive rate in the 1 mg overnight test and 2% in the 2 mg overnight test (p=0.001). There was no correlation between the cortisol levels after the ODST and the other parameters. Our results indicate that the 2 mg ODST is more convenient and accurate than the 1 mg ODST as a screening test for excluding CS in subjects with simple obesity.

  16. Towards Risk-Based Test Protocols: Estimating the Contribution of Intensive Testing to the UK Bovine Tuberculosis Problem

    PubMed Central

    van Dijk, Jan

    2013-01-01

    Eradicating disease from livestock populations involves the balancing act of removing sufficient numbers of diseased animals without removing too many healthy individuals in the process. As ever more tests for bovine tuberculosis (BTB) are carried out on the UK cattle herd, and each positive herd test triggers more testing, the question arises whether ‘false positive’ results contribute significantly to the measured BTB prevalence. Here, this question is explored using simple probabilistic models of test behaviour. When the screening test is applied to the average UK herd, the estimated proportion of test-associated false positive new outbreaks is highly sensitive to small fluctuations in screening test specificity. Estimations of this parameter should be updated as a priority. Once outbreaks have been confirmed in screening-test positive herds, the following rounds of intensive testing with more sensitive, albeit less specific, tests are highly likely to remove large numbers of false positive animals from herds. Despite this, it is unlikely that significantly more truly infected animals are removed. BTB test protocols should become based on quantified risk in order to prevent the needless slaughter of large numbers of healthy animals. PMID:23717517
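
    The sensitivity of the measured prevalence to screening specificity can be illustrated with a one-line probability model: an entirely uninfected herd of n animals yields at least one false-positive reactor with probability 1 - specificity^n. The specificity values and herd size below are illustrative, not the paper's estimates.

```python
# Hedged illustration: probability that a fully uninfected herd returns at
# least one false-positive reactor at the screening test.
def p_false_positive_herd(n_animals, specificity):
    return 1.0 - specificity ** n_animals

for sp in (0.9990, 0.9995, 0.9999):
    print(f"specificity={sp:.4f}  P(>=1 false positive in a herd of 100) = "
          f"{p_false_positive_herd(100, sp):.3f}")
```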

  17. Information fusion in regularized inversion of tomographic pumping tests

    USGS Publications Warehouse

    Bohling, Geoffrey C.; ,

    2008-01-01

    In this chapter we investigate a simple approach to incorporating geophysical information into the analysis of tomographic pumping tests for characterization of the hydraulic conductivity (K) field in an aquifer. A number of authors have suggested a tomographic approach to the analysis of hydraulic tests in aquifers - essentially simultaneous analysis of multiple tests or stresses on the flow system - in order to improve the resolution of the estimated parameter fields. However, even with a large amount of hydraulic data in hand, the inverse problem is still plagued by non-uniqueness and ill-conditioning and the parameter space for the inversion needs to be constrained in some sensible fashion in order to obtain plausible estimates of aquifer properties. For seismic and radar tomography problems, the parameter space is often constrained through the application of regularization terms that impose penalties on deviations of the estimated parameters from a prior or background model, with the tradeoff between data fit and model norm explored through systematic analysis of results for different levels of weighting on the regularization terms. In this study we apply systematic regularized inversion to analysis of tomographic pumping tests in an alluvial aquifer, taking advantage of the steady-shape flow regime exhibited in these tests to expedite the inversion process. In addition, we explore the possibility of incorporating geophysical information into the inversion through a regularization term relating the estimated K distribution to ground penetrating radar velocity and attenuation distributions through a smoothing spline model. © 2008 Springer-Verlag Berlin Heidelberg.
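
    The data-fit versus model-norm tradeoff described above can be sketched with a generic Tikhonov-style example: solve a small synthetic linear inverse problem for several regularization weights and report both terms. The operator, data, and prior model below are random stand-ins, not field data.

```python
import numpy as np

# Minimal Tikhonov-style sketch: fit G m ≈ d while penalizing deviation of the
# model m from a prior model m0, scanning the regularization weight alpha.
rng = np.random.default_rng(1)
G = rng.normal(size=(40, 20))               # forward operator (e.g., sensitivities)
m_true = rng.normal(size=20)
d = G @ m_true + rng.normal(scale=0.1, size=40)
m0 = np.zeros(20)                           # prior / background model

for alpha in (0.01, 0.1, 1.0, 10.0):
    # minimize ||G m - d||^2 + alpha^2 ||m - m0||^2 via an augmented least-squares system
    A = np.vstack([G, alpha * np.eye(20)])
    b = np.concatenate([d, alpha * m0])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    print(f"alpha={alpha:5.2f}  data misfit={np.linalg.norm(G @ m - d):.3f}  "
          f"model norm={np.linalg.norm(m - m0):.3f}")
```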

  18. Accuracy of simple urine tests for diagnosis of urinary tract infections in low-risk pregnant women.

    PubMed

    Feitosa, Danielle Cristina Alves; da Silva, Márcia Guimarães; de Lima Parada, Cristina Maria Garcia

    2009-01-01

    Anatomic and physiological alterations during pregnancy predispose pregnant women to urinary tract infections (UTI). This study aimed to identify the accuracy of the simple urine test for UTI diagnosis in low-risk pregnant women. Diagnostic test performance was conducted in Botucatu, SP, involving 230 pregnant women, between 2006 and 2008. Results showed 10% UTI prevalence. Sensitivity, specificity and accuracy of the simple urine test were 95.6%, 63.3% and 66.5%, respectively, in relation to UTI diagnoses. The analysis of positive (PPV) and negative (NPV) predictive values showed that, when a regular simple urine test was performed, the chance of UTI occurrence was small (NPV 99.2%). In view of an altered result for such a test, the possibility of UTI existence was small (PPV 22.4%). It was concluded that the accuracy of the simple urine test as a diagnostic means for UTI was low, and that performing a urine culture is essential for appropriate diagnosis.
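
    The reported predictive values follow from the reported sensitivity, specificity, and 10% prevalence via Bayes' rule, as the short check below shows (published figures are rounded, so the match is approximate).

```python
# Recompute PPV and NPV from the sensitivity, specificity and prevalence quoted above.
def predictive_values(sensitivity, specificity, prevalence):
    tp = sensitivity * prevalence
    fp = (1 - specificity) * (1 - prevalence)
    tn = specificity * (1 - prevalence)
    fn = (1 - sensitivity) * prevalence
    return tp / (tp + fp), tn / (tn + fn)   # (PPV, NPV)

ppv, npv = predictive_values(0.956, 0.633, 0.10)
print(f"PPV = {ppv:.1%}, NPV = {npv:.1%}")   # roughly 22.4% and 99.2%
```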

  19. A simple and fast heuristic for protein structure comparison

    PubMed Central

    Pelta, David A; González, Juan R; Moreno Vega, Marcos

    2008-01-01

    Background Protein structure comparison is a key problem in bioinformatics. Several methods exist for protein comparison, with the solution of the Maximum Contact Map Overlap problem (MAX-CMO) being one of the available alternatives. Although this problem may be solved using exact algorithms, researchers require approximate algorithms that obtain good-quality solutions using fewer computational resources than the exact ones. Results We propose a variable neighborhood search metaheuristic for solving MAX-CMO. We analyze this strategy in two aspects: 1) from an optimization point of view, the strategy is tested on two different datasets, obtaining an error of 3.5% (over 2702 pairs) and 1.7% (over 161 pairs) with respect to optimal values, thus leading to highly accurate solutions in a simpler and less expensive way than exact algorithms; 2) in terms of protein structure classification, we conduct experiments on three datasets and show that it is feasible to detect structural similarities at SCOP's family and CATH's architecture levels using normalized overlap values. Some limitations and the role of normalization are outlined for doing classification at SCOP's fold level. Conclusion We designed, implemented and tested a new tool for solving MAX-CMO, based on a well-known metaheuristic technique. The good balance between solution quality and computational effort makes it a valuable tool. Moreover, to the best of our knowledge, this is the first time the MAX-CMO measure is tested at SCOP's fold and CATH's architecture levels, with encouraging results. Software is available for download at . PMID:18366735
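
    The variable neighborhood search idea can be sketched independently of MAX-CMO: shake the current solution in progressively larger neighborhoods, apply a local search, and accept improvements. The bit-string toy objective below is purely illustrative and is not the contact-map overlap measure.

```python
import random

def shake(x, k):
    """Flip k random bits (the k-th neighborhood)."""
    y = x[:]
    for i in random.sample(range(len(y)), k):
        y[i] ^= 1
    return y

def local_search(x, f):
    """First-improvement single-bit-flip local search."""
    improved = True
    while improved:
        improved = False
        for i in range(len(x)):
            y = x[:]; y[i] ^= 1
            if f(y) < f(x):
                x, improved = y, True
    return x

def vns(f, n, k_max=4, iters=200, seed=0):
    random.seed(seed)
    x = [random.randint(0, 1) for _ in range(n)]
    for _ in range(iters):
        k = 1
        while k <= k_max:
            x2 = local_search(shake(x, k), f)
            if f(x2) < f(x):
                x, k = x2, 1        # move to the improvement, restart at the first neighborhood
            else:
                k += 1              # otherwise widen the neighborhood
    return x

# Toy objective: minimize the number of ones (optimum is the all-zero string)
best = vns(lambda s: sum(s), n=30)
print(best, sum(best))
```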

  20. Using Programmable Calculators to Solve Electrostatics Problems.

    ERIC Educational Resources Information Center

    Yerian, Stephen C.; Denker, Dennis A.

    1985-01-01

    Provides a simple routine which allows first-year physics students to use programmable calculators to solve otherwise complex electrostatic problems. These problems involve finding the electrostatic potential and electric field on the axis of a uniformly charged ring. Modest programming skills are required of students. (DH)
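
    The underlying physics is compact enough to state directly: on the axis of a uniformly charged ring of radius R and total charge Q, V = kQ/sqrt(z^2 + R^2) and E_z = kQz/(z^2 + R^2)^(3/2). A short sketch (in Python rather than calculator code) with hypothetical charge and geometry:

```python
import numpy as np

K = 8.9875517923e9  # Coulomb constant [N m^2 / C^2]

def ring_on_axis(q_total, radius, z):
    """Potential and axial field of a uniformly charged ring, evaluated on its
    symmetry axis at distance z from the center."""
    r = np.sqrt(z**2 + radius**2)
    v = K * q_total / r
    e_z = K * q_total * z / r**3
    return v, e_z

# Example: 1 nC ring of radius 5 cm, evaluated 10 cm along the axis
print(ring_on_axis(1e-9, 0.05, 0.10))
```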

  1. A strategy for reducing turnaround time in design optimization using a distributed computer system

    NASA Technical Reports Server (NTRS)

    Young, Katherine C.; Padula, Sharon L.; Rogers, James L.

    1988-01-01

    There is a need to explore methods for reducing lengthy computer turnaround or clock time associated with engineering design problems. Different strategies can be employed to reduce this turnaround time. One strategy is to run validated analysis software on a network of existing smaller computers so that portions of the computation can be done in parallel. This paper focuses on the implementation of this method using two types of problems. The first type is a traditional structural design optimization problem, which is characterized by a simple data flow and a complicated analysis. The second type of problem uses an existing computer program designed to study multilevel optimization techniques. This problem is characterized by complicated data flow and a simple analysis. The paper shows that distributed computing can be a viable means for reducing computational turnaround time for engineering design problems that lend themselves to decomposition. Parallel computing can be accomplished with a minimal cost in terms of hardware and software.

  2. A Physics-Based Engineering Approach to Predict the Cross Section for Advanced SRAMs

    NASA Astrophysics Data System (ADS)

    Li, Lei; Zhou, Wanting; Liu, Huihua

    2012-12-01

    This paper presents a physics-based engineering approach to estimate the heavy ion induced upset cross section for 6T SRAM cells from layout and technology parameters. The new approach calculates the effects of radiation with junction photocurrent, which is derived based on device physics. The new and simple approach handles the problem by using simple SPICE simulations. At first, the approach uses a standard SPICE program on a typical PC to predict the SPICE-simulated curve of the collected charge vs. its affected distance from the drain-body junction with the derived junction photocurrent. And then, the SPICE-simulated curve is used to calculate the heavy ion induced upset cross section with a simple model, which considers that the SEU cross section of a SRAM cell is more related to a “radius of influence” around a heavy ion strike than to the physical size of a diffusion node in the layout for advanced SRAMs in nano-scale process technologies. The calculated upset cross section based on this method is in good agreement with the test results for 6T SRAM cells processed using 90 nm process technology.

  3. Validation of the Simple Shoulder Test in a Portuguese-Brazilian population. Is the latent variable structure and validation of the Simple Shoulder Test Stable across cultures?

    PubMed

    Neto, Jose Osni Bruggemann; Gesser, Rafael Lehmkuhl; Steglich, Valdir; Bonilauri Ferreira, Ana Paula; Gandhi, Mihir; Vissoci, João Ricardo Nickenig; Pietrobon, Ricardo

    2013-01-01

    The validation of widely used scales facilitates the comparison across international patient samples. The objective of this study was to translate, culturally adapt and validate the Simple Shoulder Test into Brazilian Portuguese. We also test the stability of the factor analysis across different cultures. The Simple Shoulder Test was translated from English into Brazilian Portuguese, translated back into English, and evaluated for accuracy by an expert committee. It was then administered to 100 patients with shoulder conditions. Psychometric properties were analyzed including factor analysis, internal reliability, test-retest reliability at seven days, and construct validity in relation to the Short Form 36 health survey (SF-36). Factor analysis demonstrated a three factor solution. Cronbach's alpha was 0.82. Test-retest reliability index as measured by intra-class correlation coefficient (ICC) was 0.84. Associations were observed in the hypothesized direction with all subscales of SF-36 questionnaire. The Simple Shoulder Test translation and cultural adaptation to Brazilian-Portuguese demonstrated adequate factor structure, internal reliability, and validity, ultimately allowing for its use in the comparison with international patient samples.
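
    As a reminder of how the internal-reliability figure is computed, the sketch below evaluates Cronbach's alpha on synthetic yes/no item scores; the data are simulated, not the study's sample, so the resulting alpha will not equal 0.82.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

rng = np.random.default_rng(0)
latent = rng.normal(size=(100, 1))                                  # a common "shoulder function" factor
scores = (latent + rng.normal(scale=0.8, size=(100, 12)) > 0).astype(int)  # 12 simulated yes/no items
print(f"alpha = {cronbach_alpha(scores):.2f}")
```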

  4. Validation of the Simple Shoulder Test in a Portuguese-Brazilian Population. Is the Latent Variable Structure and Validation of the Simple Shoulder Test Stable across Cultures?

    PubMed Central

    Neto, Jose Osni Bruggemann; Gesser, Rafael Lehmkuhl; Steglich, Valdir; Bonilauri Ferreira, Ana Paula; Gandhi, Mihir; Vissoci, João Ricardo Nickenig; Pietrobon, Ricardo

    2013-01-01

    Background The validation of widely used scales facilitates the comparison across international patient samples. The objective of this study was to translate, culturally adapt and validate the Simple Shoulder Test into Brazilian Portuguese. Also we test the stability of factor analysis across different cultures. Objective The objective of this study was to translate, culturally adapt and validate the Simple Shoulder Test into Brazilian Portuguese. Also we test the stability of factor analysis across different cultures. Methods The Simple Shoulder Test was translated from English into Brazilian Portuguese, translated back into English, and evaluated for accuracy by an expert committee. It was then administered to 100 patients with shoulder conditions. Psychometric properties were analyzed including factor analysis, internal reliability, test-retest reliability at seven days, and construct validity in relation to the Short Form 36 health survey (SF-36). Results Factor analysis demonstrated a three factor solution. Cronbach’s alpha was 0.82. Test-retest reliability index as measured by intra-class correlation coefficient (ICC) was 0.84. Associations were observed in the hypothesized direction with all subscales of SF-36 questionnaire. Conclusion The Simple Shoulder Test translation and cultural adaptation to Brazilian-Portuguese demonstrated adequate factor structure, internal reliability, and validity, ultimately allowing for its use in the comparison with international patient samples. PMID:23675436

  5. Central auditory processing disorder (CAPD) in children with specific language impairment (SLI). Central auditory tests.

    PubMed

    Dlouha, Olga; Novak, Alexej; Vokral, Jan

    2007-06-01

    The aim of this project is to use central auditory tests for the diagnosis of central auditory processing disorder (CAPD) in children with specific language impairment (SLI), in order to confirm the relationship between speech-language impairment and central auditory processing. We attempted to establish special dichotic binaural tests in the Czech language modified for younger children. The tests are based on behavioral audiometry using dichotic listening (different auditory stimuli presented to each ear simultaneously). The experimental tasks consisted of three auditory measures (tests 1-3): dichotic listening to two-syllable words presented as binaural interaction tests. Children with SLI are unable to create simple sentences from two words that are heard separately but simultaneously. Results in our group of 90 pre-school children (6-7 years old) confirmed an integration deficit and problems with the quality of short-term memory. The average rate of success of children with specific language impairment was 56% in test 1, 64% in test 2 and 63% in test 3. Results of the control group: 92% in test 1, 93% in test 2 and 92% in test 3 (p<0.001). Our results indicate a relationship between disorders of speech-language perception and central auditory processing disorders.

  6. Benchmark problems for numerical implementations of phase field models

    DOE PAGES

    Jokisaari, A. M.; Voorhees, P. W.; Guyer, J. E.; ...

    2016-10-01

    Here, we present the first set of benchmark problems for phase field models that are being developed by the Center for Hierarchical Materials Design (CHiMaD) and the National Institute of Standards and Technology (NIST). While many scientific research areas use a limited set of well-established software, the growing phase field community continues to develop a wide variety of codes and lacks benchmark problems to consistently evaluate the numerical performance of new implementations. Phase field modeling has become significantly more popular as computational power has increased and is now becoming mainstream, driving the need for benchmark problems to validate and verify new implementations. We follow the example set by the micromagnetics community to develop an evolving set of benchmark problems that test the usability, computational resources, numerical capabilities and physical scope of phase field simulation codes. In this paper, we propose two benchmark problems that cover the physics of solute diffusion and growth and coarsening of a second phase via a simple spinodal decomposition model and a more complex Ostwald ripening model. We demonstrate the utility of benchmark problems by comparing the results of simulations performed with two different adaptive time stepping techniques, and we discuss the needs of future benchmark problems. The development of benchmark problems will enable the results of quantitative phase field models to be confidently incorporated into integrated computational materials science and engineering (ICME), an important goal of the Materials Genome Initiative.
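
    A minimal sketch of a spinodal-decomposition-type calculation (explicit finite-difference Cahn-Hilliard with a double-well free energy) is given below for orientation only; the grid, parameters, and time step are illustrative and are not the CHiMaD/NIST benchmark specification.

```python
import numpy as np

def laplacian(a, dx=1.0):
    """Five-point Laplacian with periodic boundary conditions."""
    return (np.roll(a, 1, 0) + np.roll(a, -1, 0) +
            np.roll(a, 1, 1) + np.roll(a, -1, 1) - 4.0 * a) / dx**2

rng = np.random.default_rng(0)
c = 0.01 * rng.standard_normal((64, 64))      # small fluctuations about c = 0
kappa, M, dt = 1.0, 1.0, 0.01                 # illustrative parameters

for _ in range(2000):
    mu = c**3 - c - kappa * laplacian(c)      # chemical potential for f(c) = (c^2 - 1)^2 / 4
    c += dt * M * laplacian(mu)               # Cahn-Hilliard update, explicit Euler

print("composition range after early coarsening:", c.min(), c.max())
```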

  7. The use of analysis of variance procedures in biological studies

    USGS Publications Warehouse

    Williams, B.K.

    1987-01-01

    The analysis of variance (ANOVA) is widely used in biological studies, yet there remains considerable confusion among researchers about the interpretation of hypotheses being tested. Ambiguities arise when statistical designs are unbalanced, and in particular when not all combinations of design factors are represented in the data. This paper clarifies the relationship among hypothesis testing, statistical modelling and computing procedures in ANOVA for unbalanced data. A simple two-factor fixed effects design is used to illustrate three common parametrizations for ANOVA models, and some associations among these parametrizations are developed. Biologically meaningful hypotheses for main effects and interactions are given in terms of each parametrization, and procedures for testing the hypotheses are described. The standard statistical computing procedures in ANOVA are given along with their corresponding hypotheses. Throughout the development unbalanced designs are assumed and attention is given to problems that arise with missing cells.
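
    A hedged illustration of the point about unbalanced designs: on synthetic two-factor data with unequal cell sizes, sequential (Type I) and partial (Type III) sums of squares test different hypotheses and generally give different tables. The formula interface and data below are assumptions of this sketch, not the paper's example.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

rng = np.random.default_rng(0)
cell_sizes = {("a1", "b1"): 8, ("a1", "b2"): 3, ("a2", "b1"): 5, ("a2", "b2"): 10}  # unbalanced cells
rows = []
for (a, b), n in cell_sizes.items():
    effect = 1.0 * (a == "a2") + 0.5 * (b == "b2")
    for y in effect + rng.normal(size=n):
        rows.append({"A": a, "B": b, "y": y})
df = pd.DataFrame(rows)

# Sum-to-zero contrasts are the usual parametrization when Type III tests are wanted
model = ols("y ~ C(A, Sum) * C(B, Sum)", data=df).fit()
print(sm.stats.anova_lm(model, typ=1))   # sequential (order-dependent) sums of squares
print(sm.stats.anova_lm(model, typ=3))   # partial sums of squares; different hypotheses
```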

  8. Test images for the maximum entropy image restoration method

    NASA Technical Reports Server (NTRS)

    Mackey, James E.

    1990-01-01

    One of the major activities of any experimentalist is data analysis and reduction. In solar physics, remote observations are made of the sun in a variety of wavelengths and circumstances. In no case is the data collected free from the influence of the design and operation of the data gathering instrument as well as the ever present problem of noise. The presence of significant noise invalidates the simple inversion procedure regardless of the range of known correlation functions. The Maximum Entropy Method (MEM) attempts to perform this inversion by making minimal assumptions about the data. To provide a means of testing the MEM and characterizing its sensitivity to noise, choice of point spread function, type of data, etc., one would like to have test images of known characteristics that can represent the type of data being analyzed. A means of reconstructing these images is presented.

  9. Durham Smith Vest-Over-Pant Technique: Simple Procedure for a Complex Problem (Post-Hypospadias Repair Fistula).

    PubMed

    Gite, Venkat A; Patil, Saurabh R; Bote, Sachin M; Siddiqui, Mohd Ayub Karam Nabi; Nikose, Jayant V; Kandi, Anitha J

    2017-01-01

    Urethrocutaneous fistula, which occurs after hypospadias surgery, is often a baffling problem and its treatment is challenging. The study aimed to evaluate the results of a simple procedure (the Durham Smith vest-over-pant technique) for this complex problem (post-hypospadias repair fistula). During the period from 2011 to 2015, 20 patients with post-hypospadias repair fistulas underwent Durham Smith repair. The most common age group was between 5 and 12 years. The site-wise distribution of fistulas was coronal 2 (10%), distal penile 7 (35%), mid-penile 7 (35%), and proximal penile 4 (20%). Out of 20 patients, 15 had a fistula of size <5 mm (75%) and 5 patients had a fistula of size >5 mm (25%). All cases were repaired with the Durham Smith vest-over-pant technique by a single surgeon. In the case of multiple fistulas adjacent to each other, all fistulas were joined to form a single fistula and repaired. We successfully repaired all post-hypospadias surgery urethrocutaneous fistulas using the technique described by Durham Smith, with a 100% success rate. The Durham Smith vest-over-pant technique is a simple solution for a complex problem (post-hypospadias surgery penile fistulas) in properly selected patients. © 2017 S. Karger AG, Basel.

  10. Solving transient acoustic boundary value problems with equivalent sources using a lumped parameter approach.

    PubMed

    Fahnline, John B

    2016-12-01

    An equivalent source method is developed for solving transient acoustic boundary value problems. The method assumes the boundary surface is discretized in terms of triangular or quadrilateral elements and that the solution is represented using the acoustic fields of discrete sources placed at the element centers. Also, the boundary condition is assumed to be specified for the normal component of the surface velocity as a function of time, and the source amplitudes are determined to match the known elemental volume velocity vector at a series of discrete time steps. Equations are given for marching-on-in-time schemes to solve for the source amplitudes at each time step for simple, dipole, and tripole source formulations. Several example problems are solved to illustrate the results and to validate the formulations, including problems with closed boundary surfaces where long-time numerical instabilities typically occur. A simple relationship between the simple and dipole source amplitudes in the tripole source formulation is derived so that the source radiates primarily in the direction of the outward surface normal. The tripole source formulation is shown to eliminate interior acoustic resonances and long-time numerical instabilities.

  11. Learning Problem-Solving Rules as Search through a Hypothesis Space

    ERIC Educational Resources Information Center

    Lee, Hee Seung; Betts, Shawn; Anderson, John R.

    2016-01-01

    Learning to solve a class of problems can be characterized as a search through a space of hypotheses about the rules for solving these problems. A series of four experiments studied how different learning conditions affected the search among hypotheses about the solution rule for a simple computational problem. Experiment 1 showed that a problem…

  12. Getting to the Bottom of a Ladder Problem

    ERIC Educational Resources Information Center

    McCartney, Mark

    2002-01-01

    In this paper, the author introduces a simple problem relating to a pair of ladders. A mathematical model of the problem produces an equation which can be solved in a number of ways using mathematics appropriate to "A" level students or first year undergraduates. The author concludes that the ladder problem can be used in class to develop and…

  13. Strategies of Pre-Service Primary School Teachers for Solving Addition Problems with Negative Numbers

    ERIC Educational Resources Information Center

    Almeida, Rut; Bruno, Alicia

    2014-01-01

    This paper analyses the strategies used by pre-service primary school teachers for solving simple addition problems involving negative numbers. The findings reveal six different strategies that depend on the difficulty of the problem and, in particular, on the unknown quantity. We note that students use negative numbers in those problems they find…

  14. Using a Five-Step Procedure for Inferential Statistical Analyses

    ERIC Educational Resources Information Center

    Kamin, Lawrence F.

    2010-01-01

    Many statistics texts pose inferential statistical problems in a disjointed way. By using a simple five-step procedure as a template for statistical inference problems, the student can solve problems in an organized fashion. The problem and its solution will thus be a stand-by-itself organic whole and a single unit of thought and effort. The…

  15. A new simple method with high precision for determining the toxicity of antifouling paints on brine shrimp larvae (Artemia): first results.

    PubMed

    Castritsi-Catharios, J; Bourdaniotis, N; Persoone, G

    2007-04-01

    The use of antifouling paints is the only truly effective method for the protection of underwater structures from the development of fouling organisms. In the present study, the surface to volume concept constitutes the basis for the development of a new and improved method for determining the toxicity of antifouling paints on marine organisms. Particular emphasis is placed on the attainment of a standardized uniformity of coated surfaces. Failure to control the thickness of the coat of paint in previous studies of this type has led to inaccurate evaluation of the relative toxicity of samples. Herein, an attempt is made to solve this problem using a simple technique which gives completely uniform and smooth surfaces. The effectiveness of this technique is assessed through two series of experiments using two different types of test containers: 50 ml modified syringes and 7 ml multiwells. The results of the toxicity experiments follow a normal distribution around the average value, which allows these values to be considered reliable for comparing the level of toxic effect detected with the two types of test containers. The mean lethal concentration L(S/V)50 in the test series conducted in the multiwells (20.38 mm² ml⁻¹) does not differ significantly from that obtained in the test series using modified syringes (20.065 mm² ml⁻¹). It can thus be concluded from this preliminary study that the new method and the two different ways of exposing the test organisms to the antifouling paints and their leachates gave reliable and replicable results.

  16. Simple Proof of Jury Test for Complex Polynomials

    NASA Astrophysics Data System (ADS)

    Choo, Younseok; Kim, Dongmin

    Recently, some attempts have been made in the literature to give simple proofs of the Jury test for real polynomials. This letter presents a similar result for complex polynomials. A simple proof of the Jury test for complex polynomials is provided based on Rouché's theorem and a single-parameter characterization of the Schur stability property for complex polynomials.
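
    The proof itself is not reproduced here, but the test being proved can be sketched as the classical Schur-Cohn/Jury-type recursion for complex coefficients, cross-checked against a direct root computation. This is a textbook-style implementation, not the single-parameter argument of the letter.

```python
import numpy as np

def is_schur_stable(coeffs):
    """Schur-Cohn/Jury-type recursion for a complex polynomial
    p(z) = c[0]*z**n + ... + c[n] (highest degree first): returns True when
    all roots lie strictly inside the unit circle."""
    c = np.trim_zeros(np.asarray(coeffs, dtype=complex), "f")
    while len(c) > 1:
        a_n, a_0 = c[0], c[-1]          # leading and constant coefficients
        if abs(a_0) >= abs(a_n):        # necessary condition |a_0| < |a_n| fails
            return False
        # reduced polynomial: (conj(a_n) * p(z) - a_0 * p*(z)) / z,
        # where p*(z) is the conjugate-reciprocal polynomial
        reduced = np.conj(a_n) * c - a_0 * np.conj(c[::-1])
        c = reduced[:-1]                # constant term of `reduced` is exactly zero; drop it
    return True

p = np.poly(np.array([0.5 + 0.3j, -0.2j, 0.8]))   # roots chosen inside the unit circle
print(is_schur_stable(p), bool(np.all(np.abs(np.roots(p)) < 1)))
```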

  17. Solution of Stochastic Capital Budgeting Problems in a Multidivisional Firm.

    DTIC Science & Technology

    1980-06-01

    linear programming with simple recourse (see, for example, Dantzig (9) or Ziemba (35)) and has been applied to capital budgeting problems with...

  18. Moving Material into Space Without Rockets.

    ERIC Educational Resources Information Center

    Cheng, R. S.; Trefil, J. S.

    1985-01-01

    In response to conventional rocket demands on fuel supplies, electromagnetic launches were developed to give payloads high velocity using a stationary energy source. Several orbital mechanics problems are solved including a simple problem (radial launch with no rotation) and a complex problem involving air resistance and gravity. (DH)

  19. Reflection on solutions in the form of refutation texts versus problem solving: the case of 8th graders studying simple electric circuits

    NASA Astrophysics Data System (ADS)

    Safadi, Rafi'; Safadi, Ekhlass; Meidav, Meir

    2017-01-01

    This study compared students’ learning in troubleshooting and problem solving activities. The troubleshooting activities provided students with solutions to conceptual problems in the form of refutation texts; namely, solutions that portray common misconceptions, refute them, and then present the accepted scientific ideas. They required students to individually diagnose these solutions; that is, to identify the erroneous and correct parts of the solutions and explain in what sense they differed, and later share their work in whole class discussions. The problem solving activities required the students to individually solve these same problems, and later share their work in whole class discussions. We compared the impact of the individual work stage in the troubleshooting and problem solving activities on promoting argumentation in the subsequent class discussions, and the effects of these activities on students’ engagement in self-repair processes; namely, in learning processes that allowed the students to self-repair their misconceptions, and by extension on advancing their conceptual knowledge. Two 8th grade classes studying simple electric circuits with the same teacher took part. One class (28 students) carried out four troubleshooting activities and the other (31 students) four problem solving activities. These activities were interwoven into a twelve lesson unit on simple electric circuits that was spread over a period of 2 months. The impact of the troubleshooting activities on students’ conceptual knowledge was significantly higher than that of the problem solving activities. This result is consistent with the finding that the troubleshooting activities engaged students in self-repair processes whereas the problem solving activities did not. The results also indicated that diagnosing solutions to conceptual problems in the form of refutation texts, as opposed to solving these same problems, apparently triggered argumentation in subsequent class discussions, even though the teacher was unfamiliar with the best ways to conduct argumentative classroom discussions. We account for these results and suggest possible directions for future research.

  20. Evolutionary optimization methods for accelerator design

    NASA Astrophysics Data System (ADS)

    Poklonskiy, Alexey A.

    Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as ease of implementation, modest requirements on the objective function, good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe GATool, the evolutionary algorithm and software package used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and the methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained optimization test problems for EA with a variety of different configurations and suggest optimal default parameter values based on the results. Then we study the performance of the REPA method on the same set of test problems and compare the obtained results with those of several commonly used constrained optimization methods with EA. Based on the obtained results, particularly on the outstanding performance of REPA on a test problem that presents significant difficulty for other reviewed EAs, we conclude that the proposed method is useful and competitive. We discuss REPA parameter tuning for difficult problems and critically review some of the problems from the de-facto standard test problem set for constrained optimization with EA. In order to demonstrate the practical usefulness of the developed method, we study several problems of accelerator design and demonstrate how they can be solved with EAs. These problems include a simple accelerator design problem (design a quadrupole triplet to be stigmatically imaging, find all possible solutions), a complex real-life accelerator design problem (an optimization of the front end section for the future neutrino factory), and a problem of normal form defect function optimization, which is used to rigorously estimate the stability of the beam dynamics in circular accelerators. The positive results we obtained suggest that the application of EAs to problems from accelerator theory can be very beneficial and has large potential. The developed optimization scenarios and tools can be used to approach similar problems.
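
    None of GATool, REPA, or the accelerator models are reproduced here, but the flavor of the underlying heuristic can be sketched with a minimal real-coded evolutionary algorithm (tournament selection, blend crossover, Gaussian mutation) on a standard test function. All parameter choices are illustrative.

```python
import numpy as np

def evolve(f, bounds, pop_size=60, generations=200, seed=0):
    """Minimal real-coded evolutionary algorithm: elitism, binary tournament
    selection, blend crossover and Gaussian mutation within box bounds."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(generations):
        fit = np.apply_along_axis(f, 1, pop)
        new = [pop[np.argmin(fit)].copy()]               # elitism: keep the current best
        while len(new) < pop_size:
            i, j = rng.integers(pop_size, size=2)
            p1 = pop[i] if fit[i] < fit[j] else pop[j]   # tournament selection, parent 1
            i, j = rng.integers(pop_size, size=2)
            p2 = pop[i] if fit[i] < fit[j] else pop[j]   # tournament selection, parent 2
            alpha = rng.uniform(size=len(lo))
            child = alpha * p1 + (1 - alpha) * p2        # blend crossover
            child += rng.normal(scale=0.05 * (hi - lo))  # Gaussian mutation
            new.append(np.clip(child, lo, hi))
        pop = np.array(new)
    fit = np.apply_along_axis(f, 1, pop)
    return pop[np.argmin(fit)], fit.min()

# Test on the Rosenbrock function; the global minimum is at (1, 1)
rosen = lambda x: (1 - x[0])**2 + 100 * (x[1] - x[0]**2)**2
print(evolve(rosen, bounds=[(-2, 2), (-2, 2)]))
```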

  1. Community Involvement Manual.

    DTIC Science & Technology

    1979-05-01

    and social problems, does not lend itself to a single or simple solution. This is why we must all be involved. For this reason we believe that...of admission to decisionmaking. At times the implications of this relatively simple premise are not minor. Many people beginning community...involvement programs have found it extremely difficult to locate technical people able to translate technical reports into simple, everyday English. There

  2. HIA, the next step: Defining models and roles

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Putters, Kim

    If HIA is to be an effective instrument for optimising health interests in the policy making process it has to recognise the different contexts in which policy is made and the relevance of both technical rationality and political rationality. Policy making may adopt a rational perspective, in which there is a systematic and orderly progression from problem formulation to solution, or a network perspective, in which there are multiple interdependencies, extensive negotiation and compromise, and the steps from problem formulation to solution are not followed sequentially or in any particular order. Policy problems may be simple, with clear causal pathways and responsibilities, or complex, with unclear causal pathways and disputed responsibilities. Network analysis is required to show which stakeholders are involved, their support for health issues and the degree of consensus. From this analysis three models of HIA emerge. The first is the phases model, which is fitted to simple problems and a rational perspective of policymaking. This model involves following structured steps. The second model is the rounds (Echternach) model, which is fitted to complex problems and a network perspective of policymaking. This model is dynamic and concentrates on network solutions, taking these steps in no particular order. The final model is the 'garbage can' model, fitted to contexts which combine simple and complex problems. In this model HIA functions as a problem solver and signpost, keeping all possible solutions and stakeholders in play and allowing solutions to emerge over time. HIA models should be the beginning rather than the conclusion of discussion between the worlds of HIA and policymaking.

  3. Symmetries and "simple" solutions of the classical n-body problem

    NASA Astrophysics Data System (ADS)

    Chenciner, Alain

    2006-03-01

    The Lagrangian of the classical n-body problem has well known symmetries: isometries of the ambient Euclidean space (translations, rotations, reflexions) and changes of scale coming from the homogeneity of the potential. To these symmetries are associated "simple" solutions of the problem, the so-called homographic motions, which play a basic role in the global understanding of the dynamics. The classical subproblems (planar, isosceles) are also consequences of the existence of symmetries: invariance under reflexion through a plane in the first case, invariance under exchange of two equal masses in the second. In these two cases, the symmetry acts at the level of the "shape space" (the oriented one in the first case) whose existence is the main difference between the 2-body problem and the (n ≥ 3)-body problem. These symmetries of the Lagrangian imply symmetries of the action functional, which is defined on the space of regular enough loops of a given period in the configuration space of the problem. Minimization of the action under well-chosen symmetry constraints leads to remarkable solutions of the n-body problem which may also be called simple and could play after the homographic ones the role of organizing centers in the global dynamics. In [13] and [16], I have given a survey of the new classes of solutions which had been obtained in this way, mainly choreographies of n equal masses in a plane or in space and generalized Hip-Hops of at least 4 arbitrary masses in space. I give here an updated overview of the results and a quick glance at the methods of proofs.

  4. PopED lite: An optimal design software for preclinical pharmacokinetic and pharmacodynamic studies.

    PubMed

    Aoki, Yasunori; Sundqvist, Monika; Hooker, Andrew C; Gennemark, Peter

    2016-04-01

    Optimal experimental design approaches are seldom used in preclinical drug discovery. The objective is to develop an optimal design software tool specifically designed for preclinical applications in order to increase the efficiency of drug discovery in vivo studies. Several realistic experimental design case studies were collected and many preclinical experimental teams were consulted to determine the design goal of the software tool. The tool obtains an optimized experimental design by solving a constrained optimization problem, where each experimental design is evaluated using some function of the Fisher Information Matrix. The software was implemented in C++ using the Qt framework to assure a responsive user-software interaction through a rich graphical user interface, and at the same time, achieving the desired computational speed. In addition, a discrete global optimization algorithm was developed and implemented. The software design goals were simplicity, speed and intuition. Based on these design goals, we have developed the publicly available software PopED lite (http://www.bluetree.me/PopED_lite). Optimization computation was on average, over 14 test problems, 30 times faster in PopED lite compared to an already existing optimal design software tool. PopED lite is now used in real drug discovery projects and a few of these case studies are presented in this paper. PopED lite is designed to be simple, fast and intuitive. Simple, to give many users access to basic optimal design calculations. Fast, to fit a short design-execution cycle and allow interactive experimental design (test one design, discuss proposed design, test another design, etc). Intuitive, so that the input to and output from the software tool can easily be understood by users without knowledge of the theory of optimal design. In this way, PopED lite is highly useful in practice and complements existing tools. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.
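
    PopED lite's own algorithms and interface are not shown here; the core idea of scoring candidate designs by a function of the Fisher Information Matrix can be sketched as a brute-force D-optimal search for a hypothetical one-compartment PK model. The model, parameter values, and candidate times are assumptions of this sketch.

```python
import numpy as np
from itertools import combinations

def conc(t, cl, v, dose=100.0):
    """One-compartment, IV-bolus concentration model (hypothetical)."""
    return dose / v * np.exp(-cl / v * t)

def fim(times, cl=2.0, v=10.0, sigma=0.5, eps=1e-5):
    """Fisher Information Matrix for parameters (CL, V), sensitivities by finite differences."""
    times = np.asarray(times, dtype=float)
    j_cl = (conc(times, cl + eps, v) - conc(times, cl - eps, v)) / (2 * eps)
    j_v = (conc(times, cl, v + eps) - conc(times, cl, v - eps)) / (2 * eps)
    J = np.column_stack([j_cl, j_v])
    return J.T @ J / sigma**2

candidates = np.arange(0.25, 24.25, 0.25)             # candidate sampling times [h]
best = max(combinations(candidates, 2), key=lambda d: np.linalg.det(fim(d)))
print("D-optimal two-point design (h):", best)
```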

  5. Improving the simple, complicated and complex realities of community-acquired pneumonia.

    PubMed

    Liu, S K; Homa, K; Butterly, J R; Kirkland, K B; Batalden, P B

    2009-04-01

    This paper first describes efforts to improve the care for patients hospitalised with community-acquired pneumonia and the associated changes in quality measures at a rural academic medical centre. The results of the improvement interventions and the associated clinical realities, expected outcomes, measures, improvement interventions and improvement aims are then re-examined using the Glouberman and Zimmerman typology of healthcare problems--simple, complicated and complex. The typology is then used to explore the future design and assessment of improvement interventions, which may allow better matching with the types of problem healthcare providers and organisations are confronted with. Matching improvement interventions with problem category has the possibility of improving the success of improvement efforts and the reliability of care while at the same time preserving needed provider autonomy and judgement to adapt care for more complex problems.

  6. Simple exercise test score versus cardiac stress test for the prediction of coronary artery disease in patients with type 2 diabetes.

    PubMed

    Pikto-Pietkiewicz, Witold; Przewłocka, Monika; Chybowska, Barbara; Cyciwa, Alona; Pasierski, Tomasz

    2014-01-01

    Type 2 diabetes markedly increases the risk of coronary heart disease (CHD), and screening for CHD is suggested by the guidelines. The aim of the study was to compare the diagnostic usefulness of the simple exercise test score, incorporating the clinical data and cardiac stress test results, with the standard stress test in patients with type 2 diabetes. A total of 62 consecutive patients (aged 65.4 ±8.5 years; 32 men) with type 2 diabetes and clinical symptoms suggesting CHD underwent a stress test followed by coronary angiography. The simple score was calculated for all patients. Significant coronary stenosis was observed in 41 patients (66.1%). Stress test results were positive in 36 patients (58.1%). The mean simple score was high (65.5 ±14.3 points). A positive linear relationship was observed between the score and the prevalence of CHD (R2 = 0.19; P <0.001) as well as its severity (R² = 0.23; P <0.001). The area under the receiver-operating characteristic curve for the simple score was 0.74 (95% confidence interval [CI], 0.62-0.86). At the original cut-off value of 60 points, the score had a similar prognostic value to that of the standard stress test. However, in a multivariate analysis, only the simple score (odds ratio [OR], 1.46; 95% CI, 1.11-1.94; P <0.01 for an increase in the score by 1 point) and male sex (OR, 1.57; 95% CI, 1.24-1.98; P <0.001) remained independent predictors of CHD. In patients with type 2 diabetes, the simple score correlated with the prevalence and severity of CHD. However, the cut-off value of 60 points was inadequate in the population of diabetic patients with high risk of CHD. The simple score used instead of or together with the stress test was a better predictor of CHD than the stress test alone.

  7. Electronic test and calibration circuits, a compilation

    NASA Technical Reports Server (NTRS)

    1972-01-01

    A wide variety of simple test calibration circuits are compiled for the engineer and laboratory technician. The majority of circuits were found inexpensive to assemble. Testing electronic devices and components, instrument and system test, calibration and reference circuits, and simple test procedures are presented.

  8. Mineral lineation produced by 3-D rotation of rigid inclusions in confined viscous simple shear

    NASA Astrophysics Data System (ADS)

    Marques, Fernando O.

    2016-08-01

    The solid-state flow of rocks commonly produces a parallel arrangement of elongate minerals with their longest axes coincident with the direction of flow, a mineral lineation. However, this does not conform to Jeffery's theory of the rotation of rigid ellipsoidal inclusions (REIs) in viscous simple shear, because rigid inclusions rotate continuously with applied shear. In 2-dimensional (2-D) flow, the REI's greatest axis (e1) is already in the shear direction; therefore, the problem is to find mechanisms that can prevent the rotation of the REI about one axis, the vorticity axis. In 3-D flow, the problem is to find a mechanism that can make e1 rotate towards the shear direction, and so generate a mineral lineation by rigid rotation about two axes. 3-D analogue and numerical modelling was used to test the effects of confinement on REI rotation and, for narrow channels (shear zone thickness over inclusion's least axis, Wr < 2), the results show that: (1) the rotational behaviour deviates greatly from Jeffery's model; (2) inclusions with aspect ratio Ar (greatest over least principal axis, e1/e3) > 1 can rotate backwards from an initial orientation with e1 parallel to the shear plane, in great contrast to Jeffery's model; (3) back rotation is limited because inclusions reach a stable equilibrium orientation; (4) most importantly, and in contrast to Jeffery's model and to the 2-D simulations, in 3-D the confined REI gradually rotated about an axis orthogonal to the shear plane towards an orientation with e1 parallel to the shear direction, thus producing a lineation parallel to the shear direction. The modelling results lead to the conclusion that confined simple shear can be responsible for the mineral alignment (lineation) observed in ductile shear zones.

  9. What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm.

    PubMed

    Raykov, Yordan P; Boukouvalas, Alexis; Baig, Fahd; Little, Max A

    The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism.
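
    MAP-DP itself is not distributed with common libraries; as a rough stand-in for the idea that the number of clusters should be inferred rather than fixed, the sketch below contrasts fixed-K K-means with scikit-learn's Dirichlet-process Gaussian mixture on synthetic data with three true clusters. The weight threshold and data are illustrative.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(loc=c, scale=0.4, size=(100, 2))
               for c in ([0, 0], [3, 0], [0, 3])])          # three well-separated clusters

km = KMeans(n_clusters=5, n_init=10, random_state=0).fit(X)  # deliberately misspecified K
dp = BayesianGaussianMixture(n_components=10,
                             weight_concentration_prior_type="dirichlet_process",
                             random_state=0).fit(X)

print("k-means clusters used:", len(set(km.labels_)))
print("DP mixture components with non-trivial weight:",
      int((dp.weights_ > 0.05).sum()))
```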

  10. What to Do When K-Means Clustering Fails: A Simple yet Principled Alternative Algorithm

    PubMed Central

    Baig, Fahd; Little, Max A.

    2016-01-01

    The K-means algorithm is one of the most popular clustering algorithms in current use as it is relatively fast yet simple to understand and deploy in practice. Nevertheless, its use entails certain restrictive assumptions about the data, the negative consequences of which are not always immediately apparent, as we demonstrate. While more flexible algorithms have been developed, their widespread use has been hindered by their computational and technical complexity. Motivated by these considerations, we present a flexible alternative to K-means that relaxes most of the assumptions, whilst remaining almost as fast and simple. This novel algorithm which we call MAP-DP (maximum a-posteriori Dirichlet process mixtures), is statistically rigorous as it is based on nonparametric Bayesian Dirichlet process mixture modeling. This approach allows us to overcome most of the limitations imposed by K-means. The number of clusters K is estimated from the data instead of being fixed a-priori as in K-means. In addition, while K-means is restricted to continuous data, the MAP-DP framework can be applied to many kinds of data, for example, binary, count or ordinal data. Also, it can efficiently separate outliers from the data. This additional flexibility does not incur a significant computational overhead compared to K-means with MAP-DP convergence typically achieved in the order of seconds for many practical problems. Finally, in contrast to K-means, since the algorithm is based on an underlying statistical model, the MAP-DP framework can deal with missing data and enables model testing such as cross validation in a principled way. We demonstrate the simplicity and effectiveness of this algorithm on the health informatics problem of clinical sub-typing in a cluster of diseases known as parkinsonism. PMID:27669525

  11. Explicitly solvable complex Chebyshev approximation problems related to sine polynomials

    NASA Technical Reports Server (NTRS)

    Freund, Roland

    1989-01-01

    Explicitly solvable real Chebyshev approximation problems on the unit interval are typically characterized by simple error curves. A similar principle is presented for complex approximation problems with error curves induced by sine polynomials. As an application, some new explicit formulae for complex best approximations are derived.

  12. Use of a Computer Simulation To Develop Mental Simulations for Understanding Relative Motion Concepts.

    ERIC Educational Resources Information Center

    Monaghan, James M.; Clement, John

    1999-01-01

    Presents evidence for students' qualitative and quantitative difficulties with apparently simple one-dimensional relative-motion problems, students' spontaneous visualization of relative-motion problems, the visualizations facilitating solution of these problems, and students' memories of the online computer simulation used as a framework for…

  13. On the Beauty of Mathematics as Exemplified by a Problem in Combinatorics.

    ERIC Educational Resources Information Center

    Dence, Thomas P.

    1982-01-01

    The beauty of discovering some simple yet elegant proof either to something new or to an already established fact is discussed. A combinatorial problem that deals with covering a checkerboard with dominoes is presented as a starting point for individual investigation of similar problems. (MP)

  14. Field Theory in Cultural Capital Studies of Educational Attainment

    ERIC Educational Resources Information Center

    Krarup, Troels; Munk, Martin D.

    2016-01-01

    This article argues that there is a double problem in international research in cultural capital and educational attainment: an empirical problem, since few new insights have been gained within recent years; and a theoretical problem, since cultural capital is seen as a simple hypothesis about certain isolated individual resources, disregarding…

  15. Inverse Modelling Problems in Linear Algebra Undergraduate Courses

    ERIC Educational Resources Information Center

    Martinez-Luaces, Victor E.

    2013-01-01

    This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…

  16. The Dementia Services Mini-Screen: A Simple Method to Identify Patients and Caregivers Needing Enhanced Dementia Care Services

    PubMed Central

    Borson, Soo; Scanlan, James M.; Sadak, Tatiana; Lessig, Mary; Vitaliano, Peter

    2014-01-01

    Objective The National Alzheimer’s Plan calls for targeted health system change to improve outcomes for persons with dementia and their family caregivers. We explored whether dementia-specific service needs and gaps could be predicted from simple information that can be readily acquired in routine medical care settings. Method Primary family caregivers for cognitively impaired older adults (n=215) were asked about current stress, challenging patient behaviors, and prior-year needs and gaps in 16 medical and psychosocial services. Demographic data, caregiver stress, and patient clinical features were evaluated in regression analyses to identify unique predictors of service needs and gaps. Results Caregiver stress and patient behavior problems together accounted for an average of 24% of the whole-sample variance in total needs and gaps. Across all analyses, including total, medical, and psychosocial services needs and gaps, all other variables combined (comorbid chronic disease, dementia severity, age, caregiver relationship, and residence) accounted for a mean of 3%, with no variable yielding more than 4% in any equation. We combined stress and behavior problem indicators into a simple screen. In early/mild dementia dyads (n=111) typical in primary care settings, the screen identified gaps in total and psychosocial care in 84% and 77%, respectively, of those with high stress/high behavior problems vs. 25% and 23%, respectively, of those with low stress/low behavior problems. Medical care gaps were dramatically higher in high stress/high behavior problem dyads (66%) than all others (12%). Conclusion A simple tool (likely completed in 1–2 minutes) which combines caregiver stress and patient behavior problems, the Dementia Services Mini-Screen, could help clinicians rapidly identify high need, high gap dyads. Health care systems could use it to estimate population needs for targeted dementia services and facilitate their development. PMID:24315560

  17. Timing of testing and treatment for asymptomatic diseases

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kırkızlar, Eser; Faissol, Daniel M.; Griffin, Paul M.

    2010-07-01

    Many papers in the medical literature analyze the cost-effectiveness of screening for diseases by comparing a limited number of a priori testing policies under estimated problem parameters. However, this may be insufficient to determine the best timing of the tests or incorporate changes over time. In this paper, we develop and solve a Markov Decision Process (MDP) model for a simple class of asymptomatic diseases in order to provide the building blocks for analysis of a more general class of diseases. We provide a computationally efficient method for determining a cost-effective dynamic intervention strategy that takes into account (i) the results of the previous test for each individual and (ii) the change in the individual’s behavior based on awareness of the disease. We demonstrate the usefulness of the approach by applying the results to screening decisions for Hepatitis C (HCV) using medical data, and compare our findings to current HCV screening recommendations.
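
    The flavor of such a screening MDP can be shown with a toy finite-horizon value iteration in which the only decision each period is whether to test. All states, transition probabilities, and costs below are illustrative placeholders, not the parameters of the paper's HCV model.

        import numpy as np

        # States: 0 = healthy, 1 = infected & undetected, 2 = infected & detected/treated
        # Actions: 0 = wait, 1 = test (all numbers are made up for illustration)
        n_states, horizon, discount = 3, 20, 0.97
        stage_cost = np.array([0.0, 1.0, 0.4])          # per-period burden by state
        test_cost = 0.15

        P_wait = np.array([[0.95, 0.05, 0.00],
                           [0.00, 1.00, 0.00],
                           [0.00, 0.00, 1.00]])
        P_test = np.array([[0.95, 0.05, 0.00],          # testing does not change infection risk
                           [0.00, 0.00, 1.00],          # a positive test moves state 1 -> 2
                           [0.00, 0.00, 1.00]])
        P = [P_wait, P_test]
        action_cost = np.array([0.0, test_cost])

        V = np.zeros(n_states)                          # terminal cost
        policy = np.zeros((horizon, n_states), dtype=int)
        for t in reversed(range(horizon)):
            Q = np.stack([stage_cost + action_cost[a] + discount * P[a] @ V
                          for a in (0, 1)])             # shape (2, n_states)
            policy[t] = Q.argmin(axis=0)
            V = Q.min(axis=0)

        print("optimal first-period action by state:", policy[0])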

  18. Football for life versus antidoping for the masses: ethical antidoping issues and solutions based on the extenuating experiences of an elite footballer competing while undergoing treatment for metastatic testicular cancer.

    PubMed

    Weiler, Richard; Tombides, Dylan; Urwin, Jon; Clarke, Jane; Verroken, Michele

    2014-05-01

    It is thankfully rare for extenuating circumstances to fully test the processes and procedures enshrined in national and world antidoping authorities' rules and laws. It is also thankfully very rare that a failed drugs test can have some positive implications. Antidoping laws are undoubtedly focused on ensuring fair competition, however, there are occasions when honest athletes discover medical diagnoses through failed antidoping tests. The purpose of this paper is to broadly discuss antidoping considerations encountered, based on the four principles of medical ethics and to propose simple solutions to these problems. Unfortunately, extreme medical circumstances will often test the limits of antidoping and medical processes and with open channels for feedback, these systems can improve. Performance enhancement seems an illogical concept if an athlete's medical treatment and disease are more inherently performance harming than unintended potential doping, but needs to be carefully managed to maintain fair sport.

  19. Experimental investigation of wing installation effects on a two-dimensional mixer/ejector nozzle for supersonic transport aircraft

    NASA Technical Reports Server (NTRS)

    Anderson, David J.; Lambert, Heather H.; Mizukami, Masashi

    1992-01-01

    Experimental results from a wind tunnel test conducted to investigate propulsion/airframe integration (PAI) effects are presented. The objectives of the test were to examine rough order-of-magnitude changes in the acoustic characteristics of a mixer/ejector nozzle due to the presence of a wing and to obtain limited wing and nozzle flow-field measurements. A simple representative supersonic transport wing planform, with deflecting flaps, was installed above a two-dimensional mixer/ejector nozzle that was supplied with high-pressure heated air. Various configurations and wing positions with respect to the nozzle were studied. Because of hardware problems, no acoustics and only a limited set of flow-field data were obtained. For most hardware configurations tested, no significant propulsion/airframe integration effects were identified. Significant effects were seen for extreme flap deflections. The combination of the exploratory nature of the test and the limited flow-field instrumentation made it impossible to identify definitive propulsion/airframe integration effects.

  20. Football for life versus antidoping for the masses: ethical antidoping issues and solutions based on the extenuating experiences of an elite footballer competing while undergoing treatment for metastatic testicular cancer

    PubMed Central

    Weiler, Richard; Tombides, Dylan; Urwin, Jon; Clarke, Jane; Verroken, Michele

    2014-01-01

    It is thankfully rare for extenuating circumstances to fully test the processes and procedures enshrined in national and world antidoping authorities’ rules and laws. It is also thankfully very rare that a failed drugs test can have some positive implications. Antidoping laws are undoubtedly focused on ensuring fair competition, however, there are occasions when honest athletes discover medical diagnoses through failed antidoping tests. The purpose of this paper is to broadly discuss antidoping considerations encountered, based on the four principles of medical ethics and to propose simple solutions to these problems. Unfortunately, extreme medical circumstances will often test the limits of antidoping and medical processes and with open channels for feedback, these systems can improve. Performance enhancement seems an illogical concept if an athlete’s medical treatment and disease are more inherently performance harming than unintended potential doping, but needs to be carefully managed to maintain fair sport. PMID:24668050

  1. Precise Temperature Measurement for Increasing the Survival of Newborn Babies in Incubator Environments

    PubMed Central

    Frischer, Robert; Penhaker, Marek; Krejcar, Ondrej; Kacerovsky, Marian; Selamat, Ali

    2014-01-01

    Precise temperature measurement is essential in a wide range of applications in the medical environment; however, regarding the problem of temperature measurement inside a simple incubator, neither a simple nor a low-cost solution has been proposed yet. Given that standard temperature sensors don't satisfy the necessary expectations, the problem is not measuring temperature, but rather achieving the desired sensitivity. In response, this paper introduces a novel hardware design as well as the implementation that increases measurement sensitivity in defined temperature intervals at low cost. PMID:25494352

  2. Referral and Diagnosis of Developmental Auditory Processing Disorder in a Large, United States Hospital-Based Audiology Service.

    PubMed

    Moore, David R; Sieswerda, Stephanie L; Grainger, Maureen M; Bowling, Alexandra; Smith, Nicholette; Perdew, Audrey; Eichert, Susan; Alston, Sandra; Hilbert, Lisa W; Summers, Lynn; Lin, Li; Hunter, Lisa L

    2018-05-01

    Children referred to audiology services with otherwise unexplained academic, listening, attention, language, or other difficulties are often found to be audiometrically normal. Some of these children receive further evaluation for auditory processing disorder (APD), a controversial construct that assumes neural processing problems within the central auditory nervous system. This study focuses on the evaluation of APD and how it relates to diagnosis in one large pediatric audiology facility. To analyze electronic records of children receiving a central auditory processing evaluation (CAPE) at Cincinnati Children's Hospital, with a broad goal of understanding current practice in APD diagnosis and the test information which impacts that practice. A descriptive, cross-sectional analysis of APD test outcomes in relation to final audiologist diagnosis for 1,113 children aged 5-19 yr receiving a CAPE between 2009 and 2014. Children had a generally high level of performance on the tests used, resulting in marked ceiling effects on about half the tests. Audiologists developed the diagnostic category "Weakness" because of the large number of referred children who clearly had problems, but who did not fulfill the AAA/ASHA criteria for diagnosis of a "Disorder." A "right-ear advantage" was found in all tests for which each ear was tested, irrespective of whether the tests were delivered monaurally or dichotically. However, neither the side nor size of the ear advantage predicted the ultimate diagnosis well. Cooccurrence of CAPE with other learning problems was nearly universal, but neither the number nor the pattern of cooccurring problems was a predictor of APD diagnosis. The diagnostic patterns of individual audiologists were quite consistent. The number of annual assessments decreased dramatically during the study period. A simple diagnosis of APD based on current guidelines is neither realistic, given the current tests used, nor appropriate, as judged by the audiologists providing the service. Methods used to test for APD must recognize that any form of hearing assessment probes both sensory and cognitive processing. Testing must embrace modern methods, including digital test delivery, adaptive testing, referral to normative data, appropriate testing for young children, validated screening questionnaires, and relevant objective (physiological) methods, as appropriate. Audiologists need to collaborate with other specialists to understand more fully the behaviors displayed by children presenting with listening difficulties. To achieve progress, it is essential for clinicians and researchers to work together. As new understanding and methods become available, it will be necessary to sort out together what works and what doesn't work in the clinic, both from a theoretical and a practical perspective. American Academy of Audiology.

  3. On Improving Efficiency of Differential Evolution for Aerodynamic Shape Optimization Applications

    NASA Technical Reports Server (NTRS)

    Madavan, Nateri K.

    2004-01-01

    Differential Evolution (DE) is a simple and robust evolutionary strategy that has been proven effective in determining the global optimum for several difficult optimization problems. Although DE offers several advantages over traditional optimization approaches, its use in applications such as aerodynamic shape optimization where the objective function evaluations are computationally expensive is limited by the large number of function evaluations often required. In this paper various approaches for improving the efficiency of DE are reviewed and discussed. Several approaches that have proven effective for other evolutionary algorithms are modified and implemented in a DE-based aerodynamic shape optimization method that uses a Navier-Stokes solver for the objective function evaluations. Parallelization techniques on distributed computers are used to reduce turnaround times. Results are presented for standard test optimization problems and for the inverse design of a turbine airfoil. The efficiency improvements achieved by the different approaches are evaluated and compared.
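
    For readers unfamiliar with the baseline method, the classic DE/rand/1/bin scheme that such work builds on can be written compactly. The sphere objective and control parameters below are stand-ins for the expensive Navier-Stokes evaluations used in the paper; this is a generic sketch, not the paper's modified algorithm.

        import numpy as np

        def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, n_gen=200, seed=0):
            """Minimal DE/rand/1/bin minimizer over box bounds (list of (lo, hi))."""
            rng = np.random.default_rng(seed)
            lo, hi = np.array(bounds).T
            dim = len(bounds)
            pop = rng.uniform(lo, hi, size=(pop_size, dim))
            cost = np.array([f(x) for x in pop])
            for _ in range(n_gen):
                for i in range(pop_size):
                    a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                             size=3, replace=False)]
                    mutant = np.clip(a + F * (b - c), lo, hi)
                    cross = rng.random(dim) < CR
                    cross[rng.integers(dim)] = True      # ensure at least one mutant gene
                    trial = np.where(cross, mutant, pop[i])
                    trial_cost = f(trial)
                    if trial_cost <= cost[i]:            # greedy one-to-one selection
                        pop[i], cost[i] = trial, trial_cost
            best = cost.argmin()
            return pop[best], cost[best]

        x_best, f_best = differential_evolution(lambda x: np.sum(x ** 2),
                                                bounds=[(-5.0, 5.0)] * 4)
        print(x_best, f_best)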

  4. Least-squares Minimization Approaches to Interpret Total Magnetic Anomalies Due to Spheres

    NASA Astrophysics Data System (ADS)

    Abdelrahman, E. M.; El-Araby, T. M.; Soliman, K. S.; Essa, K. S.; Abo-Ezz, E. R.

    2007-05-01

    We have developed three different least-squares approaches to determine successively: the depth, magnetic angle, and amplitude coefficient of a buried sphere from a total magnetic anomaly. By defining the anomaly value at the origin and the nearest zero-anomaly distance from the origin on the profile, the problem of depth determination is transformed into the problem of finding a solution of a nonlinear equation of the form f(z)=0. Knowing the depth and applying the least-squares method, the magnetic angle and amplitude coefficient are determined using two simple linear equations. In this way, the depth, magnetic angle, and amplitude coefficient are determined individually from all observed total magnetic data. The method is applied to synthetic examples with and without random errors and tested on a field example from Senegal, West Africa. In all cases, the depth solutions are in good agreement with the actual ones.
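
    The two-stage structure of such inversions, a one-dimensional search for depth followed by a small linear least-squares solve for the remaining parameters, can be sketched generically. The anomaly model below is a hypothetical placeholder with two linearly entering coefficients, not the expression derived in the paper, and the outer search stands in for the paper's closed-form f(z)=0 condition.

        import numpy as np
        from scipy.optimize import minimize_scalar

        # Hypothetical anomaly model: coefficients (A, B) enter linearly once depth z is fixed.
        def basis(x, z):
            r2 = x ** 2 + z ** 2
            return np.column_stack([z / r2 ** 1.5, x / r2 ** 1.5])

        def fit_linear(x, T, z):
            G = basis(x, z)
            coef, *_ = np.linalg.lstsq(G, T, rcond=None)
            resid = T - G @ coef
            return coef, float(resid @ resid)

        # Synthetic profile with known depth and coefficients plus a little noise
        rng = np.random.default_rng(1)
        x = np.linspace(-50.0, 50.0, 101)
        z_true, A_true, B_true = 12.0, 800.0, -300.0
        T = basis(x, z_true) @ np.array([A_true, B_true]) + rng.normal(0, 0.01, x.size)

        # Outer 1-D search over depth; inner linear least squares for (A, B)
        res = minimize_scalar(lambda z: fit_linear(x, T, z)[1], bounds=(1.0, 50.0),
                              method="bounded")
        z_hat = res.x
        (A_hat, B_hat), _ = fit_linear(x, T, z_hat)
        print(z_hat, A_hat, B_hat)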

  5. Development of a Pneumatic Robot for MRI-guided Transperineal Prostate Biopsy and Brachytherapy: New Approaches

    PubMed Central

    Song, Sang-Eun; Cho, Nathan B.; Fischer, Gregory; Hata, Nobuhito; Tempany, Clare; Fichtinger, Gabor; Iordachita, Iulian

    2011-01-01

    Magnetic Resonance Imaging (MRI) guided prostate biopsy and brachytherapy has been introduced in order to enhance the cancer detection and treatment. For the accurate needle positioning, a number of robotic assistants have been developed. However, problems exist due to the strong magnetic field and limited workspace. Pneumatically actuated robots have shown the minimum distraction in the environment but the confined workspace limits optimal robot design and thus controllability is often poor. To overcome the problem, a simple external damping mechanism using timing belts was sought and a 1-DOF mechanism test result indicated sufficient positioning accuracy. Based on the damping mechanism and modular system design approach, a new workspace-optimized 4-DOF parallel robot was developed for the MRI-guided prostate biopsy and brachytherapy. A preliminary evaluation of the robot was conducted using previously developed pneumatic controller and satisfying results were obtained. PMID:21399734

  6. State-constrained booster trajectory solutions via finite elements and shooting

    NASA Technical Reports Server (NTRS)

    Bless, Robert R.; Hodges, Dewey H.; Seywald, Hans

    1993-01-01

    This paper presents an extension of a FEM formulation based on variational principles. A general formulation for handling internal boundary conditions and discontinuities in the state equations is presented, and the general formulation is modified for optimal control problems subject to state-variable inequality constraints. Solutions which only touch the state constraint and solutions which have a boundary arc of finite length are considered. Suitable shape and test functions are chosen for a FEM discretization. All element quadrature (equivalent to one-point Gaussian quadrature over each element) may be done in closed form. The final form of the algebraic equations is then derived. A simple state-constrained problem is solved. Then, for a practical application of the use of the FEM formulation, a launch vehicle subject to a dynamic pressure constraint (a first-order state inequality constraint) is solved. The results presented for the launch-vehicle trajectory have some interesting features, including a touch-point solution.

  7. Data Synchronization Discrepancies in a Formation Flight Control System

    NASA Technical Reports Server (NTRS)

    Ryan, Jack; Hanson, Curtis E.; Norlin, Ken A.; Allen, Michael J.; Schkolnik, Gerard (Technical Monitor)

    2001-01-01

    Aircraft hardware-in-the-loop simulation is an invaluable tool to flight test engineers; it reveals design and implementation flaws while operating in a controlled environment. Engineers, however, must always be skeptical of the results and analyze them within their proper context. Engineers must carefully ascertain whether an anomaly that occurs in the simulation will also occur in flight. This report presents a chronology illustrating how misleading simulation timing problems led to the implementation of an overly complex position data synchronization guidance algorithm in place of a simpler one. The report illustrates problems caused by the complex algorithm and how the simpler algorithm was chosen in the end. Brief descriptions of the project objectives, approach, and simulation are presented. The misleading simulation results and the conclusions then drawn are presented. The complex and simple guidance algorithms are presented with flight data illustrating their relative success.

  8. Positivity-preserving numerical schemes for multidimensional advection

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Macvean, M. K.; Lock, A. P.

    1993-01-01

    This report describes the construction of an explicit, single time-step, conservative, finite-volume method for multidimensional advective flow, based on a uniformly third-order polynomial interpolation algorithm (UTOPIA). Particular attention is paid to the problem of flow-to-grid angle-dependent, anisotropic distortion typical of one-dimensional schemes used component-wise. The third-order multidimensional scheme automatically includes certain cross-difference terms that guarantee good isotropy (and stability). However, above first-order, polynomial-based advection schemes do not preserve positivity (the multidimensional analogue of monotonicity). For this reason, a multidimensional generalization of the first author's universal flux-limiter is sought. This is a very challenging problem. A simple flux-limiter can be found; but this introduces strong anisotropic distortion. A more sophisticated technique, limiting part of the flux and then restoring the isotropy-maintaining cross-terms afterwards, gives more satisfactory results. Test cases are confined to two dimensions; three-dimensional extensions are briefly discussed.

  9. The Hubbard Dimer: A Complete DFT Solution to a Many-Body Problem

    NASA Astrophysics Data System (ADS)

    Smith, Justin; Carrascal, Diego; Ferrer, Jaime; Burke, Kieron

    2015-03-01

    In this work we explain the relationship between density functional theory and strongly correlated models using the simplest possible example, the two-site asymmetric Hubbard model. We discuss the connection between the lattice and real-space and how this is a simple model for stretched H2. We can solve this elementary example analytically, and with that we can illuminate the underlying logic and aims of DFT. While the many-body solution is analytic, the density functional is given only implicitly. We overcome this difficulty by creating a highly accurate parameterization of the exact function. We use this parameterization to perform benchmark calculations of correlation kinetic energy, the adiabatic connection, etc. We also test Hartree-Fock and the Bethe Ansatz Local Density Approximation. We also discuss and illustrate the derivative discontinuity in the exchange-correlation energy and the infamous gap problem in DFT. DGE-1321846, DE-FG02-08ER46496.

  10. Canonical Statistical Model for Maximum Expected Immission of Wire Conductor in an Aperture Enclosure

    NASA Technical Reports Server (NTRS)

    Bremner, Paul G.; Vazquez, Gabriel; Christiano, Daniel J.; Trout, Dawn H.

    2016-01-01

    Prediction of the maximum expected electromagnetic pick-up of conductors inside a realistic shielding enclosure is an important canonical problem for system-level EMC design of space craft, launch vehicles, aircraft and automobiles. This paper introduces a simple statistical power balance model for prediction of the maximum expected current in a wire conductor inside an aperture enclosure. It calculates both the statistical mean and variance of the immission from the physical design parameters of the problem. Familiar probability density functions can then be used to predict the maximum expected immission for design purposes. The statistical power balance model requires minimal EMC design information and solves orders of magnitude faster than existing numerical models, making it ultimately viable for scaled-up, full system-level modeling. Both experimental test results and full wave simulation results are used to validate the foundational model.

  11. ULTRA-SHARP solution of the Smith-Hutton problem

    NASA Technical Reports Server (NTRS)

    Leonard, B. P.; Mokhtari, Simin

    1992-01-01

    Highly convective scalar transport involving near-discontinuities and strong streamline curvature was addressed in a paper by Smith and Hutton in 1982, comparing several different convection schemes applied to a specially devised test problem. First order methods showed significant artificial diffusion, whereas higher order methods gave less smearing but had a tendency to overshoot and oscillate. Perhaps because unphysical oscillations are more obvious than unphysical smearing, the intervening period has seen a rise in popularity of low order artificially diffusive schemes, especially in the numerical heat transfer industry. The present paper describes an alternate strategy of using non-artificially diffusive high order methods, while maintaining strictly monotonic transitions through the use of simple flux limited constraints. Limited third order upwinding is usually found to be the most cost effective basic convection scheme. Tighter resolution of discontinuities can be obtained at little additional cost by using automatic adaptive stencil expansion to higher order in local regions, as needed.
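
    The idea of constraining a higher-order flux with a simple limiter so that transitions stay monotonic can be seen in a one-dimensional sketch. The paper itself works with third-order multidimensional schemes; the minmod-limited second-order scheme below is only a simplified illustration of the flux-limiting principle.

        import numpy as np

        def advect_limited(u, c, n_steps):
            """1-D linear advection with a minmod-limited second-order upwind flux.
            c = a*dt/dx is the Courant number (0 < c <= 1); periodic boundaries."""
            u = u.copy()
            for _ in range(n_steps):
                du = np.roll(u, -1) - u                          # u[i+1] - u[i]
                du_up = u - np.roll(u, 1)                        # u[i] - u[i-1]
                with np.errstate(divide="ignore", invalid="ignore"):
                    r = np.where(du != 0.0, du_up / du, 0.0)
                phi = np.maximum(0.0, np.minimum(1.0, r))        # minmod limiter
                flux = u + 0.5 * (1.0 - c) * phi * du            # flux/a at i+1/2
                u = u - c * (flux - np.roll(flux, 1))
            return u

        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u0 = np.where((x > 0.3) & (x < 0.5), 1.0, 0.0)           # square wave
        u = advect_limited(u0, c=0.5, n_steps=200)
        print(u.min(), u.max())                                  # stays within [0, 1]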

  12. Gelled-electrolyte batteries for electric vehicles

    NASA Astrophysics Data System (ADS)

    Tuphorn, Hans

    Increasing problems of air pollution have pushed electric vehicle projects worldwide, and in spite of projects for developing new battery systems with high energy densities, lead/acid batteries are today almost the only system ready for technical use in this application. Valve-regulated lead/acid batteries with gelled electrolyte have the advantage that no maintenance is required, and because the gel system does not cause problems with electrolyte stratification, no additional appliances for central filling or acid addition are needed, which keeps the system simple. Batteries with high-density active masses show high endurance, and field tests with 40 VW-CityStromers, equipped with 96 V/160 A h gel batteries with thermal management, have shown good results over four years. In addition, gelled lead/acid batteries possess superior high-rate performance compared with conventional lead/acid batteries, which guarantees good acceleration of the car and makes the system well suited for application in electric vehicles.

  13. A dimensionally split Cartesian cut cell method for hyperbolic conservation laws

    NASA Astrophysics Data System (ADS)

    Gokhale, Nandan; Nikiforakis, Nikos; Klein, Rupert

    2018-07-01

    We present a dimensionally split method for solving hyperbolic conservation laws on Cartesian cut cell meshes. The approach combines local geometric and wave speed information to determine a novel stabilised cut cell flux, and we provide a full description of its three-dimensional implementation in the dimensionally split framework of Klein et al. [1]. The convergence and stability of the method are proved for the one-dimensional linear advection equation, while its multi-dimensional numerical performance is investigated through the computation of solutions to a number of test problems for the linear advection and Euler equations. When compared to the cut cell flux of Klein et al., it was found that the new flux alleviates the problem of oscillatory boundary solutions produced by the former at higher Courant numbers, and also enables the computation of more accurate solutions near stagnation points. Being dimensionally split, the method is simple to implement and extends readily to multiple dimensions.

  14. Multi Agent Reward Analysis for Learning in Noisy Domains

    NASA Technical Reports Server (NTRS)

    Tumer, Kagan; Agogino, Adrian K.

    2005-01-01

    In many multi agent learning problems, it is difficult to determine, a priori, the agent reward structure that will lead to good performance. This problem is particularly pronounced in continuous, noisy domains ill-suited to simple table backup schemes commonly used in TD(lambda)/Q-learning. In this paper, we present a new reward evaluation method that allows the tradeoff between coordination among the agents and the difficulty of the learning problem each agent faces to be visualized. This method is independent of the learning algorithm and is only a function of the problem domain and the agents' reward structure. We then use this reward efficiency visualization method to determine an effective reward without performing extensive simulations. We test this method in both a static and a dynamic multi-rover learning domain where the agents have continuous state spaces and where their actions are noisy (e.g., the agents' movement decisions are not always carried out properly). Our results show that in the more difficult dynamic domain, the reward efficiency visualization method provides a two-order-of-magnitude speedup in selecting a good reward. Most importantly, it allows one to quickly create and verify rewards tailored to the observational limitations of the domain.

  15. An improved exploratory search technique for pure integer linear programming problems

    NASA Technical Reports Server (NTRS)

    Fogle, F. R.

    1990-01-01

    The development is documented of a heuristic method for the solution of pure integer linear programming problems. The procedure draws its methodology from the ideas of Hooke and Jeeves type 1 and 2 exploratory searches, greedy procedures, and neighborhood searches. It uses an efficient rounding method to obtain its first feasible integer point from the optimal continuous solution obtained via the simplex method. Since this method is based entirely on simple addition or subtraction of one to each variable of a point in n-space and the subsequent comparison of candidate solutions to a given set of constraints, it facilitates significant complexity improvements over existing techniques. It also obtains the same optimal solution found by the branch-and-bound technique in 44 of 45 small to moderate size test problems. Two example problems are worked in detail to show the inner workings of the method. Furthermore, using an established weighted scheme for comparing computational effort involved in an algorithm, a comparison of this algorithm is made to the more established and rigorous branch-and-bound method. A computer implementation of the procedure, in PC compatible Pascal, is also presented and discussed.
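
    The core of such a procedure, rounding the continuous optimum and then exploring +/-1 moves on each variable, can be sketched as follows. The small three-variable instance is a made-up example, not one of the paper's 45 test problems, and the search shown is a bare-bones version of the exploratory idea.

        import numpy as np
        from scipy.optimize import linprog

        # Hypothetical instance: maximize c.x subject to A x <= b, x >= 0 integer
        c = np.array([5.0, 4.0, 3.0])
        A = np.array([[2.0, 3.0, 1.0],
                      [4.0, 1.0, 2.0],
                      [3.0, 4.0, 2.0]])
        b = np.array([5.0, 11.0, 8.0])

        def feasible(x):
            return np.all(x >= 0) and np.all(A @ x <= b + 1e-9)

        # 1. Continuous (LP) relaxation, then simple rounding to a feasible start
        lp = linprog(-c, A_ub=A, b_ub=b, bounds=[(0, None)] * len(c))
        x = np.floor(lp.x + 1e-9)

        # 2. Exploratory +/-1 neighborhood search on each variable
        improved = True
        while improved:
            improved = False
            for i in range(len(c)):
                for step in (+1, -1):
                    cand = x.copy()
                    cand[i] += step
                    if feasible(cand) and c @ cand > c @ x:
                        x, improved = cand, True
        print("heuristic integer solution:", x, "objective:", c @ x)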

  16. Multi-objective problem of the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints

    NASA Astrophysics Data System (ADS)

    Amallynda, I.; Santosa, B.

    2017-11-01

    This paper proposes a new generalization of the distributed parallel machine and assembly scheduling problem (DPMASP) with eligibility constraints, referred to as the modified distributed parallel machine and assembly scheduling problem (MDPMASP) with eligibility constraints. Within this generalization, we assume a set of non-identical factories or production lines, each with a set of unrelated parallel machines of different processing speeds feeding a single assembly machine in series. A set of different products is manufactured through an assembly program from a set of components (jobs) according to the requested demand, and each product requires several kinds of jobs of different sizes. Besides that, we also consider the multi-objective problem (MOP) of minimizing mean flow time and the number of tardy products simultaneously. This problem is known to be NP-hard and is important in practice, as the two criteria reflect the customer's demand and the manufacturer's perspective. Because this is a realistic and complex problem with a wide range of possible solutions, we propose four simple heuristics and two metaheuristics to solve it. Various parameters of the proposed metaheuristic algorithms are discussed and calibrated by means of the Taguchi technique. All proposed algorithms are tested in Matlab. Our computational experiments indicate that the proposed problem and the four proposed algorithms can be implemented and used to solve moderately sized instances, giving efficient solutions that are close to optimum in most cases.

  17. The pear thrips problem

    Treesearch

    Bruce L. Parker

    1991-01-01

    As entomologists, we sometimes like to think of an insect pest problem as simply a problem with an insect and its host. Our jobs would be much easier if that were the case, but of course, it is never that simple. There are many other factors besides the insect, and each one must be fully considered to understand the problem and develop effective management solutions....

  18. Pictorial Representations of Simple Arithmetic Problems Are Not Always Helpful: A Cognitive Load Perspective

    ERIC Educational Resources Information Center

    van Lieshout, Ernest C. D. M.; Xenidou-Dervou, Iro

    2018-01-01

    At the start of mathematics education children are often presented with addition and subtraction problems in the form of pictures. They are asked to solve the problems by filling in corresponding number sentences. One type of problem concerns the representation of an increase or a decrease in a depicted amount. A decrease is, however, more…

  19. Quantifying uncertainty in stable isotope mixing models

    DOE PAGES

    Davis, Paul; Syme, James; Heikoop, Jeffrey; ...

    2015-05-19

    Mixing models are powerful tools for identifying biogeochemical sources and determining mixing fractions in a sample. However, identification of actual source contributors is often not simple, and source compositions typically vary or even overlap, significantly increasing model uncertainty in calculated mixing fractions. This study compares three probabilistic methods: SIAR [Parnell et al., 2010], a pure Monte Carlo technique (PMC), and the Stable Isotope Reference Source (SIRS) mixing model, a new technique that estimates mixing in systems with more than three sources and/or uncertain source compositions. In this paper, we use nitrate stable isotope examples (δ15N and δ18O), but all methods tested are applicable to other tracers. In Phase I of a three-phase blind test, we compared methods for a set of six-source nitrate problems. PMC was unable to find solutions for two of the target water samples. The Bayesian method, SIAR, experienced anchoring problems, and SIRS calculated mixing fractions that most closely approximated the known mixing fractions. For that reason, SIRS was the only approach used in the next phase of testing. In Phase II, the problem was broadened so that any subset of the six sources could be a possible solution to the mixing problem. Results showed a high rate of Type I errors, where solutions included sources that were not contributing to the sample. In Phase III, eliminating some sources based on assumed site knowledge and assumed nitrate concentrations substantially reduced mixing fraction uncertainties and lowered the Type I error rate. These results demonstrate that valuable insights into stable isotope mixing problems result from probabilistic mixing model approaches like SIRS. The results also emphasize the importance of identifying a minimal set of potential sources and quantifying uncertainties in source isotopic composition as well as demonstrating the value of additional information in reducing the uncertainty in calculated mixing fractions.
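
    The deterministic core of any such mixing model is a small constrained least-squares problem: find nonnegative source fractions, summing to one, whose mixed signature best matches the sample. A minimal sketch, with made-up δ15N/δ18O source and sample values and no treatment of the source-composition uncertainty that the probabilistic methods address, is:

        import numpy as np
        from scipy.optimize import nnls

        # Columns = candidate sources, rows = tracers (hypothetical d15N, d18O values)
        sources = np.array([[2.0, 9.0, 20.0],      # d15N of sources 1..3
                            [-5.0, 3.0, 15.0]])    # d18O of sources 1..3
        sample = np.array([8.5, 2.0])              # measured mixture

        # Augment with a heavily weighted sum-to-one row and solve with
        # nonnegative least squares so the fractions stay physical.
        w = 100.0
        A = np.vstack([sources, w * np.ones(sources.shape[1])])
        y = np.append(sample, w * 1.0)
        fractions, _ = nnls(A, y)
        print("mixing fractions:", fractions, "sum:", fractions.sum())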

  20. A nodally condensed SUPG formulation for free-surface computation of steady-state flows constrained by unilateral contact - Application to rolling

    NASA Astrophysics Data System (ADS)

    Arora, Shitij; Fourment, Lionel

    2018-05-01

    In the context of the simulation of industrial hot forming processes, the resultant time-dependent thermo-mechanical multi-field problem (v, p, σ, ε) can be sped up by 10-50 times using steady-state methods compared to conventional incremental methods. Steady-state techniques have been used in the past, but only on simple configurations and with structured meshes, whereas modern problems involve complex configurations, unstructured meshes and parallel computing. These methods remove time dependency from the equations, but introduce an additional unknown into the problem: the steady-state shape. This steady-state shape x can be computed as a geometric correction t on the domain X by solving the weak form of the steady-state equation v · n(t) = 0 using a Streamline Upwind Petrov-Galerkin (SUPG) formulation. There exists a strong coupling between the domain shape and the material flow; hence, a two-step fixed-point iterative resolution algorithm was proposed that involves (1) the computation of the flow field from the resolution of the thermo-mechanical equations on a prescribed domain shape and (2) the computation of the steady-state shape for an assumed velocity field. The contact equations are introduced in penalty form both during the flow computation and during the free-surface correction. The fact that the contact description is inhomogeneous, i.e., it is defined in nodal form in the former and in weighted-residual form in the latter, is assumed to be critical to the convergence of certain problems. Thus, the notion of nodal collocation is invoked in the weak form of the surface correction equation to homogenize the contact coupling. The surface correction algorithm is tested on certain analytical test cases and the contact coupling is tested with some hot rolling problems.

  1. Transformation Theory, Accelerating Frames, and Two Simple Problems

    ERIC Educational Resources Information Center

    Schmid, G. Bruno

    1977-01-01

    Presents an operator which transforms quantum functions to solve problems of the stationary state wave functions for a particle and the motion and spreading of a Gaussian wave packet in uniform gravitational fields. (SL)

  2. Efficient computation of significance levels for multiple associations in large studies of correlated data, including genomewide association studies.

    PubMed

    Dudbridge, Frank; Koeleman, Bobby P C

    2004-09-01

    Large exploratory studies, including candidate-gene-association testing, genomewide linkage-disequilibrium scans, and array-expression experiments, are becoming increasingly common. A serious problem for such studies is that statistical power is compromised by the need to control the false-positive rate for a large family of tests. Because multiple true associations are anticipated, methods have been proposed that combine evidence from the most significant tests, as a more powerful alternative to individually adjusted tests. The practical application of these methods is currently limited by a reliance on permutation testing to account for the correlated nature of single-nucleotide polymorphism (SNP)-association data. On a genomewide scale, this is both very time-consuming and impractical for repeated explorations with standard marker panels. Here, we alleviate these problems by fitting analytic distributions to the empirical distribution of combined evidence. We fit extreme-value distributions for fixed lengths of combined evidence and a beta distribution for the most significant length. An initial phase of permutation sampling is required to fit these distributions, but it can be completed more quickly than a simple permutation test and need be done only once for each panel of tests, after which the fitted parameters give a reusable calibration of the panel. Our approach is also a more efficient alternative to a standard permutation test. We demonstrate the accuracy of our approach and compare its efficiency with that of permutation tests on genomewide SNP data released by the International HapMap Consortium. The estimation of analytic distributions for combined evidence will allow these powerful methods to be applied more widely in large exploratory studies.
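
    The computational idea, replacing an exhaustive permutation test with a parametric fit calibrated from a modest permutation sample, can be sketched for the simple case of the panel-wide maximum statistic. The generalized extreme-value fit below stands in for the extreme-value and beta fits described in the paper, and the phenotype/genotype data are simulated rather than real SNP data.

        import numpy as np
        from scipy import stats

        rng = np.random.default_rng(0)
        n, n_tests = 200, 50
        pheno = rng.normal(size=n)
        geno = rng.normal(size=(n, n_tests))        # stand-in for a correlated SNP panel

        def max_abs_corr(y):
            r = (geno - geno.mean(0)).T @ (y - y.mean()) / (n * geno.std(0) * y.std())
            return np.abs(r).max()

        observed = max_abs_corr(pheno)

        # Modest permutation sample of the panel-wide maximum statistic
        perm_max = np.array([max_abs_corr(rng.permutation(pheno)) for _ in range(500)])

        # Calibrate the panel once by fitting a parametric tail to the permutation maxima
        shape, loc, scale = stats.genextreme.fit(perm_max)
        p_analytic = stats.genextreme.sf(observed, shape, loc=loc, scale=scale)
        p_empirical = (1 + np.sum(perm_max >= observed)) / (1 + perm_max.size)
        print(p_analytic, p_empirical)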

  3. The effectiveness of problem-based learning on teaching the first law of thermodynamics

    NASA Astrophysics Data System (ADS)

    Tatar, Erdal; Oktay, Münir

    2011-11-01

    Background: Problem-based learning (PBL) is a teaching approach working in cooperation with self-learning and involving research to solve real problems. The first law of thermodynamics states that energy can neither be created nor destroyed, but that energy is conserved. Students had difficulty learning or misconceptions about this law. This study is related to the teaching of the first law of thermodynamics within a PBL environment. Purpose: This study examined the effectiveness of PBL on candidate science teachers' understanding of the first law of thermodynamics and their science process skills. This study also examined their opinions about PBL. Sample: The sample consists of 48 third-grade university students from the Department of Science Education in one of the public universities in Turkey. Design and methods: A one-group pretest-posttest experimental design was used. Data collection tools included the Achievement Test, Science Process Skill Test, Constructivist Learning Environment Survey and an interview with open-ended questions. Paired samples t-test was conducted to examine differences in pre/post tests. Results: The PBL approach has a positive effect on the students' learning abilities and science process skills. The students thought that the PBL environment supports effective and permanent learning, and self-learning planning skills. On the other hand, some students think that the limited time and unfamiliarity of the approach impede learning. Conclusions: The PBL is an active learning approach supporting students in the process of learning. But there are still many practical disadvantages that could reduce the effectiveness of the PBL. To prevent the alienation of the students, simple PBL activities should be applied from the primary school level. In order to overcome time limitations, education researchers should examine short-term and effective PBL activities.

  4. Problematic video game play in a college sample and its relationship to time management skills and attention-deficit/hyperactivity disorder symptomology.

    PubMed

    Tolchinsky, Anatol; Jefferson, Stephen D

    2011-09-01

    Although numerous benefits have been uncovered related to moderate video game play, research suggests that problematic video game playing behaviors can cause problems in the lives of some video game players. To further our understanding of this phenomenon, we investigated how problematic video game playing symptoms are related to an assortment of variables, including time management skills and attention-deficit/hyperactivity disorder (ADHD) symptoms. Additionally, we tested several simple mediation/moderation models to better explain previous theories that posit simple correlations between these variables. As expected, the results from the present study indicated that time management skills appeared to mediate the relationship between ADHD symptoms and problematic play endorsement (though only for men). Unexpectedly, we found that ADHD symptoms appeared to mediate the relation between time management skills and problematic play behaviors; however, this was only found for women in our sample. Finally, future implications are discussed.
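
    The simple mediation model invoked here (predictor -> mediator -> outcome) can be tested with two regressions and a bootstrap of the indirect effect. The sketch below uses simulated data, plain ordinary least squares, and hypothetical variable names; it is not the study's measurement model.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 300
        adhd = rng.normal(size=n)                          # predictor X
        time_mgmt = -0.5 * adhd + rng.normal(size=n)       # mediator M
        problem_play = -0.6 * time_mgmt + 0.1 * adhd + rng.normal(size=n)  # outcome Y

        def ols_slope(y, *cols):
            X = np.column_stack([np.ones(len(y)), *cols])
            beta = np.linalg.lstsq(X, y, rcond=None)[0]
            return beta[1]                                 # slope on the first regressor

        def indirect(idx):
            a = ols_slope(time_mgmt[idx], adhd[idx])                     # X -> M
            b = ols_slope(problem_play[idx], time_mgmt[idx], adhd[idx])  # M -> Y given X
            return a * b

        idx_all = np.arange(n)
        boot = np.array([indirect(rng.choice(idx_all, size=n, replace=True))
                         for _ in range(2000)])
        lo, hi = np.percentile(boot, [2.5, 97.5])
        print(f"indirect effect = {indirect(idx_all):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")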

  5. Recoiling from a Kick in the Head-On Case

    NASA Technical Reports Server (NTRS)

    Choi, Dae-Il; Kelly, Bernard J.; Boggs, William D.; Baker, John G.; Centrella, Joan; Van Meter, James

    2007-01-01

    Recoil "kicks" induced by gravitational radiation are expected in the inspiral and merger of black holes. Recently the numerical relativity community has begun to measure the significant kicks found when both unequal masses and spins are considered. Because understanding the cause and magnitude of each component of this kick may be complicated in inspiral simulations, we consider these effects in the context of a simple test problem. We study recoils from collisions of binaries with initially head-on trajectories, starting with the simplest case of equal masses with no spin; adding spin and varying the mass ratio, both separately and jointly. We find spin-induced recoils to be significant even in head-on configurations. Additionally, it appears that the scaling of transverse kicks with spins is consistent with post-Newtonian (PN) theory, even though the kick is generated in the nonlinear merger interaction, where PN theory should not apply. This suggests that a simple heuristic description might be effective in the estimation of spin-kicks.

  6. Point-in-convex polygon and point-in-convex polyhedron algorithms with O(1) complexity using space subdivision

    NASA Astrophysics Data System (ADS)

    Skala, Vaclav

    2016-06-01

    There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed-up. In the case of a convex polygon in E2, a simple Point-in-Polygon test has O(N) complexity and the optimal algorithm has O(log N) computational complexity. In the E3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented, based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.
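
    The baseline tests the paper improves on are easy to state: the O(N) version checks the query point against every directed edge of a counter-clockwise convex polygon, and the O(log N) version binary-searches the fan of wedges from one vertex; the O(1) variant adds a precomputed space subdivision on top of the same cross-product primitive. A minimal sketch of the first two (generic geometry, not the paper's code):

        def cross(o, a, b):
            return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

        def inside_convex_on(poly, p):
            """O(N): a point is inside a CCW convex polygon iff it lies left of
            (or on) every directed edge."""
            n = len(poly)
            return all(cross(poly[i], poly[(i + 1) % n], p) >= 0 for i in range(n))

        def inside_convex_olog(poly, p):
            """O(log N): binary search for the wedge of the fan from vertex 0
            containing p, then a single edge test."""
            n = len(poly)
            if cross(poly[0], poly[1], p) < 0 or cross(poly[0], poly[n - 1], p) > 0:
                return False
            lo, hi = 1, n - 1
            while hi - lo > 1:                   # narrow down to wedge (0, lo, lo+1)
                mid = (lo + hi) // 2
                if cross(poly[0], poly[mid], p) >= 0:
                    lo = mid
                else:
                    hi = mid
            return cross(poly[lo], poly[lo + 1], p) >= 0

        square = [(0, 0), (4, 0), (4, 4), (0, 4)]   # CCW convex polygon
        for q in [(2, 2), (5, 1), (4, 4)]:
            print(q, inside_convex_on(square, q), inside_convex_olog(square, q))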

  7. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Skala, Vaclav

    There are many space subdivision and space partitioning techniques used in many algorithms to speed up computations. They mostly rely on orthogonal space subdivision or on hierarchical data structures, e.g. BSP trees, quadtrees, octrees, kd-trees, bounding volume hierarchies, etc. However, in some applications a non-orthogonal space subdivision can offer new ways to achieve an actual speed-up. In the case of a convex polygon in E^2, a simple Point-in-Polygon test has O(N) complexity and the optimal algorithm has O(log N) computational complexity. In the E^3 case, the complexity is O(N) even for a convex polyhedron, as no ordering is defined. New Point-in-Convex Polygon and Point-in-Convex Polyhedron algorithms are presented, based on space subdivision in the preprocessing stage, resulting in O(1) run-time complexity. The presented approach is simple to implement. Due to the principle of duality, dual problems, e.g. line-convex polygon intersection and line clipping, can be solved in a similar way.

  8. Exploring the neural bases of goal-directed motor behavior using fully resolved simulations

    NASA Astrophysics Data System (ADS)

    Patel, Namu; Patankar, Neelesh A.

    2016-11-01

    Undulatory swimming is an ideal problem for understanding the neural architecture for motor control and movement; a vertebrate's robust morphology and adaptive locomotive gait allows the swimmer to navigate complex environments. Simple mathematical models for neurally activated muscle contractions have been incorporated into a swimmer immersed in fluid. Muscle contractions produce bending moments which determine the swimming kinematics. The neurobiology of goal-directed locomotion is explored using fast, efficient, and fully resolved constraint-based immersed boundary simulations. Hierarchical control systems tune the strength, frequency, and duty cycle for neural activation waves to produce multifarious swimming gaits or synergies. Simulation results are used to investigate why the basal ganglia and other control systems may command a particular neural pattern to accomplish a task. Using simple neural models, the effect of proprioceptive feedback on refining the body motion is demonstrated. Lastly, the ability for a learned swimmer to successfully navigate a complex environment is tested. This work is supported by NSF CBET 1066575 and NSF CMMI 0941674.

  9. Testlet-Based Multidimensional Adaptive Testing

    PubMed Central

    Frey, Andreas; Seitz, Nicki-Nils; Brandt, Steffen

    2016-01-01

    Multidimensional adaptive testing (MAT) is a highly efficient method for the simultaneous measurement of several latent traits. Currently, no psychometrically sound approach is available for the use of MAT in testlet-based tests. Testlets are sets of items sharing a common stimulus such as a graph or a text. They are frequently used in large operational testing programs like TOEFL, PISA, PIRLS, or NAEP. To make MAT accessible for such testing programs, we present a novel combination of MAT with a multidimensional generalization of the random effects testlet model (MAT-MTIRT). MAT-MTIRT is compared to non-adaptive testing for several combinations of testlet effect variances (0.0, 0.5, 1.0, and 1.5) and testlet sizes (3, 6, and 9 items) with a simulation study considering three ability dimensions with simple loading structure. MAT-MTIRT outperformed non-adaptive testing regarding the measurement precision of the ability estimates. Further, the measurement precision decreased when testlet effect variances and testlet sizes increased. The suggested combination of MAT with the MTIRT model therefore provides a solution to the substantial problems of testlet-based tests while keeping the length of the test within an acceptable range. PMID:27917132

  10. Interoperation transfer in Chinese-English bilinguals' arithmetic.

    PubMed

    Campbell, Jamie I D; Dowd, Roxanne R

    2012-10-01

    We examined interoperation transfer of practice in adult Chinese-English bilinguals' memory for simple multiplication (6 × 8 = 48) and addition (6 + 8 = 14) facts. The purpose was to determine whether they possessed distinct number-fact representations in both Chinese (L1) and English (L2). Participants repeatedly practiced multiplication problems (e.g., 4 × 5 = ?), answering a subset in L1 and another subset in L2. Then separate groups answered corresponding addition problems (4 + 5 = ?) and control addition problems in either L1 (N = 24) or L2 (N = 24). The results demonstrated language-specific negative transfer of multiplication practice to corresponding addition problems. Specifically, large simple addition problems (sum > 10) presented a significant response time cost (i.e., retrieval-induced forgetting) after their multiplication counterparts were practiced in the same language, relative to practice in the other language. The results indicate that our Chinese-English bilinguals had multiplication and addition facts represented in distinct language-specific memory stores.

  11. Transport synthetic acceleration for long-characteristics assembly-level transport problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zika, M.R.; Adams, M.L.

    2000-02-01

    The authors apply the transport synthetic acceleration (TSA) scheme to the long-characteristics spatial discretization for the two-dimensional assembly-level transport problem. This synthetic method employs a simplified transport operator as its low-order approximation. Thus, in the acceleration step, the authors take advantage of features of the long-characteristics discretization that make it particularly well suited to assembly-level transport problems. The main contribution is to address difficulties unique to the long-characteristics discretization and produce a computationally efficient acceleration scheme. The combination of the long-characteristics discretization, opposing reflecting boundary conditions (which are present in assembly-level transport problems), and TSA presents several challenges. The authors devise methods for overcoming each of them in a computationally efficient way. Since the boundary angular data exist on different grids in the high- and low-order problems, they define restriction and prolongation operations specific to the method of long characteristics to map between the two grids. They implement the conjugate gradient (CG) method in the presence of opposing reflection boundary conditions to solve the TSA low-order equations. The CG iteration may be applied only to symmetric positive definite (SPD) matrices; they prove that the long-characteristics discretization yields an SPD matrix. They present results of the acceleration scheme on a simple test problem, a typical pressurized water reactor assembly, and a typical boiling water reactor assembly.
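
    Because the low-order system is shown to be symmetric positive definite, it can be attacked with a textbook conjugate gradient iteration. The sketch below is the generic CG algorithm applied to a small SPD matrix for illustration, not the transport-specific operator from the paper.

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            """Textbook CG for a symmetric positive definite system A x = b."""
            x = np.zeros_like(b)
            r = b - A @ x
            p = r.copy()
            rs = r @ r
            for _ in range(max_iter):
                Ap = A @ p
                alpha = rs / (p @ Ap)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                p = r + (rs_new / rs) * p
                rs = rs_new
            return x

        # Small SPD test system (symmetric and strictly diagonally dominant)
        A = np.array([[4.0, 1.0, 0.0],
                      [1.0, 3.0, 1.0],
                      [0.0, 1.0, 2.0]])
        b = np.array([1.0, 2.0, 3.0])
        x = conjugate_gradient(A, b)
        print(x, np.allclose(A @ x, b))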

  12. Simple additive manufacturing of an osteoconductive ceramic using suspension melt extrusion.

    PubMed

    Slots, Casper; Jensen, Martin Bonde; Ditzel, Nicholas; Hedegaard, Martin A B; Borg, Søren Wiatr; Albrektsen, Ole; Thygesen, Torben; Kassem, Moustapha; Andersen, Morten Østergaard

    2017-02-01

    Craniofacial bone trauma is a leading reason for surgery at most hospitals. Large pieces of destroyed or resected bone are often replaced with non-resorbable and stock implants, and these are associated with a variety of problems. This paper explores the use of a novel fatty acid/calcium phosphate suspension melt for simple additive manufacturing of ceramic tricalcium phosphate implants. A wide variety of non-aqueous liquids were tested to determine the formulation of a storable 3D printable tricalcium phosphate suspension ink, and only fatty acid-based inks were found to work. A heated stearic acid-tricalcium phosphate suspension melt was then 3D printed, carbonized and sintered, yielding implants with controllable macroporosities. Their microstructure, compressive strength and chemical purity were analyzed with electron microscopy, mechanical testing and Raman spectroscopy, respectively. Mesenchymal stem cell culture was used to assess their osteoconductivity as defined by collagen deposition, alkaline phosphatase secretion and de-novo mineralization. After a rapid sintering process, the implants retained their pre-sintering shape with open pores. They possessed clinically relevant mechanical strength and were chemically pure. They supported adhesion of mesenchymal stem cells, and these were able to deposit collagen onto the implants, secrete alkaline phosphatase and further mineralize the ceramic. The tricalcium phosphate/fatty acid ink described here and its 3D printing may be sufficiently simple and effective to enable rapid, on-demand and in-hospital fabrication of individualized ceramic implants that allow clinicians to use them for treatment of bone trauma. Copyright © 2016 The Academy of Dental Materials. Published by Elsevier Ltd. All rights reserved.

  13. Color Counts, Too!

    ERIC Educational Resources Information Center

    Sewell, Julia H.

    1983-01-01

    Students with undetected color blindness can have problems with specific teaching methods and materials. The problem should be ruled out in children with suspected learning disabilities and taken into account in career counseling. Nine examples of simple classroom modifications are described. (CL)

  14. The measurement of linear frequency drift in oscillators

    NASA Astrophysics Data System (ADS)

    Barnes, J. A.

    1985-04-01

    A linear drift in frequency is an important element in most stochastic models of oscillator performance. Quartz crystal oscillators often have drifts in excess of a part in ten to the tenth power per day. Even commercial cesium beam devices often show drifts of a few parts in ten to the thirteenth per year. There are many ways to estimate the drift rates from data samples (e.g., regress the phase on a quadratic; regress the frequency on a linear; compute the simple mean of the first difference of frequency; use Kalman filters with a drift term as one element in the state vector; and others). Although most of these estimators are unbiased, they vary in efficiency (i.e., confidence intervals). Further, the estimation of confidence intervals using the standard analysis of variance (typically associated with the specific estimating technique) can give amazingly optimistic results. The source of these problems is not an error in, say, the regressions techniques, but rather the problems arise from correlations within the residuals. That is, the oscillator model is often not consistent with constraints on the analysis technique or, in other words, some specific analysis techniques are often inappropriate for the task at hand. The appropriateness of a specific analysis technique is critically dependent on the oscillator model and can often be checked with a simple whiteness test on the residuals.
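
    The competing drift estimators mentioned here are straightforward to compare directly. The sketch below generates synthetic fractional-frequency data with a known linear drift plus white frequency noise and recovers the drift rate by (a) a quadratic fit to phase, (b) a linear fit to frequency, and (c) the mean first difference of frequency; the random-walk noise types that cause the correlated-residual problems discussed in the text are deliberately left out, and all numbers are illustrative.

        import numpy as np

        rng = np.random.default_rng(0)
        n, tau = 1000, 1.0                        # number of samples and sampling interval (s)
        true_drift = 1e-13                        # fractional-frequency drift per second
        t = np.arange(n) * tau
        freq = true_drift * t + rng.normal(0, 5e-13, n)   # white FM noise only
        phase = np.cumsum(freq) * tau                      # phase = integral of frequency

        # (a) regress phase on a quadratic: drift = 2 * (quadratic coefficient)
        drift_a = 2.0 * np.polyfit(t, phase, 2)[0]
        # (b) regress frequency on a straight line
        drift_b = np.polyfit(t, freq, 1)[0]
        # (c) mean of the first difference of frequency
        drift_c = np.mean(np.diff(freq)) / tau

        print(drift_a, drift_b, drift_c)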

  15. Simple and Accurate Method for Central Spin Problems

    NASA Astrophysics Data System (ADS)

    Lindoy, Lachlan P.; Manolopoulos, David E.

    2018-06-01

    We describe a simple quantum mechanical method that can be used to obtain accurate numerical results over long timescales for the spin correlation tensor of an electron spin that is hyperfine coupled to a large number of nuclear spins. This method does not suffer from the statistical errors that accompany a Monte Carlo sampling of the exact eigenstates of the central spin Hamiltonian obtained from the algebraic Bethe ansatz, or from the growth of the truncation error with time in the time-dependent density matrix renormalization group (TDMRG) approach. As a result, it can be applied to larger central spin problems than the algebraic Bethe ansatz, and for longer times than the TDMRG algorithm. It is therefore an ideal method to use to solve central spin problems, and we expect that it will also prove useful for a variety of related problems that arise in a number of different research fields.

  16. Nick-free formation of reciprocal heteroduplexes: a simple solution to the topological problem.

    PubMed Central

    Wilson, J H

    1979-01-01

    Because the individual strands of DNA are intertwined, formation of heteroduplex structures between duplexes--as in presumed recombination intermediates--presents a topological puzzle, known as the winding problem. Previous approaches to this problem have assumed that single-strand breaks are required to permit formation of fully coiled heteroduplexes. This paper describes a simple, nick-free solution to the winding problem that satisfies all topological constraints. Homologous duplexes associated by their minor-groove surfaces can switch strand pairing to form reciprocal heteroduplexes that coil together into a compact, four-stranded helix throughout the region of pairing. Model building shows that this fused heteroduplex structure is plausible, being composed entirely of right-handed primary helices with Watson-Crick base pairing throughout. Its simplicity of formation, structural symmetry, and high degree of specificity are suggestive of a natural mechanism for alignment by base pairing between intact homologous duplexes. Implications for genetic recombination are discussed. Images PMID:291028

  17. Thinking in Terms of Sensors: Personification of Self as an Object in Physics Problem Solving

    ERIC Educational Resources Information Center

    Tabor-Morris, A. E.

    2015-01-01

    How can physics teachers help students develop consistent problem solving techniques for both simple and complicated physics problems, such as those that encompass objects undergoing multiple forces (mechanical or electrical) as individually portrayed in free-body diagrams and/or phenomenon involving multiple objects, such as Doppler effect…

  18. Helping Students with Emotional and Behavioral Disorders Solve Mathematics Word Problems

    ERIC Educational Resources Information Center

    Alter, Peter

    2012-01-01

    The author presents a strategy for helping students with emotional and behavioral disorders become more proficient at solving math word problems. Math word problems require students to go beyond simple computation in mathematics (e.g., adding, subtracting, multiplying, and dividing) and use higher level reasoning that includes recognizing relevant…

  19. Posing Problems to Understand Children's Learning of Fractions

    ERIC Educational Resources Information Center

    Cheng, Lu Pien

    2013-01-01

    In this study, ways in which problem posing activities aid our understanding of children's learning of addition of unlike fractions and product of proper fractions was examined. In particular, how a simple problem posing activity helps teachers take a second, deeper look at children's understanding of fraction concepts will be discussed. The…

  20. Fracture mechanics and parapsychology

    NASA Astrophysics Data System (ADS)

    Cherepanov, G. P.

    2010-08-01

    The problem of postcritical deformation of materials beyond the ultimate strength is considered a division of fracture mechanics. A simple example is used to show the relationship between this problem and parapsychology, which studies phenomena and processes where the causality principle fails. It is shown that the concept of postcritical deformation leads to problems with no solution.

  1. Duality of Mathematical Thinking When Making Sense of Simple Word Problems: Theoretical Essay

    ERIC Educational Resources Information Center

    Polotskaia, Elena; Savard, Annie; Freiman, Viktor

    2015-01-01

    This essay proposes a reflection on the learning difficulties and teaching approaches associated with arithmetic word problem solving. We question the development of word problem solving skills in the early grades of elementary school. We are trying to revive the discussion because first, the knowledge in question--reversibility of arithmetic…

  2. Modular thermal analyzer routine, volume 1

    NASA Technical Reports Server (NTRS)

    Oren, J. A.; Phillips, M. A.; Williams, D. R.

    1972-01-01

    The Modular Thermal Analyzer Routine (MOTAR) is a general thermal analysis routine with strong capabilities for performing thermal analysis of systems containing flowing fluids, fluid system controls (valves, heat exchangers, etc.), life support systems, and thermal radiation situations. Its modular organization permits the analysis of a very wide range of thermal problems, from simple problems containing a few conduction nodes to those containing complicated flow and radiation analysis, with each problem type being analyzed with peak computational efficiency and maximum ease of use. The organization and programming methods applied to MOTAR achieved a high degree of computer utilization efficiency in terms of computer execution time and storage space required for a given problem. The computer time required to perform a given problem on MOTAR is approximately 40 to 50 percent that required for the currently existing widely used routines. The computer storage requirement for MOTAR is approximately 25 percent more than the most commonly used routines for the simplest problems, but the data storage techniques for the more complicated options should save a considerable amount of space.

  3. Optimization of Regional Geodynamic Models for Mantle Dynamics

    NASA Astrophysics Data System (ADS)

    Knepley, M.; Isaac, T.; Jadamec, M. A.

    2016-12-01

    The SubductionGenerator program is used to construct high resolution, 3D regional thermal structures for mantle convection simulations using a variety of data sources, including sea floor ages and geographically referenced 3D slab locations based on seismic observations. The initial bulk temperature field is constructed using a half-space cooling model or plate cooling model, and related smoothing functions based on a diffusion length-scale analysis. In this work, we seek to improve the 3D thermal model and test different model geometries and dynamically driven flow fields using constraints from observed seismic velocities and plate motions. Through a formal adjoint analysis, we construct the primal-dual version of the multi-objective PDE-constrained optimization problem for the plate motions and seismic misfit. We have efficient, scalable preconditioners for both the forward and adjoint problems based upon a block preconditioning strategy, and a simple gradient update is used to improve the control residual. The full optimal control problem is formulated on a nested hierarchy of grids, allowing a nonlinear multigrid method to accelerate the solution.

  4. Multi-Constraint Multi-Variable Optimization of Source-Driven Nuclear Systems

    NASA Astrophysics Data System (ADS)

    Watkins, Edward Francis

    1995-01-01

    A novel approach to the search for optimal designs of source-driven nuclear systems is investigated. Such systems include radiation shields, fusion reactor blankets and various neutron spectrum-shaping assemblies. The novel approach involves the replacement of the steepest-descents optimization algorithm incorporated in the code SWAN by a significantly more general and efficient sequential quadratic programming optimization algorithm provided by the code NPSOL. The resulting SWAN/NPSOL code system can be applied to more general, multi-variable, multi-constraint shield optimization problems. The constraints it accounts for may include simple bounds on variables, linear constraints, and smooth nonlinear constraints. It may also be applied to unconstrained, bound-constrained and linearly constrained optimization. The shield optimization capabilities of the SWAN/NPSOL code system are tested and verified in a variety of optimization problems: dose minimization at constant cost, cost minimization at constant dose, and multiple-nonlinear constraint optimization. The replacement of the optimization part of SWAN with NPSOL is found feasible and leads to a very substantial improvement in the complexity of optimization problems which can be efficiently handled.
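
    The sketch below gives a generic flavor of the kind of bound- and nonlinearly-constrained problem described above; it uses SciPy's SLSQP (a sequential quadratic programming routine) as a stand-in for NPSOL, and the cost and dose functions are invented for illustration only.

      import numpy as np
      from scipy.optimize import minimize

      def cost(x):        # objective, e.g. total shield cost (illustrative)
          return 2.0 * x[0] + 3.0 * x[1]

      def dose(x):        # smooth nonlinear response, e.g. transmitted dose (illustrative)
          return np.exp(-0.5 * x[0]) + np.exp(-0.8 * x[1])

      result = minimize(
          cost,
          x0=[1.0, 1.0],
          method="SLSQP",                                     # stands in for NPSOL's SQP
          bounds=[(0.0, 10.0), (0.0, 10.0)],                  # simple bounds on variables
          constraints=[{"type": "ineq", "fun": lambda x: 0.5 - dose(x)}],  # dose <= 0.5
      )
      print("optimal variables:", np.round(result.x, 3), " cost:", round(result.fun, 3))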

  5. Pore-scale modeling of moving contact line problems in immiscible two-phase flow

    NASA Astrophysics Data System (ADS)

    Kucala, Alec; Noble, David; Martinez, Mario

    2016-11-01

    Accurate modeling of moving contact line (MCL) problems is imperative in predicting capillary pressure vs. saturation curves, permeability, and preferential flow paths for a variety of applications, including geological carbon storage (GCS) and enhanced oil recovery (EOR). Here, we present a model for the moving contact line using pore-scale computational fluid dynamics (CFD) which solves the full, time-dependent Navier-Stokes equations using the Galerkin finite-element method. The MCL is modeled as a surface traction force proportional to the surface tension, dependent on the static properties of the immiscible fluid/solid system. We present a variety of verification test cases for simple two- and three-dimensional geometries to validate the current model, including threshold pressure predictions in flows through pore-throats for a variety of wetting angles. Simulations involving more complex geometries are also presented to be used in future simulations for GCS and EOR problems. Sandia is a multiprogram laboratory operated by Sandia Corporation, a Lockheed Martin Company, for the United States Department of Energy's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  6. The molecular matching problem

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.

    1993-01-01

    Molecular chemistry contains many difficult optimization problems that have begun to attract the attention of optimizers in the Operations Research community. Problems including protein folding, molecular conformation, molecular similarity, and molecular matching have been addressed. Minimum energy conformations for simple molecular structures such as water clusters, Lennard-Jones microclusters, and short polypeptides have dominated the literature to date. However, a variety of interesting problems exist and we focus here on a molecular structure matching (MSM) problem.

  7. Modular Approaches to Earth Science Scientific Computing: 3D Electromagnetic Induction Modeling as an Example

    NASA Astrophysics Data System (ADS)

    Tandon, K.; Egbert, G.; Siripunvaraporn, W.

    2003-12-01

    We are developing a modular system for three-dimensional inversion of electromagnetic (EM) induction data, using an object oriented programming approach. This approach allows us to modify the individual components of the inversion scheme proposed, and also reuse the components for a variety of problems in earth science computing, however diverse they might be. In particular, the modularity allows us to (a) change modeling codes independently of inversion algorithm details; (b) experiment with new inversion algorithms; and (c) modify the way prior information is imposed in the inversion to test competing hypotheses and techniques required to solve an earth science problem. Our initial code development is for EM induction equations on a staggered grid, using iterative solution techniques in 3D. An example illustrated here is an experiment with the sensitivity of 3D magnetotelluric inversion to uncertainties in the boundary conditions required for regional induction problems. These boundary conditions should reflect the large-scale geoelectric structure of the study area, which is usually poorly constrained. In general for inversion of MT data, one fixes boundary conditions at the edge of the model domain, and adjusts the earth's conductivity structure within the modeling domain. Allowing for errors in specification of the open boundary values is simple in principle, but no existing inversion codes that we are aware of have this feature. Adding a feature such as this is straightforward within the context of the modular approach. More generally, a modular approach provides an efficient methodology for setting up earth science computing problems to test various ideas. As a concrete illustration relevant to EM induction problems, we investigate the sensitivity of MT data near the San Andreas Fault at Parkfield (California) to uncertainties in the regional geoelectric structure.

  8. In-place recalibration technique applied to a capacitance-type system for measuring rotor blade tip clearance

    NASA Technical Reports Server (NTRS)

    Barranger, J. P.

    1978-01-01

    The rotor blade tip clearance measurement system consists of a capacitance sensing probe with self-contained tuning elements, a connecting coaxial cable, and remotely located electronics. Tests show that the accuracy of the system suffers from a strong dependence on probe tip temperature and humidity. A novel in-place recalibration technique was presented which partly overcomes this problem through a simple modification of the electronics that permits a scale factor correction. This technique, when applied to a commercial system, significantly reduced errors under varying conditions of humidity and temperature. Equations were also found that characterize the important cable and probe design quantities.

  9. Development of a prototype commonality analysis tool for use in space programs

    NASA Technical Reports Server (NTRS)

    Yeager, Dorian P.

    1988-01-01

    A software tool to aid in performing commonality analyses, called Commonality Analysis Problem Solver (CAPS), was designed, and a prototype version (CAPS 1.0) was implemented and tested. CAPS 1.0 runs in an MS-DOS or IBM PC-DOS environment. CAPS is designed around a simple input language which provides a natural syntax for the description of feasibility constraints. It provides its users with the ability to load a database representing a set of design items, describe the feasibility constraints on items in that database, and do a comprehensive cost analysis to find the most economical substitution pattern.

  10. Flutter suppression and gust alleviation using active controls

    NASA Technical Reports Server (NTRS)

    Nissim, E.

    1975-01-01

    Application of the aerodynamic energy approach to some problems of flutter suppression and gust alleviation was considered. A simple modification of the control-law is suggested for achieving the required pitch control in the use of a leading edge - trailing edge activated strip. The possible replacement of the leading edge - trailing edge activated strip by a trailing edge - tab strip is also considered as an alternate solution. Parameters affecting the performance of the activated leading edge - trailing edge strip were tested on the Arava STOL Transport and the Westwind Executive Jet Transport and include strip location, control-law gains and a variation in the control-law itself.

  11. Quantifying Cyber-Resilience Against Resource-Exhaustion Attacks

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Fink, Glenn A.; Griswold, Richard L.; Beech, Zachary W.

    2014-07-11

    Resilience in the information sciences is notoriously difficult to define, much less to measure. But in mechanical engineering, the resilience of a substance is mathematically defined as the area under the stress vs. strain curve. We took inspiration from mechanics in an attempt to define resilience precisely for information systems. We first examine the meaning of resilience in language and engineering terms and then translate these definitions to information sciences. Then we tested our definitions of resilience for a very simple problem in networked queuing systems. We discuss lessons learned and make recommendations for using this approach in future work.
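
    To make the mechanical analogy concrete, the toy sketch below (NumPy; the elastic modulus and the mapping to an information system are assumptions for illustration) computes resilience as the area under a stress-strain curve by trapezoidal integration; the paper's idea is to integrate a service-level metric against applied load in the same way.

      import numpy as np

      # Linear-elastic stress-strain curve up to a small strain (illustrative values).
      strain = np.linspace(0.0, 0.002, 50)      # dimensionless strain
      stress = 200e9 * strain                   # stress in Pa, assuming E = 200 GPa

      # Modulus of resilience = area under the curve (J/m^3), trapezoidal rule.
      resilience = np.sum(0.5 * (stress[1:] + stress[:-1]) * np.diff(strain))
      print(f"modulus of resilience: {resilience:.0f} J/m^3")

      # The paper's idea: replace (strain, stress) with (applied load, degraded
      # service level) for an information system and integrate the same way.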

  12. Development of Schema Knowledge in the Classroom: Effects upon Problem Representation and Problem Solution of Programming.

    ERIC Educational Resources Information Center

    Tsai, Shu-Er

    Students with a semester or more of instruction often display remarkable naivety about the language that they have been studying and often prove unable to manage simple programming problems. The main purpose of this study was to create a set of problem-plan-program types for the BASIC programming language to help high school students build plans…

  13. Pathological and Sub-Clinical Problem Gambling in a New Zealand Prison: A Comparison of the Eight and SOGS Gambling Screens

    ERIC Educational Resources Information Center

    Sullivan, Sean; Brown, Robert; Skinner, Bruce

    2008-01-01

    Prison populations have been identified as having elevated levels of problem gambling prevalence, and screening for problem gambling may provide an opportunity to identify and address a behavior that may otherwise lead to re-offending. A problem gambling screen for this purpose would need to be brief, simple to score, and be able to be…

  14. Ending up with Less: The Role of Working Memory in Solving Simple Subtraction Problems with Positive and Negative Answers

    ERIC Educational Resources Information Center

    Robert, Nicole D.; LeFevre, Jo-Anne

    2013-01-01

    Does solving subtraction problems with negative answers (e.g., 5-14) require different cognitive processes than solving problems with positive answers (e.g., 14-5)? In a dual-task experiment, young adults (N=39) combined subtraction with two working memory tasks, verbal memory and visual-spatial memory. All of the subtraction problems required…

  15. E. coli and water quality: Reevaluation of mug tests and development of radical new indole test. Final technical report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chang, G.W.

    1992-12-01

    The project was undertaken to address the problem of MUG (4-methylumbelliferyl-B-D-glucuronide)-negative E. coli in water testing, and to develop a new, more reliable indole-based test for E. coli. In a study involving 39 healthy human volunteers, it was found that 1/3 of E. coli isolated from fresh human fecal samples tested MUG-negative in lauryl tryptose broth with MUG. It was further discovered: (1) The presence of simple sugars can cause catabolite repression of beta-GUR in a small percentage of E. coli. (2) In gene probe studies, almost all E. coli isolates have portions of the uidA (GUR) gene sequence. Based on these two findings, catabolite repression can only be a partial explanation for the high rate of GUR-negative E. coli. The authors improved the E. coli confirmatory medium, EC + MUG, by removing the lactose, which allows for a stronger MUG test and the inclusion of the more reliable indole test. They called this newly improved medium INDEC, for Indole and EC medium. They later developed Colitag 3, a one-day, single tube indole-based test for E. coli.

  16. Theoretical Analysis of Local Search and Simple Evolutionary Algorithms for the Generalized Travelling Salesperson Problem.

    PubMed

    Pourhassan, Mojgan; Neumann, Frank

    2018-06-22

    The generalized travelling salesperson problem is an important NP-hard combinatorial optimization problem for which meta-heuristics, such as local search and evolutionary algorithms, have been used very successfully. Two hierarchical approaches with different neighbourhood structures, namely a Cluster-Based approach and a Node-Based approach, have been proposed by Hu and Raidl (2008) for solving this problem. In this paper, local search algorithms and simple evolutionary algorithms based on these approaches are investigated from a theoretical perspective. For local search algorithms, we point out the complementary abilities of the two approaches by presenting instances where they mutually outperform each other. Afterwards, we introduce an instance which is hard for both approaches when initialized on a particular point of the search space, but where a variable neighbourhood search combining them finds the optimal solution in polynomial time. Then we turn our attention to analysing the behaviour of simple evolutionary algorithms that use these approaches. We show that the Node-Based approach solves the hard instance of the Cluster-Based approach presented in Corus et al. (2016) in polynomial time. Furthermore, we prove an exponential lower bound on the optimization time of the Node-Based approach for a class of Euclidean instances.

  17. Improved Analytical Sensitivity of Lateral Flow Assay using Sponge for HBV Nucleic Acid Detection.

    PubMed

    Tang, Ruihua; Yang, Hui; Gong, Yan; Liu, Zhi; Li, XiuJun; Wen, Ting; Qu, ZhiGuo; Zhang, Sufeng; Mei, Qibing; Xu, Feng

    2017-05-02

    Hepatitis B virus (HBV) infection is a serious public health problem, which can be transmitted through various routes (e.g., blood donation) and cause hepatitis, liver cirrhosis and liver cancer. Hence, it is necessary to do diagnostic screening for high-risk HBV patients in these transmission routes. Nowadays, protein-based technologies have been used for HBV testing, which however involve the issues of large sample volume, antibody instability and poor specificity. Nucleic acid hybridization-based lateral flow assay (LFA) holds great potential to address these limitations due to its low-cost, rapid, and simple features, but the poor analytical sensitivity of LFA restricts its application. In this study, we developed a low-cost, simple and easy-to-use method to improve analytical sensitivity by integrating a sponge shunt into LFA to decrease the fluid flow rate. The thickness, length and hydrophobicity of the sponge shunt were sequentially optimized, and achieved 10-fold signal enhancement in nucleic acid testing of HBV as compared to the unmodified LFA. The enhancement was further confirmed by using HBV clinical samples, where we achieved a detection limit of 10^3 copies/ml as compared to 10^4 copies/ml in unmodified LFA. The improved LFA holds great potential for disease diagnostics, food safety control and environmental monitoring at the point of care.

  18. A Simple Assay to Screen Antimicrobial Compounds Potentiating the Activity of Current Antibiotics

    PubMed Central

    Iqbal, Junaid; Kazmi, Shahana Urooj; Khan, Naveed Ahmed

    2013-01-01

    Antibiotic resistance continues to pose a significant problem in the management of bacterial infections, despite advances in antimicrobial chemotherapy and supportive care. Here, we suggest a simple, inexpensive, and easy-to-perform assay to screen antimicrobial compounds from natural products or synthetic chemical libraries for their potential to work in tandem with the available antibiotics against multiple drug-resistant bacteria. The aqueous extract of Juglans regia tree bark was tested against representative multiple drug-resistant bacteria in the aforementioned assay to determine whether it potentiates the activity of selected antibiotics. The aqueous extract of J. regia bark was added to Mueller-Hinton agar, followed by a lawn of multiple drug-resistant bacteria, Salmonella typhi or enteropathogenic E. coli. Next, filter paper discs impregnated with different classes of antibiotics were placed on the agar surface. Bacteria incubated with extract or antibiotics alone were used as controls. The results showed a significant increase (>30%) in the zone of inhibition around the aztreonam, cefuroxime, and ampicillin discs compared with bacteria incubated with the antibiotics/extract alone. In conclusion, our assay is able to detect either synergistic or additive action of J. regia extract against multiple drug-resistant bacteria when tested with a range of antibiotics. PMID:23865073

  19. Airtightness the simple(CS) way

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, S.

    Builders who might buck against such time-consuming air sealing methods as polyethylene wrap and the airtight drywall approach (ADA) may respond better to current strategies. One such method, called SimpleCS, has proven especially effective. SimpleCS, pronounced simplex, stands for simple caulk and seal. A modification of the ADA, SimpleCS is an air-sealing management tool, a simplified systems approach to building tight homes. The system addresses the crucial question of when and by whom various air sealing steps should be done. It avoids the problems that often occur when later contractors cut open polyethylene wrap to drill holes in the drywall. The author describes how SimpleCS works, and the cost and training involved.

  20. Rediscovery in a Course for Nonscientists: Use of Molecular Models to Solve Classical Structural Problems

    ERIC Educational Resources Information Center

    Wood, Gordon W.

    1975-01-01

    Describes exercises using simple ball and stick models which students with no chemistry background can solve in the context of the original discovery. Examples include the tartaric acid and benzene problems. (GS)

  1. Improving performances of suboptimal greedy iterative biclustering heuristics via localization.

    PubMed

    Erten, Cesim; Sözdinler, Melih

    2010-10-15

    Biclustering gene expression data is the problem of extracting submatrices of genes and conditions exhibiting significant correlation across both the rows and the columns of a data matrix of expression values. Even the simplest versions of the problem are computationally hard. Most of the proposed solutions therefore employ greedy iterative heuristics that locally optimize a suitably assigned scoring function. We provide a fast and simple pre-processing algorithm called localization that reorders the rows and columns of the input data matrix in such a way as to group correlated entries in small local neighborhoods within the matrix. The proposed localization algorithm takes its roots from effective use of graph-theoretical methods applied to problems exhibiting a similar structure to that of biclustering. In order to evaluate the effectiveness of the localization pre-processing algorithm, we focus on three representative greedy iterative heuristic methods. We show how the localization pre-processing can be incorporated into each representative algorithm to improve biclustering performance. Furthermore, we propose a simple biclustering algorithm, Random Extraction After Localization (REAL), which randomly extracts submatrices from the localization pre-processed data matrix, eliminates those with low similarity scores, and provides the rest as correlated structures representing biclusters. We compare the proposed localization pre-processing with another pre-processing alternative, non-negative matrix factorization. We show that our fast and simple localization procedure provides similar or even better results than the computationally heavy matrix factorization pre-processing with regard to H-value tests. We next demonstrate that the performances of the three representative greedy iterative heuristic methods improve with localization pre-processing when biological correlations in the form of functional enrichment and PPI verification constitute the main performance criteria. The fact that the random extraction method based on localization (REAL) performs better than the representative greedy heuristic methods under the same criteria also confirms the effectiveness of the suggested pre-processing method. Supplementary material, including code implementations in the LEDA C++ library, experimental data, and the results, is available at http://code.google.com/p/biclustering/ (contact: cesim@khas.edu.tr; melihsozdinler@boun.edu.tr). Supplementary data are available at Bioinformatics online.
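
    A rough sketch of the random-extraction idea is given below (Python/NumPy); it samples random submatrices from a stand-in data matrix and keeps those with a low mean squared residue, a common bicluster coherence score. The scoring function, submatrix sizes and threshold are illustrative assumptions and are not taken from the paper.

      import numpy as np

      def mean_squared_residue(sub):
          """Cheng-Church style coherence score: lower means more correlated."""
          row_means = sub.mean(axis=1, keepdims=True)
          col_means = sub.mean(axis=0, keepdims=True)
          return np.mean((sub - row_means - col_means + sub.mean()) ** 2)

      rng = np.random.default_rng(0)
      data = rng.normal(size=(200, 50))        # stand-in for a localized expression matrix

      kept = []
      for _ in range(500):
          rows = rng.choice(200, size=10, replace=False)
          cols = rng.choice(50, size=5, replace=False)
          sub = data[np.ix_(rows, cols)]
          if mean_squared_residue(sub) < 0.5:  # keep only low-residue submatrices
              kept.append((rows, cols))

      print(f"kept {len(kept)} candidate biclusters")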

  2. Stimulating Mathematical Reasoning with Simple Open-Ended Tasks

    ERIC Educational Resources Information Center

    West, John

    2018-01-01

    The importance of mathematical reasoning is unquestioned and providing opportunities for students to become involved in mathematical reasoning is paramount. The open-ended tasks presented incorporate mathematical content explored through the contexts of problem solving and reasoning. This article presents a number of simple tasks that may be…

  3. Eye Movements Reveal Students' Strategies in Simple Equation Solving

    ERIC Educational Resources Information Center

    Susac, Ana; Bubic, Andreja; Kaponja, Jurica; Planinic, Maja; Palmovic, Marijan

    2014-01-01

    Equation rearrangement is an important skill required for problem solving in mathematics and science. Eye movements of 40 university students were recorded while they were rearranging simple algebraic equations. The participants also reported on their strategies during equation solving in a separate questionnaire. The analysis of the behavioral…

  4. A Simple View of Linguistic Complexity

    ERIC Educational Resources Information Center

    Pallotti, Gabriele

    2015-01-01

    Although a growing number of second language acquisition (SLA) studies take linguistic complexity as a dependent variable, the term is still poorly defined and often used with different meanings, thus posing serious problems for research synthesis and knowledge accumulation. This article proposes a simple, coherent view of the construct, which is…

  5. [Screening for psychiatric risk factors in a facial trauma patients. Validating a questionnaire].

    PubMed

    Foletti, J M; Bruneau, S; Farisse, J; Thiery, G; Chossegros, C; Guyot, L

    2014-12-01

    We recorded similarities between patients managed in the psychiatry department and in the maxillo-facial surgical unit. Our hypothesis was that some psychiatric conditions act as risk factors for facial trauma. Our aim was to test this hypothesis and to validate a simple and efficient questionnaire to identify these psychiatric disorders. Fifty-eight consenting patients with facial trauma, recruited prospectively in the 3 maxillo-facial surgery departments of the Marseille area during 3 months (December 2012-March 2013), completed a self-questionnaire based on the French version of 3 validated screening tests (Self Reported Psychopathy test, Rapid Alcohol Problem Screening test quantity-frequency, and Personal Health Questionnaire). This preliminary study confirmed that psychiatric conditions detected by our questionnaire, namely alcohol abuse and dependence, substance abuse, and depression, were risk factors for facial trauma. Maxillo-facial surgeons are often unaware of psychiatric disorders that may be the cause of facial trauma. The self-screening test we propose allows documenting the psychiatric history of patients and implementing earlier psychiatric care. Copyright © 2014 Elsevier Masson SAS. All rights reserved.

  6. Realistic loophole-free Bell test with atom-photon entanglement

    NASA Astrophysics Data System (ADS)

    Teo, C.; Araújo, M.; Quintino, M. T.; Minář, J.; Cavalcanti, D.; Scarani, V.; Terra Cunha, M.; França Santos, M.

    2013-07-01

    The establishment of nonlocal correlations, guaranteed through the violation of a Bell inequality, is not only important from a fundamental point of view but constitutes the basis for device-independent quantum information technologies. Although several nonlocality tests have been conducted so far, all of them suffered from either locality or detection loopholes. Among the proposals for overcoming these problems are the use of atom-photon entanglement and hybrid photonic measurements (for example, photodetection and homodyning). Recent studies have suggested that the use of atom-photon entanglement can lead to Bell inequality violations with moderate transmission and detection efficiencies. Here we combine these ideas and propose an experimental setup realizing a simple atom-photon entangled state that can be used to obtain nonlocality when considering realistic experimental parameters including detection efficiencies and losses due to required propagation distances.

  7. Rapid bacterial antibiotic susceptibility test based on simple surface-enhanced Raman spectroscopic biomarkers

    NASA Astrophysics Data System (ADS)

    Liu, Chia-Ying; Han, Yin-Yi; Shih, Po-Han; Lian, Wei-Nan; Wang, Huai-Hsien; Lin, Chi-Hung; Hsueh, Po-Ren; Wang, Juen-Kai; Wang, Yuh-Lin

    2016-03-01

    Rapid bacterial antibiotic susceptibility test (AST) and minimum inhibitory concentration (MIC) measurement are important to help reduce the widespread misuse of antibiotics and alleviate the growing drug-resistance problem. We discovered that, when a susceptible strain of Staphylococcus aureus or Escherichia coli is exposed to an antibiotic, the intensity of specific biomarkers in its surface-enhanced Raman scattering (SERS) spectra drops evidently in two hours. The discovery has been exploited for rapid AST and MIC determination of methicillin-susceptible S. aureus and wild-type E. coli as well as clinical isolates. The results obtained by this SERS-AST method were consistent with that by the standard incubation-based method, indicating its high potential to supplement or replace existing time-consuming methods and help mitigate the challenge of drug resistance in clinical microbiology.

  8. Study of the collector/heat pipe cooled externally configured thermionic diode

    NASA Technical Reports Server (NTRS)

    1973-01-01

    A collector/heat pipe cooled, externally configured (heated) thermionic diode module was designed for use in a laboratory test to demonstrate the applicability of this concept as the fuel element/converter module of an in-core thermionic electric power source. During the course of the program, this module evolved from a simple experimental mock-up into an advanced unit which was more reactor prototypical. Detailed analysis of all diode components led to their engineering design, fabrication, and assembly, with the exception of the collector/heat pipe. While several designs of high power annular wicked heat pipes were fabricated and tested, each exhibited unexpected performance difficulties. It was concluded that the basic cause of these problems was the formation of crud which interfered with the liquid flow in the annular passage of the evaporator region.

  9. Three-dimensional particle tracking velocimetry algorithm based on tetrahedron vote

    NASA Astrophysics Data System (ADS)

    Cui, Yutong; Zhang, Yang; Jia, Pan; Wang, Yuan; Huang, Jingcong; Cui, Junlei; Lai, Wing T.

    2018-02-01

    A particle tracking velocimetry algorithm based on tetrahedron vote, named TV-PTV, is proposed to overcome the limited selection of effective algorithms for 3D flow visualisation. In this new cluster-matching algorithm, tetrahedrons produced by the Delaunay tessellation are used as the basic units for inter-frame matching, which results in a simple algorithmic structure with only two independent preset parameters. Test results obtained using the synthetic test image data from the Visualisation Society of Japan show that TV-PTV presents accuracy comparable to that of the classical algorithm based on the new relaxation method (NRX). Compared with NRX, TV-PTV possesses a smaller number of loops in programming and thus a shorter computing time, especially for large particle displacements and high particle concentration. TV-PTV is confirmed to be effective in practice on an actual 3D wake flow.

  10. Confronting Decision Cliffs: Diagnostic Assessment of Multi-Objective Evolutionary Algorithms' Performance for Addressing Uncertain Environmental Thresholds

    NASA Astrophysics Data System (ADS)

    Ward, V. L.; Singh, R.; Reed, P. M.; Keller, K.

    2014-12-01

    As water resources problems typically involve several stakeholders with conflicting objectives, multi-objective evolutionary algorithms (MOEAs) are now key tools for understanding management tradeoffs. Given the growing complexity of water planning problems, it is important to establish if an algorithm can consistently perform well on a given class of problems. This knowledge allows the decision analyst to focus on eliciting and evaluating appropriate problem formulations. This study proposes a multi-objective adaptation of the classic environmental economics "Lake Problem" as a computationally simple but mathematically challenging MOEA benchmarking problem. The lake problem abstracts a fictional town on a lake which hopes to maximize its economic benefit without degrading the lake's water quality to a eutrophic (polluted) state through excessive phosphorus loading. The problem poses the challenge of maintaining economic activity while confronting the uncertainty of potentially crossing a nonlinear and potentially irreversible pollution threshold beyond which the lake is eutrophic. Objectives for optimization are maximizing economic benefit from lake pollution, maximizing water quality, maximizing the reliability of remaining below the environmental threshold, and minimizing the probability that the town will have to drastically change pollution policies in any given year. The multi-objective formulation incorporates uncertainty with a stochastic phosphorus inflow abstracting non-point source pollution. We performed comprehensive diagnostics using 6 algorithms: Borg, MOEAD, eMOEA, eNSGAII, GDE3, and NSGAII to ascertain their controllability, reliability, efficiency, and effectiveness. The lake problem abstracts elements of many current water resources and climate related management applications where there is the potential for crossing irreversible, nonlinear thresholds. We show that many modern MOEAs can fail on this test problem, indicating its suitability as a useful and nontrivial benchmarking problem.
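
    The sketch below shows the widely used form of the lake dynamics that this benchmark builds on (Python/NumPy); the recycling exponent, decay rate, noise level, and loading policy are illustrative assumptions rather than values from the abstract.

      import numpy as np

      def simulate_lake(a, b=0.42, q=2.0, sigma=0.02, n_years=100, seed=0):
          """Phosphorus level X_t under anthropogenic loading a[t], nonlinear
          recycling X^q/(1+X^q), linear decay b*X, and lognormal natural inflow."""
          rng = np.random.default_rng(seed)
          x = np.zeros(n_years)
          for t in range(1, n_years):
              inflow = rng.lognormal(mean=np.log(sigma), sigma=0.5)
              x[t] = (x[t-1] + a[t-1] + x[t-1]**q / (1.0 + x[t-1]**q)
                      - b * x[t-1] + inflow)
          return x

      # A constant-loading policy; MOEA objectives (benefit, water quality,
      # reliability, inertia) would be computed from many stochastic trajectories.
      x = simulate_lake(a=np.full(100, 0.05))
      print(f"final phosphorus level: {x[-1]:.2f}")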

  11. Children with mathematical learning disability fail in recruiting verbal and numerical brain regions when solving simple multiplication problems.

    PubMed

    Berteletti, Ilaria; Prado, Jérôme; Booth, James R

    2014-08-01

    Greater skill in solving single-digit multiplication problems requires a progressive shift from a reliance on numerical to verbal mechanisms over development. Children with mathematical learning disability (MD), however, are thought to suffer from a specific impairment in numerical mechanisms. Here we tested the hypothesis that this impairment might prevent MD children from transitioning toward verbal mechanisms when solving single-digit multiplication problems. Brain activations during multiplication problems were compared in MD and typically developing (TD) children (3rd to 7th graders) in numerical and verbal regions which were individuated by independent localizer tasks. We used small (e.g., 2 × 3) and large (e.g., 7 × 9) problems as these problems likely differ in their reliance on verbal versus numerical mechanisms. Results indicate that MD children have reduced activations in both the verbal (i.e., left inferior frontal gyrus and left middle temporal to superior temporal gyri) and the numerical (i.e., right superior parietal lobule including intra-parietal sulcus) regions suggesting that both mechanisms are impaired. Moreover, the only reliable activation observed for MD children was in the numerical region when solving small problems. This suggests that MD children could effectively engage numerical mechanisms only for the easier problems. Conversely, TD children showed a modulation of activation with problem size in the verbal regions. This suggests that TD children were effectively engaging verbal mechanisms for the easier problems. Moreover, TD children with better language skills were more effective at engaging verbal mechanisms. In conclusion, results suggest that the numerical- and language-related processes involved in solving multiplication problems are impaired in MD children. Published by Elsevier Ltd.

  12. Simple Example of Backtest Overfitting (SEBO)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    In the field of mathematical finance, a "backtest" is the usage of historical market data to assess the performance of a proposed trading strategy. It is a relatively simple matter for a present-day computer system to explore thousands, millions or even billions of variations of a proposed strategy, and pick the best performing variant as the "optimal" strategy "in sample" (i.e., on the input dataset). Unfortunately, such an "optimal" strategy often performs very poorly "out of sample" (i.e. on another dataset), because the parameters of the investment strategy have been overfit to the in-sample data, a situation known as "backtest overfitting". While the mathematics of backtest overfitting has been examined in several recent theoretical studies, here we pursue a more tangible analysis of this problem, in the form of an online simulator tool. Given an input random walk time series, the tool develops an "optimal" variant of a simple strategy by exhaustively exploring all integer parameter values among a handful of parameters. That "optimal" strategy is overfit, since by definition a random walk is unpredictable. Then the tool tests the resulting "optimal" strategy on a second random walk time series. In most runs using our online tool, the "optimal" strategy derived from the first time series performs poorly on the second time series, demonstrating how hard it is not to overfit a backtest. We offer this online tool, "Simple Example of Backtest Overfitting (SEBO)", to facilitate further research in this area.
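
    The sketch below reproduces the spirit of the demonstration in plain Python/NumPy (it is not the SEBO tool itself; the toy momentum rule and its parameter grid are assumptions): a strategy tuned exhaustively on one random walk is then evaluated on a second, independent random walk.

      import numpy as np

      rng = np.random.default_rng(42)

      def sharpe(returns):
          return returns.mean() / (returns.std() + 1e-12)

      def strategy_returns(prices, lookback, threshold):
          """Toy rule: hold the asset whenever the trailing mean price change
          over `lookback` steps exceeds `threshold`, otherwise stay in cash."""
          rets = np.diff(prices)
          signal = np.array([rets[max(0, i - lookback):i].mean() > threshold
                             for i in range(1, len(rets))])
          return rets[1:] * signal

      train = np.cumsum(rng.standard_normal(2000))   # in-sample random walk
      test = np.cumsum(rng.standard_normal(2000))    # out-of-sample random walk

      # Exhaustive search over a small parameter grid picks the in-sample "optimum".
      best = max(((lb, th) for lb in range(2, 30)
                  for th in np.linspace(-0.5, 0.5, 21)),
                 key=lambda p: sharpe(strategy_returns(train, *p)))

      print("in-sample Sharpe    :", round(sharpe(strategy_returns(train, *best)), 3))
      print("out-of-sample Sharpe:", round(sharpe(strategy_returns(test, *best)), 3))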

  13. Bayes multiple decision functions.

    PubMed

    Wu, Wensong; Peña, Edsel A

    2013-01-01

    This paper deals with the problem of simultaneously making many (M) binary decisions based on one realization of a random data matrix X. M is typically large and X will usually have M rows associated with each of the M decisions to make, but for each row the data may be low dimensional. Such problems arise in many practical areas such as the biological and medical sciences, where the available dataset is from microarrays or other high-throughput technology and with the goal being to decide which among many genes are relevant with respect to some phenotype of interest; in the engineering and reliability sciences; in astronomy; in education; and in business. A Bayesian decision-theoretic approach to this problem is implemented with the overall loss function being a cost-weighted linear combination of Type I and Type II loss functions. The class of loss functions considered allows for use of the false discovery rate (FDR), false nondiscovery rate (FNR), and missed discovery rate (MDR) in assessing the quality of decisions. Through this Bayesian paradigm, the Bayes multiple decision function (BMDF) is derived and an efficient algorithm to obtain the optimal Bayes action is described. In contrast to many works in the literature where the rows of the matrix X are assumed to be stochastically independent, we allow a dependent data structure with the associations obtained through a class of frailty-induced Archimedean copulas. In particular, non-Gaussian dependent data structure, which is typical with failure-time data, can be entertained. The numerical implementation of the determination of the Bayes optimal action is facilitated through sequential Monte Carlo techniques. The theory developed could also be extended to the problem of multiple hypotheses testing, multiple classification and prediction, and high-dimensional variable selection. The proposed procedure is illustrated for the simple versus simple hypotheses setting and for the composite hypotheses setting through simulation studies. The procedure is also applied to a subset of a microarray data set from a colon cancer study.

  14. Maximum likelihood identification and optimal input design for identifying aircraft stability and control derivatives

    NASA Technical Reports Server (NTRS)

    Stepner, D. E.; Mehra, R. K.

    1973-01-01

    A new method of extracting aircraft stability and control derivatives from flight test data is developed based on the maximum likelihood criterion. It is shown that this new method is capable of processing data from both linear and nonlinear models, both with and without process noise, and includes output error and equation error methods as special cases. The first application of this method to flight test data is reported for lateral maneuvers of the HL-10 and M2/F3 lifting bodies, including the extraction of stability and control derivatives in the presence of wind gusts. All the problems encountered in this identification study are discussed. Several different methods (including a priori weighting, parameter fixing and constrained parameter values) for dealing with identifiability and uniqueness problems are introduced and the results given. The method for the design of optimal inputs for identifying the parameters of linear dynamic systems is also given. The criterion used for the optimization is the sensitivity of the system output to the unknown parameters. Several simple examples are first given and then the results of an extensive stability and control derivative identification simulation for a C-8 aircraft are detailed.

  15. Ecotracer: analyzing concentration of contaminants and radioisotopes in an aquatic spatial-dynamic food web model.

    PubMed

    Walters, William J; Christensen, Villy

    2018-01-01

    Ecotracer is a tool in the Ecopath with Ecosim (EwE) software package used to simulate and analyze the transport of contaminants such as methylmercury or radiocesium through aquatic food webs. Ecotracer solves the contaminant dynamic equations simultaneously with the biomass dynamic equations in Ecosim/Ecospace. In this paper, we give a detailed description of the Ecotracer module and analyze the performance on two problems of differing complexity. Ecotracer was modified from previous versions to more accurately model contaminant excretion, and new numerical integration algorithms were implemented to increase accuracy and robustness. To test the mathematical robustness of the computational algorithm, Ecotracer was tested on a simple problem for which we know an analytical solution. These results demonstrated the effectiveness of the program numerics. A much more complex model, the release of the cesium radionuclide 137Cs from the Fukushima Dai-ichi nuclear accident, was also modeled and analyzed. A comparison of the Ecotracer results to sampled 137Cs measurements in the coastal ocean area around Fukushima shows the promise of the tool but also highlights some important limitations. Copyright © 2017 Elsevier Ltd. All rights reserved.

  16. TESTING WIND AS AN EXPLANATION FOR THE SPIN PROBLEM IN THE CONTINUUM-FITTING METHOD

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    You, Bei; Czerny, Bożena; Sobolewska, Małgosia

    2016-04-20

    The continuum-fitting method is one of the two most advanced methods of determining the black hole spin in accreting X-ray binary systems. There are, however, still some unresolved issues with the underlying disk models. One of these issues manifests as an apparent decrease in spin for increasing source luminosity. Here, we perform a few simple tests to establish whether outflows from the disk close to the inner radius can address this problem. We employ four different parametric models to describe the wind and compare these to the apparent decrease in spin with luminosity measured in the sources LMC X-3 and GRS 1915+105. Wind models in which parameters do not explicitly depend on the accretion rate cannot reproduce the spin measurements. Models with mass accretion rate dependent outflows, however, have spectra that emulate the observed ones. The assumption of a wind thus effectively removes the artifact of spin decrease. This solution is not unique; the same conclusion can be obtained using a truncated inner disk model. To distinguish among the valid models, we will need high-resolution X-ray data and a realistic description of the Comptonization in the wind.

  17. Quality Assessment of Mixed and Ceramic Recycled Aggregates from Construction and Demolition Wastes in the Concrete Manufacture According to the Spanish Standard †

    PubMed Central

    Rodríguez-Robles, Desirée; García-González, Julia; Juan-Valdés, Andrés; Pozo, Julia Mª Morán-del; Guerra-Romero, Manuel I

    2014-01-01

    Construction and demolition waste (CDW) constitutes an increasingly significant problem in society due to the volume generated, rendering sustainable management and disposal problematic. The aim of this study is to identify a possible reuse option in concrete manufacturing for recycled aggregates with a significant ceramic content: mixed recycled aggregates (MixRA) and ceramic recycled aggregates (CerRA). In order to do so, several tests are conducted in accordance with the Spanish Code on Structural Concrete (EHE-08) to determine the composition by weight and physico-mechanical characteristics (particle size distributions, fine content, sand equivalent, density, water absorption, flakiness index, and resistance to fragmentation) of the samples for the partial inclusion of the recycled aggregates in concrete mixes. The results of these tests clearly support the hypothesis that this type of material may be suitable for such partial replacements if simple pretreatment is carried out. Furthermore, this measure of reuse is in line with European, national, and regional policies on sustainable development, and presents a solution to the environmental problem caused by the generation of CDW. PMID:28788164

  18. Potential Flow Theory and Operation Guide for the Panel Code PMARC. Version 14

    NASA Technical Reports Server (NTRS)

    Ashby, Dale L.

    1999-01-01

    The theoretical basis for PMARC, a low-order panel code for modeling complex three-dimensional bodies in potential flow, is outlined. PMARC can be run on a wide variety of computer platforms, including desktop machines, workstations, and supercomputers. Execution times for PMARC vary tremendously depending on the computer resources used, but typically range from several minutes for simple or moderately complex cases to several hours for very large complex cases. Several of the advanced features currently included in the code, such as internal flow modeling, boundary layer analysis, and time-dependent flow analysis, including problems involving relative motion, are discussed in some detail. The code is written in Fortran77, using adjustable-size arrays so that it can be easily redimensioned to match problem requirements and computer hardware constraints. An overview of the program input is presented. A detailed description of the input parameters is provided in the appendices. PMARC results for several test cases are presented along with analytic or experimental data, where available. The input files for these test cases are given in the appendices. PMARC currently supports plotfile output formats for several commercially available graphics packages. The supported graphics packages are Plot3D, Tecplot, and PmarcViewer.

  19. SIGKit: Software for Introductory Geophysics Toolkit

    NASA Astrophysics Data System (ADS)

    Kruse, S.; Bank, C. G.; Esmaeili, S.; Jazayeri, S.; Liu, S.; Stoikopoulos, N.

    2017-12-01

    The Software for Introductory Geophysics Toolkit (SIGKit) affords students the opportunity to create model data and perform simple processing of field data for various geophysical methods. SIGkit provides a graphical user interface built with the MATLAB programming language, but can run even without a MATLAB installation. At this time SIGkit allows students to pick first arrivals and match a two-layer model to seismic refraction data; grid total-field magnetic data, extract a profile, and compare this to a synthetic profile; and perform simple processing steps (subtraction of a mean trace, hyperbola fit) to ground-penetrating radar data. We also have preliminary tools for gravity, resistivity, and EM data representation and analysis. SIGkit is being built by students for students, and the intent of the toolkit is to provide an intuitive interface for simple data analysis and understanding of the methods, and act as an entrance to more sophisticated software. The toolkit has been used in introductory courses as well as field courses. First reactions from students are positive. Think-aloud observations of students using the toolkit have helped identify problems and helped shape it. We are planning to compare the learning outcomes of students who have used the toolkit in a field course to students in a previous course to test its effectiveness.
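
    As an illustration of the kind of model-data matching the toolkit supports, the sketch below (plain Python/NumPy rather than the toolkit's MATLAB code; velocities and layer thickness are invented values) computes first-arrival times for a flat two-layer refraction model, the forward model one would fit to picked arrivals.

      import numpy as np

      def first_arrival(x, v1, v2, h):
          """First-arrival time at offset x for a two-layer model (v2 > v1):
          direct wave t = x/v1, head wave t = x/v2 + 2*h*cos(theta_c)/v1."""
          theta_c = np.arcsin(v1 / v2)                       # critical angle
          return np.minimum(x / v1, x / v2 + 2.0 * h * np.cos(theta_c) / v1)

      offsets = np.linspace(5.0, 100.0, 20)                  # geophone offsets, m
      times = first_arrival(offsets, v1=500.0, v2=2000.0, h=10.0)

      # Picked first arrivals would be compared (or least-squares fitted) to
      # curves like this to recover v1, v2 and the layer thickness h.
      for x, t in zip(offsets[:3], times[:3]):
          print(f"offset {x:6.1f} m -> first arrival {1000.0 * t:6.1f} ms")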

  20. Molnets: An Artificial Chemistry Based on Neural Networks

    NASA Technical Reports Server (NTRS)

    Colombano, Silvano; Luk, Johnny; Segovia-Juarez, Jose L.; Lohn, Jason; Clancy, Daniel (Technical Monitor)

    2002-01-01

    The fundamental problem in the evolution of matter is to understand how structure-function relationships are formed and increase in complexity from the molecular level all the way to a genetic system. We have created a system where structure-function relationships arise naturally and without the need of ad hoc function assignments to given structures. The idea was inspired by neural networks, where the structure of the net embodies specific computational properties. In this system, networks interact with other networks to create connections between the inputs of one net and the outputs of another. The newly created net then recomputes its own synaptic weights, based on anti-Hebbian rules. As a result, some connections may be cut, and multiple nets can emerge as products of a 'reaction'. The idea is to study emergent reaction behaviors, based on simple rules that constitute a pseudophysics of the system. These simple rules are parameterized to produce behaviors that emulate chemical reactions. We find that these simple rules produce a gradual increase in the size and complexity of molecules. We have been building a virtual artificial chemistry laboratory for discovering interesting reactions and for testing further ideas on the evolution of primitive molecules. Some of these ideas include the potential effect of membranes and selective diffusion according to molecular size.

  1. A Fractional Differential Kinetic Equation and Applications to Modelling Bursts in Turbulent Nonlinear Space Plasmas

    NASA Astrophysics Data System (ADS)

    Watkins, N. W.; Rosenberg, S.; Sanchez, R.; Chapman, S. C.; Credgington, D.

    2008-12-01

    Since the 1960s Mandelbrot has advocated the use of fractals for the description of the non-Euclidean geometry of many aspects of nature. In particular he proposed two kinds of model to capture persistence in time (his Joseph effect, common in hydrology and with fractional Brownian motion as the prototype) and/or proneness to heavy-tailed jumps (the Noah effect, typical of economic indices, for which he proposed Lévy flights as an exemplar). Both effects are now well demonstrated in space plasmas, notably in the turbulent solar wind. Models have, however, typically emphasised one of the Noah and Joseph parameters (the Lévy exponent μ and the temporal exponent β) at the other's expense. I will describe recent work in which we studied a simple self-affine stable model, linear fractional stable motion (LFSM), which unifies both effects, and present a recently derived diffusion equation for LFSM. This replaces the second-order spatial derivative in the equation of fBm with a fractional derivative of order μ, but retains a diffusion coefficient with a power-law time dependence rather than a fractional derivative in time. I will also show work in progress using an LFSM model and simple analytic scaling arguments to study the problem of the area between an LFSM curve and a threshold. This problem relates to the burst size measure introduced by Takalo and Consolini into solar-terrestrial physics and further studied by Freeman et al [PRE, 2000] on solar wind Poynting flux near L1. We test how expressions derived by other authors generalise to the non-Gaussian, constant-threshold problem. Ongoing work on extension of these LFSM results to multifractals will also be discussed.
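
    The sketch below illustrates the burst-size measure in its simplest constant-threshold form (Python/NumPy; an ordinary Brownian walk stands in for LFSM, and the threshold value is an assumption): each burst is the area between the series and the threshold over one contiguous exceedance.

      import numpy as np

      rng = np.random.default_rng(1)
      series = np.cumsum(rng.standard_normal(10_000))   # stand-in for an LFSM sample path
      threshold = 0.0

      above = series > threshold
      edges = np.diff(above.astype(int))
      starts = np.where(edges == 1)[0] + 1               # first index of each excursion
      ends = np.where(edges == -1)[0] + 1                # one past the last index
      if above[0]:
          starts = np.r_[0, starts]
      if above[-1]:
          ends = np.r_[ends, len(series)]

      burst_sizes = np.array([np.sum(series[s:e] - threshold)
                              for s, e in zip(starts, ends)])
      if burst_sizes.size:
          print(f"{burst_sizes.size} bursts, largest area = {burst_sizes.max():.1f}")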

  2. Kaizen practice in healthcare: a qualitative analysis of hospital employees' suggestions for improvement

    PubMed Central

    Mazzocato, Pamela; Stenfors-Hayes, Terese; von Thiele Schwarz, Ulrica; Hasson, Henna

    2016-01-01

    Objectives Kaizen, or continuous improvement, lies at the core of lean. Kaizen is implemented through practices that enable employees to propose ideas for improvement and solve problems. The aim of this study is to describe the types of issues and improvement suggestions that hospital employees feel empowered to address through kaizen practices in order to understand when and how kaizen is used in healthcare. Methods We analysed 186 structured kaizen documents containing improvement suggestions that were produced by 165 employees at a Swedish hospital. Directed content analysis was used to categorise the suggestions into the following categories: type of situation (proactive or reactive) triggering an action; type of process addressed (technical/administrative, support and clinical); complexity level (simple or complex); and type of outcomes aimed for (operational or sociotechnical). Compliance with the kaizen template was calculated. Results 72% of the improvement suggestions were reactions to a perceived problem. Support, technical and administrative, and primary clinical processes were involved in 47%, 38% and 16% of the suggestions, respectively. The majority of the kaizen documents addressed simple situations and focused on operational outcomes. The degree of compliance with the kaizen template was high for several items concerning the identification of problems and the proposed solutions, and low for items related to the test and implementation of solutions. Conclusions There is a need to combine kaizen practices with improvement and innovation practices that help staff and managers to address complex issues, such as the improvement of clinical care processes. The limited focus on sociotechnical aspects and the partial compliance with kaizen templates may indicate a limited understanding of the entire kaizen process and of how it relates to the overall organisational goals. This in turn can hamper the sustainability of kaizen practices and results. PMID:27473953

  3. A Multiobjective Approach Applied to the Protein Structure Prediction Problem

    DTIC Science & Technology

    2002-03-07

    ...like a low-energy search landscape. 2.1.1 Symbolic/Formalized Problem Domain Description: every computer-representable problem can also be embodied... method [60]. 3.4 Energy Minimization Methods: the energy landscape algorithms are based on the idea that a protein's final resting conformation is... in our GA used to search the PSP problem energy landscape). 3.5.1 Simple GA: the main routine in a sGA, after encoding the problem, builds a...

  4. Born-Oppenheimer approximation for a singular system

    NASA Astrophysics Data System (ADS)

    Akbas, Haci; Turgut, O. Teoman

    2018-01-01

    We discuss a simple singular system in one dimension: two heavy particles that interact with a light particle via an attractive contact interaction but do not interact with each other. It is natural to apply the Born-Oppenheimer approximation to this problem. We present a detailed discussion of this approach; the advantage of this simple model is that one can estimate the error terms self-consistently. Moreover, a Fock-space approach to this problem is presented, in which an expansion can be proposed to obtain higher-order corrections. A slight modification of the same problem, in which the light particle is relativistic, is discussed in a later section, neglecting pair-creation processes. Here, the second-quantized description is more challenging, but with some care, one can recover the first-order expression exactly.
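    For orientation, here is a minimal sketch of the kind of Hamiltonian the abstract describes: two heavy particles of mass M at R1 and R2, a light particle of mass m at x, attractive delta-function couplings of strength λ, and no heavy-heavy interaction. The notation is assumed for illustration, not taken from the paper.

```latex
H \;=\; -\frac{\hbar^{2}}{2M}\left(\partial_{R_{1}}^{2}+\partial_{R_{2}}^{2}\right)
       \;-\; \frac{\hbar^{2}}{2m}\,\partial_{x}^{2}
       \;-\; \lambda\left[\delta(x-R_{1})+\delta(x-R_{2})\right], \qquad \lambda > 0 .
```

    In the Born-Oppenheimer spirit, one first solves the light-particle problem at fixed R1 and R2, then uses the resulting ground-state energy E(|R1 - R2|) as an effective potential for the heavy pair.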

  5. A new formulation for anisotropic radiative transfer problems. I - Solution with a variational technique

    NASA Technical Reports Server (NTRS)

    Cheyney, H., III; Arking, A.

    1976-01-01

    The equations of radiative transfer in anisotropically scattering media are reformulated as linear operator equations in a single independent variable. The resulting equations are suitable for solution by a variety of standard mathematical techniques. The operators appearing in the resulting equations are in general nonsymmetric; however, it is shown that every bounded linear operator equation can be embedded in a symmetric linear operator equation and a variational solution can be obtained in a straightforward way. For purposes of demonstration, a Rayleigh-Ritz variational method is applied to three problems involving simple phase functions. It is to be noted that the variational technique demonstrated is of general applicability and permits simple solutions for a wide range of otherwise difficult mathematical problems in physics.
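    The abstract does not spell out the embedding. One standard construction of this kind, shown here purely as an illustration (the paper's own embedding may differ), replaces a bounded nonsymmetric operator equation Lx = y by a self-adjoint block system and attaches a quadratic functional whose minimiser is the solution:

```latex
L\,x \;=\; y
\quad\Longrightarrow\quad
\begin{pmatrix} 0 & L^{*} \\ L & 0 \end{pmatrix}
\begin{pmatrix} x \\ u \end{pmatrix}
\;=\;
\begin{pmatrix} 0 \\ y \end{pmatrix},
\qquad
J(x) \;=\; \langle Lx - y,\; Lx - y \rangle \;\to\; \min .
```

    Expanding x in a finite set of trial functions and minimising J is then an ordinary Rayleigh-Ritz procedure, since the associated normal operator L*L is symmetric and non-negative.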

  6. Problem Solvers: Problem--Light It up! and Solutions--Flags by the Numbers

    ERIC Educational Resources Information Center

    Hall, Shaun

    2009-01-01

    A simple circuit is created by the continuous flow of electricity through conductors (copper wires) from a source of electrical energy (batteries). "Completing a circuit" means that electricity flows from the energy source through the circuit and, in the case described in this month's problem, causes the light bulb to light up. The presence of…

  7. Solving L-L Extraction Problems with Excel Spreadsheet

    ERIC Educational Resources Information Center

    Teppaitoon, Wittaya

    2016-01-01

    This work aims to demonstrate the use of Excel spreadsheets for solving L-L extraction problems. The key to solving the problems successfully is to be able to determine a tie line on the ternary diagram where the calculation must be carried out. This enables the reader to analyze the extraction process starting with a simple operation, the…

  8. The Potential of Automated Corrective Feedback to Remediate Cohesion Problems in Advanced Students' Writing

    ERIC Educational Resources Information Center

    Strobl, Carola

    2017-01-01

    This study explores the potential of a feedback environment using simple string-based pattern matching technology for the provision of automated corrective feedback on cohesion problems. Thirty-eight high-frequency problems, including non-target-like use of connectives and co-references, were addressed by providing both direct and indirect feedback.…

  9. Research Reporting Sections, Annual Meeting of the National Council of Teachers of Mathematics (57th, Boston, Massachusetts, April 18-21, 1979).

    ERIC Educational Resources Information Center

    Higgins, Jon L., Ed.

    This document provides abstracts of 20 research reports. Topics covered include: children's comprehension of simple story problems; field independence and group instruction; problem-solving competence and memory; spatial visualization and the use of manipulative materials; effects of games on mathematical skills; problem-solving ability and right…

  10. Reflection on Solutions in the Form of Refutation Texts versus Problem Solving: The Case of 8th Graders Studying Simple Electric Circuits

    ERIC Educational Resources Information Center

    Safadi, Rafi; Safadi, Ekhlass; Meidav, Meir

    2017-01-01

    This study compared students' learning in troubleshooting and problem solving activities. The troubleshooting activities provided students with solutions to conceptual problems in the form of refutation texts; namely, solutions that portray common misconceptions, refute them, and then present the accepted scientific ideas. They required students…

  11. The black flies of Maine

    Treesearch

    L.S. Bauer; J. Granett

    1979-01-01

    Black flies have been long-time residents of Maine and cause extensive nuisance problems for people, domestic animals, and wildlife. The black fly problem has no simple solution because of the multitude of species present, the diverse and ecologically sensitive habitats in which they are found, and the problems inherent in measuring the extent of the damage they cause...

  12. Cone-Deciphered Modes of Problem Solving Action (MPSA Cone): Alternative Perspectives on Diversified Professions.

    ERIC Educational Resources Information Center

    Lai, Su-Huei

    A conceptual framework of the modes of problem-solving action has been developed on the basis of a simple relationship cone to assist individuals in diversified professions in inquiry and implementation of theory and practice in their professional development. The conceptual framework is referred to as the Cone-Deciphered Modes of Problem Solving…

  13. Developing Physics Concepts through Hands-On Problem Solving: A Perspective on a Technological Project Design

    ERIC Educational Resources Information Center

    Hong, Jon-Chao; Chen, Mei-Yung; Wong, Ashley; Hsu, Tsui-Fang; Peng, Chih-Chi

    2012-01-01

    In a contest featuring hands-on projects, college students were required to design a simple crawling worm using planning, self-monitoring and self-evaluation processes to solve contradictory problems. To enhance the efficiency of problem solving, one needs to practice meta-cognition based on an application of related scientific concepts. The…

  14. Implementation of ergonomics in parking management to increase the work quality of parking attendants at Mall Robinson, Denpasar city

    NASA Astrophysics Data System (ADS)

    Sutapa, I. K.; Sudiarsa, I. M.

    2018-01-01

    The problems that often arise in the Denpasar City area are mostly caused by parking problems at centers of activity such as shopping centers. These problems occur not only because of the large number of parked vehicles but also because the condition of the parking attendants has received little attention; in particular, little concern is given to the physical condition of attendants on night-guard duty. To improve the quality of the parking attendants' work, the parking area was improved ergonomically through the application of appropriate technology with a systemic, holistic, interdisciplinary and participatory approach. The general objective of the research was to assess the effect of implementing ergonomics in parking management on the work quality of the parking attendants at the Robinson shopping center. The indicators of work quality were decreases in musculoskeletal complaints, fatigue, workload and boredom, and an increase in work motivation. The study used a same-subject design involving 10 subjects selected as a simple random sample. The intervention was an ergonomic rearrangement of the basement motorcycle parking area. Measurements were taken before and after the improvement, with a washing-out (WO) period of 14 days. The data obtained were analyzed descriptively and tested for normality (Shapiro-Wilk) and homogeneity (Levene test). Normally distributed, homogeneous data were compared with one-way ANOVA, with post hoc tests between periods; normally distributed but non-homogeneous data were compared with the Friedman test, with Wilcoxon tests between periods. Data were analyzed at a significance level of 5%. The results showed that the ergonomic intervention in the parking area decreased musculoskeletal complaints by 15.10% (p < 0.05), fatigue by 22.06% (p < 0.05), workload by 21.90% (p < 0.05) and boredom by 15.85% (p < 0.05), and increased work motivation by 37.68% (p < 0.05). It is concluded that the implementation of ergonomics in parking management improves the quality of the parking attendants' work through (1) decreased musculoskeletal complaints, (2) decreased fatigue, and (3) decreased workload, along with decreased boredom and increased work motivation.

  15. PHYSICS REQUIRES A SIMPLE LOW MACH NUMBER FLOW TO BE COMPRESSIBLE

    EPA Science Inventory

    Radial, laminar, plane, low velocity flow represents the simplest, non-linear fluid dynamics problem. Ostensibly this apparently trivial flow could be solved using the incompressible Navier-Stokes equations, universally believed to be adequate for such problems. Most researchers ...

  16. Assessment of a Group Activity Based Educational Method to Teach Research Methodology to Undergraduate Medical Students of a Rural Medical College in Gujarat, India.

    PubMed

    Kumar, Dinesh; Singh, Uday Shankar; Solanki, Rajanikant

    2015-07-01

    Early undergraduate exposure to research helps in producing physicians who are better equipped to meet their professional needs, especially analytical skills. To assess the effectiveness and acceptability of a small-group method in teaching research methodology. Sixth semester medical undergraduates (III MBBS-part1) of a self-financed rural medical college. The workshop lasted two full days, each consisting of two 30-minute faculty sessions, followed by about four hours of group activity and a presentation by students at the end of the day. A simple 8-step approach was used. These steps are Identify a Problem, Refine the Problem, Determine a Solution, Frame the Question, Develop a Protocol, Take Action, Write the Report and Share your Experience. A pre-test and post-test assessment was carried out using a questionnaire, followed by anonymous feedback at the end of the workshop. The responses were evaluated by a blinded evaluator. There were 95 (94.8%) valid responses from the 99 students who attended the workshop. The mean pre-test and post-test scores were 4.21 and 10.37, respectively, and the difference was significant by the Wilcoxon signed-rank test (p<0.001). The median feedback score regarding relevance, skill learning, quality of facilitation, and gain in knowledge was 4, and that for the experience of group learning was 5, on a Likert scale of 1-5. There were no significant differences between male and female students in terms of pre-test and post-test scores and overall gain in scores. A participatory research methodology workshop can play a significant role in teaching research to undergraduate students in an interesting manner. However, the long-term effect of such workshops needs to be evaluated.

  17. Automated symbolic calculations in nonequilibrium thermodynamics

    NASA Astrophysics Data System (ADS)

    Kröger, Martin; Hütter, Markus

    2010-12-01

    We cast the Jacobi identity for continuous fields into a local form which eliminates the need to perform any partial integration at the expense of performing variational derivatives. This allows us to test the Jacobi identity definitely and efficiently and to provide equations between different components defining a potential Poisson bracket. We provide a simple Mathematica TM notebook which allows one to perform this task conveniently, and which offers some additional functionalities of use within the framework of nonequilibrium thermodynamics: reversible equations of change for fields, and the conservation of entropy during the reversible dynamics. Program summary: Program title: Poissonbracket.nb Catalogue identifier: AEGW_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGW_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 227 952 No. of bytes in distributed program, including test data, etc.: 268 918 Distribution format: tar.gz Programming language: Mathematica TM 7.0 Computer: Any computer running Mathematica TM 6.0 and later versions Operating system: Linux, MacOS, Windows RAM: 100 Mb Classification: 4.2, 5, 23 Nature of problem: Testing the Jacobi identity can be a very complex task depending on the structure of the Poisson bracket. The Mathematica TM notebook provided here solves this problem using a novel symbolic approach based on inherent properties of the variational derivative, highly suitable for the present tasks. As a by-product, calculations performed with the Poisson bracket assume a compact form. Solution method: The problem is first cast into a form which eliminates the need to perform partial integration for arbitrary functionals at the expense of performing variational derivatives. The corresponding equations are conveniently obtained using the symbolic programming environment Mathematica TM. Running time: For the test cases and most typical cases in the literature, the running time is of the order of seconds or minutes, respectively.
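    The notebook itself manipulates functionals of continuous fields with variational derivatives in Mathematica; purely as a finite-dimensional illustration of the same kind of check, the sympy sketch below verifies the Jacobi identity of a Poisson bivector (here the standard rigid-body bracket). The bivector condition used is the textbook one; nothing here reproduces the paper's code.

```python
import sympy as sp

# Poisson bivector of the rigid-body (so(3)) bracket: {m_i, m_j} = -eps_ijk m_k
m1, m2, m3 = sp.symbols('m1 m2 m3')
x = [m1, m2, m3]
J = sp.Matrix([[0, -m3, m2],
               [m3, 0, -m1],
               [-m2, m1, 0]])

def jacobi_identity_holds(J, x):
    """Check sum_l ( J[i,l] d_l J[j,k] + J[j,l] d_l J[k,i] + J[k,l] d_l J[i,j] ) = 0
    for all index triples, which is equivalent to the Jacobi identity for the bracket."""
    n = len(x)
    for i in range(n):
        for j in range(n):
            for k in range(n):
                s = sum(J[i, l] * sp.diff(J[j, k], x[l])
                        + J[j, l] * sp.diff(J[k, i], x[l])
                        + J[k, l] * sp.diff(J[i, j], x[l]) for l in range(n))
                if sp.simplify(s) != 0:
                    return False
    return True

print(jacobi_identity_holds(J, x))  # True: the rigid-body bracket is a valid Poisson bracket
```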

  18. Score Big! Pinball Project Teaches Simple Machine Basics

    ERIC Educational Resources Information Center

    Freeman, Matthew K.

    2009-01-01

    This article presents a design brief for a pinball game. The design brief helps students get a better grasp on the operation and uses of simple machines. It also gives them an opportunity to develop their problem-solving skills and use design skills to complete an interesting, fun product. (Contains 2 tables and 3 photos.)

  19. Simple Spreadsheet Models For Interpretation Of Fractured Media Tracer Tests

    EPA Science Inventory

    An analysis of a gas-phase partitioning tracer test conducted through fractured media is discussed within this paper. The analysis employed matching eight simple mathematical models to the experimental data to determine transport parameters. All of the models tested; two porous...

  20. Population and Pollution in the United States

    ERIC Educational Resources Information Center

    Ridker, Ronald G.

    1972-01-01

    Analyzes a simple model relating environmental pollution to population and per capita income and concludes that no single cause is sufficient to explain.... environmental problems, and that there is little about the pollution problems.... of the next 50 years that is inevitable." (Author/AL)

  1. Speededness and Adaptive Testing

    ERIC Educational Resources Information Center

    van der Linden, Wim J.; Xiong, Xinhui

    2013-01-01

    Two simple constraints on the item parameters in a response-time model are proposed to control the speededness of an adaptive test. As the constraints are additive, they can easily be included in the constraint set for a shadow-test approach (STA) to adaptive testing. Alternatively, a simple heuristic is presented to control speededness in plain…

  2. Portfolio optimization using fuzzy linear programming

    NASA Astrophysics Data System (ADS)

    Pandit, Purnima K.

    2013-09-01

    Portfolio Optimization (PO) is a problem in finance in which an investor tries to maximize return and minimize risk by carefully choosing different assets. Expected return and risk are the most important parameters with regard to optimal portfolios. In its simple form, PO can be modeled as a quadratic programming problem, which can be put into an equivalent linear form. PO problems with fuzzy parameters can be solved as multi-objective fuzzy linear programming problems. In this paper we give the solution to such problems with an illustrative example.
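    For reference, the crisp mean-variance problem that such fuzzy formulations generalise is the quadratic program below, with expected returns μ, covariance matrix Σ, portfolio weights x, and risk-aversion parameter λ. The abstract does not state which linear or fuzzy reformulation it uses, so only the standard starting point is shown.

```latex
\max_{x \in \mathbb{R}^{n}} \;\; \mu^{\top} x \;-\; \lambda\, x^{\top} \Sigma\, x
\qquad \text{subject to} \quad \sum_{i=1}^{n} x_{i} = 1, \quad x_{i} \ge 0 .
```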

  3. Method for universal detection of two-photon polarization entanglement

    NASA Astrophysics Data System (ADS)

    Bartkiewicz, Karol; Horodecki, Paweł; Lemr, Karel; Miranowicz, Adam; Życzkowski, Karol

    2015-03-01

    Detecting and quantifying quantum entanglement of a given unknown state poses problems that are fundamentally important for quantum information processing. Surprisingly, no direct (i.e., without quantum tomography) universal experimental implementation of a necessary and sufficient test of entanglement has been designed even for a general two-qubit state. Here we propose an experimental method for detecting a collective universal witness, which is a necessary and sufficient test of two-photon polarization entanglement. It allows us to detect entanglement for any two-qubit mixed state and to establish tight upper and lower bounds on its amount. A different element of this method is the sequential character of its main components, which allows us to obtain relatively complicated information about quantum correlations with the help of simple linear-optical elements. As such, this proposal realizes a universal two-qubit entanglement test within the present state of the art of quantum optics. We show the optimality of our setup with respect to the minimal number of measured quantities.

  4. Equilibria of perceptrons for simple contingency problems.

    PubMed

    Dawson, Michael R W; Dupuis, Brian

    2012-08-01

    The contingency between cues and outcomes is fundamentally important to theories of causal reasoning and to theories of associative learning. Researchers have computed the equilibria of Rescorla-Wagner models for a variety of contingency problems, and have used these equilibria to identify situations in which the Rescorla-Wagner model is consistent, or inconsistent, with normative models of contingency. Mathematical analyses that directly compare artificial neural networks to contingency theory have not been performed, because of the assumed equivalence between the Rescorla-Wagner learning rule and the delta rule training of artificial neural networks. However, recent results indicate that this equivalence is not as straightforward as typically assumed, suggesting a strong need for mathematical accounts of how networks deal with contingency problems. One such analysis is presented here, where it is proven that the structure of the equilibrium for a simple network trained on a basic contingency problem is quite different from the structure of the equilibrium for a Rescorla-Wagner model faced with the same problem. However, these structural differences lead to functionally equivalent behavior. The implications of this result for the relationships between associative learning, contingency theory, and connectionism are discussed.
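    The paper's analysis is mathematical, but the two update rules it compares are easy to state. The Python sketch below runs both on a basic 2 x 2 contingency problem (cue present or absent, outcome present or absent) with a context cue that is always on; the contingency probabilities, learning rates, and trial counts are illustrative assumptions. The Rescorla-Wagner cue strength approaches the contingency ΔP, whereas the logistic perceptron reaches a differently structured equilibrium that encodes the same conditional probabilities.

```python
import numpy as np

rng = np.random.default_rng(1)
P_OUTCOME_GIVEN_CUE, P_OUTCOME_GIVEN_NO_CUE = 0.8, 0.4   # illustrative contingency

def trial():
    """One trial: input vector [target cue, context (always on)] and the outcome."""
    cue = rng.random() < 0.5
    p = P_OUTCOME_GIVEN_CUE if cue else P_OUTCOME_GIVEN_NO_CUE
    return np.array([float(cue), 1.0]), float(rng.random() < p)

def rescorla_wagner(n_trials=50000, alpha=0.05):
    V = np.zeros(2)                        # associative strengths for [cue, context]
    for _ in range(n_trials):
        x, lam = trial()
        V += alpha * (lam - V @ x) * x     # delta-V = alpha * (lambda - summed strength of present cues)
    return V

def logistic_perceptron(n_trials=50000, eta=0.5):
    w = np.zeros(2)                        # weights for [cue, context]
    for _ in range(n_trials):
        x, t = trial()
        o = 1.0 / (1.0 + np.exp(-(w @ x)))
        w += eta * (t - o) * o * (1 - o) * x   # gradient-descent delta rule for a logistic unit
    return w

print("Rescorla-Wagner V:", rescorla_wagner())    # V_cue -> P(O|C) - P(O|~C) = 0.4
print("Perceptron w     :", logistic_perceptron())  # w_context -> logit(0.4), w_cue + w_context -> logit(0.8)
```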

  5. A New Look at Two Old Problems in Electrostatics, or Much Ado with Hemispheres

    ERIC Educational Resources Information Center

    DasGupta, Ananda

    2007-01-01

    In this paper, we take a look at two electrostatics problems concerning hemispheres. The first problem concerns the direction of the electric field on the flat cap of a uniformly charged hemisphere. We show that the symmetry and principle of superposition coupled with Gauss's law gives a delightfully simple solution and then go on to examine how…

  6. "Cast Your Net Widely": Three Steps to Expanding and Refining Your Problem before Action Learning Application

    ERIC Educational Resources Information Center

    Reese, Simon R.

    2015-01-01

    This paper reflects upon a three-step process to expand the problem definition in the early stages of an action learning project. The process created a community-powered problem-solving approach within the action learning context. The simple three steps expanded upon in the paper create independence, dependence, and inter-dependence to aid the…

  7. The King and Prisoner Puzzle: A Way of Introducing the Components of Logical Structures

    ERIC Educational Resources Information Center

    Roh, Kyeong Hah; Lee, Yong Hah; Tanner, Austin

    2016-01-01

    The purpose of this paper is to present issues related to students' understanding of logical components that arise when solving word problems. We designed a logic problem called the King and Prisoner Puzzle, a linguistically simple yet logically challenging problem. In this paper, we describe various student solutions to the puzzle and discuss the…

  8. Development and validation of the Salzburg COPD-screening questionnaire (SCSQ): a questionnaire development and validation study.

    PubMed

    Weiss, Gertraud; Steinacher, Ina; Lamprecht, Bernd; Kaiser, Bernhard; Mikes, Romana; Sator, Lea; Hartl, Sylvia; Wagner, Helga; Studnicka, M

    2017-01-26

    Chronic obstructive pulmonary disease prevalence rates are still high. However, the majority of subjects are not diagnosed. Strategies have to be implemented to overcome the problem of under-diagnosis. Questionnaires could be used to pre-select subjects for spirometry and thereby help reduce under-diagnosis. We report a brief, simple, self-administrable and validated chronic obstructive pulmonary disease questionnaire to increase the pre-test probability for chronic obstructive pulmonary disease diagnosis in subjects undergoing confirmatory spirometry. In 2005, we completed the Austrian Burden of Obstructive Lung Disease study in 1258 subjects aged >40 years. Post-bronchodilator spirometry was performed, and non-reversible airflow limitation was defined by an FEV1/FVC ratio below the lower limit of normal. Questions from the Salzburg chronic obstructive pulmonary disease screening questionnaire were selected using a logistic regression model, and risk scores were based on regression coefficients. A training sub-sample (n = 800) was used to create the score, and a test sub-sample (n = 458) was used to test it. In 2008, an external validation study was done, using the same protocol in 775 patients from primary care. The Salzburg chronic obstructive pulmonary disease screening questionnaire was composed of items related to "breathing problems", "wheeze", "cough", "limitation of physical activity", and "smoking". At the ≥2-point cut-off of the Salzburg chronic obstructive pulmonary disease screening questionnaire, sensitivity was 69.1% [95%CI: 56.6%; 79.5%], specificity 60.0% [95%CI: 54.9%; 64.9%], the positive predictive value 23.2% [95%CI: 17.7%; 29.7%] and the negative predictive value 91.8% [95%CI: 87.5%; 95.7%] to detect post-bronchodilator airflow limitation. The external validation study in primary care confirmed these findings. The Salzburg chronic obstructive pulmonary disease screening questionnaire was derived from the highly standardized Burden of Obstructive Lung Disease study. This validated and easy-to-use questionnaire can help to increase the efficiency of chronic obstructive pulmonary disease case-finding. QUESTIONNAIRE FOR PRE-SCREENING POTENTIAL SUFFERERS: Scientists in Austria have developed a brief, simple questionnaire to identify patients likely to have early-stage chronic lung disease. Chronic obstructive pulmonary disease (COPD) is notoriously difficult to diagnose, and the condition often causes irreversible lung damage before it is identified. Finding a simple, cost-effective method of pre-screening patients with suspected early-stage COPD could potentially improve treatment responses and limit the burden of extensive lung function ('spirometry') tests on health services. Gertraud Weiss at Paracelsus Medical University, Austria, and co-workers have developed and validated an easy-to-use, self-administered questionnaire that could prove effective for pre-screening patients. The team trialed the five-point Salzburg COPD-screening questionnaire on 1258 patients. Patients scoring 2 points or above on the questionnaire underwent spirometry tests. The questionnaire seems to provide a sensitive and cost-effective way of pre-selecting patients for spirometry referral.

  9. Detection of unknown targets from aerial camera and extraction of simple object fingerprints for the purpose of target reacquisition

    NASA Astrophysics Data System (ADS)

    Mundhenk, T. Nathan; Ni, Kang-Yu; Chen, Yang; Kim, Kyungnam; Owechko, Yuri

    2012-01-01

    An aerial multiple camera tracking paradigm needs to not only spot unknown targets and track them, but also needs to know how to handle target reacquisition as well as target handoff to other cameras in the operating theater. Here we discuss such a system which is designed to spot unknown targets, track them, segment the useful features and then create a signature fingerprint for the object so that it can be reacquired or handed off to another camera. The tracking system spots unknown objects by subtracting background motion from observed motion allowing it to find targets in motion, even if the camera platform itself is moving. The area of motion is then matched to segmented regions returned by the EDISON mean shift segmentation tool. Whole segments which have common motion and which are contiguous to each other are grouped into a master object. Once master objects are formed, we have a tight bound on which to extract features for the purpose of forming a fingerprint. This is done using color and simple entropy features. These can be placed into a myriad of different fingerprints. To keep data transmission and storage size low for camera handoff of targets, we try several different simple techniques. These include Histogram, Spatiogram and Single Gaussian Model. These are tested by simulating a very large number of target losses in six videos over an interval of 1000 frames each from the DARPA VIVID video set. Since the fingerprints are very simple, they are not expected to be valid for long periods of time. As such, we test the shelf life of fingerprints. This is how long a fingerprint is good for when stored away between target appearances. Shelf life gives us a second metric of goodness and tells us if a fingerprint method has better accuracy over longer periods. In videos which contain multiple vehicle occlusions and vehicles of highly similar appearance we obtain a reacquisition rate for automobiles of over 80% using the simple single Gaussian model compared with the null hypothesis of <20%. Additionally, the performance for fingerprints stays well above the null hypothesis for as much as 800 frames. Thus, a simple and highly compact single Gaussian model is useful for target reacquisition. Since the model is agnostic to view point and object size, it is expected to perform as well on a test of target handoff. Since some of the performance degradation is due to problems with the initial target acquisition and tracking, the simple Gaussian model may perform even better with an improved initial acquisition technique. Also, since the model makes no assumption about the object to be tracked, it should be possible to use it to fingerprint a multitude of objects, not just cars. Further accuracy may be obtained by creating manifolds of objects from multiple samples.
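    The abstract names the fingerprint variants (histogram, spatiogram, single Gaussian) without giving their details. As a minimal sketch of the single-Gaussian idea, the Python below summarises a segmented target patch by the mean and covariance of its pixel colours and compares two fingerprints with the Bhattacharyya distance between Gaussians; the feature choice (plain RGB rather than the paper's colour-plus-entropy features), the distance measure, and any matching threshold are assumptions for illustration.

```python
import numpy as np

def gaussian_fingerprint(patch_rgb):
    """Summarise a segmented target patch (H x W x 3 float array) by the mean
    vector and covariance matrix of its pixel colours."""
    pixels = patch_rgb.reshape(-1, 3)
    mu = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(3)   # regularise the covariance
    return mu, cov

def bhattacharyya_distance(fp_a, fp_b):
    """Distance between two Gaussian fingerprints; smaller means more alike."""
    (mu_a, cov_a), (mu_b, cov_b) = fp_a, fp_b
    cov = 0.5 * (cov_a + cov_b)
    diff = mu_a - mu_b
    term1 = 0.125 * diff @ np.linalg.solve(cov, diff)
    term2 = 0.5 * np.log(np.linalg.det(cov) /
                         np.sqrt(np.linalg.det(cov_a) * np.linalg.det(cov_b)))
    return term1 + term2

# toy usage; in practice the patches come from the segmented master object
rng = np.random.default_rng(0)
patch1 = rng.random((40, 60, 3))
patch2 = np.clip(patch1 * 0.95 + 0.02, 0, 1)   # slightly re-lit view of the same object
d = bhattacharyya_distance(gaussian_fingerprint(patch1), gaussian_fingerprint(patch2))
print(d)   # compare against a (hypothetical) reacquisition threshold
```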

  10. Total peak shape analysis: detection and quantitation of concurrent fronting, tailing, and their effect on asymmetry measurements.

    PubMed

    Wahab, M Farooq; Patel, Darshan C; Armstrong, Daniel W

    2017-08-04

    Most peak shapes obtained in separation science depart from linearity for various reasons such as thermodynamic, kinetic, or flow based effects. An indication of the nature of asymmetry often helps in problem solving e.g. in column overloading, slurry packing, buffer mismatch, and extra-column band broadening. However, existing tests for symmetry/asymmetry only indicate the skewness in excess (tail or front) and not the presence of both. Two simple graphical approaches are presented to analyze peak shapes typically observed in gas, liquid, and supercritical fluid chromatography as well as capillary electrophoresis. The derivative test relies on the symmetry of the inflection points and the maximum and minimum values of the derivative. The Gaussian test is a constrained curve fitting approach and determines the residuals. The residual pattern graphically allows the user to assess the problematic regions in a given peak, e.g., concurrent tailing or fronting, something which cannot be easily done with other current methods. The template provided in MS Excel automates this process. The total peak shape analysis extracts the peak parameters from the upper sections (>80% height) of the peak rather than the half height as is done conventionally. A number of situations are presented and the utility of this approach in solving practical problems is demonstrated. Copyright © 2017 Elsevier B.V. All rights reserved.
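    The authors' template is in Excel and is not reproduced in the abstract; the Python sketch below only illustrates the idea of the Gaussian test as described there: fit a Gaussian to the upper part of a peak (here, points above 80% of the maximum) and inspect the residuals on the leading and trailing sides, where a systematic excess flags fronting or tailing. The synthetic peak, the fit details, and the residual summary are assumptions for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(t, a, t0, sigma):
    return a * np.exp(-0.5 * ((t - t0) / sigma) ** 2)

def gaussian_test(t, y, top_fraction=0.8):
    """Fit a Gaussian to the top of the peak and return residuals on the
    leading (t < apex) and trailing (t >= apex) sides."""
    top = y >= top_fraction * y.max()
    p0 = (y.max(), t[np.argmax(y)], 0.5 * (t[top].max() - t[top].min()))
    popt, _ = curve_fit(gaussian, t[top], y[top], p0=p0)
    resid = y - gaussian(t, *popt)
    return resid[t < popt[1]], resid[t >= popt[1]]

# toy usage: a tailing peak built by convolving a Gaussian with an exponential decay
t = np.linspace(0, 10, 500)
kernel = np.exp(-np.linspace(0, 5, 500) / 0.6)
y = np.convolve(gaussian(t, 1.0, 4.0, 0.4), kernel, mode='full')[:500]
front_res, tail_res = gaussian_test(t, y / y.max())
print(front_res.sum(), tail_res.sum())   # excess positive residuals appear on the trailing side
```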

  11. Angular-Rate Estimation Using Star Tracker Measurements

    NASA Technical Reports Server (NTRS)

    Azor, R.; Bar-Itzhack, I.; Deutschmann, Julie K.; Harman, Richard R.

    1999-01-01

    This paper presents algorithms for estimating the angular-rate vector of satellites using quaternion measurements. Two approaches are compared, one that uses differentiated quaternion measurements to yield coarse rate measurements which are then fed into two different estimators. In the other approach the raw quaternion measurements themselves are fed directly into the two estimators. The two estimators rely on the ability to decompose the non-linear rate dependent part of the rotational dynamics equation of a rigid body into a product of an angular-rate dependent matrix and the angular-rate vector itself. This decomposition, which is not unique, enables the treatment of the nonlinear spacecraft dynamics model as a linear one and, consequently, the application of a Pseudo-Linear Kalman Filter (PSELIKA). It also enables the application of a special Kalman filter which is based on the use of the solution of the State Dependent Algebraic Riccati Equation (SDARE) in order to compute the Kalman gain matrix and thus eliminates the need to propagate and update the filter covariance matrix. The replacement of the elaborate rotational dynamics by a simple first-order Markov model is also examined. In this paper, special consideration is given to the problem of delayed quaternion measurements. Two solutions to this problem are suggested and tested. Real Rossi X-Ray Timing Explorer (RXTE) data is used to test these algorithms, and results of these tests are presented.

  12. Angular-Rate Estimation using Star Tracker Measurements

    NASA Technical Reports Server (NTRS)

    Azor, R.; Bar-Itzhack, Itzhack Y.; Deutschmann, Julie K.; Harman, Richard R.

    1999-01-01

    This paper presents algorithms for estimating the angular-rate vector of satellites using quaternion measurements. Two approaches are compared, one that uses differentiated quaternion measurements to yield coarse rate measurements which are then fed into two different estimators. In the other approach the raw quaternion measurements themselves are fed directly into the two estimators. The two estimators rely on the ability to decompose the non-linear rate dependent part of the rotational dynamics equation of a rigid body into a product of an angular-rate dependent matrix and the angular-rate vector itself. This decomposition, which is not unique, enables the treatment of the nonlinear spacecraft dynamics model as a linear one and, consequently, the application of a Pseudo-Linear Kalman Filter (PSELIKA). It also enables the application of a special Kalman filter which is based on the use of the solution of the State Dependent Algebraic Riccati Equation (SDARE) in order to compute the Kalman gain matrix and thus eliminates the need to propagate and update the filter covariance matrix. The replacement of the elaborate rotational dynamics by a simple first order Markov model is also examined. In this paper a special consideration is given to the problem of delayed quaternion measurements. Two solutions to this problem are suggested and tested. Real Rossi X-Ray Timing Explorer (RXTE) data is used to test these algorithms, and results of these tests are presented.
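    Both records rely on the same (non-unique) decomposition of the rate-dependent part of the rigid-body dynamics. Written generically, with inertia matrix I, angular rate ω, external torques omitted, and the usual cross-product matrix, one such choice is shown below; the notation is generic and not copied from the papers.

```latex
I\dot{\boldsymbol{\omega}} \;=\; -\,\boldsymbol{\omega}\times I\boldsymbol{\omega}
\quad\Longrightarrow\quad
\dot{\boldsymbol{\omega}} \;=\; \underbrace{-\,I^{-1}\,[\boldsymbol{\omega}\times]\,I}_{A(\boldsymbol{\omega})}\,\boldsymbol{\omega},
\qquad
[\boldsymbol{\omega}\times] \;=\;
\begin{pmatrix} 0 & -\omega_{3} & \omega_{2} \\ \omega_{3} & 0 & -\omega_{1} \\ -\omega_{2} & \omega_{1} & 0 \end{pmatrix}.
```

    Evaluating A(ω) at the current estimate turns the dynamics into a formally linear model, which is what allows the pseudo-linear Kalman filter and the SDARE-based gain computation to be applied.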

  13. Clinical signs of early osteoarthritis: reproducibility and relation to x ray changes in 541 women in the general population.

    PubMed Central

    Hart, D J; Spector, T D; Brown, P; Wilson, P; Doyle, D V; Silman, A J

    1991-01-01

    The definition and classification of early clinically apparent osteoarthritis both in clinical situations and in epidemiological surveys remains a problem. Few data exist on the between-observer reproducibility of simple clinical methods of detecting hand and knee osteoarthritis in the population and their sensitivity and specificity as compared with radiography. Two observers first studied the reproducibility of a number of clinical signs in 41 middle aged women. Good rates of agreement were found for most of the clinical signs tested (kappa = 0.54-1.0). The more reproducible signs were then tested on a population of 541 women, aged 45-65, drawn from general practice, screening centres, and patients previously attending hospital for non-rheumatic problems. The major clinical signs used had a high specificity (87-99%) and lower sensitivity (20-49%) when compared with radiographs graded on the Kellgren and Lawrence scale (2+ = positive). When analysis was restricted to symptomatic radiographic osteoarthritis, levels of sensitivity were increased and specificity was lowered. These data show that certain physical signs of osteoarthritis are reproducible and may be used to identify clinical disease. They are not a substitute for radiographs, however, if radiographic change is regarded as the 'gold standard' of diagnosis. As the clinical signs tested seemed specific for osteoarthritis they may be of value in screening populations for clinical disease. PMID:1877852

  14. Translation, Cultural Adaptation and Validation of the Simple Shoulder Test to Spanish

    PubMed Central

    Arcuri, Francisco; Barclay, Fernando; Nacul, Ivan

    2015-01-01

    Background: The validation of widely used scales facilitates the comparison across international patient samples. Objective: The objective was to translate, culturally adapt and validate the Simple Shoulder Test into Argentinian Spanish. Methods: The Simple Shoulder Test was translated from English into Argentinian Spanish by two independent translators, translated back into English and evaluated for accuracy by an expert committee to correct possible discrepancies. It was then administered to 50 patients with different shoulder conditions. Psychometric properties were analyzed, including internal consistency, measured with Cronbach's alpha, and test-retest reliability at 15 days, measured with the intraclass correlation coefficient. Results: The internal consistency was an alpha of 0.808, evaluated as good. The test-retest reliability index as measured by the intraclass correlation coefficient (ICC) was 0.835, evaluated as excellent. Conclusion: The Simple Shoulder Test translation and its cultural adaptation to Argentinian Spanish demonstrated adequate internal reliability and validity, ultimately allowing for its use in comparisons with international patient samples.

  15. Determinants of Prelacteal Feeding in Rural Northern India

    PubMed Central

    Roy, Manas Pratim; Mohan, Uday; Singh, Shivendra Kumar; Singh, Vijay Kumar; Srivastava, Anand Kumar

    2014-01-01

    Background: Prelacteal feeding is an underestimated problem in a developing country like India, where the infant mortality rate is quite high. The present study tried to find out the factors determining prelacteal feeding in rural areas of north India. Methods: A cross-sectional study was conducted among recently delivered women of rural Uttar Pradesh, India. Multistage random sampling was used for selecting villages. From them, 352 recently delivered women were selected as the subjects, following systematic random sampling. The chi-square test and logistic regression were used to identify predictors of prelacteal feeding. Results: Overall, 40.1% of mothers gave prelacteal feeding to their newborns. Factors significantly associated with such practice, after simple logistic regression, were age, caste, socioeconomic status, and place of delivery. At the multivariate level, age (odds ratio (OR) = 1.76, 95% confidence interval (CI) = 1.13-2.74), caste, and place of delivery (OR = 2.23, 95% CI = 1.21-4.10) were found to determine prelacteal feeding significantly, indicating that young age, high caste, and home deliveries could affect the practice positively. Conclusions: The problem of prelacteal feeding is still prevalent in rural India. Age, caste, and place of delivery were associated with the problem. For ensuring neonatal health, the problem should be addressed with due gravity, with emphasis on exclusive breast feeding. PMID:24932400

  16. Error analysis of mathematics students who are taught by using the book of mathematics learning strategy in solving pedagogical problems based on Polya’s four-step approach

    NASA Astrophysics Data System (ADS)

    Halomoan Siregar, Budi; Dewi, Izwita; Andriani, Ade

    2018-03-01

    The purpose of this study is to analyse the types of errors students make, and their causes, in solving pedagogical problems. This is qualitative descriptive research, conducted on 34 mathematics education students in the 2017-2018 academic year. The data were obtained through interviews and tests and then analyzed in three stages: 1) data reduction, 2) data description, and 3) conclusions. The data were reduced by organizing and classifying them in order to obtain meaningful information. After reduction, the data were presented in simple narrative, graphical, and tabular forms to illustrate the students' errors clearly, and conclusions were then drawn from this information. The results of this study indicate that the students made various errors: 1) they answered something other than what the problem asked, because they misunderstood the problem; 2) they failed to plan a constructivism-based learning process, owing to a lack of understanding of how to design such learning; 3) they chose an inappropriate learning tool, because they did not understand what kind of learning tool was relevant to use.

  17. Integrated Aeroservoelastic Optimization: Status and Direction

    NASA Technical Reports Server (NTRS)

    Livne, Eli

    1999-01-01

    The interactions of lightweight flexible airframe structures, steady and unsteady aerodynamics, and wide-bandwidth active controls on modern airplanes lead to considerable multidisciplinary design challenges. More than 25 years of mathematical and numerical methods development, numerous basic research studies, simulations and wind-tunnel tests of simple models, wind-tunnel tests of complex models of real airplanes, as well as flight tests of actively controlled airplanes, have all contributed to the accumulation of a substantial body of knowledge in the area of aeroservoelasticity. A number of analysis codes, with the capability to model real airplane systems under the assumptions of linearity, have been developed. Many tests have been conducted, and results were correlated with analytical predictions. A selective sample of references covering aeroservoelastic testing programs from the 1960s to the early 1980s, as well as more recent wind-tunnel test programs of real or realistic configurations, is included in the References section of this paper. An examination of references 20-29 will reveal that in the course of development (or later modification) of almost every modern airplane with a high-authority active control system, there arose a need to face aeroservoelastic problems and aeroservoelastic design challenges.

  18. Efficient field testing for load rating railroad bridges

    NASA Astrophysics Data System (ADS)

    Schulz, Jeffrey L.; Commander, Brett C.

    1995-06-01

    As the condition of our infrastructure continues to deteriorate, and the loads carried by our bridges continue to increase, an ever-growing number of railroad and highway bridges require load limits. With safety and transportation costs at both ends of the spectrum, the need for accurate load rating is paramount. This paper describes a method that has been developed for efficient load testing and evaluation of short- and medium-span bridges. Through the use of a specially-designed structural testing system and efficient load test procedures, a typical bridge can be instrumented and tested at 64 points in less than one working day and with minimum impact on rail traffic. Various techniques are available to evaluate structural properties and obtain a realistic model. With field data, a simple finite element model is 'calibrated' and its accuracy is verified. Appropriate design and rating loads are applied to the resulting model and stress predictions are made. This technique has been performed on numerous structures to address specific problems and to provide accurate load ratings. The merits and limitations of this approach are discussed in the context of actual examples of both rail and highway bridges that were tested and evaluated.

  19. A new extrapolation cascadic multigrid method for three dimensional elliptic boundary value problems

    NASA Astrophysics Data System (ADS)

    Pan, Kejia; He, Dongdong; Hu, Hongling; Ren, Zhengyong

    2017-09-01

    In this paper, we develop a new extrapolation cascadic multigrid method, which makes it possible to solve three-dimensional elliptic boundary value problems with over 100 million unknowns on a desktop computer in half a minute. First, by combining Richardson extrapolation and quadratic finite element (FE) interpolation for the numerical solutions on two levels of grids (the current and previous grids), we provide a quite good initial guess for the iterative solution on the next finer grid, which is a third-order approximation to the FE solution. The resulting large linear system from the FE discretization is then solved by the Jacobi-preconditioned conjugate gradient (JCG) method with this initial guess. Additionally, instead of performing a fixed number of iterations as used in existing cascadic multigrid methods, a relative residual tolerance is introduced in the JCG solver, which enables us to obtain conveniently the numerical solution with the desired accuracy. Moreover, a simple method based on the midpoint extrapolation formula is proposed to achieve higher-order accuracy on the finest grid cheaply and directly. Test results from four examples, including two smooth problems with constant and variable coefficients, an H3-regular problem, and an anisotropic problem, are reported to show that the proposed method has much better efficiency compared to the classical V-cycle and W-cycle multigrid methods. Finally, we present the reason why our method is highly efficient for solving these elliptic problems.
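    The abstract does not reproduce the extrapolation formulas. For a second-order-accurate discretisation, the Richardson step underlying such methods combines the solutions on the current grid (spacing h) and the previous grid (spacing 2h) as shown below; the extrapolated values, prolongated with quadratic FE interpolation to the next finer grid, then serve as the initial guess for the JCG iteration there. The exact operators used in the paper may differ from this generic form.

```latex
u^{*} \;\approx\; \frac{4\,u_{h} - u_{2h}}{3} \;=\; u_{h} + \frac{u_{h} - u_{2h}}{3}
\qquad \text{(Richardson extrapolation for an } O(h^{2}) \text{ method).}
```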

  20. Approximating the 0-1 Multiple Knapsack Problem with Agent Decomposition and Market Negotiation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smolinski, B.

    The 0-1 multiple knapsack problem appears in many domains from financial portfolio management to cargo ship stowing. Methods for solving it range from approximate algorithms, such as greedy algorithms, to exact algorithms, such as branch and bound. Approximate algorithms have no bounds on how poorly they perform and exact algorithms can suffer from exponential time and space complexities with large data sets. This paper introduces a market model based on agent decomposition and market auctions for approximating the 0-1 multiple knapsack problem, and an algorithm that implements the model (M(x)). M(x) traverses the solution space rather than getting caught in a local maximum, overcoming an inherent problem of many greedy algorithms. The use of agents ensures that infeasible solutions are not considered while traversing the solution space and that traversal of the solution space is not just random, but is also directed. M(x) is compared to a branch and bound algorithm (BB) and a simple greedy algorithm with a random shuffle (G(x)). The results suggest that M(x) is a good algorithm for approximating the 0-1 multiple knapsack problem. M(x) almost always found solutions that were close to optimal in a fraction of the time it took BB to run and with much less memory on large test data sets. M(x) usually performed better than G(x) on hard problems with correlated data.
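    The abstract describes but does not list G(x); the sketch below shows a greedy baseline of that flavour (a value-density ordering perturbed by a random shuffle, with first-fit assignment to knapsacks and several restarts). The shuffle strength, restart count, and tie-breaking are illustrative assumptions, and this is not the paper's market-based M(x) algorithm.

```python
import random

def greedy_shuffle_mkp(values, weights, capacities, restarts=100, seed=0):
    """Approximate the 0-1 multiple knapsack problem: keep the best of several
    greedy passes, each using a slightly shuffled value-density ordering."""
    rng = random.Random(seed)
    n, best_value, best_assign = len(values), 0, None
    base_order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    for _ in range(restarts):
        order = base_order[:]
        for i in range(n - 1):           # local random swaps perturb the pure greedy ordering
            if rng.random() < 0.3:
                order[i], order[i + 1] = order[i + 1], order[i]
        remaining = list(capacities)
        assign, total = {}, 0
        for item in order:               # first-fit: place item in the first knapsack with room
            for k, cap in enumerate(remaining):
                if weights[item] <= cap:
                    remaining[k] -= weights[item]
                    assign[item] = k
                    total += values[item]
                    break
        if total > best_value:
            best_value, best_assign = total, assign
    return best_value, best_assign

# toy instance: 6 items, 2 knapsacks
print(greedy_shuffle_mkp(values=[10, 7, 6, 5, 4, 3],
                         weights=[5, 4, 3, 3, 2, 2],
                         capacities=[7, 6]))
```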
