Sample records for standard test problems

  1. Comparison of Standardized Test Scores from Traditional Classrooms and Those Using Problem-Based Learning

    ERIC Educational Resources Information Center

    Needham, Martha Elaine

    2010-01-01

    This research compares differences between standardized test scores in problem-based learning (PBL) classrooms and a traditional classroom for 6th grade students using a mixed-method, quasi-experimental and qualitative design. The research shows that problem-based learning is as effective as traditional teaching methods on standardized tests. The…

  2. Standardized Definitions for Code Verification Test Problems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Doebling, Scott William

    This document contains standardized definitions for several commonly used code verification test problems. These definitions are intended to contain sufficient information to set up the test problem in a computational physics code. These definitions are intended to be used in conjunction with exact solutions to these problems generated using ExactPack, www.github.com/lanl/exactpack.
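    A minimal sketch of the code-verification workflow these definitions support is given below: a numerical result is compared against an exact reference solution and an observed order of convergence is extracted. The toy problem, function names, and grid sizes are illustrative assumptions and do not reflect the ExactPack API.

    ```python
    import numpy as np

    def numerical_derivative(nx):
        """Toy 'code under test': second-order central differences for d/dx sin(x)."""
        x = np.linspace(0.0, 2.0 * np.pi, nx)
        return x, np.gradient(np.sin(x), x, edge_order=2)

    def exact_derivative(x):
        """Exact solution, standing in for an ExactPack-style reference."""
        return np.cos(x)

    def l2_error(nx):
        x, approx = numerical_derivative(nx)
        return np.sqrt(np.mean((approx - exact_derivative(x)) ** 2))

    # Observed order of convergence from one grid refinement (expect roughly 2).
    e_coarse, e_fine = l2_error(100), l2_error(200)
    refinement = 199.0 / 99.0                      # ratio of coarse to fine grid spacing
    order = np.log(e_coarse / e_fine) / np.log(refinement)
    print(f"L2 errors {e_coarse:.2e} -> {e_fine:.2e}, observed order ~ {order:.2f}")
    ```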

  3. Status and analysis of test standard for on-board charger

    NASA Astrophysics Data System (ADS)

    Hou, Shuai; Liu, Haiming; Jiang, Li; Chen, Xichen; Ma, Junjie; Zhao, Bing; Wu, Zaiyuan

    2018-05-01

    This paper analyzes the test standards for on-board chargers (OBC). In the process of testing, we found several problems with the test methods and functional status, such as failure to follow up the latest test standards, loose estimation, and uncertainty and inconsistency in rectification. Finally, we put forward some viewpoints of our own on these problems.

  4. Proficiency Standards and Cut-Scores for Language Proficiency Tests.

    ERIC Educational Resources Information Center

    Moy, Raymond H.

    The problem of standard setting on language proficiency tests is often approached by the use of norms derived from the group being tested, a process commonly known as "grading on the curve." One particular problem with this ad hoc method of standard setting is that it will usually result in a fluctuating standard dependent on the particular group…

  5. Research Problems Associated with Limiting the Applied Force in Vibration Tests and Conducting Base-Drive Modal Vibration Tests

    NASA Technical Reports Server (NTRS)

    Scharton, Terry D.

    1995-01-01

    The intent of this paper is to make a case for developing and conducting vibration tests which are both realistic and practical (a question of tailoring versus standards). Tests are essential for finding things overlooked in the analyses. The best test is often the most realistic test which can be conducted within the cost and budget constraints. Some standards are essential, but the author believes more in the individual's ingenuity to solve a specific problem than in the application of standards which reduce problems (and technology) to their lowest common denominator. Force limited vibration tests and base-drive modal tests are two examples of realistic, but practical testing approaches. Since both of these approaches are relatively new, a number of interesting research problems exist, and these are emphasized herein.

  6. Automated Hypothesis Tests and Standard Errors for Nonstandard Problems with Description of Computer Package: A Draft.

    ERIC Educational Resources Information Center

    Lord, Frederic M.; Stocking, Martha

    A general computer program is described that will compute asymptotic standard errors and carry out significance tests for an endless variety of (standard and) nonstandard large-sample statistical problems, without requiring the statistician to derive asymptotic standard error formulas. The program assumes that the observations have a multinormal…

  7. Application of a Mixed Consequential Ethical Model to a Problem Regarding Test Standards.

    ERIC Educational Resources Information Center

    Busch, John Christian

    The work of the ethicist Charles Curran and the problem-solving strategy of the mixed consequentialist ethical model are applied to a traditional social science measurement problem--that of how to adjust a recommended standard in order to be fair to the test-taker and society. The focus is on criterion-referenced teacher certification tests.…

  8. Testing and the Testing Industry: A Third View.

    ERIC Educational Resources Information Center

    Williams, John D.

    Different viewpoints regarding educational testing are described. While some people advocate continuing reliance upon standardized tests, others favor the discontinuation of such achievement and intelligence tests. The author recommends a moderate view somewhere between these two extremes. Problems associated with standardized testing in the…

  9. The "Pedagogy of the Oppressed": The Necessity of Dealing with Problems in Students' Lives

    ERIC Educational Resources Information Center

    Reynolds, Patricia R.

    2007-01-01

    Students have problems in their lives, but can teachers help them? Should teachers help? The No Child Left Behind (NCLB) act and its emphasis on standardized test results have forced school systems to produce high scores, and in turn school administrators pressure teachers to prepare students for taking standardized tests. Teachers may want to…

  10. Recommended fine positioning test for the Development Test Flight (DTF-1) of the NASA Flight Telerobotic Servicer (FTS)

    NASA Technical Reports Server (NTRS)

    Dagalakis, N.; Wavering, A. J.; Spidaliere, P.

    1991-01-01

    Test procedures are proposed for the NASA DTF (Development Test Flight)-1 positioning tests of the FTS (Flight Telerobotic Servicer). The unique problems associated with the DTF-1 mission are discussed, standard robot performance tests and terminology are reviewed and a very detailed description of flight-like testing and analysis is presented. The major technical problem associated with DTF-1 is that only one position sensor can be used, which will be fixed at one location, with a working volume which is probably smaller than some of the robot errors to be measured. Radiation heating of the arm and the sensor could also cause distortions that would interfere with the test. Two robot performance testing committees have established standard testing procedures relevant to the DTF-1. Due to the technical problems associated with DTF-1, these procedures cannot be applied directly. These standard tests call for the use of several test positions at specific locations. Only one position, that of the position sensor, can be used by DTF-1. Off-line programming accuracy might be impossible to measure and in that case it will have to be replaced by forward kinematics accuracy.

  11. Standardized Tests as Outcome Measures for Evaluating Instructional Interventions in Mathematics and Science

    NASA Astrophysics Data System (ADS)

    Sussman, Joshua Michael

    This three-paper dissertation explores problems with the use of standardized tests as outcome measures for the evaluation of instructional interventions in mathematics and science. Investigators commonly use students' scores on standardized tests to evaluate the impact of instructional programs designed to improve student achievement. However, evidence suggests that the standardized tests may not measure, or may not measure well, the student learning caused by the interventions. This problem is a special case of a basic problem in applied measurement related to understanding whether a particular test provides accurate and useful information about the impact of an educational intervention. The three papers explore different aspects of the issue and highlight the potential benefits of (a) using particular research methods and of (b) implementing changes to educational policy that would strengthen efforts to reform instructional intervention in mathematics and science. The first paper investigates measurement problems related to the use of standardized tests in applied educational research. Analysis of the research projects funded by the Institute of Education Sciences (IES) Mathematics and Science Education Program permitted me to address three main research questions. One, how often are standardized tests used to evaluate new educational interventions? Two, do the tests appear to measure the same thing that the intervention teaches? Three, do investigators establish validity evidence for the specific uses of the test? The research documents potential problems and actual problems related to the use of standardized tests in leading applied research, and suggests changes to policy that would address measurement issues and improve the rigor of applied educational research. The second paper explores the practical consequences of misalignment between an outcome measure and an educational intervention in the context of summative evaluation. Simulated evaluation data and a psychometric model of alignment grounded in item response modeling generate the results that address the following research question: how do differences between what a test measures and what an intervention teaches influence the results of an evaluation? The simulation derives a functional relationship between alignment, defined as the match between the test and the intervention, and treatment sensitivity, defined as the statistical power for detecting the impact of an intervention. The paper presents a new model of the effect of misalignment on the results of an evaluation and recommendations for outcome measure selection. The third paper documents the educational effectiveness of the Learning Mathematics through Representations (LMR) lesson sequence for students classified as English Learners (ELs). LMR is a research-based curricular unit designed to support upper elementary students' understandings of integers and fractions, areas considered foundational for the development of higher mathematics. The experimental evaluation contains a multilevel analysis of achievement data from two assessments: a standardized test and a researcher-developed assessment. The study coordinates the two sources of research data with a theoretical mechanism of action in order to rigorously document the effectiveness and educational equity of LMR for ELs using multiple sources of information.
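    The alignment-versus-power relationship that the second paper derives can be illustrated with a toy simulation (this is not the dissertation's item-response-based model; the effect size, sample sizes, and the device of scaling the detectable effect by an "alignment" fraction are illustrative assumptions):

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)

    def power(alignment, true_effect=0.4, n_per_group=100, n_sims=2000, alpha=0.05):
        """Fraction of simulated evaluations that detect the intervention
        when the outcome test captures only `alignment` * true_effect."""
        detected = 0
        for _ in range(n_sims):
            control = rng.normal(0.0, 1.0, n_per_group)
            treated = rng.normal(alignment * true_effect, 1.0, n_per_group)
            _, p = stats.ttest_ind(treated, control)
            detected += p < alpha
        return detected / n_sims

    for a in (1.0, 0.75, 0.5, 0.25):
        print(f"alignment={a:.2f}  simulated power ~ {power(a):.2f}")
    ```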

  12. 42 CFR 493.1451 - Standard: Technical supervisor responsibilities.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... testing samples; and (vi) Assessment of problem solving skills; and (9) Evaluating and documenting the... analysis and reporting of test results; (5) Resolving technical problems and ensuring that remedial actions...

  13. Proficiency Standards and Cut-Scores for Language Proficiency Tests.

    ERIC Educational Resources Information Center

    Moy, Raymond H.

    1984-01-01

    Discusses the problems associated with "grading on a curve," the approach often used for standard setting on language proficiency tests. Proposes four main steps presented in the setting of a non-arbitrary cut-score. These steps not only establish a proficiency standard checked by external criteria, but also check to see that the test covers the…

  14. Assembling Appliances Standards from a Basket of Functions

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Siderious, Hans-Paul; Meier, Alan

    2014-08-11

    Rapid innovation in product design challenges the current methodology for setting standards and labels, especially for electronics, software and networking. Major problems include defining the product, measuring its energy consumption, and choosing the appropriate metric and level for the standard. Most governments have tried to solve these problems by defining ever more specific product subcategories, along with their corresponding test methods and metrics. An alternative approach would treat each energy-using product as something that delivers a basket of functions. Then separate standards would be constructed for the individual functions that can be defined, tested, and evaluated. Case studies of thermostats, displays and network equipment are presented to illustrate the problems with the classical approach for setting standards and indicate the merits and drawbacks of the alternative. The functional approach appears best suited to products whose primary purpose is processing information and that have multiple functions.

  15. Rescuing Computerized Testing by Breaking Zipf's Law.

    ERIC Educational Resources Information Center

    Wainer, Howard

    2000-01-01

    Suggests that because of the nonlinear relationship between item usage and item security, the problems of test security posed by continuous administration of standardized tests cannot be resolved merely by increasing the size of the item pool. Offers alternative strategies to overcome these problems, distributing test items so as to avoid the…

  16. Validation of tsunami inundation model TUNA-RP using OAR-PMEL-135 benchmark problem set

    NASA Astrophysics Data System (ADS)

    Koh, H. L.; Teh, S. Y.; Tan, W. K.; Kh'ng, X. Y.

    2017-05-01

    A standard set of benchmark problems, known as OAR-PMEL-135, is developed by the US National Tsunami Hazard Mitigation Program for tsunami inundation model validation. Any tsunami inundation model must be tested for its accuracy and capability using this standard set of benchmark problems before it can be gainfully used for inundation simulation. The authors have previously developed an in-house tsunami inundation model known as TUNA-RP. This inundation model solves the two-dimensional nonlinear shallow water equations coupled with a wet-dry moving boundary algorithm. This paper presents the validation of TUNA-RP against the solutions provided in the OAR-PMEL-135 benchmark problem set. This benchmark validation testing shows that TUNA-RP can indeed perform inundation simulation with accuracy consistent with that in the tested benchmark problem set.
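    For reference, the depth-averaged nonlinear shallow water equations that TUNA-RP is described as solving are commonly written (here in non-conservative form, omitting bottom friction and Coriolis terms, which a particular implementation may add) as:

    ```latex
    \frac{\partial \eta}{\partial t} + \frac{\partial (hu)}{\partial x} + \frac{\partial (hv)}{\partial y} = 0, \qquad
    \frac{\partial u}{\partial t} + u\frac{\partial u}{\partial x} + v\frac{\partial u}{\partial y} + g\frac{\partial \eta}{\partial x} = 0, \qquad
    \frac{\partial v}{\partial t} + u\frac{\partial v}{\partial x} + v\frac{\partial v}{\partial y} + g\frac{\partial \eta}{\partial y} = 0,
    ```

    where η is the free-surface elevation, h the total water depth, (u, v) the depth-averaged velocities, and g the gravitational acceleration; the wet-dry algorithm tracks the moving shoreline where h approaches zero.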

  17. The impact of two multiple-choice question formats on the problem-solving strategies used by novices and experts.

    PubMed

    Coderre, Sylvain P; Harasym, Peter; Mandin, Henry; Fick, Gordon

    2004-11-05

    Pencil-and-paper examination formats, and specifically the standard, five-option multiple-choice question, have often been questioned as a means for assessing higher-order clinical reasoning or problem solving. This study first investigated whether two paper formats with differing numbers of alternatives (standard five-option and extended-matching questions) can test problem-solving abilities. Second, the impact of the number of alternatives on psychometric properties and problem-solving strategies was examined. Think-aloud protocols were collected to determine the problem-solving strategy used by experts and non-experts in answering Gastroenterology questions, across the two pencil-and-paper formats. The two formats demonstrated equal ability in testing problem-solving abilities, while the number of alternatives did not significantly impact psychometric properties or the problem-solving strategies utilized. These results support the notion that well-constructed multiple-choice questions can in fact test higher-order clinical reasoning. Furthermore, it can be concluded that in testing clinical reasoning, the question stem, or content, remains more important than the number of alternatives.

  18. Qualitative Differences in Real-Time Solution of Standardized Figural Analogies.

    ERIC Educational Resources Information Center

    Schiano, Diane J.; And Others

    Performance on standardized figural analogy tests is considered highly predictive of academic success. While information-processing models of analogy solution attribute performance differences to quantitative differences in processing parameters, the problem-solving literature suggests that qualitative differences in problem representation and…

  19. Some Practical Solutions to Standard-Setting Problems: The Georgia Teacher Certification Test Experience.

    ERIC Educational Resources Information Center

    Cramer, Stephen E.

    A standard-setting procedure was developed for the Georgia Teacher Certification Testing Program as tests in 30 teaching fields were revised. A list of important characteristics of a standard-setting procedure was derived, drawing on the work of R. A. Berk (1986). The best method was found to be a highly formalized judgmental, empirical Angoff…

  20. Beyond Testing: Seven Assessments of Students and Schools More Effective than Standardized Tests

    ERIC Educational Resources Information Center

    Meier, Deborah; Knoester, Matthew

    2017-01-01

    The authors of the book argue that a fundamentally complex problem--how to assess the knowledge of a child--cannot be reduced to a simple test score. "Beyond Testing" describes seven forms of assessment that are more effective than standardized test results: (1) student self-assessments, (2) direct teacher observations of students and…

  1. Youth Top Problems: using idiographic, consumer-guided assessment to identify treatment needs and to track change during psychotherapy.

    PubMed

    Weisz, John R; Chorpita, Bruce F; Frye, Alice; Ng, Mei Yi; Lau, Nancy; Bearman, Sarah Kate; Ugueto, Ana M; Langer, David A; Hoagwood, Kimberly E

    2011-06-01

    To complement standardized measurement of symptoms, we developed and tested an efficient strategy for identifying (before treatment) and repeatedly assessing (during treatment) the problems identified as most important by caregivers and youths in psychotherapy. A total of 178 outpatient-referred youths, 7-13 years of age, and their caregivers separately identified the 3 problems of greatest concern to them at pretreatment and then rated the severity of those problems weekly during treatment. The Top Problems measure thus formed was evaluated for (a) whether it added to the information obtained through empirically derived standardized measures (e.g., the Child Behavior Checklist [CBCL; Achenbach & Rescorla, 2001] and the Youth Self-Report [YSR; Achenbach & Rescorla, 2001]) and (b) whether it met conventional psychometric standards. The problems identified were significant and clinically relevant; most matched CBCL/YSR items while adding specificity. The top problems also complemented the information yield of the CBCL/YSR; for example, for 41% of caregivers and 79% of youths, the identified top problems did not correspond to any items of any narrowband scales in the clinical range. Evidence on test-retest reliability, convergent and discriminant validity, sensitivity to change, slope reliability, and the association of Top Problems slopes with standardized measure slopes supported the psychometric strength of the measure. The Top Problems measure appears to be a psychometrically sound, client-guided approach that complements empirically derived standardized assessment; the approach can help focus attention and treatment planning on the problems that youths and caregivers consider most important and can generate evidence on trajectories of change in those problems during treatment. (PsycINFO Database Record (c) 2011 APA, all rights reserved).

  2. Progressing From Initially Ambiguous Functional Analyses: Three Case Examples

    PubMed Central

    Tiger, Jeffrey H.; Fisher, Wayne W.; Toussaint, Karen A.; Kodak, Tiffany

    2009-01-01

    Most often functional analyses are initiated using a standard set of test conditions, similar to those described by Iwata, Dorsey, Slifer, Bauman, and Richman (1982/1994). These test conditions involve the careful manipulation of motivating operations, discriminative stimuli, and reinforcement contingencies to determine the events related to the occurrence and maintenance of problem behavior. Some individuals display problem behavior that is occasioned and reinforced by idiosyncratic or otherwise unique combinations of environmental antecedents and consequences of behavior, which are unlikely to be detected using these standard assessment conditions. For these individuals, modifications to the standard test conditions or the inclusion of novel test conditions may result in clearer assessment outcomes. The current study provides three case examples of individuals whose functional analyses were initially undifferentiated; however, modifications to the standard conditions resulted in the identification of behavioral functions and the implementation of effective function-based treatments. PMID:19233611

  3. Special Problems and Procedures for Identifying Minority Gifted Students.

    ERIC Educational Resources Information Center

    Bernal, Ernest M.

    The author reviews the key problems associated with generally accepted practices for identifying the gifted from the perspective of minority gifted students, particularly the gifted bilingual child; and presents some alternative approaches for testing. Noted among the shortcomings of testing minority students are that standardized tests are not…

  4. Development of Finnish Elementary Pupils' Problem-Solving Skills in Mathematics

    ERIC Educational Resources Information Center

    Laine, Anu; Näveri, Liisa; Ahtee, Maija; Pehkonen, Erkki

    2014-01-01

    The purpose of this study is to determine how Finnish pupils' problem-solving skills develop from the 3rd to 5th grade. As research data, we use one non-standard problem from pre- and post-test material from a three-year follow-up study, in the area of Helsinki, Finland. The problems in both tests consisted of four questions related to each other.…

  5. Applications of Automation Methods for Nonlinear Fracture Test Analysis

    NASA Technical Reports Server (NTRS)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    Using automated and standardized computer tools to calculate the pertinent test result values has several advantages, such as: (1) allowing high-fidelity solutions to complex nonlinear phenomena that would be impractical to express in written equation form; (2) eliminating errors associated with the interpretation and programming of analysis procedures from the text of test standards; (3) lessening the need for expertise in the areas of solid mechanics, fracture mechanics, numerical methods, and/or finite element modeling to achieve sound results; and (4) providing one computer tool and/or one set of solutions for all users for a more "standardized" answer. In summary, this approach allows a non-expert with rudimentary training to get the best practical solution based on the latest understanding with minimum difficulty. Other existing ASTM standards that cover complicated phenomena use standard computer programs: (1) ASTM C1340/C1340M-10, Standard Practice for Estimation of Heat Gain or Loss Through Ceilings Under Attics Containing Radiant Barriers by Use of a Computer Program; (2) ASTM F 2815, Standard Practice for Chemical Permeation through Protective Clothing Materials: Testing Data Analysis by Use of a Computer Program; (3) ASTM E2807, Standard Specification for 3D Imaging Data Exchange, Version 1.0. The verification, validation, and round-robin processes required of a computer tool closely parallel the methods that are used to ensure the solution validity for equations included in a test standard. The use of automated analysis tools allows the creation and practical implementation of advanced fracture mechanics test standards that capture the physics of a nonlinear fracture mechanics problem without adding undue burden or expense to the user. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.

  6. Using Quality Management Systems to Improve Test Development and Standards and to Promote Good Practice: A Case Study of Testing Italian as a Foreign Language

    ERIC Educational Resources Information Center

    Grego Bolli, Giuliana

    2014-01-01

    This article discusses the problem of quality in the production of language tests in the context of Italian language examinations. The concept of quality is closely related to the application of stated standards and related procedures. These standards, developed over the last thirty years, are mainly related to the concepts of the accountability…

  7. An assessment of RELAP5-3D using the Edwards-O'Brien Blowdown problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Tomlinson, E.T.; Aumiller, D.L.

    1999-07-01

    The RELAP5-3D (version bt) computer code was used to assess the United States Nuclear Regulatory Commission's Standard Problem 1 (Edwards-O'Brien Blowdown Test). The RELAP5-3D standard installation problem based on the Edwards-O'Brien Blowdown Test was modified to model the appropriate initial conditions and to represent the proper location of the instruments present in the experiment. The results obtained using the modified model are significantly different from the original calculation indicating the need to model accurately the experimental conditions if an accurate assessment of the calculational model is to be obtained.

  8. [Development of a software standardizing optical density with operation settings related to several limitations].

    PubMed

    Tu, Xiao-Ming; Zhang, Zuo-Heng; Wan, Cheng; Zheng, Yu; Xu, Jin-Mei; Zhang, Yuan-Yuan; Luo, Jian-Ping; Wu, Hai-Wei

    2012-12-01

    To develop software that standardizes optical density and normalizes the procedures and results of standardization, in order to effectively solve several problems that arise during the standardization of indirect ELISA results. The software was designed based on the I-STOD method, with operation settings to solve the problems that one might encounter during standardization. Matlab GUI was used as the development tool. The software was tested with the results of the detection of sera of persons from schistosomiasis japonica endemic areas. I-STOD V1.0 (WINDOWS XP/WIN 7, 0.5 GB) was successfully developed to standardize optical density. A series of serum samples from schistosomiasis japonica endemic areas was used to examine the operational effects of the I-STOD V1.0 software. The results indicated that the software successfully overcame several problems, including the reliability of the standard curve, the applicable scope of samples, and the determination of dilutions for samples outside that scope, so that I-STOD could be performed more conveniently and the results of standardization were more consistent. I-STOD V1.0 is a professional software tool based on I-STOD. It can be easily operated and can effectively standardize the test results of indirect ELISA.
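    The core step of converting optical densities to standardized values against a calibration curve (the general idea behind I-STOD; the specific algorithm is not reproduced here) can be sketched with a four-parameter logistic fit. The calibrator concentrations, optical densities, and starting values below are purely illustrative.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def four_pl(x, a, b, c, d):
        """Four-parameter logistic: optical density as a function of concentration."""
        return d + (a - d) / (1.0 + (x / c) ** b)

    # Illustrative calibrator concentrations and their measured optical densities.
    conc = np.array([0.5, 1, 2, 4, 8, 16, 32, 64], dtype=float)
    od = np.array([0.08, 0.15, 0.27, 0.48, 0.83, 1.30, 1.78, 2.05])

    params, _ = curve_fit(four_pl, conc, od, p0=[0.05, 1.0, 8.0, 2.2], maxfev=10000)
    a, b, c, d = params

    def od_to_concentration(sample_od):
        """Invert the fitted curve; valid only for ODs strictly between the asymptotes."""
        return c * ((a - d) / (sample_od - d) - 1.0) ** (1.0 / b)

    print(od_to_concentration(0.60))   # concentration estimate for one test sample
    ```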

  9. Clinical interpretation of antinuclear antibody tests in systemic rheumatic diseases

    PubMed Central

    Mercado, Monica Vázquez-Del; Chan, Edward K. L.

    2010-01-01

    Autoantibody tests have been used extensively in the diagnosis and follow-up of patients in rheumatology clinics. The immunofluorescent antinuclear antibody test using HEp-2 cells is still considered the gold standard for the screening of autoantibodies, and most specific autoantibodies are currently tested by ELISA as a next step. Among the many autoantibody specificities described, some have been established as clinically useful diagnostic markers and are included in the classification criteria of diseases. Despite a long history of routine tests and attempts to standardize such assays, there are still limitations and problems that clinicians need to be aware of. Clinicians should be able to use autoantibody tests more efficiently and effectively with a basic knowledge of the significance of, and potential problems in, autoantibody tests. PMID:19277826

  10. The Uses and Abuses of Educational Testing: Chicanos as a Case in Point. Chapter 8.

    ERIC Educational Resources Information Center

    Valencia, Richard R.; Aburto, Sofia

    A persistent problem in U.S. educational research has been how to explain the continuing low performance on standardized tests by certain racial and ethnic minority-group students, such as Chicanos. This chapter identifies abusive practices stemming from standardized testing that help to shape school failure among Chicano students, and discusses…

  11. Constructed-Response Problems

    ERIC Educational Resources Information Center

    Swinford, Ashleigh

    2016-01-01

    With rigor outlined in state and Common Core standards and the addition of constructed-response test items to most state tests, math constructed-response questions have become increasingly popular in today's classroom. Although constructed-response problems can present a challenge for students, they do offer a glimpse of students' learning through…

  12. When procedures discourage insight: epistemological consequences of prompting novice physics students to construct force diagrams

    NASA Astrophysics Data System (ADS)

    Kuo, Eric; Hallinen, Nicole R.; Conlin, Luke D.

    2017-05-01

    One aim of school science instruction is to help students become adaptive problem solvers. Though successful at structuring novice problem solving, step-by-step problem-solving frameworks may also constrain students' thinking. This study utilises a paradigm established by Heckler [(2010). Some consequences of prompting novice physics students to construct force diagrams. International Journal of Science Education, 32(14), 1829-1851] to test how cuing the first step in a standard framework affects undergraduate students' approaches and evaluation of solutions in physics problem solving. Specifically, prompting the construction of a standard diagram before problem solving increases the use of standard procedures, decreasing the use of a conceptual shortcut. Providing a diagram prompt also lowers students' ratings of informal approaches to similar problems. These results suggest that reminding students to follow typical problem-solving frameworks limits their views of what counts as good problem solving.

  13. The Best of Both Worlds

    ERIC Educational Resources Information Center

    Schneider, Jack; Feldman, Joe; French, Dan

    2016-01-01

    Relying on teachers' assessments for the information currently provided by standardized test scores would save instructional time, better capture the true abilities of diverse students, and reduce the problem of teaching to the test. A California high school is implementing standards-based reporting, ensuring that teacher-issued grades function as…

  14. Authentication: A Standard Problem or a Problem of Standards?

    PubMed

    Capes-Davis, Amanda; Neve, Richard M

    2016-06-01

    Reproducibility and transparency in biomedical sciences have been called into question, and scientists have been found wanting as a result. Putting aside deliberate fraud, there is evidence that a major contributor to lack of reproducibility is insufficient quality assurance of reagents used in preclinical research. Cell lines are widely used in biomedical research to understand fundamental biological processes and disease states, yet most researchers do not perform a simple, affordable test to authenticate these key resources. Here, we provide a synopsis of the problems we face and how standards can contribute to an achievable solution.

  15. An Experimental Copyright Moratorium: Study of a Proposed Solution to the Copyright Photocopying Problem. Final Report to the American Society for Testing and Materials (ASTM).

    ERIC Educational Resources Information Center

    Heilprin, Laurence B.

    The Committee to Investigate Copyright Problems (CICP), a non-profit organization dedicated to resolving the conflict known as the "copyright photocopying problem" was joined by the American Society for Testing and Materials (ASTM), a large national publisher of technical and scientific standards, in a plan to simulate a long-proposed…

  16. Creating School Communities through Music

    ERIC Educational Resources Information Center

    Marasco, Katelyn

    2011-01-01

    There are many problems facing educators today. Student retention, standardized test scores, and motivational issues are only a few. It seems that students are dropping out of school at higher rates and having more difficulty finding motivation to do well on their school work and standardized tests. This study sought to investigate strategies that…

  17. Robust Confidence Interval for a Ratio of Standard Deviations

    ERIC Educational Resources Information Center

    Bonett, Douglas G.

    2006-01-01

    Comparing variability of test scores across alternate forms, test conditions, or subpopulations is a fundamental problem in psychometrics. A confidence interval for a ratio of standard deviations is proposed that performs as well as the classic method with normal distributions and performs dramatically better with nonnormal distributions. A simple…
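    The classic normal-theory interval that the proposed method is benchmarked against can be sketched as follows (this is the textbook F-based interval, not Bonett's robust adjustment; the simulated score data are illustrative):

    ```python
    import numpy as np
    from scipy import stats

    def classic_sd_ratio_ci(x, y, conf=0.95):
        """Normal-theory CI for sigma_x / sigma_y via the F distribution.
        Known to perform poorly for nonnormal data, which motivates robust alternatives."""
        n1, n2 = len(x), len(y)
        ratio_sq = np.var(x, ddof=1) / np.var(y, ddof=1)
        alpha = 1.0 - conf
        lo = ratio_sq / stats.f.ppf(1.0 - alpha / 2.0, n1 - 1, n2 - 1)
        hi = ratio_sq / stats.f.ppf(alpha / 2.0, n1 - 1, n2 - 1)
        return np.sqrt(lo), np.sqrt(hi)

    rng = np.random.default_rng(1)
    form_a = rng.normal(50, 10, size=60)   # illustrative scores, alternate form A
    form_b = rng.normal(50, 12, size=60)   # illustrative scores, alternate form B
    print(classic_sd_ratio_ci(form_a, form_b))
    ```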

  18. The Performance of Chinese Primary School Students on Realistic Arithmetic Word Problems

    ERIC Educational Resources Information Center

    Xin, Ziqiang; Lin, Chongde; Zhang, Li; Yan, Rong

    2007-01-01

    Compared with standard arithmetic word problems demanding only the direct use of number operations and computations, realistic problems are harder to solve because children need to incorporate "real-world" knowledge into their solutions. Using the realistic word problem testing materials developed by Verschaffel, De Corte, and Lasure…

  19. Visual field defects may not affect safe driving.

    PubMed

    Dow, Jamie

    2011-10-01

    In Quebec a driver whose acquired visual field defect renders them ineligible for a driver's permit renewal may request an exemption from the visual field standard by demonstrating safe driving despite the defect. For safety reasons it was decided to attempt to identify predictors of failure on the road test in order to avoid placing driving evaluators in potentially dangerous situations when evaluating drivers with visual field defects. During a 4-month period in 2009 all requests for exemptions from the visual field standard were collected and analyzed. All available medical and visual field data were collated for 103 individuals, of whom 91 successfully completed the evaluation process and obtained a waiver. The collated data included age, sex, type of visual field defect, visual field characteristics, and concomitant medical problems. No single factor, or combination of factors, could predict failure of the road test. All 5 failures of the road test had cognitive problems but 6 of the successful drivers also had known cognitive problems. Thus, cognitive problems influence the risk of failure but do not predict certain failure. Most of the applicants for an exemption were able to complete the evaluation process successfully, thereby demonstrating safe driving despite their handicap. Consequently, jurisdictions that have visual field standards for their driving permit should implement procedures to evaluate drivers with visual field defects that render them unable to meet the standard but who wish to continue driving.

  20. 42 CFR 493.1233 - Standard: Complaint investigations.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... SERVICES (CONTINUED) STANDARDS AND CERTIFICATION LABORATORY REQUIREMENTS Quality System for Nonwaived Testing General Laboratory Systems § 493.1233 Standard: Complaint investigations. The laboratory must have a system in place to ensure that it documents all complaints and problems reported to the laboratory...

  1. Automation and results of Adjacent Band Emission testing

    DOT National Transportation Integrated Search

    2015-03-01

    Problem Statement: Multiple groups conduct tests in various ways; outcomes vary based on test setup and assumptions; no standard has been established to conduct such tests; spectrum is scarce and the need for compliance testing will only increase...

  2. The Virginia History Standards and the Cold War

    ERIC Educational Resources Information Center

    Altschuler, Glenn C.; Rauchway, Eric

    2002-01-01

    President George W. Bush's approach to education policy has earned him cautious plaudits from otherwise hostile critics, who see much to admire in the implementation of standards for education. However useful such standards may be for testing students' technical skills like arithmetic and reading, they create problems for less-standardized processes like…

  3. Assessment for American Indian and Alaska Native Learners. ERIC Digest.

    ERIC Educational Resources Information Center

    Bordeaux, Roger

    This digest examines the use of standardized, nationally normed testing in assessing the progress of American Indian and Alaska Native (AI/AN) students and describes alternative forms of assessment. For years, researchers have criticized the overuse of standardized, nationally normed tests to assess learner and school success. Problems with such…

  4. Standards, Testing, and Accountability: Misguided Intentions

    ERIC Educational Resources Information Center

    Alexander, James

    2011-01-01

    There are many factors affecting student achievement. It is misguided and a waste of time and effort to pursue the failed policies of more standards, tests, and accountability. The primary problems relative to student achievement are mainly societal. Rather than more failed policies, what our nation needs is a discussion about national values,…

  5. Progressing from initially ambiguous functional analyses: three case examples.

    PubMed

    Tiger, Jeffrey H; Fisher, Wayne W; Toussaint, Karen A; Kodak, Tiffany

    2009-01-01

    Most often functional analyses are initiated using a standard set of test conditions, similar to those described by Iwata, Dorsey, Slifer, Bauman, and Richman [Iwata, B. A., Dorsey, M. F., Slifer, K. J., Bauman, K. E., & Richman, G. S. (1994). Toward a functional analysis of self-injury. Journal of Applied Behavior Analysis, 27, 197-209 (Reprinted from Analysis and Intervention in Developmental Disabilities, 2, 3-20, 1982)]. These test conditions involve the careful manipulation of motivating operations, discriminative stimuli, and reinforcement contingencies to determine the events related to the occurrence and maintenance of problem behavior. Some individuals display problem behavior that is occasioned and reinforced by idiosyncratic or otherwise unique combinations of environmental antecedents and consequences of behavior, which are unlikely to be detected using these standard assessment conditions. For these individuals, modifications to the standard test conditions or the inclusion of novel test conditions may result in clearer assessment outcomes. The current study provides three case examples of individuals whose functional analyses were initially undifferentiated; however, modifications to the standard conditions resulted in the identification of behavioral functions and the implementation of effective function-based treatments.

  6. Quantum annealing of the traveling-salesman problem.

    PubMed

    Martonák, Roman; Santoro, Giuseppe E; Tosatti, Erio

    2004-11-01

    We propose a path-integral Monte Carlo quantum annealing scheme for the symmetric traveling-salesman problem, based on a highly constrained Ising-like representation, and we compare its performance against standard thermal simulated annealing. The Monte Carlo moves implemented are standard, and consist in restructuring a tour by exchanging two links (two-opt moves). The quantum annealing scheme, even with a drastically simple form of kinetic energy, appears definitely superior to the classical one, when tested on a 1002-city instance of the standard TSPLIB.
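    The classical baseline referred to here, thermal simulated annealing with two-opt moves, can be sketched as follows (the path-integral quantum annealing scheme itself is not reproduced; the random 50-city instance, cooling schedule, and step count are illustrative choices rather than the paper's settings):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    cities = rng.random((50, 2))                     # illustrative random instance
    dist = np.linalg.norm(cities[:, None] - cities[None, :], axis=-1)

    def tour_length(tour):
        """Total length of the closed tour visiting cities in the given order."""
        return dist[tour, np.roll(tour, -1)].sum()

    def simulated_annealing(n_steps=50_000, t_start=1.0, t_end=1e-3):
        tour = rng.permutation(len(cities))
        best_tour, best_len = tour.copy(), tour_length(tour)
        for step in range(n_steps):
            t = t_start * (t_end / t_start) ** (step / n_steps)  # geometric cooling
            i, j = sorted(rng.integers(0, len(cities), size=2))
            if j - i < 2:
                continue
            candidate = tour.copy()
            candidate[i:j] = candidate[i:j][::-1]                # two-opt: reverse one segment
            delta = tour_length(candidate) - tour_length(tour)
            if delta < 0 or rng.random() < np.exp(-delta / t):   # Metropolis acceptance
                tour = candidate
                if tour_length(tour) < best_len:
                    best_tour, best_len = tour.copy(), tour_length(tour)
        return best_tour, best_len

    print(f"best tour length found: {simulated_annealing()[1]:.3f}")
    ```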

  7. An Exploratory Study of Reading Comprehension in College Students After Acquired Brain Injury.

    PubMed

    Sohlberg, McKay Moore; Griffiths, Gina G; Fickas, Stephen

    2015-08-01

    This exploratory study builds on the small body of existing research investigating reading comprehension deficits in college students with acquired brain injury (ABI). Twenty-four community college students with ABI completed a battery of questionnaires and standardized tests to characterize self-perceptions of academic reading ability, performance on a standardized reading comprehension measure, and a variety of cognitive functions of this population. Half of the participants in the sample reported traumatic brain injury (n = 12) and half reported nontraumatic ABI (n = 12). College students with both traumatic and nontraumatic ABI cite problems with reading comprehension and academic performance postinjury. Mean performance on a standardized reading measure, the Nelson-Denny Reading Test (Brown, Fischo, & Hanna, 1993), was low to below average and was significantly correlated with performance on the Speed and Capacity of Language Processing Test (Baddeley, Emslie, & Nimmo-Smith, 1992). Injury status of traumatic versus nontraumatic ABI did not differentiate results. Regression analysis showed that measures of verbal attention and suppression obtained from the California Verbal Language Test-II (Delis, Kramer, Kaplan, & Ober, 2000) predicted total scores on the Nelson-Denny Reading Test. College students with ABI are vulnerable to reading comprehension problems. Results align with other research suggesting that verbal attention and suppression problems may be contributing factors.

  8. Below-Ambient and Cryogenic Thermal Testing

    NASA Technical Reports Server (NTRS)

    Fesmire, James E.

    2016-01-01

    Thermal insulation systems operating in below-ambient temperature conditions are inherently susceptible to moisture intrusion and vapor drive toward the cold side. The subsequent effects may include condensation, icing, cracking, corrosion, and other problems. Methods and apparatus for real-world thermal performance testing of below-ambient systems have been developed based on cryogenic boiloff calorimetry. New ASTM International standards on cryogenic testing and their extension to future standards for below-ambient testing of pipe insulation are reviewed.
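    The working principle of the boiloff calorimetry mentioned above can be stated in a simplified steady-state form (ignoring parasitic heat loads, which a real apparatus must account for): the heat leak through the specimen equals the boiloff mass flow rate times the latent heat of vaporization, from which an effective thermal conductivity follows,

    ```latex
    Q = \dot{m}\, h_{fg}, \qquad k_e = \frac{Q\, \Delta x}{A\, \Delta T},
    ```

    where ṁ is the measured boiloff rate, h_fg the latent heat of the cryogen, and k_e the effective thermal conductivity of a specimen of thickness Δx and area A under a boundary temperature difference ΔT.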

  9. Applications of Automation Methods for Nonlinear Fracture Test Analysis

    NASA Technical Reports Server (NTRS)

    Allen, Phillip A.; Wells, Douglas N.

    2013-01-01

    As fracture mechanics material testing evolves, the governing test standards continue to be refined to better reflect the latest understanding of the physics of the fracture processes involved. The traditional format of ASTM fracture testing standards, utilizing equations expressed directly in the text of the standard to assess the experimental result, is self-limiting in the complexity that can be reasonably captured. The use of automated analysis techniques to draw upon a rich, detailed solution database for assessing fracture mechanics tests provides a foundation for a new approach to testing standards that enables routine users to obtain highly reliable assessments of tests involving complex, non-linear fracture behavior. Herein, the case for automating the analysis of tests of surface cracks in tension in the elastic-plastic regime is utilized as an example of how such a database can be generated and implemented for use in the ASTM standards framework. The presented approach forms a bridge between the equation-based fracture testing standards of today and the next generation of standards solving complex problems through analysis automation.

  10. Evaluation of a Problem-based Learning Workshop Using Pre- and Post-test Objective Structured Clinical Examinations and Standardized Patients.

    ERIC Educational Resources Information Center

    Davis, Paul; Kvern, Brent; Donen, Neil; Andrews, Elaine; Nixon, Olga

    2000-01-01

    Pre/posttest data on 40 physicians who completed problem-based clinical scenarios on osteoporosis revealed that 39 showed improvement or modest change in postworkshop scores, especially in terms of management of male patients, determination of risk factors, and use and interpretation of bone density tests. (SK)

  11. Why Massachusetts Should Abandon the PARCC Tests and the 2011 Coleman et al English Language Arts Standards on Which the MCAS Tests Are Based. Testimony

    ERIC Educational Resources Information Center

    Stotsky, Sandra

    2015-01-01

    In this testimony, the author first describes her qualifications, as well as the lack of relevant qualifications in Common Core's standards writers and in most of the members of Common Core's Validation Committee, on which she served in 2009-2010. The author then details some of the many problems in the 2011 Massachusetts ELA standards, written by…

  12. Application of IUS equipment and experience to orbit transfer vehicles of the 90's

    NASA Astrophysics Data System (ADS)

    Bangsund, E.; Keeney, J.; Cowgill, E.

    1985-10-01

    This paper relates experiences with the IUS program and the application of that experience to Future Orbit Transfer Vehicles. More specifically it includes the implementation of the U.S. Air Force Space Division high reliability parts standard (SMASO STD 73-2C) and the component/system test standard (MIL-STD-1540A). Test results from the parts and component level testing and the resulting system level test program for fourteen IUS flight vehicles are discussed. The IUS program has had the highest compliance with these standards and thus offers a benchmark of experience for future programs demanding extreme reliability. In summary, application of the stringent parts standard has resulted in fewer failures during testing and the stringent test standard has eliminated design problems in the hardware. Both have been expensive in costs and schedules, and should be applied with flexibility.

  13. Environmental testing of block 3 solar cell modules. Part 1: Qualification testing of standard production modules

    NASA Technical Reports Server (NTRS)

    Griffith, J. S.

    1979-01-01

    Qualification tests of solar cell modules are described. These modules continue to show improvement over earlier type modules tested. Cell cracking and delamination are less prevalent, and interconnect problems and electrical degradation from environmental testing are now rare.

  14. Digital combined instrument transformer for automated electric power supply control systems of mining companies

    NASA Astrophysics Data System (ADS)

    Topolsky, D. V.; Gonenko, T. V.; Khatsevskiy, V. F.

    2017-10-01

    The present paper discusses ways to solve the problem of enhancing operating efficiency of automated electric power supply control systems of mining companies. According to the authors, one of the ways to solve this problem is intellectualization of the electric power supply control system equipment. To enhance efficiency of electric power supply control and electricity metering, it is proposed to use specially designed digital combined instrument current and voltage transformers. This equipment conforms to IEC 61850 international standard and is adapted for integration into the digital substation structure. Tests were performed to check conformity of an experimental prototype of the digital combined instrument current and voltage transformer with IEC 61850 standard. The test results have shown that the considered equipment meets the requirements of the standard.

  15. Testing accommodation or modification? The effects of integrated object representation on enhancing geometry performance in children with and without geometry difficulties.

    PubMed

    Zhang, Dake; Wang, Qiu; Ding, Yi; Liu, Jeremy Jian

    2014-01-01

    According to the National Council of Teachers of Mathematics, geometry and spatial sense are fundamental components of mathematics learning. However, learning disabilities (LD) research has shown that many K-12 students encounter particular geometry difficulties (GD). This study examined the effect of an integrated object representation (IOR) accommodation on the test performance of students with GD compared to students without GD. Participants were 118 elementary students who took a researcher-developed geometry problem solving test under both a standard testing condition and an IOR accommodation condition. A total of 36 students who were classified with GD scored below 40% correct in the geometry problem solving test in the standard testing condition, and 82 students who were classified without GD scored equal to or above 40% correct in the same test and condition. All students were tested in both standard testing condition and IOR accommodation condition. The results from both ANOVA and regression discontinuity (RD) analyses suggested that students with GD benefited more than students without GD from the IOR accommodation. Implications of the study are discussed in terms of providing accommodations for students with mathematics learning difficulties and recommending RD design in LD research. © Hammill Institute on Disabilities 2013.

  16. An Introduction to Multilinear Formula Score Theory. Measurement Series 84-4.

    ERIC Educational Resources Information Center

    Levine, Michael V.

    Formula score theory (FST) associates each multiple choice test with a linear operator and expresses all of the real functions of item response theory as linear combinations of the operator's eigenfunctions. Hard measurement problems can then often be reformulated as easier, standard mathematical problems. For example, the problem of estimating…

  17. Developing and validating the Youth Conduct Problems Scale-Rwanda: a mixed methods approach.

    PubMed

    Ng, Lauren C; Kanyanganzi, Frederick; Munyanah, Morris; Mushashi, Christine; Betancourt, Theresa S

    2014-01-01

    This study developed and validated the Youth Conduct Problems Scale-Rwanda (YCPS-R). Qualitative free listing (n = 74) and key informant interviews (n = 47) identified local conduct problems, which were compared to existing standardized conduct problem scales and used to develop the YCPS-R. The YCPS-R was cognitively tested by 12 youth and caregiver participants, and assessed for test-retest and inter-rater reliability in a sample of 64 youth. Finally, a purposive sample of 389 youth and their caregivers were enrolled in a validity study. Validity was assessed by comparing YCPS-R scores to conduct disorder, which was diagnosed with the Mini International Neuropsychiatric Interview for Children, and functional impairment scores on the World Health Organization Disability Assessment Schedule Child Version. ROC analyses assessed the YCPS-R's ability to discriminate between youth with and without conduct disorder. Qualitative data identified a local presentation of youth conduct problems that did not match previously standardized measures. Therefore, the YCPS-R was developed solely from local conduct problems. Cognitive testing indicated that the YCPS-R was understandable and required little modification. The YCPS-R demonstrated good reliability, construct, criterion, and discriminant validity, and fair classification accuracy. The YCPS-R is a locally-derived measure of Rwandan youth conduct problems that demonstrated good psychometric properties and could be used for further research.

  18. Assessment Strategies for Minority Groups.

    ERIC Educational Resources Information Center

    Sharma, Sarla

    1986-01-01

    The far-reaching ramifications of psychological assessment for minority children warrant that it be accurate, fair, and valid. This article addresses: (1) problems inherent in standardized testing; (2) a moratorium on intelligence testing; (3) alternate approaches to testing; and (4) guidelines for assessing ethnic minority groups. (LHW)

  19. Seed germination test for toxicity evaluation of compost: Its roles, problems and prospects.

    PubMed

    Luo, Yuan; Liang, Jie; Zeng, Guangming; Chen, Ming; Mo, Dan; Li, Guoxue; Zhang, Difang

    2018-01-01

    Compost is commonly used for the growth of plants and the remediation of environmental pollution. It is important to evaluate the quality of compost, and the seed germination test is a powerful tool for examining the toxicity of compost, which is the most important aspect of that quality. The test is now widely adopted, but the main problem is that the test results vary with different methods and seed species, which limits its development and application. The standardization of methods and the modelization of seeds can contribute to solving this problem. Additionally, according to the probabilistic theory of seed germination, the error caused by the analysis and judgment methods of the test results can be reduced. Here, we review the roles, problems and prospects of the seed germination test in studies of compost. Copyright © 2017 Elsevier Ltd. All rights reserved.

  20. Raising the Bar: Standards and Tests in California's High Schools. A Town Hall Meeting.

    ERIC Educational Resources Information Center

    Arnstine, Barbara; Futernick, Ken; Hodson, Timothy A.; Ostgaard, Kolleen

    In 1999, the LegiSchool Project planned to conduct the 12th in its series of televised Town Hall Meetings to provide a forum in which California high school students, educators, and legislators can engage in face-to-face dialogue about problems of mutual interest. For 1999, the topic is standards and tests in California high schools. This guide…

  1. ROC curves in clinical chemistry: uses, misuses, and possible solutions.

    PubMed

    Obuchowski, Nancy A; Lieber, Michael L; Wians, Frank H

    2004-07-01

    ROC curves have become the standard for describing and comparing the accuracy of diagnostic tests. Not surprisingly, ROC curves are used often by clinical chemists. Our aims were to observe how the accuracy of clinical laboratory diagnostic tests is assessed, compared, and reported in the literature; to identify common problems with the use of ROC curves; and to offer some possible solutions. We reviewed every original work using ROC curves and published in Clinical Chemistry in 2001 or 2002. For each article we recorded phase of the research, prospective or retrospective design, sample size, presence/absence of confidence intervals (CIs), nature of the statistical analysis, and major analysis problems. Of 58 articles, 31% were phase I (exploratory), 50% were phase II (challenge), and 19% were phase III (advanced) studies. The studies increased in sample size from phase I to III and showed a progression in the use of prospective designs. Most phase I studies were powered to assess diagnostic tests with ROC areas ≥0.70. Thirty-eight percent of studies failed to include CIs for diagnostic test accuracy or the CIs were constructed inappropriately. Thirty-three percent of studies provided insufficient analysis for comparing diagnostic tests. Other problems included dichotomization of the gold standard scale and inappropriate analysis of the equivalence of two diagnostic tests. We identify available software and make some suggestions for sample size determination, testing for equivalence in diagnostic accuracy, and alternatives to a dichotomous classification of a continuous-scale gold standard. More methodologic research is needed in areas specific to clinical chemistry.
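    One recurring problem noted here, missing or inappropriate confidence intervals for diagnostic accuracy, can be addressed with a simple percentile bootstrap. The sketch below computes the ROC area as a Mann-Whitney statistic together with a bootstrap CI; the data are simulated for illustration, and dedicated methods (e.g., DeLong's) or validated software may be preferable in practice.

    ```python
    import numpy as np

    def auc(scores_pos, scores_neg):
        """ROC area via the Mann-Whitney statistic (ties counted as 1/2)."""
        diffs = scores_pos[:, None] - scores_neg[None, :]
        return (diffs > 0).mean() + 0.5 * (diffs == 0).mean()

    def bootstrap_auc_ci(scores_pos, scores_neg, n_boot=2000, conf=0.95, seed=0):
        rng = np.random.default_rng(seed)
        aucs = np.empty(n_boot)
        for b in range(n_boot):
            p = rng.choice(scores_pos, size=len(scores_pos), replace=True)
            n = rng.choice(scores_neg, size=len(scores_neg), replace=True)
            aucs[b] = auc(p, n)
        lo, hi = np.percentile(aucs, [(1 - conf) / 2 * 100, (1 + conf) / 2 * 100])
        return auc(scores_pos, scores_neg), (lo, hi)

    # Illustrative analyte values for diseased and non-diseased groups.
    rng = np.random.default_rng(3)
    diseased, healthy = rng.normal(1.0, 1.0, 80), rng.normal(0.0, 1.0, 120)
    print(bootstrap_auc_ci(diseased, healthy))
    ```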

  2. Quest for Quality.

    ERIC Educational Resources Information Center

    Wilson, Richard B.; Schmoker, Mike

    1992-01-01

    Unlike traditional school management, Toyota of America recognizes thinking employees and emphasizes problems and measurable approaches to improvement. Instead of meeting to discuss short-term goals, specific problems, and concrete successes, school leaders often alienate staff by leading year-end discussions of standardized test score data.…

  3. What Are the Signs of Alzheimer's Disease? | NIH MedlinePlus the Magazine

    MedlinePlus

    ... in behavior and personality; conduct tests of memory, problem solving, attention, counting, and language; carry out standard medical ... over and over; having trouble paying bills or solving simple math problems; getting lost; losing things or putting them in ...

  4. OSI: Will It Ever See the Light of Day?

    ERIC Educational Resources Information Center

    Moloney, Peter

    1997-01-01

    Examines issues of viability and necessity regarding the Open Systems Interconnection (OSI) reference service model with a view on future developments. Discusses problems with the standards; conformance testing; OSI bureaucracy; standardized communications; security; the transport level; applications; the stakeholders (communications providers,…

  5. Harmonisation of microbial sampling and testing methods for distillate fuels

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hill, G.C.; Hill, E.C.

    1995-05-01

    Increased incidence of microbial infection in distillate fuels has led to a demand for organisations such as the Institute of Petroleum to propose standards for microbiological quality, based on numbers of viable microbial colony forming units. Variations in quality requirements, and in the spoilage significance of contaminating microbes plus a tendency for temporal and spatial changes in the distribution of microbes, makes such standards difficult to implement. The problem is compounded by a diversity in the procedures employed for sampling and testing for microbial contamination and in the interpretation of the data obtained. The following paper reviews these problems and describes the efforts of The Institute of Petroleum Microbiology Fuels Group to address these issues and in particular to bring about harmonisation of sampling and testing methods. The benefits and drawbacks of available test methods, both laboratory based and on-site, are discussed.

  6. The Effect of Using Problem-Based Learning in Middle School Gifted Science Classes on Student Achievement and Students' Perceptions of Classroom Quality

    ERIC Educational Resources Information Center

    Horak, Anne Karen

    2013-01-01

    The purpose of this study was to explore the impact of the Problem Based Learning (PBL) units developed by a large suburban school district in the mid-Atlantic for the middle school gifted science curriculum on: a) students' performance on standardized tests in middle school Science, as measured by a sample of relevant test questions from a…

  7. 40 CFR 171.4 - Standards for certification of commercial applicators.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... (CONTINUED) PESTICIDE PROGRAMS CERTIFICATION OF PESTICIDE APPLICATORS § 171.4 Standards for certification of commercial applicators. (a) Determination of competency. Competence in the use and handling of pesticides... pesticides. Testing shall be based on examples of problems and situations appropriate to the particular...

  8. 40 CFR 171.4 - Standards for certification of commercial applicators.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... (CONTINUED) PESTICIDE PROGRAMS CERTIFICATION OF PESTICIDE APPLICATORS § 171.4 Standards for certification of commercial applicators. (a) Determination of competency. Competence in the use and handling of pesticides... pesticides. Testing shall be based on examples of problems and situations appropriate to the particular...

  9. 40 CFR 171.4 - Standards for certification of commercial applicators.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... (CONTINUED) PESTICIDE PROGRAMS CERTIFICATION OF PESTICIDE APPLICATORS § 171.4 Standards for certification of commercial applicators. (a) Determination of competency. Competence in the use and handling of pesticides... pesticides. Testing shall be based on examples of problems and situations appropriate to the particular...

  10. 40 CFR 171.4 - Standards for certification of commercial applicators.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... (CONTINUED) PESTICIDE PROGRAMS CERTIFICATION OF PESTICIDE APPLICATORS § 171.4 Standards for certification of commercial applicators. (a) Determination of competency. Competence in the use and handling of pesticides... pesticides. Testing shall be based on examples of problems and situations appropriate to the particular...

  11. 40 CFR 171.4 - Standards for certification of commercial applicators.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... (CONTINUED) PESTICIDE PROGRAMS CERTIFICATION OF PESTICIDE APPLICATORS § 171.4 Standards for certification of commercial applicators. (a) Determination of competency. Competence in the use and handling of pesticides... pesticides. Testing shall be based on examples of problems and situations appropriate to the particular...

  12. FEMFLOW3D; a finite-element program for the simulation of three-dimensional aquifers; version 1.0

    USGS Publications Warehouse

    Durbin, Timothy J.; Bond, Linda D.

    1998-01-01

    This document also includes model validation, source code, and example input and output files. Model validation was performed using four test problems. For each test problem, the results of a model simulation with FEMFLOW3D were compared with either an analytic solution or the results of an independent numerical approach. The source code, written in the ANSI X3.9-1978 FORTRAN standard, and the complete input and output of an example problem are listed in the appendixes.

  13. Making Benchmark Testing Work

    ERIC Educational Resources Information Center

    Herman, Joan L.; Baker, Eva L.

    2005-01-01

    Many schools are moving to develop benchmark tests to monitor their students' progress toward state standards throughout the academic year. Benchmark tests can provide the ongoing information that schools need to guide instructional programs and to address student learning problems. The authors discuss six criteria that educators can use to…

  14. The Talent Search Model: Past, Present, and Future

    ERIC Educational Resources Information Center

    Swiatek, Mary Ann

    2007-01-01

    Typical standardized achievement tests cannot provide accurate information about gifted students' abilities because they are not challenging enough for such students. Talent searches solve this problem through above-level testing--using tests designed for older students to raise the ceiling for younger, gifted students. Currently, talent search…

  15. JPL Test Effectiveness Analysis

    NASA Technical Reports Server (NTRS)

    Shreck, Stephanie; Sharratt, Stephen; Smith, Joseph F.; Strong, Edward

    2008-01-01

    1) The pilot study provided meaningful conclusions that are generally consistent with the earlier Test Effectiveness work done between 1992 and 1994: a) Analysis of pre-launch problem/failure reports is consistent with earlier work. b) Analysis of post-launch early mission anomaly reports indicates that there are more software issues in newer missions, and the no-test category for identification of post-launch failures is more significant than in the earlier analysis. 2) Future work includes understanding how differences in missions affect these analyses: a) There are large variations in the number of problem reports and issues that are documented by the different projects/missions. b) Some missions do not have any reported environmental test anomalies, even though environmental tests were performed. 3) Each project/mission has different standards and conventions for filling out the PFR forms, and the industry may wish to address this issue: a) Existing problem reporting forms are to document and track problems, failures, and issues (etc.) for the projects, to ensure high quality. b) Existing problem reporting forms are not intended for data mining.

  16. On the Performance Evaluation of 3D Reconstruction Techniques from a Sequence of Images

    NASA Astrophysics Data System (ADS)

    Eid, Ahmed; Farag, Aly

    2005-12-01

    The performance evaluation of 3D reconstruction techniques is not a simple problem to solve. This is not only due to the increased dimensionality of the problem but also due to the lack of standardized and widely accepted testing methodologies. This paper presents a unified framework for the performance evaluation of different 3D reconstruction techniques. This framework includes a general problem formalization, different measuring criteria, and a classification method as a first step in standardizing the evaluation process. Performance characterization of two standard 3D reconstruction techniques, stereo and space carving, is also presented. The evaluation is performed on the same data set using an image reprojection testing methodology to reduce the dimensionality of the evaluation domain. Also, different measuring strategies are presented and applied to the stereo and space carving techniques. These measuring strategies have shown consistent results in quantifying the performance of these techniques. Additional experiments are performed on the space carving technique to study the effect of the number of input images and the camera pose on its performance.

  17. College Admissions: Beyond Conventional Testing

    ERIC Educational Resources Information Center

    Sternberg, Robert J.

    2012-01-01

    Standardized admissions tests such as the SAT (which originally stood for "Scholastic Aptitude Test") and the ACT measure only a narrow segment of the skills needed to become an active citizen and possibly a leader who makes a positive, meaningful, and enduring difference to the world. The problem with these tests is that they promised, under…

  18. A Unified Framework for Association Analysis with Multiple Related Phenotypes

    PubMed Central

    Stephens, Matthew

    2013-01-01

    We consider the problem of assessing associations between multiple related outcome variables and a single explanatory variable of interest. This problem arises in many settings, including genetic association studies, where the explanatory variable is genotype at a genetic variant. We outline a framework for conducting this type of analysis, based on Bayesian model comparison and model averaging for multivariate regressions. This framework unifies several common approaches to this problem, and includes both standard univariate and standard multivariate association tests as special cases. The framework also unifies the problems of testing for associations and explaining associations – that is, identifying which outcome variables are associated with genotype. This provides an alternative to the usual, but conceptually unsatisfying, approach of resorting to univariate tests when explaining and interpreting significant multivariate findings. The method is computationally tractable genome-wide for modest numbers of phenotypes (e.g. 5–10), and can be applied to summary data, without access to raw genotype and phenotype data. We illustrate the methods both on simulated examples and on a genome-wide association study of blood lipid traits where we identify 18 potential novel genetic associations that were not identified by univariate analyses of the same data. PMID:23861737

  19. Early Identification of At-Risk LPN-to-RN Students

    ERIC Educational Resources Information Center

    Hawthorne, Lisa K.

    2013-01-01

    Nurse education programs are implementing standardized assessments without evaluating their effectiveness. Graduates of associate degree nursing programs continue to be unsuccessful with licensure examinations, despite standardized testing and stronger admission criteria. This problem is also prevalent for LPN-to-RN education programs due to a…

  20. Framework for Assessing the ICT Competency in Teachers up to the Requirements of "Teacher" Occupational Standard

    ERIC Educational Resources Information Center

    Avdeeva, Svetlana; Zaichkina, Olga; Nikulicheva, Nataliya; Khapaeva, Svetlana

    2016-01-01

    The paper deals with problems of working out a test framework for the assessment of teachers' ICT competency in line with the requirements of "Teacher" occupational standard. The authors have analyzed the known approaches to assessing teachers' ICT competency--ISTE Standards and UNESCO ICT CFT and have suggested their own approach to…

  1. Impact of Gadget Based Learning of Grammar in English at Standard II

    ERIC Educational Resources Information Center

    Singaravelu, G.

    2014-01-01

    The study highlights the impact of Gadget Based Learning of English Grammar at standard II. The objectives of the study are to find out the learning problems of standard II students in learning English Grammar in Shri Vani Vilas Middle School and to find whether there is any significant difference in achievement mean score between the pre test of…

  2. Designing Cognitive Complexity in Mathematical Problem-Solving Items

    ERIC Educational Resources Information Center

    Daniel, Robert C.; Embretson, Susan E.

    2010-01-01

    Cognitive complexity level is important for measuring both aptitude and achievement in large-scale testing. Tests for standards-based assessment of mathematics, for example, often include cognitive complexity level in the test blueprint. However, little research exists on how mathematics items can be designed to vary in cognitive complexity level.…

  3. The SPH consistency problem and some astrophysical applications

    NASA Astrophysics Data System (ADS)

    Klapp, Jaime; Sigalotti, Leonardo; Rendon, Otto; Gabbasov, Ruslan; Torres, Ayax

    2017-11-01

    We discuss the SPH kernel and particle consistency problem and demonstrate that SPH has a limiting second-order convergence rate. We also present a solution to the SPH consistency problem. We present examples of how SPH implementations that are not mathematically consistent may lead to erroneous results. The new formalism has been implemented into the Gadget 2 code, including an improved scheme for the artificial viscosity. We present results for the ``Standard Isothermal Test Case'' of gravitational collapse and fragmentation of protostellar molecular cores, which produces a very different evolution than that obtained with standard SPH theory. A further application to accretion onto a black hole is presented.
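
    A minimal sketch of the consistency issue referred to above, assuming a 1D cubic-spline kernel and equally spaced particles (this is not the article's formalism): even zeroth-order particle consistency, the requirement that the kernel weights of the neighbours sum to one, holds only approximately in the interior and degrades where the kernel support is truncated at a boundary.

        # Partition-of-unity check for a 1D cubic-spline SPH kernel on a
        # uniform particle distribution (illustrative only).
        import numpy as np

        def cubic_spline_1d(r, h):
            q = np.abs(r) / h
            sigma = 2.0 / (3.0 * h)                      # 1D normalization
            return sigma * np.where(q < 1.0, 1.0 - 1.5 * q**2 + 0.75 * q**3,
                            np.where(q < 2.0, 0.25 * (2.0 - q)**3, 0.0))

        n, dx = 100, 0.01
        x = (np.arange(n) + 0.5) * dx
        h = 1.2 * dx
        unity = np.array([np.sum(cubic_spline_1d(xi - x, h) * dx) for xi in x])
        print("interior sum of kernel weights:", round(unity[n // 2], 4))  # ~1
        print("boundary sum of kernel weights:", round(unity[0], 4))       # < 1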

  4. Faux-Pas Test: A Proposal of a Standardized Short Version.

    PubMed

    Fernández-Modamio, Mar; Arrieta-Rodríguez, Marta; Bengochea-Seco, Rosario; Santacoloma-Cabero, Iciar; Gómez de Tojeiro-Roce, Juan; García-Polavieja, Bárbara; González-Fraile, Eduardo; Martín-Carrasco, Manuel; Griffin, Kim; Gil-Sanz, David

    2018-06-26

    Previous research on theory of mind suggests that people with schizophrenia have difficulties with complex mentalization tasks that involve the integration of cognition and affective mental states. One of the tools most commonly used to assess theory of mind is the Faux-Pas Test. However, it presents two main methodological problems: 1) the lack of a standard scoring system; 2) the different versions are not comparable due to a lack of information on the stories used. These methodological problems make it difficult to draw conclusions about performance on this test by people with schizophrenia. The aim of this study was to develop a reduced version of the Faux-Pas test with adequate psychometric properties. The test was administered to control and clinical groups. Interrater and test-retest reliability were analyzed for each story in order to select the set of 10 stories included in the final reduced version. The shortened version showed good psychometric properties for controls and patients: test-retest reliability of 0.97 and 0.78, inter-rater reliability of 0.95 and 0.87 and Cronbach's alpha of 0.82 and 0.72.
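
    For readers unfamiliar with the reliability figures quoted above, the sketch below shows how such numbers are typically computed: Cronbach's alpha across the story scores and a Pearson test-retest correlation between two administrations. The 0-2 story scores are made up for illustration and are not the study's data.

        # Cronbach's alpha and test-retest correlation on simulated 0-2 story scores.
        import numpy as np

        def cronbach_alpha(items):                       # items: subjects x stories
            items = np.asarray(items, float)
            k = items.shape[1]
            item_var = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_var / total_var)

        rng = np.random.default_rng(1)
        ability = rng.normal(0.0, 1.0, size=80)          # hypothetical sample of 80
        t1 = np.clip(np.round(1 + ability[:, None] + rng.normal(0, 0.6, (80, 10))), 0, 2)
        t2 = np.clip(t1 + rng.normal(0, 0.3, (80, 10)), 0, 2)   # noisy retest

        print("Cronbach's alpha:", round(cronbach_alpha(t1), 2))
        print("test-retest r   :", round(np.corrcoef(t1.sum(1), t2.sum(1))[0, 1], 2))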

  5. Group Mirrors to Support Interaction Regulation in Collaborative Problem Solving

    ERIC Educational Resources Information Center

    Jermann, Patrick; Dillenbourg, Pierre

    2008-01-01

    Two experimental studies test the effect of group mirrors upon quantitative and qualitative aspects of participation in collaborative problem solving. Mirroring tools consist of a graphical representation of the group's actions which is dynamically updated and displayed to the collaborators. In addition, metacognitive tools display a standard for…

  6. Protocol Analysis of Aptitude Differences in Figural Analogy Problem Representation.

    ERIC Educational Resources Information Center

    Schiano, Diane J.

    Individual differences in performance on figural analogy tests are usually attributed to quantitative differences in processing parameters rather than to qualitative differences in the formation and use of representations. Yet aptitude-related differences in categorizing standardized figural analogy problems between high and low scorers have been…

  7. Pathways to Suicidal Behaviors in Childhood

    ERIC Educational Resources Information Center

    Greening, Leilani; Stoppelbein, Laura; Fite, Paula; Dhossche, Dirk; Erath, Stephen; Brown, Jacqueline; Cramer, Robert; Young, Laura

    2008-01-01

    Path analyses were applied to test a model that includes internalizing and externalizing behavior problems as predictors of suicidal behaviors in children. Parents of an inpatient sample of boys (N = 87; M age = 9.81 years) rated the frequency of suicidal ideation and completed standardized measures of behavior problems. Blind raters rated the…

  8. Assessing the Complexity of Students' Knowledge in Chemistry

    ERIC Educational Resources Information Center

    Bernholt, Sascha; Parchmann, Ilka

    2011-01-01

    Current reforms in the education policy of various countries are intended to produce a paradigm shift in the educational system towards an outcome orientation. After implementing educational standards as normative objectives, the development of test procedures that adequately reflect these targets and standards is a central problem. This paper…

  9. Going on a Science Trek!

    ERIC Educational Resources Information Center

    Kreider, Gail Yohe

    2008-01-01

    In this problem-based learning activity (PBL), students embark on a science trek to answer the question "Where is the science in my neighborhood?" The project serves as an excellent review of science curriculum in anticipation of Virginia's year-end standardized test--the Standards of Learning (SOL). This has proved to be an interesting…

  10. Designing Medical Tests: The Other Side of Bayes' Theorem

    ERIC Educational Resources Information Center

    Ross, Andrew M.

    2012-01-01

    To compute the probability of having a disease, given a positive test result, is a standard probability problem. The sensitivity and specificity of the test must be given and the prevalence of the disease. We ask how a test-maker might determine the tradeoff between sensitivity and specificity. Adding hypothetical costs for detecting or failing to…
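
    The "standard probability problem" mentioned above reduces to a two-line application of Bayes' theorem; the sketch below computes positive and negative predictive values from hypothetical sensitivity, specificity, and prevalence figures (all numbers are illustrative, not taken from the article).

        # Bayes' theorem for a diagnostic test: PPV and NPV from sensitivity,
        # specificity and prevalence (illustrative values).
        def predictive_values(sensitivity, specificity, prevalence):
            p_positive = sensitivity * prevalence + (1 - specificity) * (1 - prevalence)
            ppv = sensitivity * prevalence / p_positive
            npv = specificity * (1 - prevalence) / (1 - p_positive)
            return ppv, npv

        ppv, npv = predictive_values(sensitivity=0.95, specificity=0.90, prevalence=0.01)
        print(f"PPV = {ppv:.3f}, NPV = {npv:.4f}")   # PPV stays low at 1% prevalence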

  11. SCOPE (Standardized Curriculum-Oriented Pupil Evaluation) Mathematics. Test Book Grade Six.

    ERIC Educational Resources Information Center

    Northwest Territories Dept. of Education, Yellowknife. Programs and Evaluation Branch.

    The SCOPE Mathematics Achievement Test booklet for grade 6 presents 12 mathematical concepts with instructions for students to take the test with little or no teacher direction. Testing items are: dividing with remainder, up to a six digit dividend by a three digit divisor; using correct order of operations in three-step problems; applying…

  12. Measuring adult literacy students' reading skills using the Gray Oral Reading Test.

    PubMed

    Greenberg, Daphne; Pae, Hye Kyeong; Morris, Robin D; Calhoon, Mary Beth; Nanda, Alice O

    2009-12-01

    There are not enough reading tests standardized on adults who have very low literacy skills, and therefore tests standardized on children are frequently administered. This study addressed the complexities and problems of using a test normed on children to measure the reading comprehension skills of 193 adults who read at approximately third through fifth grade reading grade equivalency levels. Findings are reported from an analysis of the administration of Form A of the Gray Oral Reading Tests-Fourth Edition (Wiederholt & Bryant, 2001a, b). Results indicated that educators and researchers should be very cautious when interpreting test results of adults who have difficulty reading when children's norm-referenced tests are administered.

  13. Unified heuristics to solve routing problem of reverse logistics in sustainable supply chain

    NASA Astrophysics Data System (ADS)

    Anbuudayasankar, S. P.; Ganesh, K.; Lenny Koh, S. C.; Mohandas, K.

    2010-03-01

    A reverse logistics problem, motivated by many real-life applications, is examined where bottles/cans in which products are delivered from a processing depot to customers in one period are available for return to the depot in the following period. The picked-up bottles/cans must be accommodated alongside the delivery load within the vehicle's limited capacity. This problem is termed the simultaneous delivery and pick-up problem with constrained capacity (SDPC). We develop three unified heuristics, based on an extended branch-and-bound heuristic, a genetic algorithm and simulated annealing, to solve SDPC. These heuristics are also designed to solve the standard travelling salesman problem (TSP) and the TSP with simultaneous delivery and pick-up (TSDP). We tested the heuristics on standard, derived and randomly generated datasets of TSP, TSDP and SDPC and obtained satisfactory results with high convergence in reasonable time.
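
    One of the three metaheuristics named above is simulated annealing; the sketch below applies it, with 2-opt moves, to a plain randomly generated TSP instance. The capacity-constrained pick-up/delivery features of SDPC are deliberately omitted, so this is only an illustration of the search scheme, not the authors' unified heuristic.

        # Simulated annealing with 2-opt moves on a random 40-city TSP instance.
        import numpy as np

        rng = np.random.default_rng(7)
        cities = rng.random((40, 2))
        dist = np.linalg.norm(cities[:, None] - cities[None, :], axis=2)

        def tour_length(tour):
            return dist[tour, np.roll(tour, -1)].sum()

        def simulated_annealing(n_iter=20000, t0=1.0, cooling=0.9995):
            tour = rng.permutation(len(cities))
            cur_len = tour_length(tour)
            best, best_len, t = tour.copy(), cur_len, t0
            for _ in range(n_iter):
                i, j = sorted(rng.integers(0, len(cities), 2))
                if i == j:
                    continue
                cand = tour.copy()
                cand[i:j + 1] = cand[i:j + 1][::-1]      # 2-opt segment reversal
                delta = tour_length(cand) - cur_len
                if delta < 0 or rng.random() < np.exp(-delta / t):
                    tour, cur_len = cand, cur_len + delta
                    if cur_len < best_len:
                        best, best_len = tour.copy(), cur_len
                t *= cooling
            return best, best_len

        _, length = simulated_annealing()
        print("best tour length found:", round(length, 3))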

  14. Experimental investigations and guidelines for PCB design for a fuel injection ECU to meet automotive environmental, EMI/EMC and ESD standards

    NASA Astrophysics Data System (ADS)

    Kalyankar-Narwade, Supriya; Kumar, C. Ramesh; Patil, Sanjay A.

    2017-11-01

    The Engine Management ECU plays a vital role in controlling important features related to engine performance. An ECU is an embedded system comprising hardware and a firmware platform for the control logic. However, its performance must be verified by functional testing in the electromagnetic environment before approval; if these requirements are not known at the early stages, the ECU may fail the required automotive electronic test standards. Focusing on the EMS ECU, this paper highlights hardware, layout and software guidelines for solving problems related to electromagnetic interference (EMI) to comply with ISO 7637 and CISPR 25, electromagnetic compatibility (EMC) to comply with ISO 11452-4/5, electrostatic discharge (ESD) to comply with ISO 10605, and environmental testing to comply with the relevant IEC standards. The paper first describes the importance of, need for, and guidelines on reducing EMI effects on the PCB, i.e., making the ECU more electromagnetically compatible as required by automotive standards. These guidelines help designers avoid pitfalls at later stages. After the modifications described in the paper, the ECU successfully passed the requirements of all the standard tests.

  15. Standard and modified administrations of the Iowa Tests of Basic Skills with learning disabled students.

    PubMed

    Estes, R E; Baum, D L; Bray, N M

    1986-04-01

    The purpose of this study was to investigate the performance of junior high school learning disabled students on standard and modified administrations of selected subtests from the Iowa Tests of Basic Skills. No significant differences were noted for correlations between types of administration and teachers' ratings on any of the subtest comparisons. Grade placements for Vocabulary and Reading Comprehension using the modified administration were significantly higher than those using the standard administration and more closely aligned with teachers' ratings. Math Concept and Math Problem-solving grade-placement scores did not differ by type of administration; teachers' ratings were higher than those produced by either testing format.

  16. Is Test Security an Issue in a Multistation Clinical Assessment?--A Preliminary Study.

    ERIC Educational Resources Information Center

    Stillman, Paula L.; And Others

    1991-01-01

    A study investigated possible differences in standardized patient examination scores for three groups of undergraduate (n=176) and graduate (n=221) medical students assessed at different sites over two years. Results show no systematic change in scores over testing dates, suggesting no problems with breach of test security. (MSE)

  17. Counterbalance Assessment: The Chorizo Test

    ERIC Educational Resources Information Center

    Cabrera, Nolan L.; Cabrera, George A.

    2008-01-01

    Since the days of the reform movements of the 1980s and 1990s, standardized testing has increased greatly in the public schools. The problem has become particularly acute in the wake of the testing mandated for accountability under the No Child Left Behind (NCLB) Act. States simply cannot afford to jeopardize their federal funding. This article…

  18. Designing a VOIP Based Language Test

    ERIC Educational Resources Information Center

    Garcia Laborda, Jesus; Magal Royo, Teresa; Otero de Juan, Nuria; Gimenez Lopez, Jose L.

    2015-01-01

    Assessing speaking is one of the most difficult tasks in computer based language testing. Many countries all over the world face the need to implement standardized language tests where speaking tasks are commonly included. However, a number of problems make them rather impractical such as the costs, the personnel involved, the length of time for…

  19. Rationale and Use of Content-Relevant Achievement Tests for the Evaluation of Instructional Programs.

    ERIC Educational Resources Information Center

    Patalino, Marianne

    Problems in current course evaluation methods are discussed and an alternative method is described for the construction, analysis, and interpretation of a test to evaluate instructional programs. The method presented represents a different approach to the traditional overreliance on standardized achievement tests and the total scores they provide.…

  20. Hypothesis testing of scientific Monte Carlo calculations.

    PubMed

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitate rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.
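
    A minimal sketch of the idea, under simple assumptions that are not the authors' test suite: a hit-or-miss Monte Carlo estimate of pi is checked against the known value with a two-sided z-test on the binomial hit fraction, so a bug that biases the sampler shows up as an implausibly small p-value.

        # Hypothesis test of a Monte Carlo estimator: compare the estimate of pi/4
        # with its known value using a normal approximation to the binomial error.
        import math, random

        def mc_pi_test(n=200_000, alpha=0.01, seed=3):
            random.seed(seed)
            hits = sum(random.random()**2 + random.random()**2 <= 1.0 for _ in range(n))
            p_hat = hits / n                              # estimates pi/4
            se = math.sqrt(p_hat * (1 - p_hat) / n)
            z = (p_hat - math.pi / 4) / se
            p_value = math.erfc(abs(z) / math.sqrt(2))    # two-sided p-value
            return z, p_value, p_value >= alpha

        z, p, ok = mc_pi_test()
        print(f"z = {z:+.2f}, p = {p:.3f}, test {'passed' if ok else 'FAILED'}")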

  1. Hypothesis testing of scientific Monte Carlo calculations

    NASA Astrophysics Data System (ADS)

    Wallerberger, Markus; Gull, Emanuel

    2017-11-01

    The steadily increasing size of scientific Monte Carlo simulations and the desire for robust, correct, and reproducible results necessitate rigorous testing procedures for scientific simulations in order to detect numerical problems and programming bugs. However, the testing paradigms developed for deterministic algorithms have proven to be ill suited for stochastic algorithms. In this paper we demonstrate explicitly how the technique of statistical hypothesis testing, which is in wide use in other fields of science, can be used to devise automatic and reliable tests for Monte Carlo methods, and we show that these tests are able to detect some of the common problems encountered in stochastic scientific simulations. We argue that hypothesis testing should become part of the standard testing toolkit for scientific simulations.

  2. A hybrid Jaya algorithm for reliability-redundancy allocation problems

    NASA Astrophysics Data System (ADS)

    Ghavidel, Sahand; Azizivahed, Ali; Li, Li

    2018-04-01

    This article proposes an efficient improved hybrid Jaya algorithm based on time-varying acceleration coefficients (TVACs) and the learning phase introduced in teaching-learning-based optimization (TLBO), named the LJaya-TVAC algorithm, for solving various types of nonlinear mixed-integer reliability-redundancy allocation problems (RRAPs) and standard real-parameter test functions. RRAPs include series, series-parallel, complex (bridge) and overspeed protection systems. The search power of the proposed LJaya-TVAC algorithm for finding the optimal solutions is first tested on the standard real-parameter unimodal and multi-modal functions with dimensions of 30-100, and then tested on various types of nonlinear mixed-integer RRAPs. The results are compared with the original Jaya algorithm and the best results reported in the recent literature. The optimal results obtained with the proposed LJaya-TVAC algorithm provide evidence for its better and acceptable optimization performance compared to the original Jaya algorithm and other reported optimal results.
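
    For orientation, the sketch below implements only the basic (unhybridized) Jaya update on the sphere function, one of the standard real-parameter test functions mentioned above; the TVAC weighting and the TLBO learning phase of the proposed LJaya-TVAC variant are not reproduced here.

        # Basic Jaya: each candidate moves toward the current best solution and
        # away from the current worst, with greedy replacement.
        import numpy as np

        def sphere(X):
            return np.sum(X**2, axis=1)

        def jaya(obj, dim=30, pop=20, iters=500, lo=-100.0, hi=100.0, seed=0):
            rng = np.random.default_rng(seed)
            X = rng.uniform(lo, hi, (pop, dim))
            f = obj(X)
            for _ in range(iters):
                best, worst = X[np.argmin(f)], X[np.argmax(f)]
                r1, r2 = rng.random((pop, dim)), rng.random((pop, dim))
                X_new = np.clip(X + r1 * (best - np.abs(X)) - r2 * (worst - np.abs(X)),
                                lo, hi)
                f_new = obj(X_new)
                improved = f_new < f
                X[improved], f[improved] = X_new[improved], f_new[improved]
            return f.min()

        print("best sphere value found:", jaya(sphere))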

  3. Standard and goodness-of-fit parameter estimation methods for the three-parameter lognormal distribution

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, V.E.

    1982-01-01

    A class of goodness-of-fit estimators is found to provide a useful alternative in certain situations to the standard maximum likelihood method, which has some undesirable estimation characteristics for estimation from the three-parameter lognormal distribution. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Filliben tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Robustness of the procedures is examined and example data sets are analyzed.
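
    A sketch of the weighted-order-statistic idea in its simplest form, on synthetic data with assumed parameter values: the threshold gamma of a three-parameter lognormal is estimated by maximizing the Shapiro-Wilk W statistic of log(x - gamma) over a grid, after which mu and sigma follow from the shifted logarithms. This is an illustration of the approach, not the paper's estimator.

        # Goodness-of-fit (Shapiro-Wilk) estimation of the three-parameter
        # lognormal threshold on synthetic data.
        import numpy as np
        from scipy.stats import shapiro

        rng = np.random.default_rng(5)
        gamma_true, mu_true, sigma_true = 10.0, 1.0, 0.5
        x = gamma_true + np.exp(rng.normal(mu_true, sigma_true, 200))

        def fit_lognormal3(x, n_grid=400):
            x = np.sort(np.asarray(x, float))
            gammas = x[0] - np.logspace(-3, 1, n_grid)    # keep gamma below min(x)
            w_stats = [shapiro(np.log(x - g))[0] for g in gammas]
            g_hat = gammas[int(np.argmax(w_stats))]
            logs = np.log(x - g_hat)
            return g_hat, logs.mean(), logs.std(ddof=1)

        g_hat, mu_hat, sigma_hat = fit_lognormal3(x)
        print(f"gamma = {g_hat:.2f}, mu = {mu_hat:.2f}, sigma = {sigma_hat:.2f}")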

  4. A new shock-capturing numerical scheme for ideal hydrodynamics

    NASA Astrophysics Data System (ADS)

    Fecková, Z.; Tomášik, B.

    2015-05-01

    We present a new algorithm for solving ideal relativistic hydrodynamics based on the Godunov method with an exact solution of the Riemann problem for an arbitrary equation of state. Standard numerical tests are executed, such as the sound wave propagation and the shock tube problem. Low numerical viscosity and high precision are attained with proper discretization.
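
    For readers who want to see what such a "standard numerical test" looks like in code, the sketch below sets up the classical (non-relativistic) Sod shock tube and solves it with a first-order finite-volume scheme using simple local Lax-Friedrichs fluxes. The article's relativistic exact Riemann solver is not reproduced; everything here is an assumed, simplified stand-in.

        # Classical Sod shock tube with a first-order Godunov-type scheme and
        # local Lax-Friedrichs (Rusanov) numerical fluxes.
        import numpy as np

        GAMMA = 1.4

        def conserved(rho, u, p):
            return np.array([rho, rho * u, p / (GAMMA - 1.0) + 0.5 * rho * u**2])

        def flux(U):
            rho, mom, E = U
            u = mom / rho
            p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
            return np.array([mom, mom * u + p, u * (E + p)])

        def max_speed(U):
            rho, mom, E = U
            u = mom / rho
            p = (GAMMA - 1.0) * (E - 0.5 * rho * u**2)
            return np.abs(u) + np.sqrt(GAMMA * p / rho)

        def sod(nx=400, t_end=0.2, cfl=0.5):
            x = (np.arange(nx) + 0.5) / nx
            U = np.where(x < 0.5, conserved(1.0, 0.0, 1.0)[:, None],
                                  conserved(0.125, 0.0, 0.1)[:, None])
            dx, t = 1.0 / nx, 0.0
            while t < t_end:
                dt = min(cfl * dx / max_speed(U).max(), t_end - t)
                Ug = np.pad(U, ((0, 0), (1, 1)), mode="edge")        # transmissive BCs
                UL, UR = Ug[:, :-1], Ug[:, 1:]
                smax = np.maximum(max_speed(UL), max_speed(UR))
                F = 0.5 * (flux(UL) + flux(UR)) - 0.5 * smax * (UR - UL)
                U = U - dt / dx * (F[:, 1:] - F[:, :-1])
                t += dt
            return x, U[0]                                           # density profile

        x, rho = sod()
        print("density near x = 0.8 at t = 0.2:", round(rho[int(0.8 * len(x))], 3))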

  5. A new technique for measuring listening and reading literacy in developing countries

    NASA Astrophysics Data System (ADS)

    Greene, Barbara A.; Royer, James M.; Anzalone, Stephen

    1990-03-01

    One problem in evaluating educational interventions in developing countries is the absence of tests that adequately reflect the culture and curriculum. The Sentence Verification Technique is a new procedure for measuring reading and listening comprehension that allows for the development of tests based on materials indigenous to a given culture. The validity of using the Sentence Verification Technique to measure reading comprehension in Grenada was evaluated in the present study. The study involved 786 students at standards 3, 4 and 5. The tests for each standard consisted of passages that varied in difficulty. The students identified as high ability students in all three standards performed better than those identified as low ability. All students performed better with easier passages. Additionally, students in higher standards performed better than students in lower standards on a given passage. These results supported the claim that the Sentence Verification Technique is a valid measure of reading comprehension in Grenada.

  6. NBPTS Upgrades Profession, Most Agree, Despite Test-Score Letdown

    ERIC Educational Resources Information Center

    Keller, Bess

    2006-01-01

    Back when the National Board for Professional Teaching Standards was launched in 1987, most of the talk in its favor cited one overarching problem: the weakness of the teaching profession. If professional standards were better defined, if professional rewards were greater, the argument went, schools and learning would improve. These days…

  7. Summative and Formative Assessments in Mathematics Supporting the Goals of the Common Core Standards

    ERIC Educational Resources Information Center

    Schoenfeld, Alan H.

    2015-01-01

    Being proficient in mathematics involves having rich and connected mathematical knowledge, being a strategic and reflective thinker and problem solver, and having productive mathematical beliefs and dispositions. This broad set of mathematics goals is central to the Common Core State Standards for Mathematics. High-stakes testing often drives…

  8. Effects of Enhanced Anchored Instruction on Skills Aligned to Common Core Math Standards

    ERIC Educational Resources Information Center

    Bottge, Brian A.; Cho, Sun-Joo

    2013-01-01

    This study compared how students with learning difficulties in math (MLD) who were randomly assigned to two instructional conditions answered items on problem solving tests aligned to the Common Core State Standards Initiative for Mathematics. Posttest scores showed improvement in the math performance of students receiving Enhanced Anchored…

  9. Contracting for Computer Software in Standardized Computer Languages

    PubMed Central

    Brannigan, Vincent M.; Dayhoff, Ruth E.

    1982-01-01

    The interaction between standardized computer languages and contracts for programs which use these languages is important to the buyer or seller of software. The rationale for standardization, the problems in standardizing computer languages, and the difficulties of determining whether the product conforms to the standard are issues which must be understood. The contract law processes of delivery, acceptance testing, acceptance, rejection, and revocation of acceptance are applicable to the contracting process for standard language software. Appropriate contract language is suggested for requiring strict compliance with a standard, and an overview of remedies is given for failure to comply.

  10. Upgraded demonstration vehicle task report

    NASA Technical Reports Server (NTRS)

    Bryant, J.; Hardy, K.; Livingston, R.; Sandberg, J.

    1981-01-01

    Vehicle/battery performance capabilities and interface problems that occurred when upgraded developmental batteries were integrated with upgraded versions of commercially available electric vehicles were investigated. Developmental batteries used included nickel zinc batteries, a nickel iron battery, and an improved lead acid battery. Testing of the electric vehicles and upgraded batteries was performed in the complete vehicle system environment to characterize performance and identify problems unique to the vehicle/battery system. Constant speed tests and driving schedule range tests were performed on a chassis dynamometer. The results from these tests of the upgraded batteries and vehicles were compared to performance capabilities for the same vehicles equipped with standard batteries.

  11. Implementation of an Evidence-Based and Content Validated Standardized Ostomy Algorithm Tool in Home Care: A Quality Improvement Project.

    PubMed

    Bare, Kimberly; Drain, Jerri; Timko-Progar, Monica; Stallings, Bobbie; Smith, Kimberly; Ward, Naomi; Wright, Sandra

    Many nurses have limited experience with ostomy management. We sought to provide a standardized approach to ostomy education and management to support nurses in early identification of stomal and peristomal complications, pouching problems, and provide standardized solutions for managing ostomy care in general while improving utilization of formulary products. This article describes development and testing of an ostomy algorithm tool.

  12. 75 FR 67233 - Federal Motor Vehicle Safety Standards; Head Restraints

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-02

    ... backset, backset retention and displacement, height retention, non-use position, definition of rear... backset displacement limits. To address this testing problem, in the May 2007 final rule the agency... ability to be fixated during static testing of head restraint displacement. Expansion of Fixation Option...

  13. Automated Formal Testing of C API Using T2C Framework

    NASA Astrophysics Data System (ADS)

    Khoroshilov, Alexey V.; Rubanov, Vladimir V.; Shatokhin, Eugene A.

    A problem of automated test development for checking the basic functionality of program interfaces (APIs) is discussed. Different technologies and corresponding tools are surveyed, and the T2C technology developed at ISPRAS is presented. The technology and associated tools facilitate the development of "medium quality" (and "medium cost") tests. An important feature of T2C technology is that it enforces that each check in a developed test is explicitly linked to the corresponding place in the standard. T2C tools provide convenient means to create such linkage. The results of using T2C are considered using the example of a project for testing interfaces of Linux system libraries defined by the LSB standard.

  14. Conformance testing strategies for DICOM protocols in a heterogenous communications system

    NASA Astrophysics Data System (ADS)

    Meyer, Ralph; Hewett, Andrew J.; Cordonnier, Emmanuel; Piqueras, Joachim; Jensch, Peter F.

    1995-05-01

    The goal of the DICOM standard is to define a standard network interface and data model for imaging devices from various vendors. It shall facilitate the development and integration of information systems and picture archiving and communication systems (PACS) in a networked environment. Current activities in Oldenburg, Germany include projects to establish cooperative work applications for radiological purposes, comprising (joined) text, data, signal and image communications, based on narrowband ISDN and ATM communication for regional and Pan European applications. In such a growing and constantly changing environment it is vital to have a solid and implementable plan to bring standards into operation. A communication standard alone cannot ensure interoperability between different vendor implementations. Even DICOM does not specify implementation-specific requirements nor does it specify a testing procedure to assess an implementation's conformance to the standard. The conformance statements defined in the DICOM standard only allow a user to determine which optional components are supported by the implementation. The goal of our work is to build a conformance test suite for DICOM. Conformance testing can help to simplify and solve problems with multivendor systems. It will check a vendor's implementation against the DICOM standard and report the subset of functionality found. The test suite will be built with respect to the ISO 9646 standard (OSI Conformance Testing Methodology and Framework), which is a standard devoted to the subject of conformance testing of implementations of Open Systems Interconnection (OSI) standards. For our heterogeneous communication environments we must also consider ISO 9000 - 9004 (quality management and quality assurance) to give users confidence in evolving applications.

  15. Planning Model of Physics Learning In Senior High School To Develop Problem Solving Creativity Based On National Standard Of Education

    NASA Astrophysics Data System (ADS)

    Putra, A.; Masril, M.; Yurnetti, Y.

    2018-04-01

    One of the causes of students' low achievement in high school physics is that the learning process has not developed their creativity in problem solving. This is reflected in teachers' learning plans, which are not in accordance with the National Standard of Education. This study aims to produce a reconstructed model of physics learning that fulfils the competency standards, content standards, and assessment standards of the applicable curriculum. The development process follows: needs analysis, product design, product development, implementation, and product evaluation. The research process involved two peer reviewers, four expert judges, and two study groups of high school students in Padang. The qualitative and quantitative data were collected through documentation, observation, questionnaires, and tests. Up to the product development stage, the research has produced a physics learning plan model that meets content and construct validity in terms of fulfilling the Basic Competences, Content Standards, Process Standards, and Assessment Standards.

  16. Instructional Design Implications about Comprehension of Listening to Music before and during Reading

    ERIC Educational Resources Information Center

    Hinrichs, Amy F.

    2013-01-01

    Low reading levels and a lack of comprehension are current problems in high school classrooms, confirmed by low standardized test scores and by employer feedback, as comprehension problems move into the workplace with students who lack the necessary reading skills for the job. Midwestern high school science club students served as participants in…

  17. Predictive Toxicology and In Vitro to In Vivo Extrapolation (AsiaTox2015)

    EPA Science Inventory

    A significant challenge in toxicology is the “too many chemicals” problem. Humans and environmental species are exposed to as many as tens of thousands of chemicals, few of which have been thoroughly tested using standard in vivo test methods. This talk will discuss several appro...

  18. Effects of Blended Instructional Models on Math Performance

    ERIC Educational Resources Information Center

    Bottge, Brian A.; Ma, Xin; Gassaway, Linda; Toland, Michael D.; Butler, Mark; Cho, Sun-Joo

    2014-01-01

    A pretest-posttest cluster-randomized trial involving 31 middle schools and 335 students with disabilities tested the effects of combining explicit and anchored instruction on fraction computation and problem solving. Results of standardized and researcher-developed tests showed that students who were taught with the blended units outscored…

  19. New ASTM Standards for Nondestructive Testing of Aerospace Composites

    NASA Technical Reports Server (NTRS)

    Waller, Jess M.; Saulsberry, Regor L.

    2010-01-01

    Problem: lack of consensus standards containing procedural detail for NDE of polymer matrix composite materials: (I) flat panel composites; (II) composite components with more complex geometries, such as pressure vessels, including composite overwrapped pressure vessels (COPVs) and composite pressure vessels (CPVs); (III) sandwich core constructions. Metal and brittle matrix composites are a possible subject of future effort.

  20. The EMIR experience in the use of software control simulators to speed up the time to telescope

    NASA Astrophysics Data System (ADS)

    Lopez Ramos, Pablo; López-Ruiz, J. C.; Moreno Arce, Heidy; Rosich, Josefina; Perez Menor, José Maria

    2012-09-01

    One of the main problems facing teams developing instrument control systems is the need to exercise mechanisms that are not available until well into the integration phase. The need to work with real hardware creates additional problems: among others, certain faults cannot be tested because of the risk of hardware damage, taking the system to its limits may shorten its operational lifespan, and the full system may not be available during some periods due to maintenance and/or testing of individual components. These problems can be addressed with the use of simulators and by applying software/hardware standards. Since information on the construction and performance of electro-mechanical systems is available at relatively early stages of the project, simulators are developed in advance (before the mechanism exists) or, if conventions and standards have been correctly followed, a previously developed simulator may be reused. This article describes our experience in building software simulators and the main advantages we have identified: the control software can be developed even in the absence of real hardware, critical tests can be prepared on the simulated systems, system behavior can be tested in hardware-failure situations that would put the real system at risk, and in-house integration of the entire instrument is sped up.

  1. GCS programmer's manual

    NASA Technical Reports Server (NTRS)

    Lowman, Douglas S.; Withers, B. Edward; Shagnea, Anita M.; Dent, Leslie A.; Hayhurst, Kelly J.

    1990-01-01

    A variety of instructions to be used in the development of implementations of software for the Guidance and Control Software (GCS) project is described. This document fulfills the Radio Technical Commission for Aeronautics RTCA/DO-178A guidelines, 'Software Considerations in Airborne Systems and Equipment Certification' requirements for document No. 4, which specifies the information necessary for understanding and programming the host computer, and document No. 12, which specifies the software design and implementation standards that are applicable to the software development and testing process. Information on the following subjects is contained: activity recording, communication protocol, coding standards, change management, error handling, design standards, problem reporting, module testing logs, documentation formats, accuracy requirements, and programmer responsibilities.

  2. Bias in estimating accuracy of a binary screening test with differential disease verification

    PubMed Central

    Brinton, John T.; Ringham, Brandy M.; Glueck, Deborah H.

    2011-01-01

    Sensitivity, specificity, and positive and negative predictive value are typically used to quantify the accuracy of a binary screening test. In some studies it may not be ethical or feasible to obtain definitive disease ascertainment for all subjects using a gold standard test. When a gold standard test cannot be used, an imperfect reference test that is less than 100% sensitive and specific may be used instead. In breast cancer screening, for example, follow-up for cancer diagnosis is used as an imperfect reference test for women for whom it is not possible to obtain gold standard results. This incomplete ascertainment of true disease, or differential disease verification, can result in biased estimates of accuracy. In this paper, we derive the apparent accuracy values for studies subject to differential verification. We determine how the bias is affected by the accuracy of the imperfect reference test, the percentage of subjects who receive the imperfect reference test rather than the gold standard, the prevalence of the disease, and the correlation between the results of the screening test and the imperfect reference test. It is shown that designs with differential disease verification can yield biased estimates of accuracy. Estimates of sensitivity in cancer screening trials may be substantially biased. However, careful design decisions, including selection of the imperfect reference test, can help to minimize bias. A hypothetical breast cancer screening study is used to illustrate the problem. PMID:21495059
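
    The bias mechanism can be made concrete with a small simulation. In the sketch below (all parameter values are invented), screen-positives are verified with a gold standard while screen-negatives receive only an imperfect reference test that misses some disease; the "apparent" sensitivity computed from the resulting labels is inflated relative to the true value. The direction and size of the bias depend on the reference test's accuracy and on prevalence.

        # Monte Carlo illustration of differential disease verification bias.
        import numpy as np

        rng = np.random.default_rng(11)
        N, prevalence = 200_000, 0.05
        sens, spec = 0.80, 0.90             # true accuracy of the screening test
        sens_ref, spec_ref = 0.70, 0.999    # imperfect reference for screen-negatives

        disease = rng.random(N) < prevalence
        screen = np.where(disease, rng.random(N) < sens, rng.random(N) < 1 - spec)
        # Screen-positives get the gold standard; screen-negatives the imperfect reference.
        label = np.where(screen, disease,
                         np.where(disease, rng.random(N) < sens_ref,
                                            rng.random(N) < 1 - spec_ref))

        true_sens = (screen & disease).sum() / disease.sum()
        apparent_sens = (screen & label).sum() / label.sum()
        print(f"true sensitivity     : {true_sens:.3f}")
        print(f"apparent sensitivity : {apparent_sens:.3f}")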

  3. The Use of Quality Control and Data Mining Techniques for Monitoring Scaled Scores: An Overview. Research Report. ETS RR-12-20

    ERIC Educational Resources Information Center

    von Davier, Alina A.

    2012-01-01

    Maintaining comparability of test scores is a major challenge faced by testing programs that have almost continuous administrations. Among the potential problems are scale drift and rapid accumulation of errors. Many standard quality control techniques for testing programs, which can effectively detect and address scale drift for small numbers of…

  4. Numerical Optimization Using Computer Experiments

    NASA Technical Reports Server (NTRS)

    Trosset, Michael W.; Torczon, Virginia

    1997-01-01

    Engineering design optimization often gives rise to problems in which expensive objective functions are minimized by derivative-free methods. We propose a method for solving such problems that synthesizes ideas from the numerical optimization and computer experiment literatures. Our approach relies on kriging known function values to construct a sequence of surrogate models of the objective function that are used to guide a grid search for a minimizer. Results from numerical experiments on a standard test problem are presented.
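
    A minimal sketch of a surrogate-guided search in one dimension, under assumptions that are not the authors' method: a zero-mean Gaussian process ("kriging") surrogate with a squared-exponential kernel is fit to the evaluated points, and the next grid point is chosen by a simple lower-confidence-bound rule. The objective, kernel, and parameters are all illustrative stand-ins for an expensive simulation.

        # Kriging (GP) surrogate guiding a grid search for a minimizer.
        import numpy as np

        def expensive_objective(x):                     # stand-in for a costly simulation
            return np.sin(3.0 * x) + 0.3 * x**2

        def rbf(a, b, length=0.5):
            return np.exp(-0.5 * ((a[:, None] - b[None, :]) / length) ** 2)

        def surrogate(x_train, y_train, x_grid, nugget=1e-8):
            K = rbf(x_train, x_train) + nugget * np.eye(len(x_train))
            k_star = rbf(x_grid, x_train)
            mean = k_star @ np.linalg.solve(K, y_train)
            var = 1.0 - np.einsum("ij,ij->i", k_star, np.linalg.solve(K, k_star.T).T)
            return mean, np.sqrt(np.clip(var, 0.0, None))

        x_grid = np.linspace(-3.0, 3.0, 601)
        x_train = np.array([-2.5, 0.0, 2.5])            # initial design
        y_train = expensive_objective(x_train)

        for _ in range(8):                              # sequential surrogate-guided search
            mean, std = surrogate(x_train, y_train, x_grid)
            x_next = x_grid[np.argmin(mean - 2.0 * std)]   # lower confidence bound
            if np.any(np.isclose(x_next, x_train)):       # nothing new to sample
                break
            x_train = np.append(x_train, x_next)
            y_train = np.append(y_train, expensive_objective(x_next))

        best = x_train[np.argmin(y_train)]
        print(f"best sampled point x = {best:.2f}, f(x) = {y_train.min():.3f}")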

  5. Impact of Early Intervention on Psychopathology, Crime, and Well-Being at Age 25

    PubMed Central

    2015-01-01

    Objective This randomized controlled trial tested the efficacy of early intervention to prevent adult psychopathology and improve well-being in early-starting conduct-problem children. Method Kindergarteners (N=9,594) in three cohorts (1991–1993) at 55 schools in four communities were screened for conduct problems, yielding 979 early starters. A total of 891 (91%) consented (51% African American, 47% European American; 69% boys). Children were randomly assigned by school cluster to a 10-year intervention or control. The intervention goal was to develop social competencies in children that would carry them throughout life, through social skills training, parent behavior-management training with home visiting, peer coaching, reading tutoring, and classroom social-emotional curricula. Manualization and supervision ensured program fidelity. Ninety-eight percent participated during grade 1, and 80% continued through grade 10. At age 25, arrest records were reviewed (N=817,92%), and condition-blinded adults psychiatrically interviewed participants (N=702; 81% of living participants) and a peer (N=535) knowledgeable about the participant. Results Intent-to-treat logistic regression analyses indicated that 69% of participants in the control arm displayed at least one externalizing, internalizing, or substance abuse psychiatric problem (based on self- or peer interview) at age 25, in contrast with 59% of those assigned to intervention (odds ratio=0.59, CI=0.43–0.81; number needed to treat=8). This pattern also held for self-interviews, peer interviews, scores using an “and” rule for self- and peer reports, and separate tests for externalizing problems, internalizing problems, and substance abuse problems, as well as for each of three cohorts, four sites, male participants, female participants, African Americans, European Americans, moderate-risk, and high-risk subgroups. Intervention participants also received lower severity-weighted violent (standardized estimate=-0.37) and drug (standardized estimate=-0.43) crime conviction scores, lower risky sexual behavior scores (standardized estimate=-0.24), and higher well-being scores (standardized estimate=0.19). Conclusions This study provides evidence for the efficacy of early intervention in preventing adult psychopathology among high-risk early-starting conduct-problem children. PMID:25219348

  6. Make Time for Breakfast

    MedlinePlus

    ... reading and standardized tests. They also have fewer behavior problems and are less likely to be tardy. Eating breakfast also can help children maintain a healthy weight. Unfortunately, studies show many children don’t eat breakfast every ...

  7. Gambling and problem gambling among young adolescents in Great Britain.

    PubMed

    Forrest, David; McHale, Ian G

    2012-12-01

    International evidence suggests that problem gambling tends to be 2-4 times higher among adolescents as among adults and this proves to be true of Great Britain according to the latest adolescent prevalence survey. 8,958 British children (11-15) were surveyed in 201 schools during late 2008 and 2009. The questionnaire included a standard screen, DSM-IV-MR-J, to test for problem gambling. Our regression models explore influences of demographic, home and school characteristics on probabilities (both unconditional and conditional on being a gambler) of a child testing positive for problem gambling. More than 20% of children participated in gambling and, of these, nearly 8% tested positive. Age-group prevalence of problem gambling was 1.9%, compared with 0.6-0.9% in the most recent official adult surveys. Boys were much more likely than girls to gamble and to exhibit symptoms of problem gambling if they did. Generally, home characteristics, particularly parental attitude and example, dominated school characteristics in accounting for risks. Unanticipated findings included significantly elevated probabilities of problem gambling among Asian children and among children who live in a home without siblings. Child income was also a potent predictor of gambling and problem gambling.

  8. DHS small-scale safety and thermal testing of improvised explosives-comparison of testing performance

    NASA Astrophysics Data System (ADS)

    Reynolds, J. G.; Sandstrom, M. M.; Brown, G. W.; Warner, K. F.; Phillips, J. J.; Shelley, T. J.; Reyes, J. A.; Hsu, P. C.

    2014-05-01

    One of the first steps in establishing safe handling procedures for explosives is small-scale safety and thermal (SSST) testing. To better understand the response of improvised materials or homemade explosives (HMEs) to SSST testing, 16 HME materials were compared to three standard military explosives in a proficiency-type round-robin study, sponsored by DHS, among five laboratories (two DoD and three DOE). The testing matrix was designed to address problems encountered with improvised materials: powder mixtures, liquid suspensions, partially wetted solids, immiscible liquids, and reactive materials. More than 30 issues have been identified indicating that standard test methods may require modification when applied to HMEs to derive the accurate sensitivity assessments needed for developing safe handling and storage practices. This paper presents a generalized comparison of the results among the testing participants, a comparison of friction results from BAM (German Bundesanstalt für Materialprüfung) and ABL (Allegany Ballistics Laboratory) testing equipment, and an overview of the statistical results for the RDX (1,3,5-trinitroperhydro-1,3,5-triazine) standard tested throughout the proficiency test.

  9. Detailed requirements document for the problem reporting data system (PDS). [space shuttle and batch processing

    NASA Technical Reports Server (NTRS)

    West, R. S.

    1975-01-01

    The system is described as a computer-based system designed to track the status of problems and corrective actions pertinent to space shuttle hardware. The input, processing, output, and performance requirements of the system are presented along with standard display formats and examples. Operational requirements, hardware, requirements, and test requirements are also included.

  10. A mechanical system for tensile testing of supported films at the nanoscale.

    PubMed

    Pantano, Maria F; Speranza, G; Galiotis, Costas; Pugno, Nicola M

    2018-06-27

    Standard tensile tests of materials are usually performed on freestanding specimens. However, such requirement is difficult to implement when the materials of interest are of nanoscopic dimensions due to problems related to their handling and manipulation. In the present paper, a new device is presented for tensile testing of thin nanomaterials, which allows tests to be carried out on specimens initially deposited onto a macroscopic pre-notched substrate. On loading, however, no substrate effects are introduced, allowing the films to be freely stretched. The results obtained from a variety of thin metal or polymeric films are very promising for the further development of this technique as a standard method for nanomaterial mechanical testing. © 2018 IOP Publishing Ltd.

  11. Pediatric Baseline Patch Test Series: Initial Findings of the Pediatric Contact Dermatitis Workgroup.

    PubMed

    Yu, JiaDe; Atwater, Amber Reck; Brod, Bruce; Chen, Jennifer K; Chisolm, Sarah S; Cohen, David E; de la Feld, Salma; Gaspari, Anthony A; Martin, Kari Lyn; Montanez-Wiscovich, Marjorie; Sheehan, Michael; Silverberg, Nanette; Lugo-Somolinos, Aida; Thakur, Binod K; Watsky, Kalman; Jacob, Sharon E

    2018-06-21

    Allergic contact dermatitis is a challenging diagnostic problem in children. Although epicutaneous patch testing is the diagnostic standard for confirmation of contact sensitization, it is less used in children by dermatologists treating children, pediatric dermatologists, and pediatricians, when compared with adult practitioners. The aim of the study was to create and evaluate standardization of a pediatric patch test series for children older than 6 years. We surveyed dermatologists and allergists conducting epicutaneous patch testing in children attending the 2017 American Contact Dermatitis Society meeting held in Washington, DC. This was followed by discussion of collected data and consensus review by a pediatric contact dermatitis working group at the conference. A baseline pediatric patch test panel was established through working group consensus.

  12. International Round-Robin Testing of Bulk Thermoelectrics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wang, Hsin; Porter, Wallace D; Bottner, Harold

    2011-11-01

    Two international round-robin studies were conducted on transport property measurements of bulk thermoelectric materials. The studies uncovered current measurement problems. To obtain the ZT of a material, four separate transport properties must be measured. The round-robin study showed that, among the four properties, the Seebeck coefficient is the one that can be measured consistently. Electrical resistivity shows ±4-9% scatter, and thermal diffusivity shows a similar ±5-10% scatter. The reliability of these three properties can be improved by standardizing test procedures and enforcing system calibrations. The worst problem was found in specific heat measurements using DSC. The probability of measurement error is great because three separate runs must be taken to determine Cp, and baseline shift is always an issue for commercial DSC instruments. It is suggested that the Dulong-Petit limit always be used as a guideline for Cp. Procedures have been developed to eliminate operator and system errors. The IEA-AMT annex is developing standard procedures for transport property testing.
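
    To make the error propagation concrete: ZT combines the Seebeck coefficient S, electrical resistivity, and thermal conductivity, and the thermal conductivity is itself the product of thermal diffusivity, density, and specific heat, so scatter in diffusivity and Cp carries straight through to ZT. The sketch below evaluates ZT = S^2*T/(rho*kappa) with a Dulong-Petit estimate of Cp; all numerical values are hypothetical and are not data from the round-robin study.

        # Illustrative ZT calculation; every input value is hypothetical.
        R = 8.314  # gas constant, J/(mol*K)

        def dulong_petit_cp(molar_mass_kg, atoms_per_formula_unit):
            """Specific heat in J/(kg*K) from the Dulong-Petit limit (3R per mole of atoms)."""
            return 3.0 * R * atoms_per_formula_unit / molar_mass_kg

        def figure_of_merit(seebeck, resistivity, diffusivity, density, cp, temperature):
            """ZT = S^2 * T / (rho * kappa), with kappa = diffusivity * density * cp."""
            kappa = diffusivity * density * cp            # thermal conductivity, W/(m*K)
            return seebeck ** 2 * temperature / (resistivity * kappa)

        cp = dulong_petit_cp(molar_mass_kg=0.8008, atoms_per_formula_unit=5)  # ~156 J/(kg*K)
        zt = figure_of_merit(seebeck=200e-6,      # V/K
                             resistivity=1.0e-5,  # ohm*m
                             diffusivity=1.2e-6,  # m^2/s
                             density=7700.0,      # kg/m^3
                             cp=cp,
                             temperature=300.0)   # K
        print(f"ZT ~ {zt:.2f}")                   # roughly 0.8 for these hypothetical inputs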

  13. Using hybrid implicit Monte Carlo diffusion to simulate gray radiation hydrodynamics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cleveland, Mathew A., E-mail: cleveland7@llnl.gov; Gentile, Nick

    This work describes how to couple a hybrid Implicit Monte Carlo Diffusion (HIMCD) method with a Lagrangian hydrodynamics code to evaluate the coupled radiation hydrodynamics equations. This HIMCD method dynamically applies Implicit Monte Carlo Diffusion (IMD) [1] to regions of a problem that are opaque and diffusive while applying standard Implicit Monte Carlo (IMC) [2] to regions where the diffusion approximation is invalid. We show that this method significantly improves the computational efficiency as compared to a standard IMC/Hydrodynamics solver, when optically thick diffusive material is present, while maintaining accuracy. Two test cases are used to demonstrate the accuracy and performance of HIMCD as compared to IMC and IMD. The first is the Lowrie semi-analytic diffusive shock [3]. The second is a simple test case where the source radiation streams through optically thin material and heats a thick diffusive region of material causing it to rapidly expand. We found that HIMCD proves to be accurate, robust, and computationally efficient for these test problems.
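
    As a rough illustration of the hybrid idea (not the actual HIMCD selection logic, which the abstract does not specify), a solver of this kind needs a per-zone test for "opaque and diffusive"; the sketch below uses the optical depth across a zone as that test, with a hypothetical threshold.

        # Minimal sketch: pick a radiation treatment per zone. The criterion
        # (mean free paths across the zone) and the threshold are assumptions.
        def choose_treatment(opacity, zone_width, threshold=10.0):
            """Return 'IMD' for opaque, diffusive zones and 'IMC' otherwise."""
            optical_depth = opacity * zone_width   # mean free paths across the zone
            return "IMD" if optical_depth > threshold else "IMC"

        zones = [
            {"name": "thin streaming region", "opacity": 0.01,  "width": 1.0},
            {"name": "thick diffusive slab",  "opacity": 500.0, "width": 0.1},
        ]
        for z in zones:
            print(z["name"], "->", choose_treatment(z["opacity"], z["width"]))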

  14. Standards on the permanence of recording materials

    NASA Astrophysics Data System (ADS)

    Adelstein, Peter Z.

    1996-02-01

    The permanence of recording materials is dependent upon many factors, and these differ for photographic materials, magnetic tape and optical disks. Photographic permanence is affected by the (1) stability of the material, (2) the photographic processing and (3) the storage conditions. American National Standards on the material and the processing have been published for different types of film and standard test methods have been established for color film. The third feature of photographic permanence is the storage requirements and these have been established for photographic film, prints and plates. Standardization on the permanence of electronic recording materials is more complicated. As with photographic materials, stability is dependent upon (1) the material itself and (2) the storage environment. In addition, retention of the necessary (3) hardware and (4) software is also a prerequisite. American National Standards activity in these areas has been underway for the past six years. A test method for the material which determines the life expectancy of CD-ROMs has been standardized. The problems of determining the expected life of magnetic tape have been more formidable but the critical physical properties have been determined. A specification for the storage environment of magnetic tape has been finalized and one on the storage of optical disks is being worked on. Critical but unsolved problems are the obsolescence of both the hardware and the software necessary to read digital images.

  15. Standards on the permanence of recording materials

    NASA Astrophysics Data System (ADS)

    Adelstein, Peter Z.

    1996-01-01

    The permanence of recording materials is dependent upon many factors, and these differ for photographic materials, magnetic tape and optical disks. Photographic permanence is affected by the (1) stability of the material, (2) the photographic processing, and (3) the storage conditions. American National Standards on the material and the processing have been published for different types of film and standard test methods have been established for color film. The third feature of photographic permanence is the storage requirements and these have been established for photographic film, prints, and plates. Standardization on the permanence of electronic recording materials is more complicated. As with photographic materials, stability is dependent upon (1) the material itself and (2) the storage environment. In addition, retention of the necessary (3) hardware and (4) software is also a prerequisite. American National Standards activity in these areas has been underway for the past six years. A test method for the material which determines the life expectancy of CD-ROMs has been standardized. The problems of determining the expected life of magnetic tape have been more formidable but the critical physical properties have been determined. A specification for the storage environment of magnetic tapes has been finalized and one on the storage of optical disks is being worked on. Critical but unsolved problems are the obsolescence of both the hardware and the software necessary to read digital images.

  16. PBL in the Era of Reform Standards: Challenges and Benefits Perceived by Teachers in One Elementary School

    ERIC Educational Resources Information Center

    Nariman, Nahid; Chrispeels, Janet

    2016-01-01

    We explore teachers' efforts to implement problem-based learning (PBL) in an elementary school serving predominantly English learners. Teachers had an opportunity to implement the Next Generation Science Standards (NGSS) using PBL in a summer school setting with no test-pressures. To understand the challenges and benefits of PBL implementation, a…

  17. The Impact of Problem-Based Learning with Computer Simulation on Middle Level Educators' Instructional Practices and Understanding of the Nature of Middle Level Learners

    ERIC Educational Resources Information Center

    Huelskamp, Lisa M.

    2009-01-01

    The need for effective teachers is growing while national and state standards are putting ever-increasing demands on teachers and raising expectations for student achievement. Low science and mathematics standardized test scores, particularly in the middle grades, reflect unprepared adolescents, perhaps because of ineffective teaching strategies…

  18. HTS Data and In Silico Models for High-Throughput Risk Assessment (FutureTox II)

    EPA Science Inventory

    A significant challenge in toxicology is the “too many chemicals” problem. Humans and environmental species are exposed to as many as tens of thousands of chemicals, few of which have been thoroughly tested using standard in vivo test methods. This talk will discuss several appro...

  19. Validation of the Hwalek-Sengstock Elder Abuse Screening Test.

    ERIC Educational Resources Information Center

    Neale, Anne Victoria; And Others

    Elder abuse is recognized as an under-detected and under-reported social problem. Difficulties in detecting elder abuse are compounded by the lack of a standardized, psychometrically valid instrument for case finding. The development of the Hwalek-Sengstock Elder Abuse Screening Test (H-S/EAST) followed a larger effort to identify indicators and…

  20. Testing Cases under Title VII.

    ERIC Educational Resources Information Center

    Rothschild, Michael; Werden, Gregory J.

    This paper discusses Congressional and judicial attempts to deal with the problem of employment practices which lead to discriminatory outcomes but which may not be discriminatory in intent. The use of paper and pencil tests as standards for hiring and promotion is focused on as an example of this type of employment practice. An historical account…

  1. Bell-Curve Genetic Algorithm for Mixed Continuous and Discrete Optimization Problems

    NASA Technical Reports Server (NTRS)

    Kincaid, Rex K.; Griffith, Michelle; Sykes, Ruth; Sobieszczanski-Sobieski, Jaroslaw

    2002-01-01

    In this manuscript we have examined an extension of BCB that encompasses a mix of continuous and quasi-discrete, as well as truly discrete, applications. We began by testing two refinements to the discrete version of BCB. The testing of midpoint versus fitness (Tables 1 and 2) proved inconclusive. The testing of discrete normal tails versus standard mutation was conclusive and demonstrated that the discrete normal tails are better. Next, we implemented these refinements in a combined continuous and discrete BCB and compared the performance of two discrete distance measures on the hub problem. Here we found that when "order does matter" it pays to take it into account.
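
    The abstract does not define the "discrete normal tails" operator, so the following sketch only illustrates one plausible reading: mutate an integer-coded design variable with a rounded Gaussian offset, so that small moves dominate but occasional large tail jumps remain possible, in contrast to a standard uniform replacement mutation. Names and parameters are illustrative assumptions, not the BCB operator as implemented by the authors.

        import random

        def normal_tail_mutation(index, n_levels, sigma=1.5):
            """Perturb a discrete index by a rounded Gaussian offset (clipped to the valid range)."""
            offset = int(round(random.gauss(0.0, sigma)))
            return min(max(index + offset, 0), n_levels - 1)

        def uniform_mutation(index, n_levels):
            """Standard mutation: replace the index with a uniformly random level."""
            return random.randrange(n_levels)

        random.seed(0)
        print([normal_tail_mutation(5, 11) for _ in range(10)])  # mostly near 5
        print([uniform_mutation(5, 11) for _ in range(10)])      # spread over 0-10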

  2. Assessment of capillary suction time (CST) test methodologies.

    PubMed

    Sawalha, O; Scholz, M

    2007-12-01

    The capillary suction time (CST) test is a commonly used method to measure the filterability of slurry and sludge, and the ease of removing moisture from them, in numerous environmental and industrial applications. This study assessed several novel alterations of both the test methodology and the current standard CST apparatus. Twelve different papers, including the standard Whatman No. 17 chromatographic paper, were tested. The tests were run using four different types of sludge, including a synthetic sludge that was specifically developed for benchmarking purposes. The standard apparatus was altered by the introduction of a novel rectangular funnel instead of the standard circular one. A stirrer was also introduced to address the problem of test inconsistency (e.g., high CST variability), particularly for heavy types of sludge. Results showed that several alternative papers, which are cheaper than the standard paper, can be used to estimate CST values accurately, and that test repeatability can be improved in many cases and for different types of sludge. The introduction of the rectangular funnel demonstrated an obvious enhancement of test repeatability. The use of a stirrer to avoid sedimentation of heavy sludge did not have a statistically significant impact on the CST values or the corresponding data variability. The application of synthetic sludge can support the testing of experimental methodologies and should be used for subsequent benchmarking purposes.

  3. Solving Capacitated Closed Vehicle Routing Problem with Time Windows (CCVRPTW) using BRKGA with local search

    NASA Astrophysics Data System (ADS)

    Prasetyo, H.; Alfatsani, M. A.; Fauza, G.

    2018-05-01

    The main issue in the vehicle routing problem (VRP) is finding the shortest route for product distribution from the depot to outlets so as to minimize the total cost of distribution. The Capacitated Closed Vehicle Routing Problem with Time Windows (CCVRPTW) is a variant of the VRP that accommodates vehicle capacity and the distribution period. Since CCVRPTW is NP-hard, it requires an efficient and effective algorithm to solve. This study aimed to develop a Biased Random Key Genetic Algorithm (BRKGA) combined with local search to solve CCVRPTW. The algorithm was coded in MATLAB. Using numerical tests, optimum algorithm parameters were set, and the algorithm was compared with a heuristic method and standard BRKGA on a case study of soft drink distribution. Results showed that BRKGA combined with local search produced a lower total distribution cost than the heuristic method. Moreover, the developed algorithm successfully improved on the performance of standard BRKGA.
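
    For readers unfamiliar with BRKGA, each solution is a vector of random keys in [0, 1) that a problem-specific decoder turns into routes; the population of keys evolves with elitism and a crossover biased toward the elite parent, and here a local search refines the decoded routes. The sketch below shows one common decoding idea in Python (the study itself used MATLAB): sort outlets by key, then cut the visiting order into trips whenever vehicle capacity would be exceeded. The demands and capacity are hypothetical, and this is not the authors' decoder.

        def decode(keys, demands, capacity):
            """Turn a random-key vector into capacity-feasible routes."""
            order = sorted(range(len(keys)), key=lambda i: keys[i])  # visiting order
            routes, current, load = [], [], 0.0
            for outlet in order:
                if load + demands[outlet] > capacity:                # start a new trip
                    routes.append(current)
                    current, load = [], 0.0
                current.append(outlet)
                load += demands[outlet]
            if current:
                routes.append(current)
            return routes

        keys = [0.71, 0.05, 0.42, 0.93, 0.27, 0.60]
        demands = [4, 3, 5, 2, 6, 4]
        print(decode(keys, demands, capacity=10))  # e.g. [[1, 4], [2, 5], [0, 3]]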

  4. What No Child Left Behind Leaves Behind: The Roles of IQ and Self-Control in Predicting Standardized Achievement Test Scores and Report Card Grades

    PubMed Central

    Duckworth, Angela L.; Quinn, Patrick D.; Tsukayama, Eli

    2013-01-01

    The increasing prominence of standardized testing to assess student learning motivated the current investigation. We propose that standardized achievement test scores assess competencies determined more by intelligence than by self-control, whereas report card grades assess competencies determined more by self-control than by intelligence. In particular, we suggest that intelligence helps students learn and solve problems independent of formal instruction, whereas self-control helps students study, complete homework, and behave positively in the classroom. Two longitudinal, prospective studies of middle school students support predictions from this model. In both samples, IQ predicted changes in standardized achievement test scores over time better than did self-control, whereas self-control predicted changes in report card grades over time better than did IQ. As expected, the effect of self-control on changes in report card grades was mediated in Study 2 by teacher ratings of homework completion and classroom conduct. In a third study, ratings of middle school teachers about the content and purpose of standardized achievement tests and report card grades were consistent with the proposed model. Implications for pedagogy and public policy are discussed. PMID:24072936

  5. Packing Boxes into Multiple Containers Using Genetic Algorithm

    NASA Astrophysics Data System (ADS)

    Menghani, Deepak; Guha, Anirban

    2016-07-01

    Container loading problems have been studied extensively in the literature, and various analytical, heuristic, and metaheuristic methods have been proposed. This paper presents two variants of a genetic algorithm framework for the three-dimensional container loading problem of optimally loading boxes into multiple containers subject to constraints. The algorithms are designed so that various constraints found in real-life problems are easy to incorporate. They are tested on standard test cases from the literature and are found to compare well with benchmark algorithms in terms of container utilization. This, along with the ability to easily incorporate a wide range of practical constraints, makes them attractive for implementation in real-life scenarios.

  6. The inverse problem of estimating the gravitational time dilation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gusev, A. V., E-mail: avg@sai.msu.ru; Litvinov, D. A.; Rudenko, V. N.

    2016-11-15

    Precise testing of the gravitational time dilation effect suggests comparing clocks at points with different gravitational potentials. Such a configuration arises when radio frequency standards are installed at orbital and ground stations. The ground-based standard is accessible directly, while the spaceborne one is accessible only via electromagnetic signal exchange. Reconstructing the current frequency of the spaceborne standard is an ill-posed inverse problem whose solution depends significantly on the characteristics of the stochastic electromagnetic background. The solution for Gaussian noise is known, but the nature of the standards themselves is associated with nonstationary fluctuations from a wide class of distributions. A solution is proposed for a background of flicker fluctuations with a spectrum (1/f)^γ, where 1 < γ < 3, and stationary increments. The results include formulas for the error in reconstructing the frequency of the spaceborne standard and numerical estimates for the accuracy of measuring the relativistic redshift effect.
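
    To make the noise model concrete, a background with power spectral density proportional to (1/f)^γ can be simulated by shaping white Gaussian noise in the frequency domain. The sketch below is a generic flicker-noise generator for illustration only; it is not the reconstruction method proposed in the paper.

        import numpy as np

        def flicker_noise(n, gamma=2.0, seed=0):
            """Sample path with power spectral density ~ f**(-gamma)."""
            rng = np.random.default_rng(seed)
            spectrum = np.fft.rfft(rng.standard_normal(n))
            freqs = np.fft.rfftfreq(n, d=1.0)
            freqs[0] = freqs[1]                      # avoid division by zero at DC
            spectrum *= freqs ** (-gamma / 2.0)      # amplitude shaping ~ f**(-gamma/2)
            return np.fft.irfft(spectrum, n)

        x = flicker_noise(4096, gamma=2.0)           # gamma = 2 lies in the 1 < gamma < 3 range
        print(x[:5])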

  7. Design and Implementation of USAF Avionics Integration Support Facilities

    DTIC Science & Technology

    1981-12-01

    ...problems, and the integration and testing of the ECS. The purpose of this investigation is to establish a standard software development system... Corrections to equipment problems. Compensation for equipment degradation. New developments. This approach is intended to centralize essential...

  8. Strategic Placement of Treatments (SPOTS): Maximizing the Effectiveness of Fuel and Vegetation Treatments on Problem Fire Behavior and Effects

    Treesearch

    Diane M. Gercke; Susan A. Stewart

    2006-01-01

    In 2005, eight U.S. Forest Service and Bureau of Land Management interdisciplinary teams participated in a test of strategic placement of treatments (SPOTS) techniques to maximize the effectiveness of fuel treatments in reducing problem fire behavior, adverse fire effects, and suppression costs. This interagency approach to standardizing the assessment of risks and...

  9. Applying a Genetic Algorithm to Reconfigurable Hardware

    NASA Technical Reports Server (NTRS)

    Wells, B. Earl; Weir, John; Trevino, Luis; Patrick, Clint; Steincamp, Jim

    2004-01-01

    This paper investigates the feasibility of applying genetic algorithms to solve optimization problems that are implemented entirely in reconfigurable hardware. The paper highlights the performance/design space trade-offs that must be understood to effectively implement a standard genetic algorithm within a modern Field Programmable Gate Array (FPGA) reconfigurable hardware environment and presents a case study where this stochastic search technique is applied to standard test-case problems taken from the technical literature. In this research, the targeted FPGA-based platform and high-level design environment was the Starbridge Hypercomputing platform, which incorporates multiple Xilinx Virtex II FPGAs, and the Viva graphical hardware description language.

  10. Chemistry Notes

    ERIC Educational Resources Information Center

    School Science Review, 1972

    1972-01-01

    Short articles describing the construction of a self-testing device for learning ionic formulae, problems with "standard" experiments in crystallizing sulfur, preparative details for a cold-setting adhesive and vermillion dye, and data related to the industrial manufacture of sulphuric acid. (AL)

  11. In Vitro and Modeling Approaches to Risk Assessment from the U.S. Environmental Protection Agency ToxCast Program

    EPA Science Inventory

    A significant challenge in toxicology is the “too many chemicals” problem. Humans and environmental species are exposed to as many as tens of thousands of chemicals, only a small percentage of which have been tested thoroughly using standard in vivo test methods. This paper revie...

  12. Doing Well in School: Repertoires of Success at the End of Elementary School

    ERIC Educational Resources Information Center

    Link, Holly K.

    2016-01-01

    In spite of over a decade of U.S. school reform emphasizing test preparation and performance, students from minoritized backgrounds continue to underachieve on standardized testing. With an abundance of research on the achievement gap, we are now more than ever aware of this problem. But to avoid reproducing longstanding school inequities, testing…

  13. High-Rank Stakeholders' Perspectives on High-Stakes University Entrance Examinations Reform: Priorities and Problems

    ERIC Educational Resources Information Center

    Kiany, Gholam Reza; Shayestefar, Parvaneh; Samar, Reza Ghafar; Akbari, Ramin

    2013-01-01

    A steady stream of studies on high-stakes tests such as University Entrance Examinations (UEEs) suggests that high-stakes tests reforms serve as the leverage for promoting quality of learning, standards of teaching, and credible forms of accountability. However, such remediation is often not as effective as hoped and success is not necessarily…

  14. High-Stakes Testing and Student Achievement: Problems for the No Child Left Behind Act. Executive Summary

    ERIC Educational Resources Information Center

    Nichols, Sharon L.; Glass, Gene V.; Berliner, David C.

    2005-01-01

    Under the federal No Child Left Behind Act of 2001 (NCLB), standardized test scores are the indicator used to hold schools and school districts accountable for student achievement. Each state is responsible for constructing an accountability system, attaching consequences--or stakes--for student performance. The theory of action implied by this…

  15. High-Stakes Testing and Student Achievement: Problems for the No Child Left Behind Act

    ERIC Educational Resources Information Center

    Nichols, Sharon L.; Glass, Gene V.; Berliner, David C.

    2005-01-01

    Under the federal No Child Left Behind Act of 2001 (NCLB), standardized test scores are the indicator used to hold schools and school districts accountable for student achievement. Each state is responsible for constructing an accountability system, attaching consequences--or stakes--for student performance. The theory of action implied by this…

  16. Behaviour of 4- to 5-year-old nondisabled ELBW children: Outcomes following group-based physiotherapy intervention.

    PubMed

    Brown, L; Burns, Y R; Watter, P; Gray, P H; Gibbons, K S

    2018-03-01

    Extreme prematurity or extremely low birth weight (ELBW) can adversely affect behaviour. Nondisabled ELBW children are at risk of behavioural problems, which may become a particular concern after commencement of formal education. This study explored the frequency of behavioural and emotional problems amongst nondisabled ELBW children at 4 to 5 years of age and whether intervention had a positive influence on behaviour. The relationship between behaviour, gender, and other areas of performance at 5 years was explored. Fifty 4-year-old children (born <28 weeks gestation or birth weight <1,000 g) with minimal/mild motor impairment were randomly allocated to intervention (n = 24) or standard care (n = 26). The intervention consisted of six weekly group-based physiotherapy sessions and a home programme; standard care was best-practice advice. The Child Behavior Checklist (CBCL) for preschool children was completed at baseline and at 1-year post-baseline. Other measures at follow-up included Movement Assessment Battery for Children Second Edition, Beery Visual-Motor Integration Test 5th Edition, and Peabody Picture Vocabulary Test 4th Edition. The whole cohort improved on CBCL total problems score between baseline (mean 50.0, SD 11.1) and 1-year follow-up (mean 45.2, SD 10.3), p = .004. There were no significant differences between groups over time on CBCL internalizing, externalizing, or total problems scores. The intervention group showed a mean difference in total problems score of -3.8 (CI [1.5, 9.1]) between times, with standard care group values being -4.4 (CI [1.6, 7.1]). Males had higher total problems scores than females (p = .026), although they still performed within the "normal" range. CBCL scores did not correlate with other scores. The behaviour of nondisabled ELBW children was within the "normal" range at 4 to 5 years, and both intervention and standard care may have contributed to improved behavioural outcomes. Behaviour was not related to performance in other developmental domains. © 2017 John Wiley & Sons Ltd.

  17. Physical Characteristics of Laboratory Tested Concrete as a Substitution of Gravel on Normal Concrete

    NASA Astrophysics Data System (ADS)

    Butar-butar, Ronald; Suhairiani; Wijaya, Kinanti; Sebayang, Nono

    2018-03-01

    Concrete technology has high potential in the field of construction for structural and non-structural applications. The large amount of concrete used raises the problem of solid waste in the form of leftover tested concrete specimens in the laboratory. This waste is usually simply discarded and has no economic value. To address the problem, this experiment produced a new material using recycled aggregate, with the aims of determining the strength characteristics of used concrete as a gravel substitution material in normal concrete and of obtaining the substitution composition of gravel and used concrete that can achieve concrete strength according to the standard. Testing of the concrete characteristics is one of the requirements before preparing the concrete mixture. The tests used the SNI (Indonesian National Standard) method with used concrete : gravel ratios of 15:85%, 25:75%, 35:65%, 50:50%, and 75:25%. The physical tests showed that the mud content of the gravel and used-concrete mixture exceeded the SNI 03-4142-1996 limit by 0.03, at 1.03%, so the material needs watering or soaking before use. The water content results show that water content increases as the proportion of used concrete increases. The specific gravity for the 15:85% to 35:65% variations fulfilled the requirements of SNI 03-1969-1990, while the other variations had specific gravity values in the range of lightweight materials.

  18. A comparison of fitness-case sampling methods for genetic programming

    NASA Astrophysics Data System (ADS)

    Martínez, Yuliana; Naredo, Enrique; Trujillo, Leonardo; Legrand, Pierrick; López, Uriel

    2017-11-01

    Genetic programming (GP) is an evolutionary computation paradigm for automatic program induction. GP has produced impressive results but it still needs to overcome some practical limitations, particularly its high computational cost, overfitting and excessive code growth. Recently, many researchers have proposed fitness-case sampling methods to overcome some of these problems, with mixed results in several limited tests. This paper presents an extensive comparative study of four fitness-case sampling methods, namely: Interleaved Sampling, Random Interleaved Sampling, Lexicase Selection and Keep-Worst Interleaved Sampling. The algorithms are compared on 11 symbolic regression problems and 11 supervised classification problems, using 10 synthetic benchmarks and 12 real-world data-sets. They are evaluated based on test performance, overfitting and average program size, comparing them with a standard GP search. Comparisons are carried out using non-parametric multigroup tests and post hoc pairwise statistical tests. The experimental results suggest that fitness-case sampling methods are particularly useful for difficult real-world symbolic regression problems, improving performance, reducing overfitting and limiting code growth. On the other hand, it seems that fitness-case sampling cannot improve upon GP performance when considering supervised binary classification.
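
    Of the four sampling methods compared, Lexicase Selection is the simplest to state compactly: shuffle the fitness cases, then repeatedly keep only the candidates with the lowest error on the next case until a single survivor remains (ties are broken at random). The sketch below is a generic implementation of that idea, not the authors' code, and the error values are hypothetical.

        import random

        def lexicase_select(errors, rng=random):
            """errors[i][j] = error of candidate i on fitness case j; returns a parent index."""
            survivors = list(range(len(errors)))
            cases = list(range(len(errors[0])))
            rng.shuffle(cases)
            for case in cases:
                best = min(errors[i][case] for i in survivors)
                survivors = [i for i in survivors if errors[i][case] == best]
                if len(survivors) == 1:
                    break
            return rng.choice(survivors)

        random.seed(1)
        errors = [[0.0, 3.0, 1.0],   # candidate 0
                  [2.0, 0.0, 1.0],   # candidate 1
                  [1.0, 1.0, 0.0]]   # candidate 2
        print(lexicase_select(errors))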

  19. Preventing Early Child Maltreatment: Implications from a Longitudinal Study of Maternal Abuse History, Substance Use Problems, and Offspring Victimization

    PubMed Central

    Appleyard, Karen; Berlin, Lisa J.; Rosanbalm, Katherine D.; Dodge, Kenneth A.

    2013-01-01

    In the interest of improving child maltreatment prevention science, this longitudinal, community based study of 499 mothers and their infants tested the hypothesis that mothers’ childhood history of maltreatment would predict maternal substance use problems, which in turn would predict offspring victimization. Mothers (35% White/non-Latina, 34% Black/non-Latina, 23% Latina, 7% other) were recruited and interviewed during pregnancy, and child protective services records were reviewed for the presence of the participants’ target infants between birth and age 26 months. Mediating pathways were examined through structural equation modeling and tested using the products of the coefficients approach. The mediated pathway from maternal history of sexual abuse to substance use problems to offspring victimization was significant (standardized mediated path [ab]=.07, 95% CI [.02, .14]; effect size=.26), as was the mediated pathway from maternal history of physical abuse to substance use problems to offspring victimization (standardized mediated path [ab]=.05, 95% CI [.01, .11]; effect size =.19). There was no significant mediated pathway from maternal history of neglect. Findings are discussed in terms of specific implications for child maltreatment prevention, including the importance of assessment and early intervention for maternal history of maltreatment and substance use problems, targeting women with maltreatment histories for substance use services, and integrating child welfare and parenting programs with substance use treatment. PMID:21240556

  20. Preventing early child maltreatment: implications from a longitudinal study of maternal abuse history, substance use problems, and offspring victimization.

    PubMed

    Appleyard, Karen; Berlin, Lisa J; Rosanbalm, Katherine D; Dodge, Kenneth A

    2011-06-01

    In the interest of improving child maltreatment prevention science, this longitudinal, community based study of 499 mothers and their infants tested the hypothesis that mothers' childhood history of maltreatment would predict maternal substance use problems, which in turn would predict offspring victimization. Mothers (35% White/non-Latina, 34% Black/non-Latina, 23% Latina, 7% other) were recruited and interviewed during pregnancy, and child protective services records were reviewed for the presence of the participants' target infants between birth and age 26 months. Mediating pathways were examined through structural equation modeling and tested using the products of the coefficients approach. The mediated pathway from maternal history of sexual abuse to substance use problems to offspring victimization was significant (standardized mediated path [ab] = .07, 95% CI [.02, .14]; effect size = .26), as was the mediated pathway from maternal history of physical abuse to substance use problems to offspring victimization (standardized mediated path [ab] = .05, 95% CI [.01, .11]; effect size = .19). There was no significant mediated pathway from maternal history of neglect. Findings are discussed in terms of specific implications for child maltreatment prevention, including the importance of assessment and early intervention for maternal history of maltreatment and substance use problems, targeting women with maltreatment histories for substance use services, and integrating child welfare and parenting programs with substance use treatment.

  1. Assessing local instrument reliability and validity: a field-based example from northern Uganda.

    PubMed

    Betancourt, Theresa S; Bass, Judith; Borisova, Ivelina; Neugebauer, Richard; Speelman, Liesbeth; Onyango, Grace; Bolton, Paul

    2009-08-01

    This paper presents an approach for evaluating the reliability and validity of mental health measures in non-Western field settings. We describe this approach using the example of our development of the Acholi psychosocial assessment instrument (APAI), which is designed to assess depression-like (two tam, par and kumu), anxiety-like (ma lwor) and conduct problems (kwo maraco) among war-affected adolescents in northern Uganda. To examine the criterion validity of this measure in the absence of a traditional gold standard, we derived local syndrome terms from qualitative data and used self reports of these syndromes by indigenous people as a reference point for determining caseness. Reliability was examined using standard test-retest and inter-rater methods. Each of the subscale scores for the depression-like syndromes exhibited strong internal reliability ranging from alpha = 0.84-0.87. Internal reliability was good for anxiety (0.70), conduct problems (0.83), and the pro-social attitudes and behaviors (0.70) subscales. Combined inter-rater reliability and test-retest reliability were good for most subscales except for the conduct problem scale and prosocial scales. The pattern of significant mean differences in the corresponding APAI problem scale score between self-reported cases vs. noncases on local syndrome terms was confirmed in the data for all of the three depression-like syndromes, but not for the anxiety-like syndrome ma lwor or the conduct problem kwo maraco.

  2. Behavioural and Emotional Problems in Children and Educational Outcomes: A Dynamic Panel Data Analysis.

    PubMed

    Khanam, Rasheda; Nghiem, Son

    2018-05-01

    This study investigates the effects of behavioural and emotional problems in children on their educational outcomes using data from the Longitudinal Survey of Australian Children (LSAC). We contribute to the extant literature by using a dynamic specification to test the hypothesis of knowledge accumulation. Further, we apply the system generalised method of moments (GMM) estimator to minimise biases due to unobserved factors. We find that mental disorders in children have a negative effect on the National Assessment Program-Literacy and Numeracy (NAPLAN) test scores. Among all mental disorders, emotional problems are found to be the most influential, with a one standard deviation (SD) increase in emotional problems being associated with a 0.05 SD reduction in NAPLAN reading, writing and spelling; a 0.04 SD reduction in matrix reasoning and grammar; and a 0.03 SD reduction in NAPLAN numeracy.
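
    Written generically (the notation here is illustrative, not the authors' exact model), a dynamic specification of this kind regresses the current test score on its own lag so that previously accumulated knowledge carries forward, alongside the behavioural and emotional problem scores and controls:

        % Generic dynamic panel specification for child i at wave t (illustrative notation)
        \[
          y_{it} = \rho\, y_{i,t-1} + \beta\, B_{it} + \gamma' x_{it} + \eta_i + \varepsilon_{it}
        \]

    System GMM then instruments the lagged score with deeper lags in levels and differences, which limits the bias that the child-specific effect η_i and the lagged dependent variable would otherwise introduce.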

  3. Modeling visual problem solving as analogical reasoning.

    PubMed

    Lovett, Andrew; Forbus, Kenneth

    2017-01-01

    We present a computational model of visual problem solving, designed to solve problems from the Raven's Progressive Matrices intelligence test. The model builds on the claim that analogical reasoning lies at the heart of visual problem solving, and intelligence more broadly. Images are compared via structure mapping, aligning the common relational structure in 2 images to identify commonalities and differences. These commonalities or differences can themselves be reified and used as the input for future comparisons. When images fail to align, the model dynamically rerepresents them to facilitate the comparison. In our analysis, we find that the model matches adult human performance on the Standard Progressive Matrices test, and that problems which are difficult for the model are also difficult for people. Furthermore, we show that model operations involving abstraction and rerepresentation are particularly difficult for people, suggesting that these operations may be critical for performing visual problem solving, and reasoning more generally, at the highest level. (PsycINFO Database Record (c) 2016 APA, all rights reserved).

  4. Restructuring of international council for standardization in haematology (ICSH) in Asia.

    PubMed

    Tatsumi, N; Lewis, S M

    2002-08-01

    Standardization and harmonization of laboratory testing are a key issue in the era of globalization, because most laboratory testing is currently performed with various kinds of automated systems. In developed countries, automated systems based on highly regulated principles are commonly used in the routine laboratory. However, there are many undeveloped and developing countries in Asia, so a wide diversity of testing levels can be observed in the region. Some laboratories use the glass counting chamber method for blood cell counting, while other laboratories use semi-automated or fully automated analyzers for the complete blood count. International standardization in hematology is focused on the developed systems rather than the developing ones, and the established standard documents may therefore be unsuitable for Asian societies. In this context, the International Council for Standardization in Haematology (ICSH) changed its rules to accommodate Asian societies and started to restructure the body. The international ICSH society is divided into five regional sub-groups, and the Asian area is able to possess one new sub-society, ICSH-Asia. Its restructuring work has just started with Asian colleagues, and we are now extending the new society to discuss Asian problems concerning the quality of hematology testing.

  5. Procurement Without Problems: Preparing the RFP.

    ERIC Educational Resources Information Center

    Epstein, Susan Baerg

    1983-01-01

    Discussion of factors contributing to successful procurement of automated library system focuses on preparation of Request for Proposal (RFP) and elements included in the RFP--administrative requirements, functional requirements, performance requirements, reliability requirements, testing procedures, standardized response language, location table,…

  6. Multidimensional assessment of self-regulated learning with middle school math students.

    PubMed

    Callan, Gregory L; Cleary, Timothy J

    2018-03-01

    This study examined the convergent and predictive validity of self-regulated learning (SRL) measures situated in mathematics. The sample included 100 eighth graders from a diverse, urban school district. Four measurement formats were examined including, 2 broad-based (i.e., self-report questionnaire and teacher ratings) and 2 task-specific measures (i.e., SRL microanalysis and behavioral traces). Convergent validity was examined across task-difficulty, and the predictive validity was examined across 3 mathematics outcomes: 2 measures of mathematical problem solving skill (i.e., practice session math problems, posttest math problems) and a global measure of mathematical skill (i.e., standardized math test). Correlation analyses were used to examine convergent validity and revealed medium correlations between measures within the same category (i.e., broad-based or task-specific). Relations between measurement classes were not statistically significant. Separate regressions examined the predictive validity of the SRL measures. While controlling all other predictors, a SRL microanalysis metacognitive-monitoring measure emerged as a significant predictor of all 3 outcomes and teacher ratings accounted for unique variance on 2 of the outcomes (i.e., posttest math problems and standardized math test). Results suggest that a multidimensional assessment approach should be considered by school psychologists interested in measuring SRL. (PsycINFO Database Record (c) 2018 APA, all rights reserved).

  7. A finite-volume Eulerian-Lagrangian Localized Adjoint Method for solution of the advection-dispersion equation

    USGS Publications Warehouse

    Healy, R.W.; Russell, T.F.

    1993-01-01

    A new mass-conservative method for solution of the one-dimensional advection-dispersion equation is derived and discussed. Test results demonstrate that the finite-volume Eulerian-Lagrangian localized adjoint method (FVELLAM) outperforms standard finite-difference methods, in terms of accuracy and efficiency, for solute transport problems that are dominated by advection. For dispersion-dominated problems, the performance of the method is similar to that of standard methods. Like previous ELLAM formulations, FVELLAM systematically conserves mass globally with all types of boundary conditions. FVELLAM differs from other ELLAM approaches in that integrated finite differences, instead of finite elements, are used to approximate the governing equation. This approach, in conjunction with a forward tracking scheme, greatly facilitates mass conservation. The mass storage integral is numerically evaluated at the current time level, and quadrature points are then tracked forward in time to the next level. Forward tracking permits straightforward treatment of inflow boundaries, thus avoiding the inherent problem in backtracking, as used by most characteristic methods, of characteristic lines intersecting inflow boundaries. FVELLAM extends previous ELLAM results by obtaining mass conservation locally on Lagrangian space-time elements. Details of the integration, tracking, and boundary algorithms are presented. Test results are given for problems in Cartesian and radial coordinates.
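
    For reference, the governing equation is the one-dimensional advection-dispersion equation, written here in generic notation (c is solute concentration, v the advective velocity, D the dispersion coefficient; the symbols are not necessarily those used by the authors):

        \[
          \frac{\partial c}{\partial t} + v\,\frac{\partial c}{\partial x}
          = D\,\frac{\partial^{2} c}{\partial x^{2}}
        \]

    Advection-dominated means the grid Peclet number v*dx/D is large, which is the regime where standard finite differences tend to develop oscillations or excessive numerical dispersion and where the Lagrangian tracking in FVELLAM pays off.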

  8. Hardware synthesis from DDL. [Digital Design Language for computer aided design and test of LSI]

    NASA Technical Reports Server (NTRS)

    Shah, A. M.; Shiva, S. G.

    1981-01-01

    The details of the digital systems can be conveniently input into the design automation system by means of Hardware Description Languages (HDL). The Computer Aided Design and Test (CADAT) system at NASA MSFC is used for the LSI design. The Digital Design Language (DDL) has been selected as HDL for the CADAT System. DDL translator output can be used for the hardware implementation of the digital design. This paper addresses problems of selecting the standard cells from the CADAT standard cell library to realize the logic implied by the DDL description of the system.

  9. Using information technology for an improved pharmaceutical care delivery in developing countries. Study case: Benin.

    PubMed

    Edoh, Thierry Oscar; Teege, Gunnar

    2011-10-01

    One of the problems in health care in developing countries is the poor accessibility of medicines in pharmacies for patients. Since this is mainly due to a lack of organization and information, it should be possible to improve the situation by introducing information and communication technology. However, for several reasons, standard solutions are not applicable here. In this paper, we describe a case study in Benin, a West African developing country. We identify the problem and the existing obstacles to applying standard e-commerce solutions. We develop an adapted system approach and describe a practical test which has shown that the approach has the potential to actually improve pharmaceutical care delivery. Finally, we consider the security aspects of the system and propose an organizational solution for some specific security problems.

  10. Methodological Issues in Antifungal Susceptibility Testing of Malassezia pachydermatis

    PubMed Central

    Peano, Andrea; Pasquetti, Mario; Tizzani, Paolo; Chiavassa, Elisa; Guillot, Jacques; Johnson, Elizabeth

    2017-01-01

    Reference methods for antifungal susceptibility testing of yeasts have been developed by the Clinical and Laboratory Standards Institute (CLSI) and the European Committee on Antibiotic Susceptibility Testing (EUCAST). These methods are intended to test the main pathogenic yeasts that cause invasive infections, namely Candida spp. and Cryptococcus neoformans, while testing other yeast species introduces several additional problems in standardization not addressed by these reference procedures. As a consequence, a number of procedures have been employed in the literature to test the antifungal susceptibility of Malassezia pachydermatis. This has resulted in conflicting results. The aim of the present study is to review the procedures and the technical parameters (growth media, inoculum preparation, temperature and length of incubation, method of reading) employed for susceptibility testing of M. pachydermatis, and when possible, to propose recommendations for or against their use. Such information may be useful for the future development of a reference assay. PMID:29371554

  11. An efficient approach to improve the usability of e-learning resources: the role of heuristic evaluation.

    PubMed

    Davids, Mogamat Razeen; Chikte, Usuf M E; Halperin, Mitchell L

    2013-09-01

    Optimizing the usability of e-learning materials is necessary to maximize their potential educational impact, but this is often neglected when time and other resources are limited, leading to the release of materials that cannot deliver the desired learning outcomes. As clinician-teachers in a resource-constrained environment, we investigated whether heuristic evaluation of our multimedia e-learning resource by a panel of experts would be an effective and efficient alternative to testing with end users. We engaged six inspectors, whose expertise included usability, e-learning, instructional design, medical informatics, and the content area of nephrology. They applied a set of commonly used heuristics to identify usability problems, assigning severity scores to each problem. The identification of serious problems was compared with problems previously found by user testing. The panel completed their evaluations within 1 wk and identified a total of 22 distinct usability problems, 11 of which were considered serious. The problems violated the heuristics of visibility of system status, user control and freedom, match with the real world, intuitive visual layout, consistency and conformity to standards, aesthetic and minimalist design, error prevention and tolerance, and help and documentation. Compared with user testing, heuristic evaluation found most, but not all, of the serious problems. Combining heuristic evaluation and user testing, with each involving a small number of participants, may be an effective and efficient way of improving the usability of e-learning materials. Heuristic evaluation should ideally be used first to identify the most obvious problems and, once these are fixed, should be followed by testing with typical end users.

  12. Conquering the SAT: How Parents Can Help Teens Overcome the Pressure and Succeed

    ERIC Educational Resources Information Center

    Johnson, Ned; Eskelsen, Emily Warner

    2006-01-01

    This insightful and practical guide for parents shows how they often undermine rather than encourage their teens' success on one of the most stressful standardized tests--the SAT--and what strategies will remedy the problem. In recent years this test has taken on fearsome proportions, matched only by the growing competition for slots at major…

  13. Effectiveness of E-TLM in Learning Vocabulary in English

    ERIC Educational Resources Information Center

    Singaravelu, G.

    2011-01-01

    The study examines the effectiveness of e-TLM in learning vocabulary in English at standard VI. Objectives of the study: 1. To find out the problems of conventional TLM in learning vocabulary in English. 2. To find out the significant difference in achievement mean score between the pre-test and the post-test of the control group.…

  14. The Effect of a Reading Accommodation on Standardized Test Scores of Learning Disabled and Non Learning Disabled Students.

    ERIC Educational Resources Information Center

    Meloy, Linda L.; Deville, Craig; Frisbie, David

    The effect of the Read Aloud accommodation on the performance of middle school students with learning disabilities in reading (LD-R) and without learning disabilities (non-LD) was studied using selected tests from the Iowa Tests of Basic Skills (ITBS) achievement battery. Science, Usage and Expression, Math Problem Solving and Data Interpretation, and Reading…

  15. Clay Improvement with Burned Olive Waste Ash

    PubMed Central

    Mutman, Utkan

    2013-01-01

    Olive oil production is concentrated in the Mediterranean basin countries. Since the olive oil industry is blamed for a high quantity of pollution, it has become imperative to solve this problem by developing optimized systems for the treatment of olive oil wastes. This study proposes a solution to the problem: burned olive waste ash is evaluated for use as a clay stabilizer. In the laboratory, burned olive waste ash was used to improve bentonite clay. Before the laboratory tests, the olive waste was burned at 550°C in a high-temperature oven. The burned olive waste ash was then added to bentonite clay in increments of 1% by weight, from 1% to 10%. The study consisted of the following tests on samples treated with burned olive waste ash: Atterberg Limits, Standard Proctor Density, and Unconfined Compressive Strength tests. The test results show promise for this material to be used as a stabilizer and to solve many of the problems associated with its accumulation. PMID:23766671

  16. Penetration of rod projectiles in semi-infinite targets : a validation test for Eulerian X-FEM in ALEGRA.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Park, Byoung Yoon; Leavy, Richard Brian; Niederhaus, John Henry J.

    2013-03-01

    The finite-element shock hydrodynamics code ALEGRA has recently been upgraded to include an X-FEM implementation in 2D for simulating impact, sliding, and release between materials in the Eulerian frame. For validation testing purposes, the problem of long-rod penetration in semi-infinite targets is considered in this report, at velocities of 500 to 3000 m/s. We describe testing simulations done using ALEGRA with and without the X-FEM capability, in order to verify its adequacy by showing X-FEM recovers the good results found with the standard ALEGRA formulation. The X-FEM results for depth of penetration differ from previously measured experimental data by less than 2%, and from the standard formulation results by less than 1%. They converge monotonically under mesh refinement at first order. Sensitivities to domain size and rear boundary condition are investigated and shown to be small. Aside from some simulation stability issues, X-FEM is found to produce good results for this classical impact and penetration problem.

  17. Software production methodology tested project

    NASA Technical Reports Server (NTRS)

    Tausworthe, R. C.

    1976-01-01

    The history and results of a 3 1/2-year study in software development methodology are reported. The findings of this study have become the basis for DSN software development guidelines and standard practices. The article discusses accomplishments, discoveries, problems, recommendations and future directions.

  18. Ethical and Professional Issues in Career Assessment on the Internet.

    ERIC Educational Resources Information Center

    Barak, Azy

    2003-01-01

    Concerns about Internet-based career assessments include the following: users' technical skill level, lack of screening, psychological factors, cultural bias, unprotected results, technical failures, lack of standardization, digital divide, interpretation problems, outdated tests, copyright violations, administrator qualifications, and hidden…

  19. Measuring Gains in Critical Thinking in Food Science and Human Nutrition Courses: The Cornell Critical Thinking Test, Problem-Based Learning Activities, and Student Journal Entries

    ERIC Educational Resources Information Center

    Iwaoka, Wayne T.; Li, Yong; Rhee, Walter Y.

    2010-01-01

    The Cornell Critical Thinking Test (CCTT) is one of the many multiple-choice tests with validated questions that have been reported to measure general critical thinking (CT) ability. One of the IFT Education Standards for undergraduate degrees in Food Science is the emphasis on the development of critical thinking. While this skill is easy to list…

  20. Microchip problems plague DOD

    NASA Astrophysics Data System (ADS)

    Smith, R. J.

    1984-10-01

    The major issues in the controversy over the discovery of millions of defective microchips sold to the DOD by the Texas Instruments (TI) corporation are outlined. Defects in the microcircuits are blamed on inadequate testing procedures performed by TI during manufacture, and on inadequate testing procedures used by a subcontractor especially contracted to test the chips. Because the problem persisted over a period of years, defects might be possible in as many as 100 million chips used in a broad range of military applications including the Trident submarine, the B-52, B-1B, F-15, F-111, F-4, A-6, and A-7 aircraft, the Harpoon and HARM missile systems, and the Space Shuttles Discovery and Challenger. It is pointed out that although TI has accepted responsibility for the defective chips, little will be done by the DOD to compel the company to replace them, or to upgrade testing procedures. It is concluded that the serious nature of the problem could renew interest in recommendations for the standardization of military microcircuits.

  1. Fatigue Crack Closure Analysis Using Digital Image Correlation

    NASA Technical Reports Server (NTRS)

    Leser, William P.; Newman, John A.; Johnston, William M.

    2010-01-01

    Fatigue crack closure during crack growth testing is analyzed in order to evaluate the criteria of ASTM Standard E647 for measurement of fatigue crack growth rates. Of specific concern is remote closure, which occurs away from the crack tip and is a product of the load history during crack-driving-force-reduction fatigue crack growth testing. Crack closure behavior is characterized using relative displacements determined from a series of high-magnification digital images acquired as the crack is loaded. Changes in the relative displacements of features on opposite sides of the crack are used to generate crack closure data as a function of crack wake position. For the results presented in this paper, remote closure did not affect fatigue crack growth rate measurements when ASTM Standard E647 was strictly followed and only became a problem when testing parameters (e.g., load shed rate, initial crack driving force, etc.) greatly exceeded the guidelines of the accepted standard.

  2. Revised standards for statistical evidence.

    PubMed

    Johnson, Valen E

    2013-11-26

    Recent advances in Bayesian hypothesis testing have led to the development of uniformly most powerful Bayesian tests, which represent an objective, default class of Bayesian hypothesis tests that have the same rejection regions as classical significance tests. Based on the correspondence between these two classes of tests, it is possible to equate the size of classical hypothesis tests with evidence thresholds in Bayesian tests, and to equate P values with Bayes factors. An examination of these connections suggests that recent concerns over the lack of reproducibility of scientific studies can be attributed largely to the conduct of significance tests at unjustifiably high levels of significance. To correct this problem, evidence thresholds required for the declaration of a significant finding should be increased to 25-50:1, and to 100-200:1 for the declaration of a highly significant finding. In terms of classical hypothesis tests, these evidence standards mandate the conduct of tests at the 0.005 or 0.001 level of significance.
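
    For the normal-mean (one-sided z-test) case, the correspondence can be written down directly: the evidence threshold γ of the uniformly most powerful Bayesian test whose rejection region matches a classical test of size α is γ = exp(z_α²/2). The short sketch below just evaluates that relation; it is a restatement of the normal-mean correspondence, not a general rule for all tests.

        from math import exp
        from statistics import NormalDist

        def matching_evidence_threshold(alpha):
            """Bayes-factor threshold matching a one-sided z-test of size alpha (normal-mean case)."""
            z_alpha = NormalDist().inv_cdf(1.0 - alpha)   # one-sided critical value
            return exp(z_alpha ** 2 / 2.0)

        for alpha in (0.05, 0.005, 0.001):
            print(f"alpha = {alpha:<6} -> evidence threshold ~ "
                  f"{matching_evidence_threshold(alpha):.0f}:1")
        # roughly 4:1, 28:1 and 118:1, consistent with the 25-50:1 and
        # 100-200:1 standards advocated in the abstract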

  3. New methods to quantify the cracking performance of cementitious systems made with internal curing

    NASA Astrophysics Data System (ADS)

    Schlitter, John L.

    The use of high performance concretes that utilize low water-cement ratios have been promoted for use in infrastructure based on their potential to increase durability and service life because they are stronger and less porous. Unfortunately, these benefits are not always realized due to the susceptibility of high performance concrete to undergo early age cracking caused by shrinkage. This problem is widespread and effects federal, state, and local budgets that must maintain or replace deterioration caused by cracking. As a result, methods to reduce or eliminate early age shrinkage cracking have been investigated. Internal curing is one such method in which a prewetted lightweight sand is incorporated into the concrete mixture to provide internal water as the concrete cures. This action can significantly reduce or eliminate shrinkage and in some cases causes a beneficial early age expansion. Standard laboratory tests have been developed to quantify the shrinkage cracking potential of concrete. Unfortunately, many of these tests may not be appropriate for use with internally cured mixtures and only provide limited amounts of information. Most standard tests are not designed to capture the expansive behavior of internally cured mixtures. This thesis describes the design and implementation of two new testing devices that overcome the limitations of current standards. The first device discussed in this thesis is called the dual ring. The dual ring is a testing device that quantifies the early age restrained shrinkage performance of cementitious mixtures. The design of the dual ring is based on the current ASTM C 1581-04 standard test which utilizes one steel ring to restrain a cementitious specimen. The dual ring overcomes two important limitations of the standard test. First, the standard single ring test cannot restrain the expansion that takes place at early ages which is not representative of field conditions. The dual ring incorporates a second restraining ring which is located outside of the sample to provide restraint against expansion. Second, the standard ring test is a passive test that only relies on the autogenous and drying shrinkage of the mixture to induce cracking. The dual ring test can be an active test because it has the ability to vary the temperature of the specimen in order to induce thermal stress and produce cracking. This ability enables the study of the restrained cracking capacity as the mixture ages in order to quantify crack sensitive periods of time. Measurements made with the dual ring quantify the benefits from using larger amounts of internal curing. Mixtures that resupplied internal curing water to match that of chemical shrinkage could sustain three times the magnitude of thermal change before cracking. The second device discussed in this thesis is a large scale slab testing device. This device tests the cracking potential of 15' long by 4" thick by 24" wide slab specimens in an environmentally controlled chamber. The current standard testing devices can be considered small scale and encounter problems when linking their results to the field due to size effects. Therefore, the large scale slab testing device was developed in order to calibrate the results of smaller scale tests to real world field conditions such as a pavement or bridge deck. Measurements made with the large scale testing device showed that the cracking propensity of the internally cured mixtures was reduced and that a significant benefit could be realized.

  4. Holographic interferometry of transparent media with reflection from imbedded test objects

    NASA Technical Reports Server (NTRS)

    Prikryl, I.; Vest, C. M.

    1981-01-01

    In applying holographic interferometry, opaque objects blocking a portion of the optical beam used to form the interferogram give rise to incomplete data for standard computer tomography algorithms. An experimental technique for circumventing the problem of data blocked by opaque objects is presented. The missing data are completed by forming an interferogram using light backscattered from the opaque object, which is assumed to be diffuse. The problem of fringe localization is considered.

  5. Geometrically derived difference formulae for the numerical integration of trajectory problems

    NASA Technical Reports Server (NTRS)

    Mcleod, R. J. Y.; Sanz-Serna, J. M.

    1981-01-01

    The term 'trajectory problem' is taken to include problems that can arise, for instance, in connection with contour plotting, or in the application of continuation methods, or during phase-plane analysis. Geometrical techniques are used to construct difference methods for these problems to produce in turn explicit and implicit circularly exact formulae. Based on these formulae, a predictor-corrector method is derived which, when compared with a closely related standard method, shows improved performance. It is found that this latter method produces spurious limit cycles, and this behavior is partly analyzed. Finally, a simple variable-step algorithm is constructed and tested.
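    The circularly exact formulae of the paper are not reproduced here; as a rough illustration of the predictor-corrector idea on a trajectory problem, the sketch below (a hypothetical minimal example, not the authors' method) advances a point around the unit circle with an explicit Euler predictor and a trapezoidal corrector.

```python
import numpy as np

def f(u):
    # Trajectory problem: the solution of u' = f(u) traces the unit circle.
    x, y = u
    return np.array([-y, x])

def predictor_corrector(u0, h, n_steps):
    """Explicit Euler predictor followed by one trapezoidal (Heun) correction.
    This is a generic scheme, not the circularly exact formulae of the paper."""
    u = np.array(u0, dtype=float)
    path = [u.copy()]
    for _ in range(n_steps):
        fu = f(u)
        u_pred = u + h * fu                    # predictor: explicit Euler
        u = u + 0.5 * h * (fu + f(u_pred))     # corrector: trapezoidal rule
        path.append(u.copy())
    return np.array(path)

traj = predictor_corrector([1.0, 0.0], h=0.05, n_steps=200)
# Drift of the radius from 1 indicates how far the scheme is from circular exactness.
print("max |r - 1| =", np.abs(np.linalg.norm(traj, axis=1) - 1).max())
```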

  6. [User's requests (from a practitioner's perspective)].

    PubMed

    Ohnishi, T

    1997-08-01

    As a practitioner, I have to rely on outside clinical laboratories and affiliated hospitals to perform laboratory tests. In this abstract, I describe specific problems I have encountered with third-party laboratories, and propose solutions for these problems to optimize the use of laboratory tests. BLOOD TESTS: The most frequent problem in ordering blood tests is the lack of detailed information regarding sampling conditions. I often have to call laboratories to check whether the sample should be serum or plasma, what volume is needed, whether the sample should be cooled, etc. I propose that clinical laboratories should provide practitioners' manuals that describe specific sampling information. ULTRASONOGRAPHIC TESTS: Most laboratories do not keep the data from ultrasonographic tests. The lack of these data is most problematic when test results are interpreted differently by laboratories and by practitioners. Retaining the data would also help private laboratories improve the quality of the test by enabling them to compare their interpretations with others'. ANNUAL MEDICAL SCREENING: Even if an abnormal finding is detected at medical screening clinics, the final diagnosis is usually not sent back to the screening facilities. It is highly recommended that an official system be established to mediate this feedback to the screening centers. MRI: Due to miscommunication between practitioners and radiologists, the test is sometimes performed inappropriately. A thorough consultation should occur before the test to clarify specific goals for each patient. PATHOLOGICAL TESTS: Interpretation of results is often inconsistent among laboratories. Independent clinical laboratories tend to report results without indicating sample problems, while pathology departments at affiliated hospitals tend to emphasize sample problems instead of the diagnosis or suggesting ways to improve sample quality. Mutual communication among laboratories would help standardize the quality of pathological tests.

  7. ACCESS: integration and pre-flight performance

    NASA Astrophysics Data System (ADS)

    Kaiser, Mary Elizabeth; Morris, Matthew J.; Aldoroty, Lauren N.; Pelton, Russell; Kurucz, Robert; Peacock, Grant O.; Hansen, Jason; McCandliss, Stephan R.; Rauscher, Bernard J.; Kimble, Randy A.; Kruk, Jeffrey W.; Wright, Edward L.; Orndorff, Joseph D.; Feldman, Paul D.; Moos, H. Warren; Riess, Adam G.; Gardner, Jonathan P.; Bohlin, Ralph; Deustua, Susana E.; Dixon, W. V.; Sahnow, David J.; Perlmutter, Saul

    2017-09-01

    Establishing improved spectrophotometric standards is important for a broad range of missions and is relevant to many astrophysical problems. ACCESS, "Absolute Color Calibration Experiment for Standard Stars", is a series of rocket-borne sub-orbital missions and ground-based experiments designed to enable improvements in the precision of the astrophysical flux scale through the transfer of absolute laboratory detector standards from the National Institute of Standards and Technology (NIST) to a network of stellar standards with a calibration accuracy of 1% and a spectral resolving power of 500 across the 0.35 - 1.7μm bandpass. This paper describes the sub-system testing, payload integration, avionics operations, and data transfer for the ACCESS instrument.

  8. Interoperable and standard e-Health solution over Bluetooth.

    PubMed

    Martinez, I; Del Valle, P; Munoz, P; Trigo, J D; Escayola, J; Martínez-Espronceda, M; Muñoz, A; Serrano, L; Garcia, J

    2010-01-01

    The new paradigm of e-Health demands open sensors and middleware components that permit transparent integration and end-to-end interoperability of new personal health devices. The use of standards seems to be the internationally adopted way to solve these problems. This paper presents the implementation of an end-to-end standards-based e-Health solution. This includes the ISO/IEEE 11073 standard for the interoperability of medical devices in the patient environment and the EN13606 standard for the interoperable exchange of the Electronic Healthcare Record. The design strictly fulfills all the technical features of the most recent versions of both standards. The implemented prototype has been tested in a laboratory environment to demonstrate its feasibility for further transfer to the healthcare system.

  9. Testing the statistical compatibility of independent data sets

    NASA Astrophysics Data System (ADS)

    Maltoni, M.; Schwetz, T.

    2003-08-01

    We discuss a goodness-of-fit method which tests the compatibility between statistically independent data sets. The method gives sensible results even in cases where the χ2 minima of the individual data sets are very low or when several parameters are fitted to a large number of data points. In particular, it avoids the problem that a possible disagreement between data sets becomes diluted by data points which are insensitive to the crucial parameters. A formal derivation of the probability distribution function for the proposed test statistic is given, based on standard theorems of statistics. The application of the method is illustrated on data from neutrino oscillation experiments, and its complementarity to the standard goodness-of-fit test is discussed.
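    A minimal numerical sketch of the compatibility statistic described above, for the toy case of two independent Gaussian data sets constraining a single common parameter (the data values and errors are invented): the statistic is the joint χ2 minimum minus the sum of the individual minima, compared against a χ2 distribution with the appropriate number of degrees of freedom.

```python
import numpy as np
from scipy.optimize import minimize_scalar
from scipy.stats import chi2

# Two hypothetical, statistically independent data sets constraining one common parameter mu.
x1, s1 = np.array([1.0, 1.2, 0.9]), np.array([0.2, 0.2, 0.2])
x2, s2 = np.array([1.8, 1.7]),      np.array([0.3, 0.3])

def chi2_set(mu, x, s):
    # Gaussian chi-square of one data set as a function of the shared parameter mu
    return np.sum(((x - mu) / s) ** 2)

def chi2_min(x, s):
    return minimize_scalar(chi2_set, args=(x, s)).fun

# Compatibility statistic: joint minimum minus the sum of the individual minima.
chi2_bar = (minimize_scalar(lambda mu: chi2_set(mu, x1, s1) + chi2_set(mu, x2, s2)).fun
            - chi2_min(x1, s1) - chi2_min(x2, s2))

# Degrees of freedom: parameters fitted per data set summed, minus parameters in the joint fit (1 + 1 - 1).
dof = 1
print(f"chi2_bar = {chi2_bar:.2f}, p-value = {chi2.sf(chi2_bar, dof):.3f}")
```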

  10. IMPROVED PERFORMANCES IN SUBSONIC FLOWS OF AN SPH SCHEME WITH GRADIENTS ESTIMATED USING AN INTEGRAL APPROACH

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Valdarnini, R., E-mail: valda@sissa.it

    In this paper, we present results from a series of hydrodynamical tests aimed at validating the performance of a smoothed particle hydrodynamics (SPH) formulation in which gradients are derived from an integral approach. We specifically investigate the code behavior with subsonic flows, where it is well known that zeroth-order inconsistencies present in standard SPH make it particularly problematic to correctly model the fluid dynamics. In particular, we consider the Gresho–Chan vortex problem, the growth of Kelvin–Helmholtz instabilities, the statistics of driven subsonic turbulence and the cold Keplerian disk problem. We compare simulation results for the different tests with those obtained, for the same initial conditions, using standard SPH. We also compare the results with the corresponding ones obtained previously with other numerical methods, such as codes based on a moving-mesh scheme or Godunov-type Lagrangian meshless methods. We quantify code performances by introducing error norms and spectral properties of the particle distribution, in a way similar to what was done in other works. We find that the new SPH formulation exhibits strongly reduced gradient errors and outperforms standard SPH in all of the tests considered. In fact, in terms of accuracy, we find good agreement between the simulation results of the new scheme and those produced using other recently proposed numerical schemes. These findings suggest that the proposed method can be successfully applied for many astrophysical problems in which the presence of subsonic flows previously limited the use of SPH, with the new scheme now being competitive in these regimes with other numerical methods.

  11. The current situation and development of medical device testing institutes in China.

    PubMed

    Yang, Xiaofang; Mu, Ruihong; Fan, Yubo; Wang, Chunren; Li, Deyu

    2017-04-01

    This article analyses the current situation and development of Chinese medical device testing institutes from the perspectives of their two most important functions - testing and medical device standardization. Areas Covered: The objective of the Chinese government regulations for the medical device industry is to ensure the safety and effectiveness of medical devices for Chinese patients. To support the regulation system, the Chinese government has established medical device testing institutes at different levels, for example the national, provincial, and municipal levels. These testing institutes play an important role in technical support during medical device premarket registration and post-market surveillance, and they are also the vital practitioners of Chinese medical device standardization. Expert Commentary: Chinese medical device testing institutes are technical departments established by the government, and serve the regulatory functions of government agencies. In recent years, with the rapid development of the medical device industry and the constantly growing international and domestic medical device market, the importance of medical device testing institutes has become more prominent. However, some problems remain unsolved: their overall capacity needs to be improved, standardization work needs to be strengthened, etc.

  12. Fitting a three-parameter lognormal distribution with applications to hydrogeochemical data from the National Uranium Resource Evaluation Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kane, V.E.

    1979-10-01

    The standard maximum likelihood and moment estimation procedures are shown to have some undesirable characteristics for estimating the parameters in a three-parameter lognormal distribution. A class of goodness-of-fit estimators is found which provides a useful alternative to the standard methods. The class of goodness-of-fit tests considered includes the Shapiro-Wilk and Shapiro-Francia tests, which reduce to a weighted linear combination of the order statistics that can be maximized in estimation problems. The weighted-order statistic estimators are compared to the standard procedures in Monte Carlo simulations. Bias and robustness of the procedures are examined, and example data sets are analyzed, including geochemical data from the National Uranium Resource Evaluation Program.
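    A minimal sketch of the goodness-of-fit estimation idea on simulated data (not the report's Monte Carlo protocol): the threshold parameter of a three-parameter lognormal is chosen to maximize the Shapiro-Wilk W statistic of log(x - c), after which the remaining two parameters follow from the log-transformed sample.

```python
import numpy as np
from scipy import stats
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
c_true, mu, sigma = 5.0, 1.0, 0.5
x = c_true + rng.lognormal(mu, sigma, size=200)   # simulated three-parameter lognormal sample

def neg_shapiro_w(c, x):
    """Negative Shapiro-Wilk W of log(x - c); W close to 1 indicates a good lognormal fit."""
    if c >= x.min():
        return np.inf
    w, _ = stats.shapiro(np.log(x - c))
    return -w

res = minimize_scalar(neg_shapiro_w, bounds=(x.min() - 50.0, x.min() - 1e-6),
                      args=(x,), method="bounded")
c_hat = res.x
y = np.log(x - c_hat)
print(f"threshold ~ {c_hat:.2f}, mu ~ {y.mean():.2f}, sigma ~ {y.std(ddof=1):.2f}")
```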

  13. Identifying the Gaps in Practice for Combating Lead in Drinking Water in Hong Kong

    PubMed Central

    Lee, Wai Ling; Jia, Jie; Bao, Yani

    2016-01-01

    Excessive lead has been found in drinking water in Hong Kong in tests carried out in 2015. Investigations have identified that the problem in public rental housing estates was caused by the problematic solders used in the plumbing, and recommendations on enhancing the quality control system and strengthening the relevant water quality standards have been proposed. The cause of the same problem in other premises, where soldering has not been adopted for water pipe connections, remains unidentified. Considering the unidentified cause and the recommendations made, this study aims to identify the gaps in practice followed in Hong Kong for safeguarding the water quality of new installations. A holistic review of governing ordinances and regulations, products and materials used, and the testing and commissioning requirements adopted in Hong Kong and elsewhere in the world was conducted. Based on international practices and parametric analysis, it was found that there are gaps in practices followed in Hong Kong, which are directly and indirectly leading to the lead-in-water crisis. Recommendations for improvement have been made on the quality control system and on the water quality standards, including the allowable lead content and leaching limit for products and materials and the testing and commissioning requirements for plumbing installations. The review and the identified gaps should serve as a useful reference for countries in strengthening their relevant water quality standards. PMID:27706062

  14. Identifying the Gaps in Practice for Combating Lead in Drinking Water in Hong Kong.

    PubMed

    Lee, Wai Ling; Jia, Jie; Bao, Yani

    2016-09-30

    Excessive lead has been found in drinking water in Hong Kong in tests carried out in 2015. Investigations have identified that the problem in public rental housing estates was caused by the problematic solders used in the plumbing, and recommendations on enhancing the quality control system and strengthening the relevant water quality standards have been proposed. The cause of the same problem in other premises, where soldering has not been adopted for water pipe connections, remains unidentified. Considering the unidentified cause and the recommendations made, this study aims to identify the gaps in practice followed in Hong Kong for safeguarding the water quality of new installations. A holistic review of governing ordinances and regulations, products and materials used, and the testing and commissioning requirements adopted in Hong Kong and elsewhere in the world was conducted. Based on international practices and parametric analysis, it was found that there are gaps in practices followed in Hong Kong, which are directly and indirectly leading to the lead-in-water crisis. Recommendations for improvement have been made on the quality control system and on the water quality standards, including the allowable lead content and leaching limit for products and materials and the testing and commissioning requirements for plumbing installations. The review and the identified gaps should serve as a useful reference for countries in strengthening their relevant water quality standards.

  15. Adaptation and Validation of the Kannada Version of the Singing Voice Handicap Index.

    PubMed

    Gunjawate, Dhanshree R; Aithal, Venkataraja U; Guddattu, Vasudeva; Bellur, Rajashekhar

    2017-07-01

    The present study aimed to adapt and validate the Singing Voice Handicap Index (SVHI) into the Kannada language using standard procedures. This is a cross-sectional study. The original English version of the SVHI was translated into Kannada. It was administered to 106 Indian classical singers, of whom 22 complained of voice problems. Its internal consistency was determined using Cronbach's alpha coefficient (α), test-retest reliability using Pearson's product moment correlation and paired t test, and the difference in mean scores by independent sample t test. The results revealed that the Kannada SVHI exhibited excellent internal consistency (α = 0.96) with a high item-to-total correlation. Further, excellent test-retest reliability (r = 0.99) and significant differences in SVHI scores between singers with and without a voice problem (t = 12.93, df = 104, P = 0.005) were also obtained. The Kannada SVHI is a valid and reliable tool for self-reported assessment of singers with voice problems. It will provide a valuable insight into singing-related voice problems as perceived by the singers themselves. Copyright © 2017 The Voice Foundation. Published by Elsevier Inc. All rights reserved.
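    The reliability figures quoted above come from standard formulas; the sketch below shows how Cronbach's alpha and a test-retest correlation are typically computed, using simulated item scores rather than the SVHI data (the 36-item layout is only an assumption for illustration).

```python
import numpy as np
from scipy import stats

def cronbach_alpha(items):
    """items: (n_respondents, n_items) matrix of item scores."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()       # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of the total score
    return k / (k - 1) * (1 - item_var / total_var)

rng = np.random.default_rng(0)
true_score = rng.normal(size=(106, 1))
test = true_score + 0.3 * rng.normal(size=(106, 36))     # 36 hypothetical items, first administration
retest = test + 0.1 * rng.normal(size=test.shape)        # second administration with small noise

alpha = cronbach_alpha(test)
r, _ = stats.pearsonr(test.sum(axis=1), retest.sum(axis=1))
print(f"Cronbach's alpha = {alpha:.2f}, test-retest r = {r:.2f}")
```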

  16. [Medical safety management in the setting of a clinical reference laboratory--risk management efforts in clinical testing].

    PubMed

    Seki, Akira; Miya, Tetsumasa

    2011-03-01

    As a result of recurring medical accidents, risk management in the medical setting has been given much attention. Since the Ministry of Health committee for formulating a standard risk management manual announced its "Risk management manual formulation guideline" in August 2000, numerous medical testing facilities have worked to develop such documents. In 2008, ISO/TS 22367:2008 on "Medical laboratories - Reduction of error through risk management and continual improvement" was published. However, at present, risk management within a medical testing facility stresses the implementation of provisional actions in response to a problem after it has occurred. Risk management is basically a planned process and includes "corrective actions" as well as "preventive actions." A corrective action is defined as identifying the root cause of a problem and removing it, and is conducted to prevent the problem from recurring. A preventive action is defined as identifying any potential problem and removing it, and is conducted to prevent a problem before it occurs. Here, I report on the experiences of our laboratory regarding corrective and preventive actions taken in response to accidents and incidents, respectively.

  17. Determining Learning Disabilities in Mathematics.

    ERIC Educational Resources Information Center

    Dunlap, William P.; And Others

    1979-01-01

    To determine the generalizability of reading expectancy formulas in ascertaining mathematics expectancy levels, correlation coefficients were computed between the scores of 150 Ss (7 to 12 years old) with learning problems on standardized mathematics and reading tests and expectancy scores. Formulas correlated higher with Ss' actual mathematics…

  18. CURVILINEAR FINITE ELEMENT MODEL FOR SIMULATING TWO-WELL TRACER TESTS AND TRANSPORT IN STRATIFIED AQUIFERS

    EPA Science Inventory

    The problem of solute transport in steady nonuniform flow created by a recharging and discharging well pair is investigated. Numerical difficulties encountered with the standard Galerkin formulations in Cartesian coordinates are illustrated. An improved finite element solution st...

  19. Section 504 of the Rehabilitation Act: Emerging Issues for Colleges and Universities.

    ERIC Educational Resources Information Center

    Rothstein, Laura F.

    1986-01-01

    Issues relating to handicapped students on college campuses include: who is handicapped; standardized tests and students with health impairments; and reasonable accommodations for students (including readers and interpreters, and students with AIDS, alcohol or drug addiction, or emotional problems). (Author/MLW)

  20. Solution of nonlinear time-dependent PDEs through componentwise approximation of matrix functions

    NASA Astrophysics Data System (ADS)

    Cibotarica, Alexandru; Lambers, James V.; Palchak, Elisabeth M.

    2016-09-01

    Exponential propagation iterative (EPI) methods provide an efficient approach to the solution of large stiff systems of ODEs, compared to standard integrators. However, the bulk of the computational effort in these methods is due to products of matrix functions and vectors, which can become very costly at high resolution due to an increase in the number of Krylov projection steps needed to maintain accuracy. In this paper, it is proposed to modify EPI methods by using Krylov subspace spectral (KSS) methods, instead of standard Krylov projection methods, to compute products of matrix functions and vectors. Numerical experiments demonstrate that this modification causes the number of Krylov projection steps to become bounded independently of the grid size, thus dramatically improving efficiency and scalability. As a result, for each test problem featured, as the total number of grid points increases, the growth in computation time is just below linear, while other methods achieved this only on selected test problems or not at all.
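    The cost driver described above is the Krylov approximation of matrix-function-vector products; the sketch below shows the standard Arnoldi projection for exp(A)v (the baseline that KSS methods are meant to replace; the KSS algorithm itself is not reproduced here).

```python
import numpy as np
from scipy.linalg import expm

def arnoldi_expv(A, v, m):
    """Approximate exp(A) @ v from an m-dimensional Krylov subspace (standard Arnoldi projection)."""
    n = len(v)
    beta = np.linalg.norm(v)
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v / beta
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                 # modified Gram-Schmidt orthogonalization
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        if H[j + 1, j] < 1e-12:                # lucky breakdown: exact subspace found
            m = j + 1
            break
        V[:, j + 1] = w / H[j + 1, j]
    e1 = np.zeros(m); e1[0] = 1.0
    return beta * V[:, :m] @ (expm(H[:m, :m]) @ e1)

# Small test: scaled 1D Laplacian acting on a random vector
n = 200
A = -2 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
v = np.random.default_rng(0).normal(size=n)
approx = arnoldi_expv(0.1 * A, v, m=30)
print("error vs dense expm:", np.linalg.norm(approx - expm(0.1 * A) @ v))
```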

  1. Developing Seventh Grade Students' Understanding of Complex Environmental Problems with Systems Tools and Representations: a Quasi-experimental Study

    NASA Astrophysics Data System (ADS)

    Doganca Kucuk, Zerrin; Saysel, Ali Kerem

    2017-03-01

    A systems-based classroom intervention on environmental education was designed for seventh grade students; the results were evaluated to see its impact on the development of systems thinking skills and standard science achievement and whether the systems approach is a more effective way to teach environmental issues that are dynamic and complex. A quasi-experimental methodology was used to compare performances of the participants in various dimensions, including systems thinking skills, competence in dynamic environmental problem solving and success in science achievement tests. The same pre-, post- and delayed tests were used with both the comparison and experimental groups in the same public middle school in Istanbul. Classroom activities designed for the comparison group (N = 20) followed the directives of the Science and Technology Curriculum, while the experimental group (N = 22) covered the same subject matter through activities benefiting from systems tools and representations such as behaviour over time graphs, causal loop diagrams, stock-flow structures and hands-on dynamic modelling. After a one-month systems-based instruction, the experimental group demonstrated significantly better systems thinking and dynamic environmental problem solving skills. Achievement in dynamic problem solving was found to be relatively stable over time. However, standard science achievement did not improve at all. This paper focuses on the quantitative analysis of the results, the weaknesses of the curriculum and educational implications.

  2. An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems.

    PubMed

    Glover, Jack L; Hudson, Lawrence T

    2016-06-01

    The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in a US national aviation security standard.

  3. An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems

    PubMed Central

    Glover, Jack L.; Hudson, Lawrence T.

    2016-01-01

    The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in a US national aviation security standard. PMID:27499586

  4. An objectively-analyzed method for measuring the useful penetration of x-ray imaging systems

    NASA Astrophysics Data System (ADS)

    Glover, Jack L.; Hudson, Lawrence T.

    2016-06-01

    The ability to detect wires is an important capability of the cabinet x-ray imaging systems that are used in aviation security as well as the portable x-ray systems that are used by domestic law enforcement and military bomb squads. A number of national and international standards describe methods for testing this capability using the so called useful penetration test metric, where wires are imaged behind different thicknesses of blocking material. Presently, these tests are scored based on human judgments of wire visibility, which are inherently subjective. We propose a new method in which the useful penetration capabilities of an x-ray system are objectively evaluated by an image processing algorithm operating on digital images of a standard test object. The algorithm advantageously applies the Radon transform for curve parameter detection that reduces the problem of wire detection from two dimensions to one. The sensitivity of the wire detection method is adjustable and we demonstrate how the threshold parameter can be set to give agreement with human-judged results. The method was developed to be used in technical performance standards and is currently under ballot for inclusion in an international aviation security standard.
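    A hedged sketch of the Radon-transform step described in the three records above: a faint straight wire in a noisy synthetic radiograph concentrates into a single peak of the sinogram, so detection reduces to thresholding that peak. The image, score, and threshold are illustrative assumptions, not the standard's scoring procedure.

```python
import numpy as np
from skimage.transform import radon

rng = np.random.default_rng(0)
img = rng.normal(0.0, 1.0, size=(128, 128))       # noisy background (simulated radiograph)
img[:, 64] += 0.8                                  # faint vertical "wire"

theta = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(img, theta=theta, circle=False)   # each straight line maps to one (r, theta) bin

# A straight wire concentrates into one sinogram bin; score its contrast against the background.
peak = sinogram.max()
score = (peak - sinogram.mean()) / sinogram.std()
detected = score > 5.0                             # illustrative threshold, not the standard's value
print(f"peak z-score = {score:.1f}, wire detected: {detected}")
```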

  5. A problem-solving task specialized for functional neuroimaging: validation of the Scarborough adaptation of the Tower of London (S-TOL) using near-infrared spectroscopy

    PubMed Central

    Ruocco, Anthony C.; Rodrigo, Achala H.; Lam, Jaeger; Di Domenico, Stefano I.; Graves, Bryanna; Ayaz, Hasan

    2014-01-01

    Problem-solving is an executive function subserved by a network of neural structures of which the dorsolateral prefrontal cortex (DLPFC) is central. Whereas several studies have evaluated the role of the DLPFC in problem-solving, few standardized tasks have been developed specifically for use with functional neuroimaging. The current study adapted a measure with established validity for the assessment of problem-solving abilities to design a test more suitable for functional neuroimaging protocols. The Scarborough adaptation of the Tower of London (S-TOL) was administered to 38 healthy adults while hemodynamic oxygenation of the PFC was measured using 16-channel continuous-wave functional near-infrared spectroscopy (fNIRS). Compared to a baseline condition, problems that required two or three steps to achieve a goal configuration were associated with higher activation in the left DLPFC and deactivation in the medial PFC. Individuals scoring higher in trait deliberation showed consistently higher activation in the left DLPFC regardless of task difficulty, whereas individuals lower in this trait displayed less activation when solving simple problems. Based on these results, the S-TOL may serve as a standardized task to evaluate problem-solving abilities in functional neuroimaging studies. PMID:24734017

  6. Eigensensitivity analysis of rotating clamped uniform beams with the asymptotic numerical method

    NASA Astrophysics Data System (ADS)

    Bekhoucha, F.; Rechak, S.; Cadou, J. M.

    2016-12-01

    In this paper, free vibrations of a rotating clamped Euler-Bernoulli beam with uniform cross section are studied using a continuation method, namely the asymptotic numerical method. The governing equations of motion are derived using Lagrange's method. The kinetic and strain energy expressions are obtained from the Rayleigh-Ritz method using a set of hybrid variables and a linear deflection assumption. The derived equations are transformed into two eigenvalue problems: the first is a linear gyroscopic eigenvalue problem that couples the lagging and stretch motions through gyroscopic terms, while the second is a standard eigenvalue problem corresponding to the flapping motion. These two eigenvalue problems are recast as two functionals treated by the continuation method, the asymptotic numerical method. A new method is proposed for the solution of the linear gyroscopic system, based on an augmented system that transforms the original problem into a standard form with real symmetric matrices. By applying techniques for resolving the resulting singular problems within the continuation framework, evolution curves of the natural frequencies against dimensionless angular velocity are determined. At high angular velocity, some singular points, due to the linear elastic assumption, are computed. Numerical convergence tests are conducted and the results are compared with exact values. Results obtained by continuation are also compared with those computed from the discrete eigenvalue problem.
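    The gyroscopic eigenvalue problem mentioned above has the quadratic form (λ²M + λG + K)x = 0. The paper's augmented symmetric formulation is not reproduced here, but the generic companion linearization below shows how such a problem can be reduced to a standard eigenvalue problem (the matrix values are arbitrary illustrations, not a rotating-beam discretization).

```python
import numpy as np
from scipy.linalg import eig

def gyroscopic_eigs(M, G, K):
    """Solve (lam^2 M + lam G + K) x = 0 via a first-order companion linearization:
       z = [x, lam*x],  A z = lam z  with  A = [[0, I], [-M^-1 K, -M^-1 G]]."""
    n = M.shape[0]
    Minv = np.linalg.inv(M)
    A = np.block([[np.zeros((n, n)), np.eye(n)],
                  [-Minv @ K,        -Minv @ G]])
    lam, Z = eig(A)
    return lam, Z[:n, :]          # eigenvalues and the x-part of the eigenvectors

# Small illustrative undamped gyroscopic system (eigenvalues come out purely imaginary).
M = np.eye(2)
K = np.array([[4.0, 0.0], [0.0, 9.0]])
Omega = 1.0
G = Omega * np.array([[0.0, -2.0], [2.0, 0.0]])   # skew-symmetric gyroscopic coupling
lam, X = gyroscopic_eigs(M, G, K)
print("natural frequencies (rad/s):", np.sort(np.abs(lam.imag))[::2])
```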

  7. Null but not void: considerations for hypothesis testing.

    PubMed

    Shaw, Pamela A; Proschan, Michael A

    2013-01-30

    Standard statistical theory teaches us that once the null and alternative hypotheses have been defined for a parameter, the choice of the statistical test is clear. Standard theory does not teach us how to choose the null or alternative hypothesis appropriate to the scientific question of interest. Neither does it tell us that in some cases, depending on which alternatives are realistic, we may want to define our null hypothesis differently. Problems in statistical practice are frequently not as pristinely summarized as the classic theory in our textbooks. In this article, we present examples in statistical hypothesis testing in which seemingly simple choices are in fact rich with nuance that, when given full consideration, make the choice of the right hypothesis test much less straightforward. Published 2012. This article is a US Government work and is in the public domain in the USA.

  8. Enabling global exchange of groundwater data: GroundWaterML2 (GWML2)

    NASA Astrophysics Data System (ADS)

    Brodaric, Boyan; Boisvert, Eric; Chery, Laurence; Dahlhaus, Peter; Grellet, Sylvain; Kmoch, Alexander; Létourneau, François; Lucido, Jessica; Simons, Bruce; Wagner, Bernhard

    2018-05-01

    GWML2 is an international standard for the online exchange of groundwater data that addresses the problem of data heterogeneity. This problem makes groundwater data hard to find and use because the data are diversely structured and fragmented into numerous data silos. Overcoming data heterogeneity requires a common data format; however, until the development of GWML2, an appropriate international standard has been lacking. GWML2 represents key hydrogeological entities such as aquifers and water wells, as well as related measurements and groundwater flows. It is developed and tested by an international consortium of groundwater data providers from North America, Europe, and Australasia, and facilitates many forms of data exchange, information representation, and the development of online web portals and tools.

  9. A modified form of conjugate gradient method for unconstrained optimization problems

    NASA Astrophysics Data System (ADS)

    Ghani, Nur Hamizah Abdul; Rivaie, Mohd.; Mamat, Mustafa

    2016-06-01

    Conjugate gradient (CG) methods have been recognized as an interesting technique to solve optimization problems, due to their numerical efficiency, simplicity and low memory requirements. In this paper, we propose a new CG method based on the study of Rivaie et al. [7] (Comparative study of conjugate gradient coefficient for unconstrained Optimization, Aus. J. Bas. Appl. Sci. 5(2011) 947-951). Then, we show that our method satisfies the sufficient descent condition and converges globally under exact line search. Numerical results show that our proposed method is efficient on a set of standard test problems compared to other existing CG methods.
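    The coefficient proposed in the paper is not reproduced here; as a minimal illustration of the CG framework with exact line search on a standard test problem, the sketch below runs the classical Fletcher-Reeves update on a convex quadratic, for which the exact step length has a closed form.

```python
import numpy as np

def fletcher_reeves_cg(A, b, x0, tol=1e-8, max_iter=100):
    """Nonlinear CG with the Fletcher-Reeves beta, applied to f(x) = 0.5 x'Ax - b'x.
    For this quadratic the exact line search step is alpha = -g'd / d'Ad."""
    x = x0.copy()
    g = A @ x - b                 # gradient of f
    d = -g
    for k in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        alpha = -(g @ d) / (d @ A @ d)     # exact line search step length
        x = x + alpha * d
        g_new = A @ x - b
        beta = (g_new @ g_new) / (g @ g)   # Fletcher-Reeves coefficient
        d = -g_new + beta * d
        g = g_new
    return x, k

rng = np.random.default_rng(0)
Q = rng.normal(size=(20, 20))
A = Q @ Q.T + 20 * np.eye(20)      # symmetric positive definite test matrix
b = rng.normal(size=20)
x, iters = fletcher_reeves_cg(A, b, np.zeros(20))
print("iterations:", iters, " residual:", np.linalg.norm(A @ x - b))
```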

  10. Type testing of the Siemens Plessey electronic personal dosemeter.

    PubMed

    Hirning, C R; Yuen, P S

    1995-07-01

    This paper presents the results of a laboratory assessment of the performance of a new type of personal dosimeter, the Electronic Personal Dosemeter made by Siemens Plessey Controls Limited. Twenty pre-production dosimeters and a reader were purchased by Ontario Hydro for the assessment. Tests were performed on radiological performance, including reproducibility, accuracy, linearity, detection threshold, energy response, angular response, neutron response, and response time. There were also tests on the effects of a variety of environmental factors, such as temperature, humidity, pulsed magnetic and electric fields, low- and high-frequency electromagnetic fields, light exposure, drop impact, vibration, and splashing. Other characteristics that were tested were alarm volume, clip force, and battery life. The test results were compared with the relevant requirements of three standards: an Ontario Hydro standard for personal alarming dosimeters, an International Electrotechnical Commission draft standard for direct reading personal dose monitors, and an International Electrotechnical Commission standard for thermoluminescence dosimetry systems for personal monitoring. In general, the performance of the Electronic Personal Dosemeter was found to be quite acceptable: it met most of the relevant requirements of the three standards. However, the following deficiencies were found: slow response time; sensitivity to high-frequency electromagnetic fields; poor resistance to dropping; and an alarm that was not loud enough. In addition, the response of the electronic personal dosimeter to low-energy beta rays may be too low for some applications. Problems were experienced with the reliability of operation of the pre-production dosimeters used in these tests.

  11. An Empirical Comparison of Seven Iterative and Evolutionary Function Optimization Heuristics

    NASA Technical Reports Server (NTRS)

    Baluja, Shumeet

    1995-01-01

    This report is a repository of the results obtained from a large scale empirical comparison of seven iterative and evolution-based optimization heuristics. Twenty-seven static optimization problems, spanning six sets of problem classes which are commonly explored in the genetic algorithm literature, are examined. The problem sets include job-shop scheduling, traveling salesman, knapsack, bin packing, neural network weight optimization, and standard numerical optimization. The search spaces in these problems range from 2^368 to 2^2040. The results indicate that using genetic algorithms for the optimization of static functions does not yield a benefit, in terms of the final answer obtained, over simpler optimization heuristics. The algorithms tested and the encodings of the problems are described in detail for reproducibility.

  12. Using the virtual reality device Oculus Rift for neuropsychological assessment of visual processing capabilities

    PubMed Central

    Foerster, Rebecca M.; Poth, Christian H.; Behler, Christian; Botsch, Mario; Schneider, Werner X.

    2016-01-01

    Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions including room lighting, stimuli, and viewing distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite for using Oculus Rift for neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen's visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with Oculus Rift and a standard CRT computer screen. Our results show that Oculus Rift allows the processing components to be measured as reliably as with the standard CRT. This means that Oculus Rift is applicable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions. PMID:27869220

  13. Using the virtual reality device Oculus Rift for neuropsychological assessment of visual processing capabilities.

    PubMed

    Foerster, Rebecca M; Poth, Christian H; Behler, Christian; Botsch, Mario; Schneider, Werner X

    2016-11-21

    Neuropsychological assessment of human visual processing capabilities strongly depends on visual testing conditions including room lighting, stimuli, and viewing distance. This limits standardization, threatens reliability, and prevents the assessment of core visual functions such as visual processing speed. Increasingly available virtual reality devices make it possible to address these problems. One such device is the portable, light-weight, and easy-to-use Oculus Rift. It is head-mounted and covers the entire visual field, thereby shielding and standardizing the visual stimulation. A fundamental prerequisite for using Oculus Rift for neuropsychological assessment is sufficient test-retest reliability. Here, we compare the test-retest reliabilities of Bundesen's visual processing components (visual processing speed, threshold of conscious perception, capacity of visual working memory) as measured with Oculus Rift and a standard CRT computer screen. Our results show that Oculus Rift allows the processing components to be measured as reliably as with the standard CRT. This means that Oculus Rift is applicable for standardized and reliable assessment and diagnosis of elementary cognitive functions in laboratory and clinical settings. Oculus Rift thus provides the opportunity to compare visual processing components between individuals and institutions and to establish statistical norm distributions.

  14. ISO WD 15856. Guideline for radiation exposure of nonmetallic materials. Present status

    NASA Astrophysics Data System (ADS)

    Briskman, B. A.

    In the framework of the International Organization for Standardization (ISO) activity, we started the development of an international standard series for space environment simulation in on-ground tests of materials. The proposal was submitted to ISO Technical Committee 20 (Aircraft and Space Vehicles), Subcommittee 14 (Space Systems and Operations) and was approved as Working Draft 15856 at the Los-Angeles meeting (1997). A draft of the first international standard "Space Environment Simulation for Radiation Tests of Materials" (1st version) was presented at the 7th International Symposium on Materials in Space Environment (Briskman et al, 1997). The 2nd version of the standard was limited to nonmetallic materials and presented at the 20th Space Simulation Conference (Briskman and Borson, 1998). It covers the testing of nonmetallic materials, embracing also polymer composite materials that include metal components (metal matrix composites), under simulated space radiation. The standard does not cover semiconductor materials. The types of simulated radiation include charged particles (electrons and protons), solar ultraviolet radiation, and soft X-radiation of solar flares. Synergistic interactions of the radiation environment are covered only for these natural and some induced environmental effects. This standard outlines the recommended methodology and practices for the simulation of space radiation effects on materials. Simulation methods are used to reproduce the effects of the space radiation environment on materials that are located on surfaces of space vehicles and behind shielding. It was discovered that the problem of radiation environment simulation is very complex and the approaches of different specialists and countries to the problem are sometimes quite opposite. To date, we have developed seven versions of the standard. The last version is a compromise between these approaches. It was approved at the last ISO TC20/SC14/WG4 meeting in Houston, October 2002. At a splinter meeting of the Int. Conference on Materials in a Space Environment (Noordwijk, Netherlands, ESA, June 2003), experts from ESA, USA, France, Russia and Japan discussed the last version of the draft and approved it with a number of notes. A revised version of the standard will be presented this May at the ISO TC20/SC14 meeting in Russia.

  15. A new method to address verification bias in studies of clinical screening tests: cervical cancer screening assays as an example.

    PubMed

    Xue, Xiaonan; Kim, Mimi Y; Castle, Philip E; Strickler, Howard D

    2014-03-01

    Studies to evaluate clinical screening tests often face the problem that the "gold standard" diagnostic approach is costly and/or invasive. It is therefore common to verify only a subset of negative screening tests using the gold standard method. However, undersampling the screen negatives can lead to substantial overestimation of the sensitivity and underestimation of the specificity of the diagnostic test. Our objective was to develop a simple and accurate statistical method to address this "verification bias." We developed a weighted generalized estimating equation approach to estimate, in a single model, the accuracy (eg, sensitivity/specificity) of multiple assays and simultaneously compare results between assays while addressing verification bias. This approach can be implemented using standard statistical software. Simulations were conducted to assess the proposed method. An example is provided using a cervical cancer screening trial that compared the accuracy of human papillomavirus and Pap tests, with histologic data as the gold standard. The proposed approach performed well in estimating and comparing the accuracy of multiple assays in the presence of verification bias. The proposed approach is an easy to apply and accurate method for addressing verification bias in studies of multiple screening methods. Copyright © 2014 Elsevier Inc. All rights reserved.
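    A simplified numerical sketch of the verification-bias idea on simulated data (inverse-probability-of-verification weighting only, not the authors' weighted GEE model that also compares multiple assays): screen-negatives are verified at a lower rate, the naive estimates are biased, and weighting by the inverse verification probability recovers the true sensitivity and specificity.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000
disease = rng.random(n) < 0.10                      # true disease status (gold standard)
# Screening test simulated with true Se = 0.80, Sp = 0.90
test_pos = np.where(disease, rng.random(n) < 0.80, rng.random(n) < 0.10)

# Verification: all screen-positives, but only 10% of screen-negatives, get the gold standard.
p_verify = np.where(test_pos, 1.0, 0.10)
verified = rng.random(n) < p_verify

# Naive estimates restricted to verified subjects are biased (Se too high, Sp too low).
d, t = disease[verified], test_pos[verified]
se_naive = t[d].mean()
sp_naive = (~t[~d]).mean()

# Inverse-probability-of-verification weights correct the bias.
w = 1.0 / p_verify[verified]
se_w = np.sum(w * (t & d)) / np.sum(w * d)
sp_w = np.sum(w * (~t & ~d)) / np.sum(w * ~d)
print(f"naive Se={se_naive:.3f} Sp={sp_naive:.3f}  |  weighted Se={se_w:.3f} Sp={sp_w:.3f}")
```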

  16. Evolutionary optimization methods for accelerator design

    NASA Astrophysics Data System (ADS)

    Poklonskiy, Alexey A.

    Many problems from the fields of accelerator physics and beam theory can be formulated as optimization problems and, as such, solved using optimization methods. Despite the growing efficiency of optimization methods, the adoption of modern optimization techniques in these fields is rather limited. Evolutionary Algorithms (EAs) form a relatively new and actively developed family of optimization methods. They possess many attractive features such as ease of implementation, modest requirements on the objective function, a good tolerance to noise, robustness, and the ability to perform a global search efficiently. In this work we study the application of EAs to problems from accelerator physics and beam theory. We review the most commonly used methods of unconstrained optimization and describe GATool, the evolutionary algorithm and software package used in this work, in detail. Then we use a set of test problems to assess its performance in terms of computational resources, quality of the obtained result, and the tradeoff between them. We justify the choice of GATool as a heuristic method to generate cutoff values for the COSY-GO rigorous global optimization package for the COSY Infinity scientific computing package. We design the model of their mutual interaction and demonstrate that the quality of the result obtained by GATool increases as the information about the search domain is refined, which supports the usefulness of this model. We discuss GATool's performance on problems suffering from static and dynamic noise and study useful strategies of GATool parameter tuning for these and other difficult problems. We review the challenges of constrained optimization with EAs and methods commonly used to overcome them. We describe REPA, a new constrained optimization method based on repairing, in detail, including the properties of its two repairing techniques: REFIND and REPROPT. We assess REPROPT's performance on the standard constrained optimization test problems for EAs with a variety of different configurations and suggest optimal default parameter values based on the results. Then we study the performance of the REPA method on the same set of test problems and compare the obtained results with those of several commonly used constrained optimization methods with EAs. Based on the obtained results, particularly on the outstanding performance of REPA on a test problem that presents significant difficulty for the other reviewed EAs, we conclude that the proposed method is useful and competitive. We discuss REPA parameter tuning for difficult problems and critically review some of the problems from the de-facto standard test problem set for constrained optimization with EAs. In order to demonstrate the practical usefulness of the developed method, we study several problems of accelerator design and demonstrate how they can be solved with EAs. These problems include a simple accelerator design problem (design a quadrupole triplet to be stigmatically imaging, find all possible solutions), a complex real-life accelerator design problem (an optimization of the front end section for the future neutrino factory), and a problem of normal form defect function optimization, which is used to rigorously estimate the stability of the beam dynamics in circular accelerators. The positive results we obtained suggest that the application of EAs to problems from accelerator theory can be very beneficial and has large potential.
The developed optimization scenarios and tools can be used to approach similar problems.

  17. When Bad things Happen to Good Children: A Special Educator's Views of MCAS.

    ERIC Educational Resources Information Center

    Holbrook, Pixie J.

    2001-01-01

    A teacher describes the frustrations of an intelligent, learning-disabled fourth-grader who cannot pass the Massachusetts Comprehensive Assessment System despite standard academic accommodations. The teacher advocates development of alternative or "nonstandard" accommodations and tests that assess students' spatial, problem-solving, and…

  18. A Baseline Evaluation Procedure for Federal Standards on the Prevention, Identification and Treatment of Child Abuse and Neglect. Volume I: Development, Field Testing and Recommended Procedure. Volume II: State of Washington Field Test.

    ERIC Educational Resources Information Center

    Seaberg, James R.; And Others

    The National Center on Child Abuse and Neglect funded a project to develop and field-test an evaluation procedure that could be used by interested states or communities to determine the extent of congruity between (1) their provisions for responding to the problems of child abuse and neglect, and (2) provisions prescribed in the Federal Standards…

  19. Enhanced Low Dose Rate Effects in Bipolar Circuits: A New Hardness Assurance Problem for NASA

    NASA Technical Reports Server (NTRS)

    Johnston, A.; Barnes, C.

    1995-01-01

    Many bipolar integrated circuits are much more susceptible to ionizing radiation at low dose rates than they are at high dose rates typically used for radiation parts testing. Since the low dose rate is equivalent to that seen in space, the standard lab test no longer can be considered conservative and has caused the Air Force to issue an alert. Although a reliable radiation hardness assurance test has not yet been designed, possible mechanisms for low dose rate enhancement and hardness assurance tests are discussed.

  20. Infrastructure and the Virtual Observatory

    NASA Astrophysics Data System (ADS)

    Dowler, P.; Gaudet, S.; Schade, D.

    2011-07-01

    The modern data center is faced with architectural and software engineering challenges that grow along with the challenges facing observatories: massive data flow, distributed computing environments, and distributed teams collaborating on large and small projects. By using VO standards as key components of the infrastructure, projects can take advantage of a decade of intellectual investment by the IVOA community. By their nature, these standards are proven and tested designs that already exist. Adopting VO standards saves considerable design effort, allows projects to take advantage of open-source software and test suites to speed development, and enables the use of third party tools that understand the VO protocols. The evolving CADC architecture now makes heavy use of VO standards. We show examples of how these standards may be used directly, coupled with non-VO standards, or extended with custom capabilities to solve real problems and provide value to our users. In the end, we use VO services as major parts of the core infrastructure to reduce cost rather than as an extra layer with additional cost and we can deliver more general purpose and robust services to our user community.

  1. 20 CFR 664.205 - How is the “deficient in basic literacy skills” criterion in § 664.200(c)(1) defined and documented?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... State or local concerns, and must include a determination that an individual: (1) Computes or solves problems, reads, writes, or speaks English at or below the 8th grade level on a generally accepted standardized test or a comparable score on a criterion-referenced test; or (2) Is unable to compute or solve...

  2. Effect of oxidation products on service properties of motor oils

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhitova, T.Yu.; Polipanov, I.S.

    1995-01-01

    One of the most urgent problems in chemmotology is how to create, in an engine-lube oil system, a controllable tribochemical process for the purpose of stabilizing the service properties of the oil and forming protective surface structures on the engine parts in order to minimize wear. The complexity of this problem reflects the diversity of the processes taking place in the tribological system. It is impossible to elucidate the mechanism of tribochemical reactions without studying the influence of changes in the oil composition and structure on its service properties during the course of operation. If the relationships involved in this influence are defined, it will become possible to change the structure of the oil in the desired direction and to achieve the desired service properties. For our studies we selected the motor oil M-10-G2, conforming to GOST 8581-78. Samples of this oil were drawn during test-stand evaluations of D-144 and D-144-60 tractor diesels without any oil changes; these tests were conducted jointly by the Institute of Problems in Mechanical Engineering of the Russian Academy of Sciences, the Scientific-Research and Design-Technology Institute of Tractor and Combine Engines (NIKTID), and the Vladimir Tractor Plant Production Association. Tests were run for 1000 h with the standard conditions and test sequence, and for 1500 and 2300 h under conditions of a "constantly acting tribochemical regime". Oil samples were drawn at 50-100 h intervals and tested by standard methods to determine the following physico-chemical characteristics: kinematic viscosity, acid and base numbers, ash, carbon residue, content of insoluble sludge, and content of particulate contaminant.

  3. Evaluation of a standard test method for total hemispherical emittance of surfaces from 293K to 1673K

    NASA Technical Reports Server (NTRS)

    Compton, E. C.

    1986-01-01

    Emittance tests were made on samples of Rene' 41, Haynes 188, and Inconel 625 superalloy metals in an evaluation of a standard test method for determining total hemispherical emittances of surfaces from 293 K to 1673 K. The intent of this evaluation was to address any problems encountered, check the repeatability of measured emittances, and gain experience in use of the test procedure. Five test specimens were fabricated to the prescribed test dimensions and their surfaces cleaned of oil and residue. Three of these specimens were without oxidized surfaces and two had oxidized surfaces. The oxidized specimens were Rene' 41 and Haynes 188. The tests were conducted in a vacuum, where the samples were resistance-heated to various temperature levels ranging from 503 K to 1293 K. The calculated results for emittance were, in the worst case, repeatable to a maximum spread of +/- 4% from the mean of five sets of plotted data for each specimen.

  4. An accurate on-site calibration system for electronic voltage transformers using a standard capacitor

    NASA Astrophysics Data System (ADS)

    Hu, Chen; Chen, Mian-zhou; Li, Hong-bin; Zhang, Zhu; Jiao, Yang; Shao, Haiming

    2018-05-01

    Ordinarily, electronic voltage transformers (EVTs) are calibrated off-line, and the calibration procedure requires complex switching operations, which affect the reliability of the power grid and cause large economic losses. To overcome this problem, this paper investigates a 110 kV on-site calibration system for EVTs, including a standard channel, a calibrated channel and a PC equipped with the LabView environment. The standard channel employs a standard capacitor and an analogue integrating circuit to reconstruct the primary voltage signal. Moreover, an adaptive full-phase discrete Fourier transform (DFT) algorithm is proposed to extract the electrical parameters. The algorithm involves extracting the frequency of the grid, adjusting the operating points, and calculating the results using the DFT. In addition, an insulated automatic lifting device, driven by a wireless remote controller, is designed to realize the live connection of the standard capacitor. A performance test verifies the accuracy of the standard capacitor. A system calibration test shows that the system ratio error is less than 0.04% and the phase error is below 2' (arcminutes), which meets the requirement of the 0.2 accuracy class. Finally, the developed calibration system was used in a substation, and the field test data validate the system's effectiveness.
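    A simplified sketch of the frequency-adaptive DFT step described above (not the authors' exact algorithm, error model, or hardware chain): estimate the grid frequency from the FFT peak, trim the record to a whole number of cycles, read amplitude and phase from a single-bin DFT, and compare the two channels to obtain ratio and phase errors. The signals and error values are invented.

```python
import numpy as np

def adaptive_dft(signal, fs):
    """Estimate the fundamental frequency (FFT peak + parabolic interpolation), trim the record to
    an integer number of cycles, then read amplitude and phase from a single-bin DFT."""
    n = len(signal)
    spec = np.abs(np.fft.rfft(signal))
    k = int(spec[1:-1].argmax()) + 1
    # Parabolic interpolation around the peak bin refines the coarse frequency estimate.
    d = 0.5 * (spec[k - 1] - spec[k + 1]) / (spec[k - 1] - 2 * spec[k] + spec[k + 1])
    f_est = (k + d) * fs / n
    n_cycles = int(f_est * n / fs)                 # whole cycles that fit in the record
    n_win = int(n_cycles * fs / f_est)             # samples spanning those whole cycles
    t = np.arange(n_win) / fs
    X = np.sum(signal[:n_win] * np.exp(-2j * np.pi * f_est * t))
    return f_est, 2 * np.abs(X) / n_win, np.angle(X)

fs = 10_000.0
t = np.arange(4096) / fs
ref = 1.0000 * np.cos(2 * np.pi * 50.2 * t)               # reconstructed standard-channel voltage
dut = 0.9996 * np.cos(2 * np.pi * 50.2 * t - 0.0005)      # calibrated channel with small errors
_, a_ref, p_ref = adaptive_dft(ref, fs)
_, a_dut, p_dut = adaptive_dft(dut, fs)
print(f"ratio error = {(a_dut - a_ref) / a_ref * 100:+.3f} %")
print(f"phase error = {(p_dut - p_ref) * 180 / np.pi * 60:+.2f} arcmin")
```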

  5. Bias in logistic regression due to imperfect diagnostic test results and practical correction approaches.

    PubMed

    Valle, Denis; Lima, Joanna M Tucker; Millar, Justin; Amratia, Punam; Haque, Ubydul

    2015-11-04

    Logistic regression is a statistical model widely used in cross-sectional and cohort studies to identify and quantify the effects of potential disease risk factors. However, the impact of imperfect tests on adjusted odds ratios (and thus on the identification of risk factors) is under-appreciated. The purpose of this article is to draw attention to the problem associated with modelling imperfect diagnostic tests, and propose simple Bayesian models to adequately address this issue. A systematic literature review was conducted to determine the proportion of malaria studies that appropriately accounted for false-negatives/false-positives in a logistic regression setting. Inference from the standard logistic regression was also compared with that from three proposed Bayesian models using simulations and malaria data from the western Brazilian Amazon. A systematic literature review suggests that malaria epidemiologists are largely unaware of the problem of using logistic regression to model imperfect diagnostic test results. Simulation results reveal that statistical inference can be substantially improved when using the proposed Bayesian models versus the standard logistic regression. Finally, analysis of original malaria data with one of the proposed Bayesian models reveals that microscopy sensitivity is strongly influenced by how long people have lived in the study region, and an important risk factor (i.e., participation in forest extractivism) is identified that would have been missed by standard logistic regression. Given the numerous diagnostic methods employed by malaria researchers and the ubiquitous use of logistic regression to model the results of these diagnostic tests, this paper provides critical guidelines to improve data analysis practice in the presence of misclassification error. Easy-to-use code that can be readily adapted to WinBUGS is provided, enabling straightforward implementation of the proposed Bayesian models.
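    A non-Bayesian sketch of the misclassification adjustment discussed above, with sensitivity and specificity fixed at assumed known values rather than given priors as in the authors' Bayesian models: the observed-positive probability is written as Se·p + (1 − Sp)·(1 − p), with p the logistic model for true infection, and the likelihood is maximized directly. Data are simulated.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.special import expit

rng = np.random.default_rng(0)
n = 5000
X = np.column_stack([np.ones(n), rng.normal(size=n)])   # intercept + one risk factor
beta_true = np.array([-1.0, 0.8])
infected = rng.random(n) < expit(X @ beta_true)

se, sp = 0.85, 0.95                                      # assumed (known) test characteristics
observed_pos = np.where(infected, rng.random(n) < se, rng.random(n) < 1 - sp)

def negloglik(beta, X, y, se, sp):
    p = expit(X @ beta)                  # modelled probability of true infection
    q = se * p + (1 - sp) * (1 - p)      # probability of an observed positive test result
    return -np.sum(y * np.log(q) + (1 - y) * np.log(1 - q))

fit_corrected = minimize(negloglik, np.zeros(2), args=(X, observed_pos, se, sp))
fit_naive = minimize(negloglik, np.zeros(2), args=(X, observed_pos, 1.0, 1.0))  # ignores misclassification
print("true beta:     ", beta_true)
print("corrected beta:", fit_corrected.x.round(2))
print("naive beta:    ", fit_naive.x.round(2))
```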

  6. FBST for Cointegration Problems

    NASA Astrophysics Data System (ADS)

    Diniz, M.; Pereira, C. A. B.; Stern, J. M.

    2008-11-01

    In order to estimate causal relations, time series econometrics has to be aware of spurious correlation, a problem first mentioned by Yule [21]. To address the problem, one can work with differenced series or use multivariate models such as VAR or VEC models. In the latter case, the analysed series present a long-run relation, i.e. a cointegration relation. Even though the Bayesian literature on inference for VAR/VEC models is quite advanced, Bauwens et al. [2] highlight that "the topic of selecting the cointegrating rank has not yet given very useful and convincing results." This paper presents the Full Bayesian Significance Test applied to cointegration rank selection in multivariate (VAR/VEC) time series models and shows how to implement it using data sets available in the literature as well as simulated ones. A standard non-informative prior is assumed.

  7. Human factors engineering verification and validation for APR1400 computerized control room

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Shin, Y. C.; Moon, H. K.; Kim, J. H.

    2006-07-01

    This paper introduces the Advanced Power Reactor 1400 (APR1400) HFE V and V activities that the Korea Hydro Nuclear Plant Co. LTD. (KHNP) has performed over the last 10 years and some of the lessons learned through these activities. The features of the APR1400 main control room include a large display panel, redundant compact workstations, computer-based procedures, and a safety console. Several iterations of human factors evaluations have been performed, from small-scale proof-of-concept tests to large-scale integrated system tests, to identify human engineering deficiencies in the human-system interface design. Evaluations in the proof-of-concept test focused on checking for any show-stopper problems in the design concept. Later evaluations were mostly aimed at finding design problems and at assuring the resolution of human factors issues of the advanced control room. The results of the design evaluations were useful not only for refining the control room design, but also for licensing the standard design. Several versions of APR1400 mock-ups with dynamic simulation models of the currently operating Korea Standard Nuclear Plant (KSNP) have been used for the evaluations, with the participation of operators from KSNP plants. (authors)

  8. Permutation coding technique for image recognition systems.

    PubMed

    Kussul, Ernst M; Baidyk, Tatiana N; Wunsch, Donald C; Makeyev, Oleksandr; Martín, Anabel

    2006-11-01

    A feature extractor and neural classifier for image recognition systems are proposed. The proposed feature extractor is based on the concept of random local descriptors (RLDs). It is followed by an encoder based on the permutation coding technique, which makes it possible to take into account not only the detected features but also the position of each feature in the image, and to make the recognition process invariant to small displacements. The combination of RLDs and permutation coding permits a sufficiently general description of the image to be recognized. The code generated by the encoder is used as input data for the neural classifier. Different types of images were used to test the proposed image recognition system. It was tested on the handwritten digit recognition problem, the face recognition problem, and the microobject shape recognition problem. The results of testing are very promising. The error rate for the Modified National Institute of Standards and Technology (MNIST) database is 0.44% and for the Olivetti Research Laboratory (ORL) database it is 0.1%.

  9. Does standard deviation matter? Using "standard deviation" to quantify security of multistage testing.

    PubMed

    Wang, Chun; Zheng, Yi; Chang, Hua-Hua

    2014-01-01

    With the advent of web-based technology, online testing is becoming a mainstream mode in large-scale educational assessments. Most online tests are administered continuously in a testing window, which may pose test security problems because examinees who take the test earlier may share information with those who take the test later. Researchers have proposed various statistical indices to assess test security, and the most often used index is the average test-overlap rate, which was further generalized to the item pooling index (Chang & Zhang, 2002, 2003). These indices, however, are all defined as means (that is, the expected proportion of common items among examinees), and they were originally proposed for computerized adaptive testing (CAT). Recently, multistage testing (MST) has become a popular alternative to CAT. The unique features of MST make it important to report not only the mean, but also the standard deviation (SD) of the test-overlap rate, as we advocate in this paper. The standard deviation of the test-overlap rate adds important information to the test security profile, because for the same mean, a large SD reflects that certain groups of examinees share more common items than other groups. In this study, we analytically derived the lower bounds of the SD under MST, with the results under CAT as a benchmark. It is shown that when the mean overlap rate is the same between MST and CAT, the SD of test overlap tends to be larger in MST. A simulation study was conducted to provide empirical evidence. We also compared the security of MST under the single-pool versus the multiple-pool designs; both analytical and simulation studies show that the non-overlapping multiple-pool design will slightly increase the security risk.
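    The distinction between the mean and the SD of the test-overlap rate is easy to illustrate with a small simulation. The toy panel below (one routed module per stage, random routing, arbitrary sizes) is a placeholder and not the paper's MST design.

    ```python
    import numpy as np
    from itertools import combinations

    rng = np.random.default_rng(0)

    def simulate_overlap(n_examinees=200, modules_per_stage=(1, 3, 4), module_len=10):
        """Each examinee takes one module per stage of a toy MST panel; routing
        here is random, a stand-in for ability-based routing."""
        tests = []
        for _ in range(n_examinees):
            items, offset = set(), 0
            for n_mod in modules_per_stage:
                m = rng.integers(0, n_mod)
                items.update(range(offset + m * module_len, offset + (m + 1) * module_len))
                offset += n_mod * module_len
            tests.append(items)
        test_len = len(modules_per_stage) * module_len
        # Pairwise overlap rate = shared items / test length, over all examinee pairs.
        rates = [len(a & b) / test_len for a, b in combinations(tests, 2)]
        return np.mean(rates), np.std(rates)

    mean_rate, sd_rate = simulate_overlap()
    print(f"mean overlap = {mean_rate:.3f}, SD = {sd_rate:.3f}")
    ```

    Two designs with the same mean overlap can differ sharply in the SD, which is exactly the additional security information the authors argue should be reported.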

  10. A study of environmental characterization of conventional and advanced aluminum alloys for selection and design. Phase 1: Literature review

    NASA Technical Reports Server (NTRS)

    Sprowls, D. O.

    1984-01-01

    A review of the literature is presented with the objectives of identifying relationships between various accelerated stress corrosion testing techniques and of determining the combination of test methods best suited to the selection and design of high strength aluminum alloys. The following areas are reviewed: the status of stress-corrosion test standards, the influence of mechanical and environmental factors on stress corrosion testing, the correlation of accelerated test data with in-service experience, and procedures used to avoid stress corrosion problems in service. Promising areas for further work are identified.

  11. Long-term neurodevelopmental outcomes of congenital diaphragmatic hernia survivors not treated with extracorporeal membrane oxygenation.

    PubMed

    Frisk, Virginia; Jakobson, Lorna S; Unger, Sharon; Trachsel, Daniel; O'Brien, Karel

    2011-07-01

    Although there has been a marked improvement in the survival of children with congenital diaphragmatic hernia (CDH) in the past 2 decades, there are few reports of long-term neurodevelopmental outcome in this population. The present study examined neurodevelopmental outcomes in 10- to 16-year-old CDH survivors not treated with extracorporeal membrane oxygenation (ECMO). Parents of 27 CDH survivors completed questionnaires assessing medical problems, daily living skills, educational outcomes, behavioral problems, and executive functioning. Fifteen CDH survivors and matched full-term controls completed standardized intelligence, academic achievement, phonological processing, and working memory tests. Non-ECMO-treated CDH survivors demonstrated high rates of clinically significant difficulties on standardized academic achievement measures, and 14 of the 27 survivors had a formal diagnosis of specific learning disability, attention deficit hyperactivity disorder, or developmental disability. Specific problems with executive function, cognitive and attentional weaknesses, and social difficulties were more common in CDH patients than controls. Perioperative hypocapnia was linked to executive dysfunction, behavioral problems, lowered intelligence, and poor achievement in mathematics. Non-ECMO-treated CDH survivors are at substantial risk for neurodevelopmental problems in late childhood and adolescence. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. Advanced Solar Cell Testing and Characterization

    NASA Technical Reports Server (NTRS)

    Bailey, Sheila; Curtis, Henry; Piszczor, Michael

    2005-01-01

    The topic for this workshop stems from an ongoing effort by the photovoltaic community and U.S. government to address issues and recent problems associated with solar cells and arrays experienced by a number of different space systems. In April 2003, a workshop session was held at the Aerospace Space Power Workshop to discuss an effort by the Air Force to update and standardize solar cell and array qualification test procedures in an effort to ameliorate some of these problems. The organizers of that workshop session thought it was important to continue these discussions and present this information to the entire photovoltaic community. Thus, it was decided to include this topic as a workshop at the following SPRAT conference.

  13. The Problem of Boys' Literacy Underachievement: Raising Some Questions

    ERIC Educational Resources Information Center

    Watson, Anne; Kehler, Michael; Martino, Wayne

    2010-01-01

    Boys' literacy underachievement continues to garner significant attention and has been identified by journalists, educational policymakers, and scholars in the field as the cause for much concern. It has been established that boys perform less well than girls on literacy benchmark or standardized tests. According to the National Assessment of…

  14. The Effects of Math Anxiety

    ERIC Educational Resources Information Center

    Andrews, Amanda; Brown, Jennifer

    2015-01-01

    Math anxiety is a reoccurring problem for many students, and the effects of this anxiety on college students are increasing. The purpose of this study was to examine the association between pre-enrollment math anxiety, standardized test scores, math placement scores, and academic success during freshman math coursework (i.e., pre-algebra, college…

  15. Video Measurements: Quantity or Quality

    ERIC Educational Resources Information Center

    Zajkov, Oliver; Mitrevski, Boce

    2012-01-01

    Students have problems with understanding, using and interpreting graphs. In order to improve the students' skills for working with graphs, we propose Manual Video Measurement (MVM). In this paper, the MVM method is explained and its accuracy is tested. The comparison with the standardized video data software shows that its accuracy is comparable…

  16. Psychometric Properties of an Intimate Partner Violence Tool for Health Care Students

    ERIC Educational Resources Information Center

    Connor, Pamela D.; Nouer, Simonne S.; Mackey, See Trail N.; Tipton, Nathan G.; Lloyd, Angela K.

    2011-01-01

    Health care professionals have acknowledged intimate partner violence (IPV) as a highly prevalent public health problem necessitating the creation of standardized education programs, survey tools, and well-defined outcome measures. Testing and evaluation of these measures, however, has been limited to specific populations of health care…

  17. The Problem with Performance Pay

    ERIC Educational Resources Information Center

    Gratz, Donald B.

    2009-01-01

    Although today's performance pay plans take many forms, the most commonly proposed version--in which teachers are rewarded on the basis of their students' standardized test scores--flows from flawed logic and several troublesome assumptions: that teachers lack motivation and supposedly need financial awards to give students what they need; that…

  18. 7 CFR 1728.70 - Procurement of materials.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... unlisted item in emergency situations and for experimental use or to meet a specific need. For purposes of... from the industry is not readily available, or the standard designs are not applicable to the borrower's specific problem under consideration. (3) RUS will make arrangements for test or experimental use...

  19. 7 CFR 1728.70 - Procurement of materials.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... unlisted item in emergency situations and for experimental use or to meet a specific need. For purposes of... from the industry is not readily available, or the standard designs are not applicable to the borrower's specific problem under consideration. (3) RUS will make arrangements for test or experimental use...

  20. 7 CFR 1728.70 - Procurement of materials.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... unlisted item in emergency situations and for experimental use or to meet a specific need. For purposes of... from the industry is not readily available, or the standard designs are not applicable to the borrower's specific problem under consideration. (3) RUS will make arrangements for test or experimental use...

  1. 7 CFR 1728.70 - Procurement of materials.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... unlisted item in emergency situations and for experimental use or to meet a specific need. For purposes of... from the industry is not readily available, or the standard designs are not applicable to the borrower's specific problem under consideration. (3) RUS will make arrangements for test or experimental use...

  2. 7 CFR 1728.70 - Procurement of materials.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... unlisted item in emergency situations and for experimental use or to meet a specific need. For purposes of... from the industry is not readily available, or the standard designs are not applicable to the borrower's specific problem under consideration. (3) RUS will make arrangements for test or experimental use...

  3. Problems and Methods of Teaching and Assessment of Students on Day Release in Higher Education.

    ERIC Educational Resources Information Center

    Trotman-Dickenson, Danusia

    1980-01-01

    Part-time students' characteristics and teaching and testing preferences were correlated with performance on a standardized economics exam. Learning modules are described. Methods of pinpointing student weaknesses and predicting students' final results are discussed as they relate to other subjects. (MSE)

  4. The Case of Public Schools in Argentina

    ERIC Educational Resources Information Center

    Adrogue, Cecilia; Orlicki, Maria Eugenia

    2013-01-01

    As Argentina presents problems of malnutrition, the federal in-school feeding program has become a key policy because it provides an important nutritional intervention during a relevant growth period. This paper estimates the effect of the program on academic performance--measured by standardized test scores--with a difference in difference model,…

  5. Improving reseeding success after catastrophic wildfire with surfactant seed coating technology

    USDA-ARS?s Scientific Manuscript database

    The application of soil surfactants in wildfire-affected ecosystems has been limited due to logistical and economic constraints associated with the standard practice of using large quantities of irrigation water as the surfactant carrier. We tested a potential solution to this problem that uses seed...

  6. Improving reseeding success after catastrophic wildfire - shifting the paradigm with surfactant seed coatings

    USDA-ARS?s Scientific Manuscript database

    The application of soil surfactants in wildfire-affected ecosystems has been limited due to logistical and economic constraints associated with the standard practice of using large quantities of irrigation water as the surfactant carrier. We tested a potential solution to this problem that uses seed...

  7. Standard Penetration Test and Relative Density

    DTIC Science & Technology

    1971-02-01

    Because groundwater greatly influences soil resistance, an empirical relation was established between the blow count ... laboratory tests performed with a small static penetrometer. INTRODUCTION: One of the main problems encountered in subsoil exploration is in situ

  8. Discuss the testing problems of ultraviolet irradiance meters

    NASA Astrophysics Data System (ADS)

    Ye, Jun'an; Lin, Fangsheng

    2014-09-01

    Ultraviolet irradiance meters are widely used in many areas, such as medical treatment, epidemic prevention, energy conservation and environmental protection, computers, manufacturing, electronics, ageing of materials and the photo-electric effect, for testing ultraviolet irradiance intensity. The accuracy of their readings therefore directly affects sterility control in hospitals, treatment, the prevention level of CDCs and the control accuracy of curing and ageing in the manufacturing industry. Because the display of ultraviolet irradiance meters is prone to drift, they need to be recalibrated after a period of use in order to ensure accuracy. By comparison with standard ultraviolet irradiance meters, which are traceable to national benchmarks, we can acquire the correction factor to ensure that the instruments work in an accurate state and give accurate measured data. This leads to an important question: what kind of testing device is more accurate and reliable? This article introduces the testing method and the problems of the current testing device for ultraviolet irradiance meters. To solve these problems, we have developed a new three-dimensional automatic testing device. We introduce the structure and working principle of this system and compare the advantages and disadvantages of the two devices. In addition, we analyse the errors in the testing of ultraviolet irradiance meters.

  9. The standardization of urine particle counting in medical laboratories--a Polish experience with the EQA programme.

    PubMed

    Cwiklińska, Agnieszka; Kąkol, Judyta; Kuchta, Agnieszka; Kortas-Stempak, Barbara; Pacanis, Anastasis; Rogulski, Jerzy; Wróblewska, Małgorzata

    2012-02-01

    Given the common problems with the standardization of urine particle counting methods and the great variability in the results obtained by Polish laboratories under the international Labquality External Quality Assessment (EQA), we initiated educational recovery activities. Detailed instructions on how to perform the standardized examination were sent to EQA participants, as was a questionnaire form which enabled information to be gathered on the procedures being applied. Laboratory results were grouped according to the method declared on the EQA 'Result' form or according to a manual examination procedure established on the basis of the questionnaire. The between-laboratory CVs for leukocyte and erythrocyte counts were calculated for each group and compared using the Mann-Whitney test. Significantly lower between-laboratory CVs (p = 0.03) were achieved for leukocyte counting among the laboratories that analysed control specimens in accordance with standardized procedures, as compared with those which used non-standardized procedures. We also observed visibly lower variability for erythrocyte counting. Unfortunately, despite our activities, only a few of the Polish laboratories applied the standardized examination procedures, and only 29% of the results could be considered standardized (16% - manual methods, 13% - automated systems). The standardization of urine particle counting methods continues to be a significant problem in medical laboratories and requires further recovery activities, which can be conducted using the EQA scheme.
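    The statistical comparison described here (between-laboratory CVs for the two groups of laboratories, compared with the Mann-Whitney test) can be reproduced in outline as below; the specimen counts are invented purely for illustration.

    ```python
    import numpy as np
    from scipy import stats

    def between_lab_cv(results):
        """Between-laboratory coefficient of variation (%) for one control specimen."""
        results = np.asarray(results, dtype=float)
        return 100.0 * results.std(ddof=1) / results.mean()

    # Invented leukocyte counts reported by several labs for a series of EQA
    # specimens, grouped by whether the labs followed the standardized procedure.
    cv_standardized = [between_lab_cv(spec) for spec in
                       ([52, 55, 49, 51], [38, 41, 40, 36], [60, 63, 58, 61])]
    cv_non_standard = [between_lab_cv(spec) for spec in
                       ([48, 70, 39, 58], [30, 52, 44, 25], [55, 80, 49, 66])]

    u, p = stats.mannwhitneyu(cv_standardized, cv_non_standard, alternative="less")
    print("CVs (standardized):", np.round(cv_standardized, 1))
    print("CVs (non-standardized):", np.round(cv_non_standard, 1))
    print(f"Mann-Whitney U = {u:.1f}, one-sided p = {p:.3f}")
    ```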

  10. The Effect of Bagasse Ash on Fly Ash-Based Geopolymer Binder

    NASA Astrophysics Data System (ADS)

    Bayuaji, R.; Darmawan, M. S.; Husin, N. A.; Banugraha, R.; Alfi, M.; Abdullah, M. M. A. B.

    2018-06-01

    Geopolymer concrete is an environmentally friendly concrete. However, the geopolymer binder has a problem with setting time, mainly when the composition comprises high-calcium fly ash. This study utilized bagasse ash to improve the setting time of a fly ash-based geopolymer binder. The characterization of the bagasse ash was carried out using chemical and phase analysis, while the morphology was examined by scanning electron microscopy (SEM). The setting time and compressive strength tests followed ASTM C191-04 and ASTM C39/C39M, respectively. The compressive strength of the samples was determined at 3, 28 and 56 days. The results were compared with the requirements of the standards.

  11. [Genetic diseases: recent scientific findings and health and ethical problems].

    PubMed

    Taruscio, D; D'Agnolo, G

    1999-01-01

    Genetic diseases are very numerous, even though rare as single conditions: therefore, overall they represent a significant portion of morbidity at the population level. The improvement of molecular genetic techniques has brought a great increase in the diagnostic potential for genetic diseases, concerning symptomatic or pre-symptomatic individuals as well as healthy carriers. However, this frequently has unforeseen consequences, such as a discrepancy between diagnostic and therapeutic potentials. Moreover, the development of genetic tests has raised a number of questions regarding ethical, legal and social problems. The Italian guidelines for genetic tests (available on the Internet site of the Istituto Superiore di Sanità: http://www.iss.it) were elaborated in 1998 to define general principles for performing and managing genetic tests as well as for programming and promoting genetic testing within the public health system. In accordance with recommendations by international bodies (WHO, EU), the guidelines emphasize the appropriate use of safe and efficacious tests and their performance in laboratories with high quality standards. A further crucial point is the relationship between the health system and individuals: autonomy of decision, psychological and social assistance, as well as adequate attention to ethical and privacy problems, should be guaranteed.

  12. Testing in semiparametric models with interaction, with applications to gene-environment interactions.

    PubMed

    Maity, Arnab; Carroll, Raymond J; Mammen, Enno; Chatterjee, Nilanjan

    2009-01-01

    Motivated by the problem of testing for genetic effects on complex traits in the presence of gene-environment interaction, we develop score tests in general semiparametric regression problems that involve a Tukey-style 1 degree-of-freedom form of interaction between parametrically and non-parametrically modelled covariates. We find that the score test in this type of model, as recently developed by Chatterjee and co-workers in the fully parametric setting, is biased and requires undersmoothing to be valid in the presence of non-parametric components. Moreover, in the presence of repeated outcomes, the asymptotic distribution of the score test depends on the estimation of functions which are defined as solutions of integral equations, making implementation difficult and computationally taxing. We develop profiled score statistics which are unbiased and asymptotically efficient and can be computed using standard bandwidth selection methods. In addition, to overcome the difficulty of solving functional equations, we give easy interpretations of the target functions, which in turn allow us to develop estimation procedures that can be easily implemented using standard computational methods. We present simulation studies to evaluate the type I error and power of the proposed method compared with a naive test that does not consider interaction. Finally, we illustrate our methodology by analysing data from a case-control study of colorectal adenoma that was designed to investigate the association between colorectal adenoma and the candidate gene NAT2 in relation to smoking history.

  13. ISP33 standard problem on the PACTEL facility

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Purhonen, H.; Kouhia, J.; Kalli, H.

    ISP33 is the first OECD/NEA/CSNI standard problem related to VVER type of pressurized water reactors. The reference reactor of the PACTEL test facility, which was used to carry out the ISP33 experiment, is the VVER-440 reactor, two of which are located near the Finnish city of Loviisa. The objective of the ISP33 test was to study the natural circulation behaviour of VVER-440 reactors at different coolant inventories. Natural circulation was considered as a suitable phenomenon to focus on by the first VVER related ISP due to its importance in most accidents and transients. The behaviour of the natural circulation was expected to be different compared to Western type of PWRs as a result of the effect of horizontal steam generators and the hot leg loop seals. This ISP was conducted as a blind problem. The experiment was started at full coolant inventory. Single-phase natural circulation transported the energy from the core to the steam generators. The inventory was then reduced stepwise at about 900 s intervals, draining 60 kg each time from the bottom of the downcomer. The core power was about 3.7% of the nominal value. The test was terminated after the cladding temperatures began to rise. ATHLET, CATHARE, RELAP5 (MODs 3, 2.5 and 2), RELAP4/MOD6, DINAMIKA and TECH-M4 codes were used in 21 pre- and 20 posttest calculations submitted for the ISP33.

  14. Pre-test CFD Calculations for a Bypass Flow Standard Problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rich Johnson

    The bypass flow in a prismatic high temperature gas-cooled reactor (HTGR) is the flow that occurs between adjacent graphite blocks. Gaps exist between blocks due to variances in their manufacture and installation and because of the expansion and shrinkage of the blocks from heating and irradiation. Although the temperature of fuel compacts and graphite is sensitive to the presence of bypass flow, there is great uncertainty in the level and effects of the bypass flow. The Next Generation Nuclear Plant (NGNP) program at the Idaho National Laboratory has undertaken to produce experimental data of isothermal bypass flow between three adjacent graphite blocks. These data are intended to provide validation for computational fluid dynamic (CFD) analyses of the bypass flow. Such validation data sets are called Standard Problems in the nuclear safety analysis field. Details of the experimental apparatus as well as several pre-test calculations of the bypass flow are provided. Pre-test calculations are useful in examining the nature of the flow and to see if there are any problems associated with the flow and its measurement. The apparatus is designed to be able to provide three different gap widths in the vertical direction (the direction of the normal coolant flow) and two gap widths in the horizontal direction. It is expected that the vertical bypass flow will range from laminar to transitional to turbulent flow for the different gap widths that will be available.

  15. Comparative Study on High-Order Positivity-preserving WENO Schemes

    NASA Technical Reports Server (NTRS)

    Kotov, Dmitry V.; Yee, Helen M.; Sjogreen, Bjorn Axel

    2013-01-01

    The goal of this study is to compare the results obtained by non-positivity-preserving methods with the recently developed positivity-preserving schemes for representative test cases. In particular the more difficult 3D Noh and Sedov problems are considered. These test cases are chosen because of the negative pressure/density most often exhibited by standard high-order shock-capturing schemes. The simulation of a hypersonic nonequilibrium viscous shock tube that is related to the NASA Electric Arc Shock Tube (EAST) is also included. EAST is a high-temperature and high Mach number viscous nonequilibrium flow consisting of 13 species. In addition, as most common shock-capturing schemes have been developed for problems without source terms, when applied to problems with nonlinear and/or stiff source terms these methods can result in spurious solutions, even when solving a conservative system of equations with a conservative scheme. This kind of behavior can be observed even for a scalar case (LeVeque & Yee 1990) as well as for the case consisting of two species and one reaction (Wang et al. 2012). For further information concerning this issue see (LeVeque & Yee 1990; Griffiths et al. 1992; Lafon & Yee 1996; Yee et al. 2012). This EAST example indicated that standard high-order shock-capturing methods exhibit instability of density/pressure in addition to grid-dependent discontinuity locations with insufficient grid points. The evaluation of these test cases is based on the stability of the numerical schemes together with the accuracy of the obtained solutions.

  16. Milne, a routine for the numerical solution of Milne's problem

    NASA Astrophysics Data System (ADS)

    Rawat, Ajay; Mohankumar, N.

    2010-11-01

    The routine Milne provides accurate numerical values for the classical Milne's problem of neutron transport for the planar one-speed and isotropic scattering case. The solution is based on the Case eigenfunction formalism. The relevant X functions are evaluated accurately by the Double Exponential quadrature. The calculated quantities are the extrapolation distance and the scalar and angular fluxes. Also, the H function needed in astrophysical calculations is evaluated as a byproduct. Program summary. Program title: Milne. Catalogue identifier: AEGS_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEGS_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html. No. of lines in distributed program, including test data, etc.: 701. No. of bytes in distributed program, including test data, etc.: 6845. Distribution format: tar.gz. Programming language: Fortran 77. Computer: PC under Linux or Windows. Operating system: Ubuntu 8.04 (Kernel version 2.6.24-16-generic), Windows-XP. Classification: 4.11, 21.1, 21.2. Nature of problem: The X functions are integral expressions. The convergence of these regular and Cauchy principal value integrals is impaired by the singularities of the integrand in the complex plane. The DE quadrature scheme tackles these singularities in a robust manner compared to the standard Gauss quadrature. Running time: The test included in the distribution takes a few seconds to run.
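    The program summary credits the Double Exponential (tanh-sinh) quadrature with taming the awkward integrands. The Milne routine itself is Fortran 77; purely as a generic illustration of that quadrature rule (not of the routine), a short Python sketch follows.

    ```python
    import numpy as np

    def tanh_sinh(f, a, b, n=40, h=0.1):
        """Double-exponential (tanh-sinh) quadrature of f on (a, b).
        The substitution x = tanh(pi/2 * sinh(t)) pushes the abscissae toward
        the endpoints double-exponentially, which damps endpoint singularities."""
        t = h * np.arange(-n, n + 1)
        x = np.tanh(0.5 * np.pi * np.sinh(t))                    # nodes on (-1, 1)
        w = 0.5 * np.pi * np.cosh(t) / np.cosh(0.5 * np.pi * np.sinh(t)) ** 2
        mask = np.abs(x) < 1.0          # drop nodes that have saturated to +/-1
        xm = 0.5 * (b - a) * x[mask] + 0.5 * (b + a)             # map to (a, b)
        return 0.5 * (b - a) * h * np.sum(w[mask] * f(xm))

    # Example: an integrand with an endpoint singularity, where ordinary
    # Gauss-Legendre converges slowly; prints a value very close to the exact 2.0.
    print(tanh_sinh(lambda x: 1.0 / np.sqrt(1.0 - x), 0.0, 1.0))
    ```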

  17. Epilogue: Reading Comprehension Is Not a Single Ability-Implications for Assessment and Instruction.

    PubMed

    Kamhi, Alan G; Catts, Hugh W

    2017-04-20

    In this epilogue, we review the 4 response articles and highlight the implications of a multidimensional view of reading for the assessment and instruction of reading comprehension. We reiterate the problems with standardized tests of reading comprehension and discuss the advantages and disadvantages of recently developed authentic tests of reading comprehension. In the "Instruction" section, we review the benefits and limitations of strategy instruction and highlight suggestions from the response articles to improve content and language knowledge. We argue that the only compelling reason to administer a standardized test of reading comprehension is when these tests are necessary to qualify students for special education services. Instruction should be focused on content knowledge, language knowledge, and specific task and learning requirements. This instruction may entail the use of comprehension strategies, particularly those that are specific to the task and focus on integrating new knowledge with prior knowledge.

  18. Testing for EMC (electromagnetic compatibility) in the clinical environment.

    PubMed

    Paperman, D; David, Y; Martinez, M

    1996-01-01

    Testing for electromagnetic compatibility (EMC) in the clinical environment introduces a host of complex conditions not normally encountered under laboratory conditions. In the clinical environment, various radio-frequency (RF) sources of electromagnetic interference (EMI) may be present throughout the entire spectrum of interest. Isolating and analyzing the impact from the sources of interference to medical devices involves a multidisciplinary approach based on training in, and knowledge of, the following: operation of medical devices and their susceptibility to EMI; RF propagation modalities and interaction theory; spectrum analysis systems and techniques (preferably with signature analysis capabilities) and calibrated antennas; the investigation methodology of suspected EMC problems, and testing protocols and standards. Using combinations of standard test procedures adapted for the clinical environment with personnel that have an understanding of radio-frequency behavior increases the probability of controlling, proactively, EMI in the clinical environment, thus providing for a safe and more effective patient care environment.

  19. Validation of vocational assessment tool for persons with substance use disorders.

    PubMed

    Sethuraman, Lakshmanan; Subodh, B N; Murthy, Pratima

    2016-01-01

    Work-related problems are a serious concern among persons with substance use, but due to the lack of a standardized tool to measure them, these problems are neither systematically assessed nor appropriately addressed. Most existing measures of work performance cater to the needs of the workplace rather than focusing on the workers' perception of the difficulties at work. To develop a standardized instrument to measure work-related problems in persons with substance use disorders. Qualitative data obtained from interviews with substance users were used to develop a scale. The refined list of items was circulated among an expert panel for content validation. The modified scale was administered to 150 cases, and 50 cases completed the scale twice at an interval of 2 weeks for test-retest reliability. Items with a test-retest reliability kappa coefficient of 0.4 or greater were retained and subjected to factor analysis. The final 45-item scale has a five-factor structure. The Cronbach's alpha of the final version of the scale was 0.91. This self-report questionnaire, which can be completed in 10 min, may help in making a baseline assessment of work-related impairment among persons with substance use and of the impact of substance use on work.

  20. Evaluation of the measurement uncertainty when measuring the resistance of solid isolating materials to tracking

    NASA Astrophysics Data System (ADS)

    Stare, E.; Beges, G.; Drnovsek, J.

    2006-07-01

    This paper presents the results of research into the measurement of the resistance of solid isolating materials to tracking. Two types of tracking were investigated: the proof tracking index (PTI) and the comparative tracking index (CTI). Evaluation of the measurement uncertainty in a case study was performed using a test method in accordance with the IEC 60112 standard. In the scope of the tests performed here, this particular test method was used to ensure the safety of electrical appliances. According to the EN ISO/IEC 17025 standard (EN ISO/IEC 17025), in the process of conformity assessment, the evaluation of the measurement uncertainty of the test method should be carried out. In the present article, possible influential parameters that are in accordance with the third and fourth editions of the standard IEC 60112 are discussed. The differences, ambiguities or lack of guidance referring to both editions of the standard are described in the article 'Ambiguities in technical standards—case study IEC 60112—measuring the resistance of solid isolating materials to tracking' (submitted for publication). Several hundred measurements were taken in the present experiments in order to form the basis for the results and conclusions presented. A specific problem of the test (according to the IEC 60112 standard) is the great variety of influential physical parameters (mechanical, electrical, chemical, etc) that can affect the results. At the end of the present article therefore, there is a histogram containing information on the contributions to the measurement uncertainty.

  1. [Analysis of heavy metals monitoring results in food in Shaoxing in 2014].

    PubMed

    Fan, Wei; Wang, Jing; Wu, Hongmiao; Lian, Lingjun; Du, Sai; Chen, Li

    2015-11-01

    To investigate heavy metal contamination levels in food in Shaoxing, and to provide a basis for supervising heavy metal pollution in food and for environmental pollution control in Shaoxing. Food samples collected in 2014 were tested for lead, cadmium, mercury, arsenic, nickel, copper and chromium by national standard methods, and the results were evaluated against GB 2762-2012 Pollutant limits in food. 1384 samples from 10 food categories were collected and tested for lead, cadmium, mercury and arsenic; the rates exceeding the standard were 2.0%, 3.0%, 1.5% and 0.22%, respectively, and the medians were 0.019, 0.0085, 0.0024 and 0.015 mg/kg, respectively. 273 samples were tested for nickel (detection rate 48.4%, median 0.010 mg/kg); 255 samples were tested for chromium (detection rate 14.9%, median 0.0050 mg/kg); and 486 samples were tested for copper (detection rate 94.0%, median 1.34 mg/kg). The exceedance rates for aquatic products, animal internal organs and grain were relatively high, at 16.9%, 7.9% and 7.3%, respectively; cadmium in swimming crabs exceeded the standard seriously, with an exceedance rate of 38.9%. The overall heavy metal pollution of food in Shaoxing in 2014 was not high, but pollution in some foods (aquatic products, animal internal organs and grain) was relatively prominent, with exceedances for lead, cadmium, mercury and arsenic.

  2. Introduction and development of NCP using ICNP in Pakistan.

    PubMed

    Rukanuddin, R J

    2005-12-01

    Traditionally, nursing care has been described as performing nursing tasks and often focused on nurses carrying out doctors' orders. In many countries of the world, including Pakistan, nurses do not document care in a standardized manner. Because of this limitation, many health administrators, policy makers, and consumers make inadequate assumptions about nursing work, often regarding nurses as any other 'health care technician' who can be easily replaced by more economical health care workers. To overcome this problem, standardized documentation is being introduced into the Aga Khan University School of Nursing and hospital, Aga Khan Health Services, Public Health School in Karachi, and government colleges of nursing, using the International Classification for Nursing Practice (ICNP). The purpose of this paper is to highlight the process of introducing and developing standardized nursing care plans (NCP) using ICNP in Pakistan. The process for introducing ICNP consists of four components, including administrative planning, development, teaching and training, and testing. Subsets of the ICNP for (i) maternity: antenatal, postnatal and natal care; and (ii) cardiology were developed using standardized NCPs. The subsets were developed by nurse experts and introduced at the testing sites. The testing will be conducted as a pilot project. Findings from the pilot will be used to continue and expand standardized nursing documentation using the ICNP across Pakistan. Through this project, nurses, midwives and lady health visitors (midwives, vaccinators and health educators) will test standardization of documentation and begin to evaluate efficiency and effectiveness of clinical practice.

  3. Department of Homeland Security (DHS) Proficiency Testing on Small-Scale Safety and Thermal Testing of Improvised Explosives

    NASA Astrophysics Data System (ADS)

    Reynolds, John; Sandstrom, Mary; Brown, Geoffrey; Warner, Kirstin; Phillips, Jason; Shelley, Timothy; Reyes, Jose; Hsu, Peter

    2013-06-01

    One of the first steps in establishing safe handling procedures for explosives is small-scale safety and thermal (SSST) testing. To better understand the response of improvised materials or HMEs to SSST testing, 18 HME materials were compared to 3 standard military explosives in a proficiency-type round robin study among five laboratories--2 DoD and 3 DOE--sponsored by DHS. The testing matrix was designed to address problems encountered with improvised materials--powder mixtures, liquid suspensions, partially wetted solids, immiscible liquids, and reactive materials. Over 30 issues have been identified which indicate that standard test methods may require modification when applied to HMEs to derive the accurate sensitivity assessments needed for developing safe handling and storage practices. This presentation will discuss experimental difficulties encountered when testing these problematic samples, show inter-laboratory testing results, show some statistical interpretation of the results, and highlight some of the testing issues. Some of the work was performed under the auspices of the U.S. Department of Energy by Lawrence Livermore National Laboratory under Contract DE-AC52-07NA27344. LLNL-ABS-617519 (721812).

  4. Fission matrix-based Monte Carlo criticality analysis of fuel storage pools

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farlotti, M.; Ecole Polytechnique, Palaiseau, F 91128; Larsen, E. W.

    2013-07-01

    Standard Monte Carlo transport procedures experience difficulties in solving criticality problems in fuel storage pools. Because of the strong neutron absorption between fuel assemblies, source convergence can be very slow, leading to incorrect estimates of the eigenvalue and the eigenfunction. This study examines an alternative fission matrix-based Monte Carlo transport method that takes advantage of the geometry of a storage pool to overcome this difficulty. The method uses Monte Carlo transport to build (essentially) a fission matrix, which is then used to calculate the criticality and the critical flux. This method was tested using a test code on a simple problem containing 8 assemblies in a square pool. The standard Monte Carlo method gave the expected eigenfunction in 5 cases out of 10, while the fission matrix method gave the expected eigenfunction in all 10 cases. In addition, the fission matrix method provides an estimate of the error in the eigenvalue and the eigenfunction, and it allows the user to control this error by running an adequate number of cycles. Because of these advantages, the fission matrix method yields a higher confidence in the results than standard Monte Carlo. We also discuss potential improvements of the method, including the potential for variance reduction techniques. (authors)
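    Once the fission matrix has been tallied by the Monte Carlo transport, extracting the criticality and the critical source reduces to a dominant-eigenvalue problem. The sketch below shows only that final step, with a made-up 8-region matrix standing in for the tallied one; it is not the authors' code.

    ```python
    import numpy as np

    def fission_matrix_eigen(F, tol=1e-10, max_iter=10_000):
        """Power iteration on a fission matrix F, where F[i, j] is the expected
        number of fission neutrons produced in region i per fission neutron born
        in region j. Returns (k_eff, normalized fission source)."""
        n = F.shape[0]
        s = np.full(n, 1.0 / n)                 # flat initial fission source
        k = 1.0
        for _ in range(max_iter):
            s_new = F @ s
            k_new = s_new.sum() / s.sum()       # eigenvalue estimate (s sums to 1)
            s_new = s_new / s_new.sum()
            if abs(k_new - k) < tol and np.allclose(s_new, s, atol=tol):
                break
            s, k = s_new, k_new
        return k_new, s_new

    # Made-up 8-region matrix: strong self-multiplication within each assembly,
    # weak coupling to neighbours (mimicking absorbers between storage cells).
    F = 0.9 * np.eye(8) + 0.02 * (np.eye(8, k=1) + np.eye(8, k=-1))
    k_eff, source = fission_matrix_eigen(F)
    print(f"k_eff = {k_eff:.5f}")
    ```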

  5. Accelerated long-term forgetting in resected and seizure-free temporal lobe epilepsy patients.

    PubMed

    Visser, M; Forn, C; Gómez-Ibáñez, A; Rosell-Negre, P; Villanueva, V; Ávila, C

    2018-03-12

    Episodic memory impairments caused by temporal lobe epilepsy (TLE) are well documented in the literature. Standard clinical episodic memory tests typically include a 30-min delayed recall test. However, in the past decade, it has become apparent that this standard test does not capture the full range of memory problems in TLE patients. Some patients perform well on a standard 30-min delayed recall test, but show Accelerated Long-term Forgetting (ALF) after 24 h. Although ALF has been investigated in patients with different types of epilepsy, current research on resected TLE patients is missing. In the present study, resected TLE patients were compared to a control group matched on initial learning. They showed normal performance on verbal recall after 30 min, but impairments became apparent after one week. Moreover, the significant interaction between participant group and memory test delay demonstrated that the patients indeed showed an acceleration in forgetting. Furthermore, ALF was present in both left and right resected TLE patients, which contradicts the presence of material-specific hemispheric differences in ALF. In addition, ALF was observed in seizure-free resected TLE patients, thereby demonstrating that this factor is not crucial for long-term memory deficits. The outcome shows that clinicians are likely to underestimate memory deficits in resected TLE patients and, therefore, advocates for the inclusion of ALF tests in standard clinical batteries for both pre- and post-surgery testing sessions. Copyright © 2018 Elsevier Ltd. All rights reserved.

  6. Posttest analysis of international standard problem 10 using RELAP4/MOD7. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hsu, M.; Davis, C.B.; Peterson, A.C. Jr.

    RELAP4/MOD7, a best estimate computer code for the calculation of thermal and hydraulic phenomena in a nuclear reactor or related system, is the latest version in the RELAP4 code development series. This paper evaluates the capability of RELAP4/MOD7 to calculate refill/reflood phenomena. This evaluation uses the data of International Standard Problem 10, which is based on West Germany's KWU PKL refill/reflood experiment K9A. The PKL test facility represents a typical West German four-loop, 1300 MW pressurized water reactor (PWR) in reduced scale while maintaining prototypical volume-to-power ratio. The PKL facility was designed to specifically simulate the refill/reflood phase of a hypothetical loss-of-coolant accident (LOCA).

  7. Word Problem Solving in Contemporary Math Education: A Plea for Reading Comprehension Skills Training

    PubMed Central

    Boonen, Anton J. H.; de Koning, Björn B.; Jolles, Jelle; van der Schoot, Menno

    2016-01-01

    Successfully solving mathematical word problems requires both mental representation skills and reading comprehension skills. In Realistic Math Education (RME), however, students primarily learn to apply the first of these skills (i.e., representational skills) in the context of word problem solving. Given this, it seems legitimate to assume that students from a RME curriculum experience difficulties when asked to solve semantically complex word problems. We investigated this assumption among 80 sixth grade students who were classified as successful and less successful word problem solvers based on a standardized mathematics test. To this end, students completed word problems that ask for both mental representation skills and reading comprehension skills. The results showed that even successful word problem solvers had a low performance on semantically complex word problems, despite adequate performance on semantically less complex word problems. Based on this study, we concluded that reading comprehension skills should be given a (more) prominent role during word problem solving instruction in RME. PMID:26925012

  8. Word Problem Solving in Contemporary Math Education: A Plea for Reading Comprehension Skills Training.

    PubMed

    Boonen, Anton J H; de Koning, Björn B; Jolles, Jelle; van der Schoot, Menno

    2016-01-01

    Successfully solving mathematical word problems requires both mental representation skills and reading comprehension skills. In Realistic Math Education (RME), however, students primarily learn to apply the first of these skills (i.e., representational skills) in the context of word problem solving. Given this, it seems legitimate to assume that students from a RME curriculum experience difficulties when asked to solve semantically complex word problems. We investigated this assumption among 80 sixth grade students who were classified as successful and less successful word problem solvers based on a standardized mathematics test. To this end, students completed word problems that ask for both mental representation skills and reading comprehension skills. The results showed that even successful word problem solvers had a low performance on semantically complex word problems, despite adequate performance on semantically less complex word problems. Based on this study, we concluded that reading comprehension skills should be given a (more) prominent role during word problem solving instruction in RME.

  9. Posturography and locomotor tests of dynamic balance after long-duration spaceflight.

    PubMed

    Cohen, Helen S; Kimball, Kay T; Mulavara, Ajitkumar P; Bloomberg, Jacob J; Paloski, William H

    2012-01-01

    The currently approved objective clinical measure of standing balance in astronauts after space flight is the Sensory Organization Test battery of computerized dynamic posturography. No tests of walking balance are currently approved for standard clinical testing of astronauts. This study determined the sensitivity and specificity of standing and walking balance tests for astronauts before and after long-duration space flight. Astronauts were tested on an obstacle avoidance test known as the Functional Mobility Test (FMT) and on the Sensory Organization Test using sway-referenced support surface motion with eyes closed (SOT 5) before and after (n=15) space flights of about six months' duration on the International Space Station. They were tested two to seven days after landing. Scores on SOT tests decreased and scores on FMT increased significantly from pre- to post-flight. In other words, post-flight scores were worse than pre-flight scores. SOT and FMT scores were not significantly related. ROC analyses indicated supra-clinical cut-points for SOT 5 and for FMT. The standard clinical cut-point for SOT 5 had low sensitivity to post-flight astronauts. Higher cut-points increased sensitivity to post-flight astronauts but decreased specificity to pre-flight astronauts. Using an FMT cut-point that was moderately highly sensitive and highly specific plus SOT 5 at the standard clinical cut-point was no more sensitive than SOT 5, alone. FMT plus SOT 5 at higher cut-points was more specific and more sensitive. The total correctly classified was highest for FMT, alone, and for FMT plus SOT 5 at the highest cut-point. These findings indicate that standard clinical comparisons are not useful for identifying problems. Testing both standing and walking balance will be more likely to identify balance deficits.

  10. Spacecraft inertia estimation via constrained least squares

    NASA Technical Reports Server (NTRS)

    Keim, Jason A.; Acikmese, Behcet A.; Shields, Joel F.

    2006-01-01

    This paper presents a new formulation for spacecraft inertia estimation from test data. Specifically, the inertia estimation problem is formulated as a constrained least squares minimization problem with explicit bounds on the inertia matrix incorporated as LMIs (linear matrix inequalities). The resulting minimization problem is a semidefinite optimization that can be solved efficiently with guaranteed convergence to the global optimum by readily available algorithms. This method is applied to data collected from a robotic testbed consisting of a freely rotating body. The results show that the constrained least squares approach produces more accurate estimates of the inertia matrix than standard unconstrained least squares estimation methods.
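    A formulation of this kind can be written down compactly with a generic conic solver. The sketch below uses CVXPY with Euler's rigid-body equation as the measurement model and simulated data; the single positive-definiteness LMI stands in for whatever explicit bounds the authors impose, so treat it as an outline of the formulation rather than the paper's method.

    ```python
    import numpy as np
    import cvxpy as cp

    def skew(v):
        """Matrix form of the cross product: skew(v) @ u == np.cross(v, u)."""
        return np.array([[0.0, -v[2], v[1]],
                         [v[2], 0.0, -v[0]],
                         [-v[1], v[0], 0.0]])

    rng = np.random.default_rng(2)

    # Simulated test data: body rates w, accelerations wdot, and torques tau that
    # satisfy Euler's equation  tau = J*wdot + w x (J*w),  plus measurement noise.
    J_true = np.array([[10.0, 0.5, 0.2],
                       [0.5,  8.0, 0.3],
                       [0.2,  0.3, 6.0]])
    N = 50
    w = rng.normal(size=(N, 3))
    wdot = rng.normal(size=(N, 3))
    tau = np.array([J_true @ a + np.cross(v, J_true @ v) for v, a in zip(w, wdot)])
    tau += 0.05 * rng.normal(size=tau.shape)

    # Constrained least squares over the symmetric inertia matrix, with the
    # positive-definiteness bound expressed as an LMI (J - eps*I is PSD).
    J = cp.Variable((3, 3), symmetric=True)
    cost = sum(cp.sum_squares(J @ a + skew(v) @ (J @ v) - t)
               for v, a, t in zip(w, wdot, tau))
    prob = cp.Problem(cp.Minimize(cost), [J >> 1e-3 * np.eye(3)])
    prob.solve()
    print(np.round(J.value, 2))
    ```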

  11. Social cognition and social problem solving abilities in individuals with alcohol use disorder.

    PubMed

    Schmidt, Tobias; Roser, Patrik; Juckel, Georg; Brüne, Martin; Suchan, Boris; Thoma, Patrizia

    2016-11-01

    Up to now, little is known about higher order cognitive abilities like social cognition and social problem solving abilities in alcohol-dependent patients. However, impairments in these domains lead to an increased probability for relapse and are thus highly relevant in treatment contexts. This cross-sectional study assessed distinct aspects of social cognition and social problem solving in 31 hospitalized patients with alcohol use disorder (AUD) and 30 matched healthy controls (HC). Three ecologically valid scenario-based tests were used to gauge the ability to infer the mental state of story characters in complicated interpersonal situations, the capacity to select the best problem solving strategy among other less optimal alternatives, and the ability to freely generate appropriate strategies to handle difficult interpersonal conflicts. Standardized tests were used to assess executive function, attention, trait empathy, and memory, and correlations were computed between measures of executive function, attention, trait empathy, and tests of social problem solving. AUD patients generated significantly fewer socially sensitive and practically effective solutions for problematic interpersonal situations than the HC group. Furthermore, patients performed significantly worse when asked to select the best alternative among a list of presented alternatives for scenarios containing sarcastic remarks and had significantly more problems to interpret sarcastic remarks in difficult interpersonal situations. These specific patterns of impairments should be considered in treatment programs addressing impaired social skills in individuals with AUD.

  12. Bayesian inference for psychology. Part II: Example applications with JASP.

    PubMed

    Wagenmakers, Eric-Jan; Love, Jonathon; Marsman, Maarten; Jamil, Tahira; Ly, Alexander; Verhagen, Josine; Selker, Ravi; Gronau, Quentin F; Dropmann, Damian; Boutin, Bruno; Meerhoff, Frans; Knight, Patrick; Raj, Akash; van Kesteren, Erik-Jan; van Doorn, Johnny; Šmíra, Martin; Epskamp, Sacha; Etz, Alexander; Matzke, Dora; de Jong, Tim; van den Bergh, Don; Sarafoglou, Alexandra; Steingroever, Helen; Derks, Koen; Rouder, Jeffrey N; Morey, Richard D

    2018-02-01

    Bayesian hypothesis testing presents an attractive alternative to p value hypothesis testing. Part I of this series outlined several advantages of Bayesian hypothesis testing, including the ability to quantify evidence and the ability to monitor and update this evidence as data come in, without the need to know the intention with which the data were collected. Despite these and other practical advantages, Bayesian hypothesis tests are still reported relatively rarely. An important impediment to the widespread adoption of Bayesian tests is arguably the lack of user-friendly software for the run-of-the-mill statistical problems that confront psychologists for the analysis of almost every experiment: the t-test, ANOVA, correlation, regression, and contingency tables. In Part II of this series we introduce JASP ( http://www.jasp-stats.org ), an open-source, cross-platform, user-friendly graphical software package that allows users to carry out Bayesian hypothesis tests for standard statistical problems. JASP is based in part on the Bayesian analyses implemented in Morey and Rouder's BayesFactor package for R. Armed with JASP, the practical advantages of Bayesian hypothesis testing are only a mouse click away.
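    For readers who want the same kind of default Bayesian t-test outside the JASP point-and-click interface, a Bayes factor can also be obtained programmatically. The sketch below assumes the pingouin package and its bayesfactor_ttest helper (a JZS Bayes factor with a Cauchy prior) and uses simulated data, so it is an illustration rather than a description of JASP's internals.

    ```python
    import numpy as np
    from scipy import stats
    import pingouin as pg  # assumed available; provides a JZS Bayes factor helper

    rng = np.random.default_rng(3)
    group_a = rng.normal(0.0, 1.0, size=40)
    group_b = rng.normal(0.5, 1.0, size=40)

    # Classical two-sample t-test ...
    t, p = stats.ttest_ind(group_a, group_b)

    # ... and the corresponding default Bayes factor in favour of a difference.
    bf10 = pg.bayesfactor_ttest(t, nx=len(group_a), ny=len(group_b))
    print("t =", round(float(t), 2), " p =", round(float(p), 4), " BF10 =", bf10)
    ```

    JASP itself builds in part on the BayesFactor R package mentioned in the abstract; the Python route above is only a convenient stand-in.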

  13. The piecewise-linear predictor-corrector code - A Lagrangian-remap method for astrophysical flows

    NASA Technical Reports Server (NTRS)

    Lufkin, Eric A.; Hawley, John F.

    1993-01-01

    We describe a time-explicit finite-difference algorithm for solving the nonlinear fluid equations. The method is similar to existing Eulerian schemes in its use of operator-splitting and artificial viscosity, except that we solve the Lagrangian equations of motion with a predictor-corrector and then remap onto a fixed Eulerian grid. The remap is formulated to eliminate errors associated with coordinate singularities, with a general prescription for remaps of arbitrary order. We perform a comprehensive series of tests on standard problems. Self-convergence tests show that the code has a second-order rate of convergence in smooth, two-dimensional flow, with pressure forces, gravity, and curvilinear geometry included. While not as accurate on idealized problems as high-order Riemann-solving schemes, the predictor-corrector Lagrangian-remap code has great flexibility for application to a variety of astrophysical problems.

  14. The impact of using standardized patients in psychiatric cases on the levels of motivation and perceived learning of the nursing students.

    PubMed

    Sarikoc, Gamze; Ozcan, Celale Tangul; Elcin, Melih

    2017-04-01

    The use of standardized patients is not very common in psychiatric nursing education and there has been no study conducted in Turkey. This study evaluated the impact of using standardized patients in psychiatric cases on the levels of motivation and perceived learning of the nursing students. This manuscript addressed the quantitative aspect of a doctoral thesis study in which both quantitative and qualitative methods were used. A pre-test and post-test were employed in the quantitative analysis in a randomized and controlled study design. The motivation scores, and interim and post-test scores for perceived learning were higher in the experimental group compared to pre-test scores and the scores of the control group. The students in the experimental group reported that they felt more competent about practical training in clinical psychiatry, as well as in performing interviews with patients having mental problems, and reported less anxiety about performing an interview when compared to students in the control group. It is considered that the inclusion of standardized patient methodology in the nursing education curriculum in order to improve the knowledge level and skills of students would be beneficial in the training of mental health nurses. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. Soil pH Mapping with an On-The-Go Sensor

    PubMed Central

    Schirrmann, Michael; Gebbers, Robin; Kramer, Eckart; Seidel, Jan

    2011-01-01

    Soil pH is a key parameter for crop productivity, therefore, its spatial variation should be adequately addressed to improve precision management decisions. Recently, the Veris pH Manager™, a sensor for high-resolution mapping of soil pH at the field scale, has been made commercially available in the US. While driving over the field, soil pH is measured on-the-go directly within the soil by ion selective antimony electrodes. The aim of this study was to evaluate the Veris pH Manager™ under farming conditions in Germany. Sensor readings were compared with data obtained by standard protocols of soil pH assessment. Experiments took place under different scenarios: (a) controlled tests in the lab, (b) semicontrolled test on transects in a stop-and-go mode, and (c) tests under practical conditions in the field with the sensor working in its typical on-the-go mode. Accuracy issues, problems, options, and potential benefits of the Veris pH Manager™ were addressed. The tests demonstrated a high degree of linearity between standard laboratory values and sensor readings. Under practical conditions in the field (scenario c), the measure of fit (r2) for the regression between the on-the-go measurements and the reference data was 0.71, 0.63, and 0.84, respectively. Field-specific calibration was necessary to reduce systematic errors. Accuracy of the on-the-go maps was considerably higher compared with the pH maps obtained by following the standard protocols, and the error in calculating lime requirements was reduced by about one half. However, the system showed some weaknesses due to blockage by residual straw and weed roots. If these problems were solved, the on-the-go sensor investigated here could be an efficient alternative to standard sampling protocols as a basis for liming in Germany. PMID:22346591

  16. Soil pH mapping with an on-the-go sensor.

    PubMed

    Schirrmann, Michael; Gebbers, Robin; Kramer, Eckart; Seidel, Jan

    2011-01-01

    Soil pH is a key parameter for crop productivity, therefore, its spatial variation should be adequately addressed to improve precision management decisions. Recently, the Veris pH Manager™, a sensor for high-resolution mapping of soil pH at the field scale, has been made commercially available in the US. While driving over the field, soil pH is measured on-the-go directly within the soil by ion selective antimony electrodes. The aim of this study was to evaluate the Veris pH Manager™ under farming conditions in Germany. Sensor readings were compared with data obtained by standard protocols of soil pH assessment. Experiments took place under different scenarios: (a) controlled tests in the lab, (b) semicontrolled test on transects in a stop-and-go mode, and (c) tests under practical conditions in the field with the sensor working in its typical on-the-go mode. Accuracy issues, problems, options, and potential benefits of the Veris pH Manager™ were addressed. The tests demonstrated a high degree of linearity between standard laboratory values and sensor readings. Under practical conditions in the field (scenario c), the measure of fit (r(2)) for the regression between the on-the-go measurements and the reference data was 0.71, 0.63, and 0.84, respectively. Field-specific calibration was necessary to reduce systematic errors. Accuracy of the on-the-go maps was considerably higher compared with the pH maps obtained by following the standard protocols, and the error in calculating lime requirements was reduced by about one half. However, the system showed some weaknesses due to blockage by residual straw and weed roots. If these problems were solved, the on-the-go sensor investigated here could be an efficient alternative to standard sampling protocols as a basis for liming in Germany.
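    The field-specific calibration described in the two records above amounts to a simple linear regression of laboratory reference pH on the sensor readings. The sketch below shows one way this could be done; the readings are invented rather than actual Veris pH Manager data.

```python
# A minimal sketch: fit lab-reference pH against on-the-go sensor readings and
# report the coefficient of determination. All values are hypothetical.
import numpy as np

sensor = np.array([5.2, 5.8, 6.1, 6.4, 6.9, 7.3, 7.8])   # on-the-go readings
lab    = np.array([5.6, 6.0, 6.2, 6.7, 7.0, 7.5, 7.9])   # standard protocol

slope, intercept = np.polyfit(sensor, lab, deg=1)
predicted = slope * sensor + intercept
ss_res = np.sum((lab - predicted) ** 2)
ss_tot = np.sum((lab - lab.mean()) ** 2)
r_squared = 1.0 - ss_res / ss_tot

print(f"calibration: pH_lab = {slope:.2f} * pH_sensor + {intercept:.2f}, r2 = {r_squared:.2f}")
# Applying such a per-field calibration removes the systematic offset before
# lime requirements are calculated from the sensor map.
```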

  17. An Overview of the Clinical Use of Filter Paper in the Diagnosis of Tropical Diseases

    PubMed Central

    Smit, Pieter W.; Elliott, Ivo; Peeling, Rosanna W.; Mabey, David; Newton, Paul N.

    2014-01-01

    Tropical infectious diseases diagnosis and surveillance are often hampered by difficulties of sample collection and transportation. Filter paper potentially provides a useful medium to help overcome such problems. We reviewed the literature on the use of filter paper, focusing on the evaluation of nucleic acid and serological assays for diagnosis of infectious diseases using dried blood spots (DBS) compared with recognized gold standards. We reviewed 296 eligible studies and included 101 studies evaluating DBS and 192 studies on other aspects of filter paper use. We also discuss the use of filter paper with other body fluids and for tropical veterinary medicine. In general, DBS perform with sensitivities and specificities similar or only slightly inferior to gold standard sample types. However, important problems were revealed with the uncritical use of DBS, inappropriate statistical analysis, and lack of standardized methodology. DBS have great potential to empower healthcare workers by making laboratory-based diagnostic tests more readily accessible, but additional and more rigorous research is needed. PMID:24366501

  18. NASA pyrotechnically actuated systems program

    NASA Technical Reports Server (NTRS)

    Schulze, Norman R.

    1993-01-01

    The Office of Safety and Mission Quality initiated a Pyrotechnically Actuated Systems (PAS) Program in FY-92 to address problems experienced with pyrotechnically actuated systems and devices used both on the ground and in flight. The PAS Program will provide the technical basis for NASA's projects to incorporate new technological developments in operational systems. The program will accomplish that objective by developing/testing current and new hardware designs for flight applications and by providing a pyrotechnic data base. This marks the first applied pyrotechnic technology program funded by NASA to address pyrotechnic issues. The PAS Program has been structured to address the results of a survey of pyrotechnic device and system problems with the goal of alleviating or minimizing their risks. Major program initiatives include the development of a Laser Initiated Ordnance System, a pyrotechnic systems data base, NASA Standard Initiator model, a NASA Standard Linear Separation System and a NASA Standard Gas Generator. The PAS Program sponsors annual aerospace pyrotechnic systems workshops.

  19. A genetic algorithm solution to the unit commitment problem

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kazarlis, S.A.; Bakirtzis, A.G.; Petridis, V.

    1996-02-01

    This paper presents a Genetic Algorithm (GA) solution to the Unit Commitment problem. GAs are general purpose optimization techniques based on principles inspired by biological evolution, using metaphors of mechanisms such as natural selection, genetic recombination and survival of the fittest. A simple GA implementation using the standard crossover and mutation operators could locate near-optimal solutions but in most cases failed to converge to the optimal solution. However, using the Varying Quality Function technique and adding problem-specific operators, satisfactory solutions to the Unit Commitment problem were obtained. Test results for systems of up to 100 units and comparisons with results obtained using Lagrangian Relaxation and Dynamic Programming are also reported.
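    A self-contained sketch of the kind of binary-encoded GA the abstract describes is given below, applied to a toy single-period unit-commitment choice. The unit data, penalty term, and GA settings are invented, and the paper's Varying Quality Function technique and problem-specific operators are not reproduced here; a simple fixed shortfall penalty stands in for them.

```python
# A minimal sketch: binary GA with one-point crossover and bit-flip mutation on
# a toy single-period unit commitment. All numbers are hypothetical.
import random

random.seed(1)
CAPACITY = [100, 80, 50, 30, 20]          # MW per unit (hypothetical)
COST     = [500, 420, 300, 200, 150]      # cost if the unit is committed
DEMAND   = 160                            # MW to cover
PENALTY  = 50                             # per MW of unmet demand

def fitness(bits):
    cap  = sum(c for b, c in zip(bits, CAPACITY) if b)
    cost = sum(c for b, c in zip(bits, COST) if b)
    shortfall = max(0, DEMAND - cap)
    return cost + PENALTY * shortfall      # lower is better

def crossover(a, b):
    point = random.randint(1, len(a) - 1)  # standard one-point crossover
    return a[:point] + b[point:]

def mutate(bits, rate=0.1):
    return [1 - b if random.random() < rate else b for b in bits]

population = [[random.randint(0, 1) for _ in CAPACITY] for _ in range(20)]
for _ in range(100):
    population.sort(key=fitness)
    parents = population[:10]              # truncation selection, kept simple
    children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                for _ in range(10)]
    population = parents + children

best = min(population, key=fitness)
print("best schedule:", best, "cost:", fitness(best))
```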

  20. Clinical and Cognitive Characteristics Associated with Mathematics Problem Solving in Adolescents with Autism Spectrum Disorder.

    PubMed

    Oswald, Tasha M; Beck, Jonathan S; Iosif, Ana-Maria; McCauley, James B; Gilhooly, Leslie J; Matter, John C; Solomon, Marjorie

    2016-04-01

    Mathematics achievement in autism spectrum disorder (ASD) has been understudied. However, the ability to solve applied math problems is associated with academic achievement, everyday problem-solving abilities, and vocational outcomes. The paucity of research on math achievement in ASD may be partly explained by the widely-held belief that most individuals with ASD are mathematically gifted, despite emerging evidence to the contrary. The purpose of the study was twofold: to assess the relative proportions of youth with ASD who demonstrate giftedness versus disability on applied math problems, and to examine which cognitive (i.e., perceptual reasoning, verbal ability, working memory) and clinical (i.e., test anxiety) characteristics best predict achievement on applied math problems in ASD relative to typically developing peers. Twenty-seven high-functioning adolescents with ASD and 27 age- and Full Scale IQ-matched typically developing controls were assessed on standardized measures of math problem solving, perceptual reasoning, verbal ability, and test anxiety. Results indicated that 22% of the ASD sample evidenced a mathematics learning disability, while only 4% exhibited mathematical giftedness. The parsimonious linear regression model revealed that the strongest predictor of math problem solving was perceptual reasoning, followed by verbal ability and test anxiety, then diagnosis of ASD. These results inform our theories of math ability in ASD and highlight possible targets of intervention for students with ASD struggling with mathematics. © 2015 International Society for Autism Research, Wiley Periodicals, Inc.

  1. Bayesian multivariate hierarchical transformation models for ROC analysis.

    PubMed

    O'Malley, A James; Zou, Kelly H

    2006-02-15

    A Bayesian multivariate hierarchical transformation model (BMHTM) is developed for receiver operating characteristic (ROC) curve analysis based on clustered continuous diagnostic outcome data with covariates. Two special features of this model are that it incorporates non-linear monotone transformations of the outcomes and that multiple correlated outcomes may be analysed. The mean, variance, and transformation components are all modelled parametrically, enabling a wide range of inferences. The general framework is illustrated by focusing on two problems: (1) analysis of the diagnostic accuracy of a covariate-dependent univariate test outcome requiring a Box-Cox transformation within each cluster to map the test outcomes to a common family of distributions; (2) development of an optimal composite diagnostic test using multivariate clustered outcome data. In the second problem, the composite test is estimated using discriminant function analysis and compared to the test derived from logistic regression analysis where the gold standard is a binary outcome. The proposed methodology is illustrated on prostate cancer biopsy data from a multi-centre clinical trial.

  2. Bayesian multivariate hierarchical transformation models for ROC analysis

    PubMed Central

    O'Malley, A. James; Zou, Kelly H.

    2006-01-01

    SUMMARY A Bayesian multivariate hierarchical transformation model (BMHTM) is developed for receiver operating characteristic (ROC) curve analysis based on clustered continuous diagnostic outcome data with covariates. Two special features of this model are that it incorporates non-linear monotone transformations of the outcomes and that multiple correlated outcomes may be analysed. The mean, variance, and transformation components are all modelled parametrically, enabling a wide range of inferences. The general framework is illustrated by focusing on two problems: (1) analysis of the diagnostic accuracy of a covariate-dependent univariate test outcome requiring a Box–Cox transformation within each cluster to map the test outcomes to a common family of distributions; (2) development of an optimal composite diagnostic test using multivariate clustered outcome data. In the second problem, the composite test is estimated using discriminant function analysis and compared to the test derived from logistic regression analysis where the gold standard is a binary outcome. The proposed methodology is illustrated on prostate cancer biopsy data from a multi-centre clinical trial. PMID:16217836
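    The per-cluster Box-Cox step mentioned in the two records above can be illustrated with scipy, as in the sketch below. The cluster data are simulated, and this covers only the transformation, not the full Bayesian hierarchical model.

```python
# A minimal sketch: estimate a Box-Cox transformation of a positive-valued test
# outcome within each cluster so the transformed outcomes are closer to a
# common (normal) family. Cluster names and data are hypothetical.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
clusters = {                                # hypothetical outcomes per centre
    "centre_A": rng.lognormal(mean=1.0, sigma=0.4, size=50),
    "centre_B": rng.lognormal(mean=1.3, sigma=0.6, size=60),
}

for name, outcomes in clusters.items():
    transformed, lam = stats.boxcox(outcomes)     # per-cluster lambda estimate
    print(f"{name}: lambda = {lam:.2f}, "
          f"skewness {stats.skew(outcomes):.2f} -> {stats.skew(transformed):.2f}")
```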

  3. Parents' evaluation of developmental status: how well do parents' concerns identify children with behavioral and emotional problems?

    PubMed

    Glascoe, Frances Page

    2003-03-01

    This study was undertaken to determine which parental concerns are most associated with significant behavioral/emotional problems and the extent to which parents' concerns can be depended on in the detection of mental health problems. An additional goal is to view how well a recently published screening test relying on parents' concerns, Parents' Evaluation of Developmental Status (PEDS), detects behavioral and emotional problems. Subjects were a national sample of 472 parents and their children (21 months to 8 years old) who were participants in 1 of 2 test standardization and validation studies. Sites included various pediatric settings, public schools, and Head Start programs in 5 diverse geographic locations. Subjects were representative of U.S. demographics in terms of ethnicity, parental level of education, gender, and socioeconomic status. At each site, psychological examiners, educational diagnosticians, or school psychologists recruited families and obtained informed consent. Examiners disseminated a demographics questionnaire (in English or Spanish) and a developmental screening test that relies on parents' concerns (PEDS). Examiners, blinded to PEDS' scoring and interpretation, administered either by interview or in writing the Eyberg Child Behavior Inventory (ECBI) or the Possible Problems Checklist (PPC), a subtest of the Child Development Inventory that includes items measuring emotional well-being and behavioral self-control. PEDS was used to sort children into risk categories for developmental disabilities according to various types of parental concern. Those identified as having high or moderate risk were nominated for diagnostic testing or screening followed by developmental and mental health services when indicated. Because their emotional and behavioral needs would have been identified and addressed, these groups were removed from the analysis (N = 177). Of the 295 children who would not have been nominated for further scrutiny on PEDS due to their low risk of developmental problems, 102 had parents with concerns not predictive of developmental disabilities (e.g., behavior, social skills, self-help skills) and 193 had no concerns at all. Of the 295 children, 12% had scores on either the ECBI or the PPC indicative of mental health problems. Two parental concerns were identified through logistic regression as predictive of mental health status: behavior (OR = 4.74, CI = 1.69-13.30) and social skills (OR = 5.76, CI = 2.46-13.50). If one or more of these concerns was present, children had 8.5 times the risk of mental health problems (CI = 3.69-19.71). In children 4½ years of age and older, one or both concerns was 87% sensitive and 79% specific to mental health status, figures in keeping with standards for screening test accuracy. In young children, the presence of one or both concerns was 68% sensitive and 66% specific to mental health status. The findings suggest that certain parental concerns, if carefully elicited, can be depended on to detect mental health problems when children are 4½ years and older and at low risk of developmental problems. For younger children, clinicians should counsel parents in disciplinary techniques, follow up, and if suggestions were not effective, administer a behavioral-emotional screening test such as the Pediatric Symptoms Checklist or the ECBI before making a referral decision.
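    For readers unfamiliar with the screening statistics quoted above, the sketch below computes an odds ratio with a Wald confidence interval and the sensitivity/specificity of a "one or both concerns present" rule from a 2x2 table. The cell counts are invented and do not reproduce the study data.

```python
# A minimal sketch: odds ratio, Wald 95% CI, and sensitivity/specificity from a
# 2x2 table of concern status versus mental health status. Counts are invented.
import math

# rows: concern present / absent; columns: mental health problem yes / no
a, b = 26, 55      # concern present:  problem, no problem   (hypothetical)
c, d = 4, 210      # concern absent:   problem, no problem

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)
ci_low  = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
ci_high = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)

sensitivity = a / (a + c)          # concerns flagged among children with problems
specificity = d / (b + d)          # no concerns among children without problems

print(f"OR = {odds_ratio:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```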

  4. A Low-Cost Inkjet-Printed Glucose Test Strip System for Resource-Poor Settings.

    PubMed

    Gainey Wilson, Kayla; Ovington, Patrick; Dean, Delphine

    2015-06-12

    The prevalence of diabetes is increasing in low-resource settings; however, accessing glucose monitoring is extremely difficult and expensive in these regions. Work is being done to address the multitude of issues surrounding diabetes care in low-resource settings, but an affordable glucose monitoring solution has yet to be presented. An inkjet-printed test strip solution is being proposed as a solution to this problem. The use of a standard inkjet printer is being proposed as a manufacturing method for low-cost glucose monitoring test strips. The printer cartridges are filled with enzyme and dye solutions that are printed onto filter paper. The result is a colorimetric strip that turns a blue/green color in the presence of blood glucose. Using a light-based spectroscopic reading, the strips show a linear color change with an R(2) = .99 using glucose standards and an R(2) = .93 with bovine blood. Initial testing with bovine blood indicates that the strip accuracy is comparable to the International Organization for Standardization (ISO) standard 15197 for glucose testing in the 0-350 mg/dL range. However, further testing with human blood will be required to confirm this. A visible color gradient was observed with both the glucose standard and bovine blood experiment, which could be used as a visual indicator in cases where an electronic glucose meter was unavailable. These results indicate that an inkjet-printed filter paper test strip is a feasible method for monitoring blood glucose levels. The use of inkjet printers would allow for local manufacturing to increase supply in remote regions. This system has the potential to address the dire need for glucose monitoring in low-resource settings. © 2015 Diabetes Technology Society.

  5. Policy Reform: Testing Times for Teacher Education in Australia

    ERIC Educational Resources Information Center

    Fitzgerald, Tanya; Knipe, Sally

    2016-01-01

    In Australia as well as elsewhere, initial teacher education has become centre stage to a political agenda that calls for global competitiveness in the knowledge economy. The common problem cited has been declining educational standards linked with the quality of teaching and teacher education. The avalanche of review and policy reform has exposed…

  6. The Association between Preschool Children's Social Functioning and Their Emergent Academic Skills

    ERIC Educational Resources Information Center

    Arnold, David H.; Kupersmidt, Janis B.; Voegler-Lee, Mary Ellen; Marshall, Nastassja A.

    2012-01-01

    This study examined the relationship between social functioning and emergent academic development in a sample of 467 preschool children (M=55.9 months old, SD=3.8). Teachers reported on children's aggression, attention problems, and prosocial skills. Preliteracy, language, and early mathematics skills were assessed with standardized tests. Better…

  7. The New Common Sense of Education: Advocacy Research Versus Academic Authority

    ERIC Educational Resources Information Center

    Shaker, Paul; Heilman, Elizabeth E.

    2004-01-01

    Current education policy is increasingly controlled by partisan politicians and the corporate interests that speak through them. Attacking American education and blaming economic troubles on failing schools and low standardized test scores coalesces the rhetoric of the right and draws attention away from fundamental social and economic problems.…

  8. [Animal drugs quality status and reason analysis].

    PubMed

    Ding, Qing; Qiu, Ya-jing; Fang, Ke-hui; Hu, Hao-bin; Wu, Yue

    2015-11-01

    In order to reflect the current quality situation, problems with the quality of animal-derived drugs are summarized using test data analysis, literature searches, and market research. This paper can also support improvements in quality management, revision of the relevant departmental policy system, and improvement of standards.

  9. A School Voucher Program for Baltimore City

    ERIC Educational Resources Information Center

    Lips, Dan

    2005-01-01

    Baltimore City's public school system is in crisis. Academically, the school system fails on any number of measures. The city's graduation rate is barely above 50 percent and students continually lag well behind state averages on standardized tests. Adding to these problems is the school system's current fiscal crisis, created by years of fiscal…

  10. CONDITIONING CHILDREN FOR SCHOOL. FINAL REPORT.

    ERIC Educational Resources Information Center

    PRINCE, ALBERT I.

    A SET OF BEHAVIORAL PRINCIPLES USED IN INTELLECTUAL REHABILITATION OF A SMALL GROUP OF THIRD GRADERS WITH EDUCATIONAL AND RELATED BEHAVIORAL PROBLEMS WAS EVALUATED. SUBJECTS SELECTED WERE EIGHT THIRD-GRADE STUDENTS AGED 8 TO 10, WHO WERE 1 YEAR BEHIND IN READING AS MEASURED BY A STANDARDIZED ACHIEVEMENT TEST AND 1 YEAR BEHIND IN EITHER SPELLING OR…

  11. Comparison of Student Performance in Cooperative Learning and Traditional Lecture-Based Biochemistry Classes

    ERIC Educational Resources Information Center

    Anderson, William L.; Mitchell, Steven M.; Osgood, Marcy P.

    2005-01-01

    Student performance in two different introductory biochemistry curricula are compared based on standardized testing of student content knowledge, problem-solving skills, and student opinions about the courses. One curriculum was used in four traditional, lecture-based classes (n = 381 students), whereas the second curriculum was used in two…

  12. Enhancing SCORM Metadata for Assessment Authoring in E-Learning

    ERIC Educational Resources Information Center

    Chang, Wen-Chih; Hsu, Hui-Huang; Smith, Timothy K.; Wang, Chun-Chia

    2004-01-01

    With the rapid development of distance learning and the XML technology, metadata play an important role in e-Learning. Nowadays, many distance learning standards, such as SCORM, AICC CMI, IEEE LTSC LOM and IMS, use metadata to tag learning materials. However, most metadata models are used to define learning materials and test problems. Few…

  13. The Price of a Good Education

    ERIC Educational Resources Information Center

    Schachter, Ron

    2010-01-01

    There are plenty of statistics available for measuring the performance, potential and problems of school districts, from standardized test scores to the number of students eligible for free or reduced-price lunch. Last June, another metric came into sharper focus when the U.S. Census Bureau released its latest state-by-state data on per-pupil…

  14. Improving Students' Reading Fluency through the Use of Phonics and Word Recognition Strategies.

    ERIC Educational Resources Information Center

    Ballard, Christine; Jacocks, Kathleen

    This study describes a program designed to improve student reading fluency. The targeted population consisted of first and third grade students in a growing urban community in the Midwest. Evidence for the existence of the problem included standardized test scores and independent computer reports that measured academic achievement, phonic…

  15. Epilogue: Reading Comprehension Is Not a Single Ability--Implications for Assessment and Instruction

    ERIC Educational Resources Information Center

    Kamhi, Alan G.; Catts, Hugh W.

    2017-01-01

    Purpose: In this epilogue, we review the 4 response articles and highlight the implications of a multidimensional view of reading for the assessment and instruction of reading comprehension. Method: We reiterate the problems with standardized tests of reading comprehension and discuss the advantages and disadvantages of recently developed…

  16. Teacher Technology Acceptance and Usage for the Middle School Classroom

    ERIC Educational Resources Information Center

    Stone, Wilton, Jr.

    2014-01-01

    According to the U.S. Department of Education National Center for Education Statistics, students in the United States routinely perform poorly on international assessments. This study was focused specifically on the problem of the decrease in the number of middle school students meeting the requirements for one state's standardized tests for…

  17. Do American and Korean Education Systems Converge? Tracking School Reform Policies and Outcomes in Korea and the USA

    ERIC Educational Resources Information Center

    Lee, Jaekyung; Park, Daekwon

    2014-01-01

    This study examines key school reform policies and outcomes of the USA and Korea over the past three decades from comparative perspectives. Since the two nations' unique educational problems brought divergent educational reform paths--standardization versus differentiation, high-stakes testing versus individualized assessment, and centralization…

  18. Middle School Students' Mathematics Knowledge Retention: Online or Face-To-Face Environments

    ERIC Educational Resources Information Center

    Edwards, Clayton M.; Rule, Audrey C.; Boody, Robert M.

    2017-01-01

    Educators seek to develop students' mathematical knowledge retention to increase student efficacy in follow-on classwork, improvement of test scores, attainment of standards, and preparation for careers. Interactive visuals, feedback during problem solving, and incorporation of higher-order thinking skills are known to increase retention, but a…

  19. Mathematical Instructional Practices and Self-Efficacy of Kindergarten Teachers

    ERIC Educational Resources Information Center

    Schillinger, Tammy

    2016-01-01

    A local urban school district recently reported that 86% of third graders did not demonstrate proficiency on the Math Standardized Test, which challenges students to solve problems and justify solutions. It is beneficial if these skills are developed prior to third grade. Students may be more academically successful if kindergarten teachers have…

  20. Earning the Stamp of Approval: How To Achieve Optimal Usability.

    ERIC Educational Resources Information Center

    Makar, Susan

    2003-01-01

    Describes the redesign of the Web site at the virtual library of the NIST (National Institute of Standards and Technology). Discusses usability problems with the original site, including navigation difficulties; focus groups to determine user needs; usability testing for the new Web site; and the importance of customer input. (LRW)

  1. Environmental Systems Simulations for Carbon, Energy, Nitrogen, Water, and Watersheds: Design Principles and Pilot Testing

    ERIC Educational Resources Information Center

    Lant, Christopher; Pérez-Lapeña, Blanca; Xiong, Weidong; Kraft, Steven; Kowalchuk, Rhonda; Blair, Michael

    2016-01-01

    Guided by the Next Generation Science Standards and elements of problem-based learning, four human-environment systems simulations are described in brief--carbon, energy, water, and watershed--and a fifth simulation on nitrogen is described in more depth. These science, technology, engineering, and math (STEM) education simulations illustrate…

  2. Integration of Reading and Writing Strategies To Improve Reading.

    ERIC Educational Resources Information Center

    Murphy, Judy

    A program was developed for improving the reading of first-grade students in a progressive suburban community in northern Illinois. The problem was originally noted by an increase in the need for support services and low standardized test scores. Analysis of probable cause data revealed that students lacked knowledge of the relationship between…

  3. Integration of Reading and Writing Strategies To Improve Reading.

    ERIC Educational Resources Information Center

    Lewandowski, Susan

    A program was developed for improving the reading of second-grade students in a progressive suburban community in northern Illinois. The problem was originally noted by an increase in the need for support services and low standardized test scores. Analysis of probable cause data revealed that students lacked knowledge of the relationship between…

  4. Speeding-up Bioinformatics Algorithms with Heterogeneous Architectures: Highly Heterogeneous Smith-Waterman (HHeterSW).

    PubMed

    Gálvez, Sergio; Ferusic, Adis; Esteban, Francisco J; Hernández, Pilar; Caballero, Juan A; Dorado, Gabriel

    2016-10-01

    The Smith-Waterman algorithm has a great sensitivity when used for biological sequence-database searches, but at the expense of high computing-power requirements. To overcome this problem, there are implementations in literature that exploit the different hardware-architectures available in a standard PC, such as GPU, CPU, and coprocessors. We introduce an application that splits the original database-search problem into smaller parts, resolves each of them by executing the most efficient implementations of the Smith-Waterman algorithms in different hardware architectures, and finally unifies the generated results. Using non-overlapping hardware allows simultaneous execution, and up to 2.58-fold performance gain, when compared with any other algorithm to search sequence databases. Even the performance of the popular BLAST heuristic is exceeded in 78% of the tests. The application has been tested with standard hardware: Intel i7-4820K CPU, Intel Xeon Phi 31S1P coprocessors, and nVidia GeForce GTX 960 graphics cards. An important increase in performance has been obtained in a wide range of situations, effectively exploiting the available hardware.
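    The core idea, splitting the database across heterogeneous workers in proportion to their throughput and merging the partial results, can be sketched as below. The pure-Python Smith-Waterman scorer stands in for the tuned GPU/CPU/coprocessor kernels of the real application, and the device weights and sequences are invented.

```python
# A minimal sketch: divide a sequence database among workers in proportion to
# assumed throughput, score each chunk with Smith-Waterman, merge the results.
from concurrent.futures import ThreadPoolExecutor

def sw_score(a, b, match=2, mismatch=-1, gap=-2):
    """Smith-Waterman local alignment score (no traceback)."""
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, start=1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)
            score = max(0, diag, prev[j] + gap, curr[j - 1] + gap)
            curr.append(score)
            best = max(best, score)
        prev = curr
    return best

def split_by_throughput(database, weights):
    """Assign database entries to workers proportionally to their weights."""
    total = sum(weights)
    chunks, start = [], 0
    for w in weights:
        end = start + round(len(database) * w / total)
        chunks.append(database[start:end])
        start = end
    chunks[-1].extend(database[start:])        # any remainder goes to the last worker
    return chunks

query = "GATTACA"
database = ["GAATTC", "GATTACA", "TTTTTT", "CATTAG", "GACTACA", "AGATTA"]
device_weights = [3.0, 1.0]                    # e.g. GPU vs CPU (hypothetical)

chunks = split_by_throughput(list(database), device_weights)
with ThreadPoolExecutor(max_workers=len(chunks)) as pool:
    partial = pool.map(lambda chunk: [(s, sw_score(query, s)) for s in chunk], chunks)

hits = sorted((hit for part in partial for hit in part), key=lambda x: -x[1])
print(hits[:3])
```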

  5. Efficient computation of significance levels for multiple associations in large studies of correlated data, including genomewide association studies.

    PubMed

    Dudbridge, Frank; Koeleman, Bobby P C

    2004-09-01

    Large exploratory studies, including candidate-gene-association testing, genomewide linkage-disequilibrium scans, and array-expression experiments, are becoming increasingly common. A serious problem for such studies is that statistical power is compromised by the need to control the false-positive rate for a large family of tests. Because multiple true associations are anticipated, methods have been proposed that combine evidence from the most significant tests, as a more powerful alternative to individually adjusted tests. The practical application of these methods is currently limited by a reliance on permutation testing to account for the correlated nature of single-nucleotide polymorphism (SNP)-association data. On a genomewide scale, this is both very time-consuming and impractical for repeated explorations with standard marker panels. Here, we alleviate these problems by fitting analytic distributions to the empirical distribution of combined evidence. We fit extreme-value distributions for fixed lengths of combined evidence and a beta distribution for the most significant length. An initial phase of permutation sampling is required to fit these distributions, but it can be completed more quickly than a simple permutation test and need be done only once for each panel of tests, after which the fitted parameters give a reusable calibration of the panel. Our approach is also a more efficient alternative to a standard permutation test. We demonstrate the accuracy of our approach and compare its efficiency with that of permutation tests on genomewide SNP data released by the International HapMap Consortium. The estimation of analytic distributions for combined evidence will allow these powerful methods to be applied more widely in large exploratory studies.
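    A minimal sketch of the calibration idea, fitting an extreme-value distribution to a one-off permutation null of the most significant statistic and then reusing the fitted parameters in place of further permutations, is given below. The data are simulated noise rather than HapMap genotypes, and the paper's fixed-length and beta-distribution refinements are not included.

```python
# A minimal sketch: fit a generalized extreme-value distribution to the
# permutation null of the maximum |correlation| and reuse it analytically.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n_subjects, n_tests = 200, 500
phenotype = rng.normal(size=n_subjects)
predictors = rng.normal(size=(n_subjects, n_tests))   # stand-in for SNP data

def max_abs_correlation(y, X):
    y_c = y - y.mean()
    X_c = X - X.mean(axis=0)
    r = (X_c.T @ y_c) / (np.linalg.norm(X_c, axis=0) * np.linalg.norm(y_c))
    return np.max(np.abs(r))

# Initial permutation phase (done once per panel of tests).
null_max = np.array([max_abs_correlation(rng.permutation(phenotype), predictors)
                     for _ in range(500)])
shape, loc, scale = stats.genextreme.fit(null_max)

observed = max_abs_correlation(phenotype, predictors)
p_analytic = stats.genextreme.sf(observed, shape, loc=loc, scale=scale)
p_empirical = np.mean(null_max >= observed)
print(f"analytic p = {p_analytic:.3f}, empirical p = {p_empirical:.3f}")
```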

  6. Evaluation of mechanical properties of hybrid fiber (hemp, jute, kevlar) reinforced composites

    NASA Astrophysics Data System (ADS)

    Suresha, K. V.; Shivanand, H. K.; Amith, A.; Vidyasagar, H. N.

    2018-04-01

    Composites today play a wide role in all engineering fields, and the reinforcement of a composite determines the properties of the material. Natural fiber composites possess poorer mechanical properties than synthetic fiber composites; a solution to this problem is to combine natural and synthetic fibers, since hybridization helps improve the overall mechanical properties of the material. In this study, hybrid reinforced composites of Hemp fabric/Kevlar fabric/Epoxy and Jute fabric/Kevlar fabric/Epoxy are fabricated using a simple hand lay-up technique followed by a vacuum bagging process. Appropriate test methods as per standards and guidelines are followed to analyze the mechanical behavior of the composites: the tensile, compression, and flexural properties of the hybrid reinforced composites are tested as per ASTM standards through a series of tensile, compression, and three-point bending tests. A quantitative relationship between the Hemp fabric/Kevlar fabric/Epoxy and Jute fabric/Kevlar fabric/Epoxy composites has been established at constant thickness.

  7. Harmonizing Screening for Gambling Problems in Epidemiological Surveys – Development of the Rapid Screener for Problem Gambling (RSPG)

    PubMed Central

    Challet-Bouju, Gaëlle; Perrot, Bastien; Romo, Lucia; Valleur, Marc; Magalon, David; Fatséas, Mélina; Chéreau-Boudet, Isabelle; Luquiens, Amandine; Grall-Bronnec, Marie; Hardouin, Jean-Benoit

    2016-01-01

    Background and aims The aim of this study was to test the screening properties of several combinations of items from gambling scales, in order to harmonize screening of gambling problems in epidemiological surveys. The objective was to propose two brief screening tools (three items or less) for a use in interviews and self-administered questionnaires. Methods We tested the screening properties of combinations of items from several gambling scales, in a sample of 425 gamblers (301 non-problem gamblers and 124 disordered gamblers). Items tested included interview-based items (Pathological Gambling section of the DSM-IV, lifetime history of problem gambling, monthly expenses in gambling, and abstinence of 1 month or more) and self-report items (South Oaks Gambling Screen, Gambling Attitudes, and Beliefs Survey). The gold standard used was the diagnosis of a gambling disorder according to the DSM-5. Results Two versions of the Rapid Screener for Problem Gambling (RSPG) were developed: the RSPG-Interview (RSPG-I), being composed of two interview items (increasing bets and loss of control), and the RSPG-Self-Assessment (RSPG-SA), being composed of three self-report items (chasing, guiltiness, and perceived inability to stop). Discussion and conclusions We recommend using the RSPG-SA/I for screening problem gambling in epidemiological surveys, with the version adapted for each purpose (RSPG-I for interview-based surveys and RSPG-SA for self-administered surveys). This first triage of potential problem gamblers must be supplemented by further assessment, as it may overestimate the proportion of problem gamblers. However, a first triage has the great advantage of saving time and energy in large-scale screening for problem gambling. PMID:27348558

  8. The efficacy of problem-solving treatments after deliberate self-harm: meta-analysis of randomized controlled trials with respect to depression, hopelessness and improvement in problems.

    PubMed

    Townsend, E; Hawton, K; Altman, D G; Arensman, E; Gunnell, D; Hazell, P; House, A; Van Heeringen, K

    2001-08-01

    Brief problem-solving therapy is regarded as a pragmatic treatment for deliberate self-harm (DSH) patients. A recent meta-analysis of randomized controlled trials (RCTs) evaluating this approach indicated a trend towards reduced repetition of DSH but the pooled odds ratio was not statistically significant. We have now examined other important outcomes using this procedure, namely depression, hopelessness and improvement in problems. Six trials in which problem-solving therapy was compared with control treatment were identified from an extensive literature review of RCTs of treatments for DSH patients. Data concerning depression, hopelessness and improvement in problems were extracted. Where relevant statistical data (e.g. standard deviations) were missing these were imputed using various statistical methods. Results were pooled using meta-analytical procedures. At follow-up, patients who were offered problem-solving therapy had significantly greater improvement in scores for depression (standardized mean difference = -0.36; 95% CI -0.61 to -0.11) and hopelessness (weighted mean difference =-3.2; 95% CI -4.0 to -2.41), and significantly more reported improvement in their problems (odds ratio = 2.31; 95% CI 1.29 to 4.13), than patients who were in the control treatment groups. Problem-solving therapy for DSH patients appears to produce better results than control treatment with regard to improvement in depression, hopelessness and problems. It is desirable that this finding is confirmed in a large trial, which will also allow adequate testing of the impact of this treatment on repetition of DSH.
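    The pooling behind a result such as the standardized mean difference of -0.36 (95% CI -0.61 to -0.11) can be illustrated with a fixed-effect, inverse-variance weighted average, as in the sketch below. The per-trial effects are invented, and the actual meta-analysis may have used a different weighting model.

```python
# A minimal sketch: fixed-effect (inverse-variance) pooling of standardized
# mean differences from hypothetical trials.
import math

# (standardized mean difference, standard error) for each hypothetical trial
trials = [(-0.45, 0.22), (-0.30, 0.18), (-0.20, 0.25), (-0.50, 0.30)]

weights = [1.0 / se ** 2 for _, se in trials]
pooled = sum(w * d for (d, _), w in zip(trials, weights)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

ci_low, ci_high = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled SMD = {pooled:.2f} (95% CI {ci_low:.2f} to {ci_high:.2f})")
```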

  9. DSN system performance test Doppler noise models; noncoherent configuration

    NASA Technical Reports Server (NTRS)

    Bunce, R.

    1977-01-01

    The newer model for variance, the Allan technique, now adopted for testing, is analyzed in the subject mode. A model is generated (including a considerable contribution from the station secondary frequency standard) and rationalized with existing data. The variance model is definitely sound; the Allan technique mates theory and measurement. The mean-frequency model is an estimate; this problem is yet to be rigorously resolved. The unaltered defining expressions are nonconvergent, and the observed mean is quite erratic.
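    The Allan (two-sample) variance referred to above can be computed from fractional-frequency data as in the sketch below; the simulated noise-plus-drift series is purely illustrative and is not DSN data.

```python
# A minimal sketch: non-overlapping Allan variance of fractional-frequency
# readings, i.e. half the mean squared difference of adjacent bin averages.
import numpy as np

def allan_variance(y, m):
    """Non-overlapping Allan variance for averaging factor m (in samples)."""
    n_bins = len(y) // m
    bins = y[: n_bins * m].reshape(n_bins, m).mean(axis=1)
    return 0.5 * np.mean(np.diff(bins) ** 2)

rng = np.random.default_rng(3)
y = rng.normal(scale=1e-12, size=10_000) + 1e-16 * np.arange(10_000)  # simulated data

for m in (1, 10, 100, 1000):
    print(f"tau = {m} samples: Allan deviation = {np.sqrt(allan_variance(y, m)):.2e}")
```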

  10. Comparing Evolutionary Programs and Evolutionary Pattern Search Algorithms: A Drug Docking Application

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hart, W.E.

    1999-02-10

    Evolutionary programs (EPs) and evolutionary pattern search algorithms (EPSAs) are two general classes of evolutionary methods for optimizing on continuous domains. The relative performance of these methods has been evaluated on standard global optimization test functions, and these results suggest that EPSAs more robustly converge to near-optimal solutions than EPs. In this paper we evaluate the relative performance of EPSAs and EPs on a real-world application: flexible ligand binding in the Autodock docking software. We compare the performance of these methods on a suite of docking test problems. Our results confirm that EPSAs and EPs have comparable performance, and they suggest that EPSAs may be more robust on larger, more complex problems.

  11. Status of research into lightning effects on aircraft

    NASA Technical Reports Server (NTRS)

    Plumer, J. A.

    1976-01-01

    Developments in aircraft lightning protection since 1938 are reviewed. Potential lightning problems resulting from present trends toward the use of electronic controls and composite structures are discussed, along with presently available lightning test procedures for problem assessment. The validity of some procedures is being questioned because of pessimistic results and design implications. An in-flight measurement program is needed to provide statistics on lightning severity at flight altitudes and to enable more realistic tests, and operators are urged to supply researchers with more details on electronic components damaged by lightning strikes. A need for review of certain aspects of fuel system vulnerability is indicated by several recent accidents, and specific areas for examination are identified. New educational materials and standardization activities are also noted.

  12. Copyright at the Bedside: Should We Stop the Spread?

    PubMed Central

    Feldman, Robin; Newman, John

    2014-01-01

    We recently published an article in the New England Journal of Medicine describing a crisis in cognitive testing, as doctors and medical researchers increasingly face copyright claims in sets of questions used for testing mental state. We encouraged the creation of a cultural norm in medicine, in which medical researchers would ensure continued availability of their tests through open source licensing for any copyrights that might exist. In this piece, we consider the legal side of the question. Although copyrights are being copiously asserted in medical testing, are those rights valid, and should they be upheld? The legal precedents in this area are anything but clear, and the courts are divided in the few analogous circumstances that have arisen. We examine analogies in standardized testing, computer compilations and baseball pitching forms to consider the marvelous question of how to conceptualize a process—which is the purview of patent law—when that process consists of words—which are the purview of copyright law. We also look from an economics perspective at the issue of investment and value creation in the development of de facto standards. Legal scholars are so often in the position of looking backwards, teasing out solutions to problems that have developed within a doctrinal or theoretical area. Rarely does one have the opportunity to affect the course of events before problems become so deeply entrenched that they are intractable. This is such a moment, and the legal and medical fields should take advantage of the opportunities presented. PMID:25221427

  13. An examination of gender bias on the eighth-grade MEAP science test as it relates to the Hunter Gatherer Theory of Spatial Sex Differences

    NASA Astrophysics Data System (ADS)

    Armstrong-Hall, Judy Gail

    The purpose of this study was to apply the Hunter-Gatherer Theory of sex spatial skills to responses to individual questions by eighth grade students on the Science component of the Michigan Educational Assessment Program (MEAP) to determine if sex bias was inherent in the test. The Hunter-Gatherer Theory on Spatial Sex Differences, an original theory, that suggested a spatial dimorphism concept with female spatial skill of pattern recall of unconnected items and male spatial skills requiring mental movement. This is the first attempt to apply the Hunter-Gatherer Theory on Spatial Sex Differences to a standardized test. An overall hypothesis suggested that the Hunter-Gatherer Theory of Spatial Sex Differences could predict that males would perform better on problems involving mental movement and females would do better on problems involving the pattern recall of unconnected items. Responses to questions on the 1994-95 MEAP requiring the use of male spatial skills and female spatial skills were analyzed for 5,155 eighth grade students. A panel composed of five educators and a theory developer determined which test items involved the use of male and female spatial skills. A MANOVA, using a random sample of 20% of the 5,155 students to compare male and female correct scores, was statistically significant, with males having higher scores on male spatial skills items and females having higher scores on female spatial skills items. Pearson product moment correlation analyses produced a positive correlation for both male and female performance on both types of spatial skills. The Hunter-Gatherer Theory of Spatial Sex Differences appears to be able to predict that males could perform better on the problems involving mental movement and females could perform better on problems involving the pattern recall of unconnected items. Recommendations for further research included: examination of male/female spatial skill differences at early elementary and high school levels to determine impact of gender on difficulties in solving spatial problems; investigation of the relationship between dominant female spatial skills for students diagnosed with ADHD; study effects of teaching male spatial skills to female students starting in early elementary school to determine the effect on standardized testing.

  14. Adaptive functioning in children with epilepsy and learning problems.

    PubMed

    Buelow, Janice M; Perkins, Susan M; Johnson, Cynthia S; Byars, Anna W; Fastenau, Philip S; Dunn, David W; Austin, Joan K

    2012-10-01

    In the study we describe adaptive functioning in children with epilepsy whose primary caregivers identified them as having learning problems. This was a cross-sectional study of 50 children with epilepsy and learning problems. Caregivers supplied information regarding the child's adaptive functioning and behavior problems. Children rated their self-concept and completed a battery of neuropsychological tests. Mean estimated IQ (PPVT-III) in the participant children was 72.8 (SD = 18.3). On average, children scored 2 standard deviations below the norm on the Vineland Adaptive Behavior Scale-II and this was true even for children with epilepsy who had estimated IQ in the normal range. In conclusion, children with epilepsy and learning problems had relatively low adaptive functioning scores and substantial neuropsychological and mental health problems. In epilepsy, adaptive behavior screening can be very informative and guide further evaluation and intervention, even in those children whose IQ is in the normal range.

  15. ACCESS, Absolute Color Calibration Experiment for Standard Stars: Integration, Test, and Ground Performance

    NASA Astrophysics Data System (ADS)

    Kaiser, Mary Elizabeth; Morris, Matthew; Aldoroty, Lauren; Kurucz, Robert; McCandliss, Stephan; Rauscher, Bernard; Kimble, Randy; Kruk, Jeffrey; Wright, Edward L.; Feldman, Paul; Riess, Adam; Gardner, Jonathon; Bohlin, Ralph; Deustua, Susana; Dixon, Van; Sahnow, David J.; Perlmutter, Saul

    2018-01-01

    Establishing improved spectrophotometric standards is important for a broad range of missions and is relevant to many astrophysical problems. Systematic errors associated with astrophysical data used to probe fundamental astrophysical questions, such as SNeIa observations used to constrain dark energy theories, now exceed the statistical errors associated with merged databases of these measurements. ACCESS, “Absolute Color Calibration Experiment for Standard Stars”, is a series of rocket-borne sub-orbital missions and ground-based experiments designed to enable improvements in the precision of the astrophysical flux scale through the transfer of absolute laboratory detector standards from the National Institute of Standards and Technology (NIST) to a network of stellar standards with a calibration accuracy of 1% and a spectral resolving power of 500 across the 0.35‑1.7μm bandpass. To achieve this goal ACCESS (1) observes HST/ Calspec stars (2) above the atmosphere to eliminate telluric spectral contaminants (e.g. OH) (3) using a single optical path and (HgCdTe) detector (4) that is calibrated to NIST laboratory standards and (5) monitored on the ground and in-flight using a on-board calibration monitor. The observations are (6) cross-checked and extended through the generation of stellar atmosphere models for the targets. The ACCESS telescope and spectrograph have been designed, fabricated, and integrated. Subsystems have been tested. Performance results for subsystems, operations testing, and the integrated spectrograph will be presented. NASA sounding rocket grant NNX17AC83G supports this work.

  16. The results of STEM education methods for enhancing critical thinking and problem solving skill in physics the 10th grade level

    NASA Astrophysics Data System (ADS)

    Soros, P.; Ponkham, K.; Ekkapim, S.

    2018-01-01

    This research aimed to: 1) compare critical thinking and problem-solving skills before and after learning with a STEM Education plan, 2) compare student achievement on force and the laws of motion before and after learning with the STEM Education plan, and 3) assess satisfaction with learning through STEM Education. The sample was 37 grade 10 students at Borabu School, Borabu District, Mahasarakham Province, in the second semester of academic year 2016. The tools used in this study consisted of: 1) a STEM Education plan on force and the laws of motion for grade 10 students (one scheme, 14 hours in total), 2) a test of critical thinking and problem-solving skills (30 multiple-choice items with 5 and 2 options), 3) an achievement test on force and the laws of motion (30 multiple-choice items with 4 options), and 4) a learning-satisfaction questionnaire (20 items on a 5-point rating scale). The statistics used in the data analysis were percentage, mean, standard deviation, and the dependent t-test. The results showed that: 1) students taught with the STEM Education plan scored higher on critical thinking and problem-solving skills on the post-test than on the pre-test, statistically significant at the .01 level; 2) students taught with the STEM Education plan had higher achievement scores on the post-test than on the pre-test, statistically significant at the .01 level; and 3) the students' level of satisfaction with learning through the STEM Education plan was high (x̄ = 4.51, S.D. = 0.56).
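    The dependent (paired) t-test used in the analysis above can be illustrated as in the sketch below, with invented pre-test and post-test scores.

```python
# A minimal sketch: paired (dependent) t-test on hypothetical pre/post scores.
import numpy as np
from scipy import stats

pre  = np.array([12, 15, 9, 14, 11, 13, 10, 16, 12, 14], dtype=float)
post = np.array([18, 19, 14, 20, 15, 18, 13, 22, 17, 19], dtype=float)

t_stat, p_value = stats.ttest_rel(post, pre)
print(f"mean gain = {np.mean(post - pre):.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```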

  17. Heuristic Algorithms for Solving Two Dimensional Loading Problems.

    DTIC Science & Technology

    1981-03-01

    Consider the following problem: allocate a set of "N" boxes, each having a specified length, width, and height, to a pallet of length "L" and width "W" … the boxes and then select the best solution. Since these heuristics are essentially a trial-and-error procedure, their formulas become very…

  18. The IEEE GRSS Standardized Remote Sensing Data Website: A Step Towards "Science 2.0" in Remote Sensing

    NASA Astrophysics Data System (ADS)

    Dell'Acqua, Fabio; Iannelli, Gianni Cristian; Kerekes, John; Lisini, Gianni; Moser, Gabriele; Ricardi, Niccolo; Pierce, Leland

    2016-08-01

    The issue of homogeneity in performance assessment of proposed algorithms for information extraction is generally perceived also in the Earth Observation (EO) domain. Different authors propose different datasets to test their developed algorithms and to the reader it is frequently difficult to assess which is better for his/her specific application, given the wide variability in test sets that makes pure comparison of e.g. accuracy values less meaningful than one would desire. With our work, we gave a modest contribution to ease the problem by making it possible to automatically distribute a limited set of possible "standard" open datasets, together with some ground truth info, and automatically assess processing results provided by the users.

  19. Testing the internal consistency of the standard gamble in 'success' and 'failure' frames.

    PubMed

    Oliver, Adam

    2004-06-01

    Decision making behaviour has often been shown to vary following changes in the way in which choice problems are described (or 'framed'). Moreover, a number of researchers have demonstrated that the standard gamble is prone to internal inconsistency, and loss aversion has been proposed as an explanation for this observed bias. This study attempts to alter the influence of loss aversion by framing the treatment arm of the standard gamble in terms of success (where we may expect the influence of loss aversion to be relatively weak) and in terms of failure (where we may expect the influence of loss aversion to be relatively strong). The objectives of the study are (1) to test whether standard gamble values vary when structurally identical gambles are differentially framed, and (2) to test whether the standard gamble is equally prone to internal inconsistency across the two frames. The results show that compared to framing in terms of treatment success, significantly higher values were inferred when the gamble was framed in terms of treatment failure. However, there was no difference in the quite marked levels of internal inconsistency observed in both frames. It is possible that the essential construct of the standard gamble induces substantial and/or widespread loss aversion irrespective of the way in which the gamble is framed, which offers a fundamental challenge to the usefulness of this value elicitation instrument. It is therefore recommended that further tests are undertaken on more sophisticated corrective procedures designed to limit the influence of loss aversion.

  20. Abbreviated Combined MR Protocol: A New Faster Strategy for Characterizing Breast Lesions.

    PubMed

    Moschetta, Marco; Telegrafo, Michele; Rella, Leonarda; Stabile Ianora, Amato Antonio; Angelelli, Giuseppe

    2016-06-01

    The use of an abbreviated magnetic resonance (MR) protocol has been recently proposed for cancer screening. The aim of our study is to evaluate the diagnostic accuracy of an abbreviated MR protocol combining short TI inversion recovery (STIR), turbo-spin-echo (TSE)-T2 sequences, a pre-contrast T1, and a single intermediate (3 minutes after contrast injection) post-contrast T1 sequence for characterizing breast lesions. A total of 470 patients underwent breast MR examination for screening, problem solving, or preoperative staging. Two experienced radiologists evaluated both standard and abbreviated protocols in consensus. Sensitivity, specificity, positive predictive value (PPV), negative predictive value (NPV), and diagnostic accuracy for both protocols were calculated (with the histological findings and 6-month ultrasound follow-up as the reference standard) and compared with the McNemar test. The post-processing and interpretation times for the MR images were compared with the paired t test. In 177 of 470 (38%) patients, the MR sequences detected 185 breast lesions. Standard and abbreviated protocols obtained sensitivity, specificity, diagnostic accuracy, PPV, and NPV values respectively of 92%, 92%, 92%, 68%, and 98% and of 89%, 91%, 91%, 64%, and 98% with no statistically significant difference (P < .0001). The mean post-processing and interpretation time were, respectively, 7 ± 1 minutes and 6 ± 3.2 minutes for the standard protocol and 1 ± 1.2 minutes and 2 ± 1.2 minutes for the abbreviated protocol, with a statistically significant difference (P < .01). An abbreviated combined MR protocol represents a time-saving tool for radiologists and patients with the same diagnostic potential as the standard protocol in patients undergoing breast MRI for screening, problem solving, or preoperative staging. Copyright © 2016 Elsevier Inc. All rights reserved.
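    The accuracy measures and the McNemar comparison reported above can be illustrated as in the sketch below; the counts are invented (chosen only to land near the reported sensitivity and specificity) and do not reproduce the study data.

```python
# A minimal sketch: diagnostic accuracy measures from a confusion matrix, plus
# a McNemar test (with continuity correction) on discordant paired calls.
from scipy.stats import chi2

def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

for name, counts in [("standard", (46, 22, 4, 250)), ("abbreviated", (44, 25, 6, 247))]:
    metrics = diagnostic_metrics(*counts)                 # hypothetical counts
    print(name, {k: round(v, 2) for k, v in metrics.items()})

# McNemar test on paired correct/incorrect calls of the two protocols:
# b = standard correct & abbreviated wrong, c = standard wrong & abbreviated correct.
b, c = 7, 4                                               # hypothetical discordant pairs
statistic = (abs(b - c) - 1) ** 2 / (b + c)               # continuity-corrected chi-square
p_value = chi2.sf(statistic, df=1)
print(f"McNemar chi2 = {statistic:.2f}, p = {p_value:.3f}")
```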

  1. A hybrid model for combining case-control and cohort studies in systematic reviews of diagnostic tests

    PubMed Central

    Chen, Yong; Liu, Yulun; Ning, Jing; Cormier, Janice; Chu, Haitao

    2014-01-01

    Systematic reviews of diagnostic tests often involve a mixture of case-control and cohort studies. The standard methods for evaluating diagnostic accuracy only focus on sensitivity and specificity and ignore the information on disease prevalence contained in cohort studies. Consequently, such methods cannot provide estimates of measures related to disease prevalence, such as population averaged or overall positive and negative predictive values, which reflect the clinical utility of a diagnostic test. In this paper, we propose a hybrid approach that jointly models the disease prevalence along with the diagnostic test sensitivity and specificity in cohort studies, and the sensitivity and specificity in case-control studies. In order to overcome the potential computational difficulties in the standard full likelihood inference of the proposed hybrid model, we propose an alternative inference procedure based on the composite likelihood. Such composite likelihood based inference does not suffer computational problems and maintains high relative efficiency. In addition, it is more robust to model mis-specifications compared to the standard full likelihood inference. We apply our approach to a review of the performance of contemporary diagnostic imaging modalities for detecting metastases in patients with melanoma. PMID:25897179

  2. The continuing problem of missed test results in an integrated health system with an advanced electronic medical record.

    PubMed

    Wahls, Terry; Haugen, Thomas; Cram, Peter

    2007-08-01

    Missed results can cause needless treatment delays. However, there is little data about the magnitude of this problem and the systems that clinics use to manage test results. Surveys about potential problems related to test results management were developed and administered to clinical staff in a regional Veterans Administration (VA) health care network. The provider survey, conducted four times between May 2005 and October 2006, sampling VA staff physicians, physician assistants, nurse practitioners, and internal medicine trainees, asked questions about the frequency of missed results and diagnosis or treatment delays seen in the antecedent two weeks in their clinics, or if a trainee, the antecedent month. Clinical staff survey response rate was 39% (143 of 370), with 40% using standard operating procedures to manage test results. Forty-four percent routinely reported all results to patients. The provider survey response rate was 50% (441 of 884) overall, with responses often (37% overall; range 29% to 46%) indicating they had seen patients with diagnosis or treatment delays attributed to a missed result; 15% reported two or more such encounters. Even in an integrated health system with an advanced electronic medical record, missed test results and associated diagnosis or treatment delays are common. Additional study and measures of missed results and associated treatment delays are needed.

  3. Competency to consent to research: a psychiatric overview.

    PubMed

    Appelbaum, P S; Roth, L H

    1982-08-01

    The requirement that a subject be competent as a condition of valid consent to participate in research has been accepted by most students of legal and ethical problems of human experimentation. "Competency," however, has lacked a clear and generally agreed-upon standard. There are four commonly used standards for competency: evidencing a choice in regard to research participation, factual understanding of the issues, rational manipulation of information, and appreciation of the nature of the situation. These standards can be arranged hierarchically such that each represents a stricter test of competency. The decision as to how rigorous a standard for competency is desirable cannot be made on psychiatric grounds. It requires consideration of the policy goals one hopes to attain. Empirical research helps demonstrate the consequences of choosing a particular standard but cannot replace the need for achieving consensus on policy goals.

  4. Detecting effects of the indicated prevention Programme for Externalizing Problem behaviour (PEP) on child symptoms, parenting, and parental quality of life in a randomized controlled trial.

    PubMed

    Hanisch, Charlotte; Freund-Braier, Inez; Hautmann, Christopher; Jänen, Nicola; Plück, Julia; Brix, Gabriele; Eichelberger, Ilka; Döpfner, Manfred

    2010-01-01

    Behavioural parent training is effective in improving child disruptive behavioural problems in preschool children by increasing parenting competence. The indicated Prevention Programme for Externalizing Problem behaviour (PEP) is a group training programme for parents and kindergarten teachers of children aged 3-6 years with externalizing behavioural problems. The aim was to evaluate the effects of PEP on child problem behaviour, parenting practices, parent-child interactions, and parental quality of life. Parents and kindergarten teachers of 155 children were randomly assigned to an intervention group (n = 91) and a nontreated control group (n = 64). They rated children's problem behaviour before and after PEP training; parents also reported on their parenting practices and quality of life. Standardized play situations were videotaped and rated for parent-child interactions, e.g. parental warmth. In the intention-to-treat analysis, mothers of the intervention group described less disruptive child behaviour and better parenting strategies, and showed more parental warmth during a standardized parent-child interaction. Dosage analyses confirmed these results for parents who attended at least five training sessions. Kindergarten teachers also rated the children as showing fewer behaviour problems. Training effects were especially positive for parents who attended at least half of the training sessions. CBCL: Child Behaviour Checklist; CII: Coder Impressions Inventory; DASS: Depression Anxiety Stress Scale; HSQ: Home-situation Questionnaire; LSS: Life Satisfaction Scale; OBDT: observed behaviour during the test; PCL: Problem Checklist; PEP: prevention programme for externalizing problem behaviour; PPC: Parent Problem Checklist; PPS: Parent Practices Scale; PS: Parenting Scale; PSBC: Problem Setting and Behaviour checklist; QJPS: Questionnaire on Judging Parental Strains; SEFS: Self-Efficacy Scale; SSC: Social Support Scale; TRF: Caregiver-Teacher Report Form.

  5. DAIDALUS Observations From UAS Integration in the NAS Project Flight Test 4

    NASA Technical Reports Server (NTRS)

    Vincent, Michael J.; Tsakpinis, Dimitrios

    2016-01-01

    In order to validate the Unmanned Aerial System (UAS) Detect-and-Avoid (DAA) solution proposed by standards body RTCA Inc., the National Aeronautics and Space Administration (NASA) UAS Integration in the NAS project, alongside industry members General Atomics and Honeywell, conducted the fourth flight test in a series at Armstrong Flight Research Center in Edwards, California. Flight Test 4 (FT4) investigated problems of interoperability between the TCAS collision avoidance system and a DAA system, as well as problems associated with sensor uncertainty. A series of scripted flight encounters between the NASA Ikhana UAS and various "intruder" aircraft were flown while alerting and guidance from the DAA algorithm were recorded to investigate the timeliness of the alerts and the correctness of the guidance triggered by the DAA system. The results found that alerts were triggered in a timely manner in most instances. Cases where the alerting and guidance were incorrect were investigated further.

  6. Effects of topiramate on language functions in newly diagnosed pediatric epileptic patients.

    PubMed

    Kim, Sun Jun; Kim, Moon Yeon; Choi, Yoon Mi; Song, Mi Kyoung

    2014-09-01

    The aim of this study was to characterize the effects of topiramate on language functions in newly diagnosed pediatric epileptic patients. Thirty-eight newly diagnosed epileptic patients were assessed using standard language tests. Data were collected before and after beginning topiramate, during which time a monotherapy treatment regimen was maintained. Language tests included the Test of Language Problem Solving Abilities and a Korean version of the Peabody Picture Vocabulary Test. Korean versions of the tests were used because all the patients spoke Korean exclusively in their families. All the language parameters of the Test of Language Problem Solving Abilities worsened after initiation of topiramate (determine cause, 13.2 ± 4.8 to 11.2 ± 4.3; problem solving, 14.8 ± 6.0 to 12.8 ± 5.0; predicting, 9.8 ± 3.6 to 8.8 ± 4.6). Patients given topiramate exhibited a shortened mean length of utterance in words during response (determine cause, 4.8 ± 0.9 to 4.3 ± 0.7; making inference, 4.5 ± 0.8 to 4.1 ± 1.1; predicting, 5.2 ± 1.0 to 4.7 ± 0.6; P < 0.05), provided ambiguous answers during the testing, exhibited difficulty in selecting appropriate words, took more time to provide answers, and used incorrect grammar. However, there were no statistically significant changes in the receptive language of patients after taking topiramate (95.4 ± 20.4 to 100.8 ± 19.1). Our data suggest that topiramate may have negative effects on problem-solving abilities in children. We recommend that language testing be considered in children being treated with topiramate. Copyright © 2014 Elsevier Inc. All rights reserved.

  7. SURVEY OF TOXICITY IN AMBIENT WATERS OF THE HUDSON/RARITAN ESTUARY: IMPORTANCE OF SMALL-SCALE VARIATIONS

    EPA Science Inventory

    This study was part of a characterization of the nature and severity of water-quality problems in the Hudson/Raritan Estuary in New York and New Jersey, USA. The toxicity of ambient water was measured at 51 stations in the estuary by using standard tests with the sea urchin Arbac...

  8. The Impact of the Measures of Academic Progress (MAP) Program on Student Reading Achievement. Final Report. NCEE 2013-4000

    ERIC Educational Resources Information Center

    Cordray, David; Pion, Georgine; Brandt, Chris; Molefe, Ayrin; Toby, Megan

    2012-01-01

    During the past decade, the use of standardized benchmark measures to differentiate and individualize instruction for students received renewed attention from educators. Although teachers may use their own assessments (tests, quizzes, homework, problem sets) for monitoring learning, it is challenging for them to equate performance on classroom…

  9. Convergence of Educational Attainment Levels in the OECD: More Data, More Problems?

    ERIC Educational Resources Information Center

    Crespo Cuaresma, Jesus

    2006-01-01

    This note shows that the dynamics of the dispersion of educational attainment across OECD countries in the period 1960-1990 differ enormously depending on the dataset used, as do the results of the test of significance in the change of the cross-country standard deviation of schooling years between subperiods. The three datasets studied (the…

  10. Teachers' Perceptions in Developing Robust Vocabulary Instruction at an American International School

    ERIC Educational Resources Information Center

    Lee, Cathleen S. M.

    2017-01-01

    At an international school in Taiwan, English learners have struggled to meet the U.S. national average in vocabulary on standardized testing instruments. This problem has become more significant since 2009. The purpose of this research was to conduct a case study on successful vocabulary teachers to determine their perceptions of effective…

  11. Developing and Planning a Texas Based Homeschool Curriculum

    ERIC Educational Resources Information Center

    Terry, Bobby K.

    2011-01-01

    Texas has some of the lowest SAT scores in the nation. It ranks 36th nationwide in graduation rates, and its teacher salaries rank 33rd. The public school system in Texas has problems with overcrowding, violence, and poor performance on standardized testing. Currently 300,000 families have opted out of the public school system in order…

  12. Use of Standardized Test Scores to Predict Success in a Computer Applications Course

    ERIC Educational Resources Information Center

    Harris, Robert V.

    2014-01-01

    In this educational study, the research problem was that each semester a variable number of community college students are unable to complete an introductory computer applications course at a community college in the state of Mississippi with a successful course letter grade. Course failure, or non-success, at the collegiate level is a negative…

  13. Grade 11 Students' Interconnected Use of Conceptual Knowledge, Procedural Skills, and Strategic Competence in Algebra: A Mixed Method Study of Error Analysis

    ERIC Educational Resources Information Center

    Egodawatte, Gunawardena; Stoilescu, Dorian

    2015-01-01

    The purpose of this mixed-method study was to investigate grade 11 university/college stream mathematics students' difficulties in applying conceptual knowledge, procedural skills, strategic competence, and algebraic thinking in solving routine (instructional) algebraic problems. A standardized algebra test was administered to thirty randomly…

  14. Youth Engagement in High Schools: Developing a Multidimensional, Critical Approach to Improving Engagement for All Students

    ERIC Educational Resources Information Center

    Yonezawa, Susan; Jones, Makeba; Joselowksy, Francine

    2009-01-01

    Addressing the problem of youth disengagement from school is of paramount importance to the improvement of academic outcomes. Unfortunately, today's climate of accountability under the federal No Child Left Behind Act (NCLB)--with its focus on data from standardized tests--has created a policy environment that makes it exceedingly difficult to…

  15. Promoting Cultural Awareness and the Acceptance of Diversity through the Implementation of Cross-Cultural Activities.

    ERIC Educational Resources Information Center

    Keime, Susan; Landes, Melissa; Rickertsen, Gwenn; Wescott, Nicol

    An action research project implemented a program for developing tolerance through increased cultural awareness. Targeted population consisted of third grade and high school students in a rural, middle class community in western Illinois. The problem of lack of cultural awareness was documented through standardized test scores and student and…

  16. Outcomes Linked to High-Quality Afterschool Programs: Longitudinal Findings from the Study of Promising Afterschool Programs

    ERIC Educational Resources Information Center

    Vandell, Deborah Lowe; Reisner, Elizabeth R.; Pierce, Kim M.

    2007-01-01

    This study by researchers at the University of California, Irvine, the University of Wisconsin-Madison and Policy Studies Associates, Inc. finds that regular participation in high-quality afterschool programs is linked to significant gains in standardized test scores and work habits as well as reductions in behavior problems among disadvantaged…

  17. The Impact of Question-Answer Relationships on Thai Reading Comprehension of Economically Disadvantaged Students: A Mixed Methods Study in Thailand

    ERIC Educational Resources Information Center

    Mongkolrat, Raveema

    2017-01-01

    Thailand's education system has not succeeded in meeting the Ministry of Education's goals for the Thai language. The problem manifests in students' substandard Thai reading comprehension. Results of Thailand's standardized national test showed that students, especially those with economic disadvantages, have performed poorly in Thai reading…

  18. Drums and Poems: An Intervention Promoting Empathic Connection and Literacy in Children

    ERIC Educational Resources Information Center

    Sassen, Georgia

    2012-01-01

    Expressive therapies can be used with groups of children to increase empathy and reduce bullying and violence. When educators feel pressured to focus on standardized tests and basic skills, there is little attention and time for such programs. Drums and Poems is an intervention that counselors and teachers can use to address these problems by…

  19. An Initial Needs Assessment of Science Inquiry Curriculum Practices at a Local Level

    ERIC Educational Resources Information Center

    Cottingham, Susan M.

    2010-01-01

    Frequently, students learn in science classes taught like traditional reading courses in which reading texts and answering questions is the main activity. The problem at one southern middle school is that students are not developing an understanding of science concepts and are doing poorly on standardized testing. Students are seldom given the…

  20. After-School Tutoring for Reading Achievement and Urban Middle School Students

    ERIC Educational Resources Information Center

    Nelson-Royes, Andrea M.; Reglin, Gary L.

    2011-01-01

    This research study's purpose or theme was to qualitatively investigate the reading component of a private after-school tutoring program that offered academic assistance to eighth-grade students. The problem with reading is many urban middle school students have poor reading skills and do not perform well on reading standardized tests. Relative to…

  1. Language Deficits in Children with Psychiatric Disorders: Educational Implications.

    ERIC Educational Resources Information Center

    Lutz, Margaret M.

    The study was designed to determine if there was an identifiable pattern of language deficit in 10 children (6 to 12 years old) undergoing treatment in a hospital inpatient psychiatry ward for behavioral and emotional problems. Ss were administered the standard hearing and speech clinic test battery which included the Binet Memory for Sentences,…

  2. The Same or Not the Same: Equivalence as an Issue in Educational Research

    NASA Astrophysics Data System (ADS)

    Lewis, Scott E.; Lewis, Jennifer E.

    2005-09-01

    In educational research, particularly in the sciences, a common research design calls for the establishment of a control and experimental group to determine the effectiveness of an intervention. As part of this design, it is often desirable to illustrate that the two groups were equivalent at the start of the intervention, based on measures such as standardized cognitive tests or student grades in prior courses. In this article we use SAT and ACT scores to illustrate a more robust way of testing equivalence. The method incorporates two one-sided t tests evaluating two null hypotheses, providing a stronger claim for equivalence than the standard method, which often does not address the possible problem of low statistical power. The two null hypotheses are based on the construction of an equivalence interval particular to the data, so the article also provides a rationale for and illustration of a procedure for constructing equivalence intervals. Our consideration of equivalence using this method also underscores the need to include sample sizes, standard deviations, and group means in published quantitative studies.
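
    The two one-sided t tests (TOST) described above can be written down directly. The sketch below is a minimal illustration on simulated scores; the ±20-point equivalence margin and the simple degrees-of-freedom approximation are assumptions made only for the example, since the article argues the interval should be constructed from the data at hand.

      # Minimal TOST sketch on simulated data; margin and df approximation are illustrative.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(0)
      control = rng.normal(1050, 100, 80)       # hypothetical SAT-like scores
      experimental = rng.normal(1045, 100, 75)

      margin = 20.0                              # equivalence interval: (-margin, +margin)
      diff = experimental.mean() - control.mean()
      se = np.sqrt(experimental.var(ddof=1) / len(experimental)
                   + control.var(ddof=1) / len(control))
      df = len(experimental) + len(control) - 2  # simple approximation

      # H0_1: diff <= -margin  vs  H1_1: diff > -margin
      p_lower = stats.t.sf((diff + margin) / se, df)
      # H0_2: diff >= +margin  vs  H1_2: diff < +margin
      p_upper = stats.t.cdf((diff - margin) / se, df)

      p_tost = max(p_lower, p_upper)             # equivalence is claimed if both are < alpha
      print(f"difference = {diff:.1f}, TOST p-value = {p_tost:.3f}")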

  3. Testing the Big Bang: Light elements, neutrinos, dark matter and large-scale structure

    NASA Technical Reports Server (NTRS)

    Schramm, David N.

    1991-01-01

    Several experimental and observational tests of the standard cosmological model are examined. In particular, a detailed discussion is presented regarding: (1) nucleosynthesis, the light element abundances, and neutrino counting; (2) the dark matter problems; and (3) the formation of galaxies and large-scale structure. Comments are made on the possible implications of the recent solar neutrino experimental results for cosmology. An appendix briefly discusses the 17 keV neutrino and the cosmological and astrophysical constraints on it.

  4. Prototyping and Characterization of an Adjustable Skew Angle Single Gimbal Control Moment Gyroscope

    DTIC Science & Technology

    2015-03-01

    performance, and an analysis of the test results is provided. In addition to the standard battery of CMG performance tests that were planned, a... objectives for this new CMG is to provide comparable performance to the Andrews CMGs, the values in Table 1 will be used for output torque comparison... essentially fixed at 53.4°. This specific skew angle value is not the problem, as this is one commonly used CMG skew angle for satellite systems. The real

  5. An Interferometry Imaging Beauty Contest

    NASA Technical Reports Server (NTRS)

    Lawson, Peter R.; Cotton, William D.; Hummel, Christian A.; Monnier, John D.; Zhaod, Ming; Young, John S.; Thorsteinsson, Hrobjartur; Meimon, Serge C.; Mugnier, Laurent; LeBesnerais, Guy; hide

    2004-01-01

    We present a formal comparison of the performance of algorithms used for synthesis imaging with optical/infrared long-baseline interferometers. Six different algorithms are evaluated based on their performance with simulated test data. Each set of test data is formatted in the Interferometry Data Exchange Standard and is designed to simulate a specific problem relevant to long-baseline imaging. The data are calibrated power spectra and bispectra measured with a fictitious array, intended to be typical of existing imaging interferometers. The strengths and limitations of each algorithm are discussed.

  6. Standard Procedures for Air Force Operational Test and Evaluation. Volume I.

    DTIC Science & Technology

    1974-10-01

    of the foregoing, and assuming the successful advocacy of the proposed OT&E a Test Directive is drafted by AFTEC, reviewed by HQ USAF, and upon... design purposes. A successful approach to this problem used by both Air Force and Army OT&E agencies is to conduct an analysis of the operational mission... conduct an analysis of the mission structure involved in the OT&E. This constitutes a top level structure of the scenario with identification of the players

  7. Experimental investigations on airborne gravimetry based on compressed sensing.

    PubMed

    Yang, Yapeng; Wu, Meiping; Wang, Jinling; Zhang, Kaidong; Cao, Juliang; Cai, Shaokun

    2014-03-18

    Gravity surveys are an important research topic in geophysics and geodynamics. This paper investigates a method for high accuracy large scale gravity anomaly data reconstruction. Based on the airborne gravimetry technology, a flight test was carried out in China with the strap-down airborne gravimeter (SGA-WZ) developed by the Laboratory of Inertial Technology of the National University of Defense Technology. Taking into account the sparsity of airborne gravimetry by the discrete Fourier transform (DFT), this paper proposes a method for gravity anomaly data reconstruction using the theory of compressed sensing (CS). The gravity anomaly data reconstruction is an ill-posed inverse problem, which can be transformed into a sparse optimization problem. This paper uses the zero-norm as the objective function and presents a greedy algorithm called Orthogonal Matching Pursuit (OMP) to solve the corresponding minimization problem. The test results have revealed that the compressed sampling rate is approximately 14%, the standard deviation of the reconstruction error by OMP is 0.03 mGal and the signal-to-noise ratio (SNR) is 56.48 dB. In contrast, the standard deviation of the reconstruction error by the existing nearest-interpolation method (NIPM) is 0.15 mGal and the SNR is 42.29 dB. These results have shown that the OMP algorithm can reconstruct the gravity anomaly data with higher accuracy and fewer measurements.
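
    As an illustration of the reconstruction step (not the authors' code), the sketch below recovers a signal that is sparse in a frequency basis from roughly 14% of its samples using Orthogonal Matching Pursuit. A real DCT basis stands in for the paper's DFT, and the signal length, sampling rate, and sparsity level are all illustrative assumptions.

      # Minimal compressed-sensing sketch with OMP; sizes and sparsity are illustrative.
      import numpy as np
      from scipy.fft import idct
      from sklearn.linear_model import OrthogonalMatchingPursuit

      n = 512
      psi = idct(np.eye(n), norm="ortho", axis=0)      # columns = DCT basis vectors

      coef = np.zeros(n)
      coef[[5, 40, 120]] = [3.0, -2.0, 1.5]            # 3-sparse spectrum (hypothetical)
      signal = psi @ coef

      rng = np.random.default_rng(1)
      m = int(0.14 * n)                                # ~14% compressed sampling rate
      rows = rng.choice(n, size=m, replace=False)
      y = signal[rows]                                 # compressed measurements
      A = psi[rows, :]                                 # sensing matrix = sampled basis rows

      omp = OrthogonalMatchingPursuit(n_nonzero_coefs=3, fit_intercept=False).fit(A, y)
      recon = psi @ omp.coef_

      err = recon - signal
      print("reconstruction error std-dev:", err.std(),
            "SNR (dB):", 10 * np.log10(np.sum(signal**2) / np.sum(err**2)))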

  8. Experimental Investigations on Airborne Gravimetry Based on Compressed Sensing

    PubMed Central

    Yang, Yapeng; Wu, Meiping; Wang, Jinling; Zhang, Kaidong; Cao, Juliang; Cai, Shaokun

    2014-01-01

    Gravity surveys are an important research topic in geophysics and geodynamics. This paper investigates a method for high accuracy large scale gravity anomaly data reconstruction. Based on the airborne gravimetry technology, a flight test was carried out in China with the strap-down airborne gravimeter (SGA-WZ) developed by the Laboratory of Inertial Technology of the National University of Defense Technology. Taking into account the sparsity of airborne gravimetry by the discrete Fourier transform (DFT), this paper proposes a method for gravity anomaly data reconstruction using the theory of compressed sensing (CS). The gravity anomaly data reconstruction is an ill-posed inverse problem, which can be transformed into a sparse optimization problem. This paper uses the zero-norm as the objective function and presents a greedy algorithm called Orthogonal Matching Pursuit (OMP) to solve the corresponding minimization problem. The test results have revealed that the compressed sampling rate is approximately 14%, the standard deviation of the reconstruction error by OMP is 0.03 mGal and the signal-to-noise ratio (SNR) is 56.48 dB. In contrast, the standard deviation of the reconstruction error by the existing nearest-interpolation method (NIPM) is 0.15 mGal and the SNR is 42.29 dB. These results have shown that the OMP algorithm can reconstruct the gravity anomaly data with higher accuracy and fewer measurements. PMID:24647125

  9. A comparison of university student and community gamblers: Motivations, impulsivity, and gambling cognitions

    PubMed Central

    Marmurek, Harvey H. C.; Switzer, Jessica; D’Alvise, Joshua

    2014-01-01

    Background and aims: The present study tested whether the associations among motivational, cognitive, and personality correlates of problem gambling severity differed across university student gamblers (n = 123) and gamblers in the general adult community (n = 113). Methods: The participants completed a survey that included standardized measures of gambling motivation, gambling related cognitions, and impulsivity. The survey also asked participants to report the forms of gambling in which they engaged to test whether gambling involvement (number of different forms of gambling) was related to problem gambling severity. After completing the survey, participants played roulette online to examine whether betting patterns adhered to the gambler’s fallacy. Results: Gambling involvement was significantly related to problem gambling severity for the community sample but not for the student sample. A logistic regression analysis that tested the involvement, motivation, impulsivity and cognitive correlates showed that money motivation and gambling related cognitions were the only significant independent predictors of gambling severity. Adherence to the gambler’s fallacy was stronger for students than for the community sample, and was associated with gambling related cognitions. Discussion: The motivational, impulsivity, and cognitive correlates of problem gambling function similarly in university student gamblers and in gamblers from the general adult community. Interventions for both groups should focus on the financial and cognitive supports of problem gambling. PMID:25215214

  10. Instabilities of geared couplings: Theory and practice

    NASA Technical Reports Server (NTRS)

    Kirk, R. G.; Mondy, R. E.; Murphy, R. C.

    1982-01-01

    The use of couplings for high speed turbocompressors or pumps is essential to transmit power from the driver. Typical couplings are either of the lubricated gear or dry diaphragm type design. Gear couplings have been the standard design for many years and recent advances in power and speed requirements have pushed the standard design criteria to the limit. Recent test stand and field data on continuous lube gear type couplings have forced a closer examination of design tolerances and concepts to avoid operational instabilities. Two types of mechanical instabilities are reviewed in this paper: (1) entrapped fluid, and (2) gear mesh instability resulting in spacer throw-out onset. Test stand results of these types of instabilities and other directly related problems are presented together with criteria for proper coupling design to avoid these conditions. An additional test case discussed shows the importance of proper material selection and processing and what can happen to an otherwise good design.

  11. Car seat safety: literature review.

    PubMed

    Lincoln, Michelle

    2005-01-01

    After staggering numbers of infants were killed in automotive crashes in the 1970s, the American Academy of Pediatrics (AAP) recommended in 1974 universal use of car seats for all infants. However, positional problems were reported when car seats were used with premature infants less than 37 weeks gestational age, as a result of head slouching and its sequelae. In 1990, the AAP responded with another policy statement introducing car seat testing. It recommended that any infant at or under 37 weeks gestational age be observed in a car seat prior to discharge from the hospital. The AAP did not give specific guidelines on type of car seat, length of testing, equipment, or personnel proficiency, however. Few nurseries have standard policies to evaluate car seats, to teach parents about car seats, or to position newborns in them, and not all hospitals actually conduct car seat challenges or have common standards for the testing that is performed.

  12. VDLLA: A virtual daddy-long legs optimization

    NASA Astrophysics Data System (ADS)

    Yaakub, Abdul Razak; Ghathwan, Khalil I.

    2016-08-01

    Swarm intelligence is a strong optimization approach based on the biological behavior of insects or animals. The success of any optimization algorithm depends on the balance between exploration and exploitation. In this paper, we present a new swarm intelligence algorithm with virtual behavior, based on the daddy-long-legs spider: the Virtual Daddy-Long-Legs Algorithm (VDLLA). In VDLLA, each agent (spider) has nine positions, which represent the legs of the spider, and each position represents one solution. The proposed VDLLA is tested on four standard functions using average fitness, median fitness, and standard deviation. The results of the proposed VDLLA have been compared against Particle Swarm Optimization (PSO), Differential Evolution (DE), and the Bat Inspired Algorithm (BA). Additionally, a t-test has been conducted to show the significance of the differences between the proposed algorithm and the other algorithms. VDLLA showed very promising results on benchmark test functions for unconstrained optimization problems and also significantly improved on the original swarm algorithms.
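
    The evaluation protocol described above (repeated runs on standard benchmark functions summarised by mean, median, and standard deviation of best fitness, followed by a t-test between algorithms) is easy to reproduce in outline. The sketch below only illustrates that protocol, with plain random search standing in for VDLLA, PSO, DE, and BA; it is not an implementation of any of those algorithms, and the dimensions and budgets are assumptions.

      # Illustration of the benchmark evaluation protocol only; random search is a stand-in.
      import numpy as np

      def sphere(x):
          return np.sum(x**2)

      def rastrigin(x):
          return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

      def random_search(f, dim=10, iters=2000, bound=5.12, seed=0):
          rng = np.random.default_rng(seed)
          best = np.inf
          for _ in range(iters):
              best = min(best, f(rng.uniform(-bound, bound, dim)))
          return best

      for name, f in [("sphere", sphere), ("rastrigin", rastrigin)]:
          results = np.array([random_search(f, seed=s) for s in range(30)])
          print(name, results.mean(), np.median(results), results.std())
      # Two algorithms' result arrays could then be compared with scipy.stats.ttest_ind.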

  13. Alternative Shear Panel Configurations for Light Wood Construction. Development, Seismic Performance, and Design Guidance

    NASA Astrophysics Data System (ADS)

    Wilcoski, James; Fischer, Chad; Allison, Tim; Malach, Kelly Jo

    2002-04-01

    Shear panels are used in light wood construction to resist lateral loads resulting from earthquakes or strong winds. These panels are typically made of wooden sheathing nailed to building frame members, but this standard panel design interferes with the installation of sheet insulation. A non-insulated shear panel conducts heat between the building interior and exterior wasting considerable amounts of energy. Several alternative shear panel designs were developed to avoid this insulation-mounting problem and sample panels were tested according to standard cyclic test protocols. One of the alternative designs consisted of diagonal steel straps nailed directly to the structural framing. Several others consisted of sheathing nailed to 2 x 4 framing then set into a larger 2 x 6 structural frame in such a way that no sheathing protruded beyond the edge of the 2 x 6 members. Also samples of industry-standard shear panels were constructed and tested in order to establish a performance baseline. Analytical models were developed to size test panels and predict panel behavior. A procedure was developed for establishing design capacities based on both test data and established baseline panel design capacity. The behavior of each panel configuration is documented and recommended design capacities are presented.

  14. Planning abilities and chess: a comparison of chess and non-chess players on the Tower of London task.

    PubMed

    Unterrainer, J M; Kaller, C P; Halsband, U; Rahm, B

    2006-08-01

    Playing chess requires problem-solving capacities in order to search through the chess problem space in an effective manner. Chess should thus require planning abilities for calculating many moves ahead. Therefore, we asked whether chess players are better problem solvers than non-chess players in a complex planning task. We compared planning performance between chess players (N = 25) and non-chess players (N = 25) using a standard psychometric planning task, the Tower of London (ToL) test. We also assessed fluid intelligence (Raven Test), as well as verbal and visuospatial working memory. As expected, chess players showed better planning performance than non-chess players, an effect most strongly expressed in difficult problems. On the other hand, they showed longer planning and movement execution times, especially for incorrectly solved trials. No differences in fluid intelligence or verbal/visuospatial working memory were found between the two groups. These findings indicate that better performance in chess players is associated with disproportionately longer solution times, although it remains to be investigated whether motivational or strategic differences account for this result.

  15. Comparison of eigensolvers for symmetric band matrices.

    PubMed

    Moldaschl, Michael; Gansterer, Wilfried N

    2014-09-15

    We compare different algorithms for computing eigenvalues and eigenvectors of a symmetric band matrix across a wide range of synthetic test problems. Of particular interest is a comparison of state-of-the-art tridiagonalization-based methods as implemented in Lapack or Plasma on the one hand, and the block divide-and-conquer (BD&C) algorithm as well as the block twisted factorization (BTF) method on the other hand. The BD&C algorithm does not require tridiagonalization of the original band matrix at all, and the current version of the BTF method tridiagonalizes the original band matrix only for computing the eigenvalues. Avoiding the tridiagonalization process sidesteps the cost of backtransformation of the eigenvectors. Beyond that, we discovered another disadvantage of the backtransformation process for band matrices: In several scenarios, a lot of gradual underflow is observed in the (optional) accumulation of the transformation matrix and in the (obligatory) backtransformation step. According to the IEEE 754 standard for floating-point arithmetic, this implies many operations with subnormal (denormalized) numbers, which causes severe slowdowns compared to the other algorithms without backtransformation of the eigenvectors. We illustrate that in these cases the performance of existing methods from Lapack and Plasma reaches a competitive level only if subnormal numbers are disabled (and thus the IEEE standard is violated). Overall, our performance studies illustrate that if the problem size is large enough relative to the bandwidth, BD&C tends to achieve the highest performance of all methods if the spectrum to be computed is clustered. For test problems with well separated eigenvalues, the BTF method tends to become the fastest algorithm with growing problem size.
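
    For reference, the standard LAPACK-backed route that the tridiagonalization-based methods correspond to is available through SciPy. The sketch below builds a small random symmetric band matrix in upper band storage and solves it with scipy.linalg.eig_banded; the matrix is synthetic and unrelated to the paper's test problems, and the size and bandwidth are arbitrary choices.

      # Minimal sketch of the standard symmetric band eigensolver route via SciPy/LAPACK.
      import numpy as np
      from scipy.linalg import eig_banded

      n, u = 500, 4                       # matrix order and number of superdiagonals
      rng = np.random.default_rng(0)

      # Upper band storage: row u holds the main diagonal, row u-k the k-th superdiagonal.
      a_band = rng.standard_normal((u + 1, n))
      a_band[u] += 10.0                   # strengthen the diagonal a little

      w, v = eig_banded(a_band, lower=False)      # eigenvalues (ascending) and eigenvectors
      print(w[:3], np.allclose(np.linalg.norm(v, axis=0), 1.0))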

  16. A low emission vehicle procurement approach for Washington state

    NASA Astrophysics Data System (ADS)

    McCoy, G. A.; Lyons, J. K.; Ware, G.

    1992-06-01

    The Clean Air Washington Act of 1991 directs the Department of Ecology to establish a clean-fuel vehicle standard. The Department of General Administration shall purchase vehicles based on this standard beginning in the Fall of 1992. The following summarizes the major issues affecting vehicle emissions and their regulation, and presents a methodology for procuring clean-fuel vehicles for the State of Washington. Washington State's air quality problems are much less severe than in other parts of the country such as California, the East Coast, and parts of the Midwest. Ozone, which is arguably the dominant air quality problem in the US, is a recent and relatively minor issue in Washington. Carbon monoxide (CO) represents a more immediate problem in Washington, with most of the state's urban areas exceeding national CO air quality standards. Since the mid-1960s, vehicle tailpipe hydrocarbon and carbon monoxide emissions have been reduced by 96 percent relative to precontrol vehicles. Nitrogen oxide emissions have been reduced by 76 percent. Emissions from currently available vehicles are quite low with respect to in-place exhaust emission standards. Cold-start emissions constitute about 75 percent of the total emissions measured with the Federal Test Procedure used to certify motor vehicles. There is no currently available 'inherently clean burning fuel'. In 1991, 3052 vehicles were purchased under Washington State contract. Provided that the same number are acquired in 1993, the state will need to purchase 915 vehicles which meet the definition of a 'clean-fueled vehicle'.

  17. Automation of electromagnetic compatibility (EMC) test facilities

    NASA Technical Reports Server (NTRS)

    Harrison, C. A.

    1986-01-01

    Efforts to automate electromagnetic compatibility (EMC) test facilities at Marshall Space Flight Center are discussed. The present facility is used to accomplish a battery of nine standard tests (with limited variations) designed to certify EMC of Shuttle payload equipment. Prior to this project, some EMC tests were partially automated, but others were performed manually. Software was developed to integrate all testing by means of a desk-top computer-controller. Near real-time data reduction and onboard graphics capabilities permit immediate assessment of test results. Provisions for disk storage of test data permit computer production of the test engineer's certification report. Software flexibility permits variation in the test procedure, the ability to examine more closely those frequency bands which indicate compatibility problems, and the capability to incorporate additional test procedures.

  18. High IQ May "Mask" the Diagnosis of ADHD by Compensating for Deficits in Executive Functions in Treatment-Naïve Adults With ADHD.

    PubMed

    Milioni, Ana Luiza Vidal; Chaim, Tiffany Moukbel; Cavallet, Mikael; de Oliveira, Nathalya Moleda; Annes, Marco; Dos Santos, Bernardo; Louzã, Mario; da Silva, Maria Aparecida; Miguel, Carmen Silvia; Serpa, Mauricio Henriques; Zanetti, Marcus V; Busatto, Geraldo; Cunha, Paulo Jannuzzi

    2017-04-01

    The aim was to evaluate and compare the performance of adults with ADHD with high and standard IQ on executive function (EF) tasks. We investigated the neuropsychological performance of 51 adults with ADHD, compared with 33 healthy controls (HC), while performing a wide battery of neuropsychological tests that measure executive functioning. Adults with a clinical diagnosis of ADHD were divided into two groups according to their IQ level (IQ ≥ 110-ADHD group with more elevated IQ, and IQ < 110-ADHD group with standard IQ). The ADHD group with standard IQ presented worse executive functioning compared with the HC group in the following measures: Stroop 2 (p = .000) and 3 (p = .000), Trail Making Test (TMT) B (p = .005), Wisconsin Card-Sorting Test (WCST)-perseverative errors (p = .022) and failures to maintain set (p = .020), Continuous Performance Test (CPT)-omission errors (p = .005) and commission errors (p = .000), and Frontal Assessment Battery (FAB)-conceptualization (p = .016). The ADHD group with more elevated IQ presented only impairments in the CPT-commission errors (p = .019) when compared with the control group. Adults with ADHD and more elevated IQ show less evidence of executive functioning deficits compared with those with ADHD and standard IQ, suggesting that a higher degree of intellectual efficiency may compensate for deficits in executive functions, leading to problems in establishing a precise clinical diagnosis.

  19. Color Vision and the Railways: Part 1. The Railway LED Lantern Test.

    PubMed

    Dain, Stephen J; Casolin, Armand; Long, Jennifer; Hilmi, Mohd Radzi

    2015-02-01

    Lantern tests and practical tests are often used in the assessment of prospective railway employees. The lantern tests rarely embody the actual colors used in signaling on the railways. Practical tests have a number of problems, most notably consistency of application and practicability. This work was carried out to provide the Railway LED Lantern Test (RLLT) as a validated method of assessing the color vision of railway workers. The RLLT, a simulated practical test using the same LEDs (light-emitting diodes) as are used in modern railway signals, was developed. It was tested on 46 color vision-normal (CVN) and 37 color vision-deficient (CVD) subjects. A modified prototype was then tested on 106 CVN subjects. All 106 CVN subjects and most mildly affected CVD subjects passed the modified lantern at 3 m. At 6 m, 1 of the 106 normal color vision subjects failed by missing a single red light. All the CVD subjects failed. The RLLT carried out at 3 m allowed mildly affected CVD subjects to pass and demonstrate adequate color vision for the less demanding railway tasks. Carried out at 6 m, it essentially reinforced normal color vision as the standard. The RLLT is a simply administered test that has a direct link to the actual visual task of the rail worker. The RLLT lantern has been adopted as an approved test in the Australian National Standard for Health Assessment of Rail Safety Workers in place of a practical test. It has the potential to be a valid part of any railway color vision standard.

  20. Language Ability Predicts the Development of Behavior Problems in Children

    PubMed Central

    Petersen, Isaac T.; Bates, John E.; D’Onofrio, Brian M.; Coyne, Claire A.; Lansford, Jennifer E.; Dodge, Kenneth A.; Pettit, Gregory S.; Van Hulle, Carol A.

    2013-01-01

    Prior studies have suggested, but not fully established, that language ability is important for regulating attention and behavior. Language ability may have implications for understanding attention-deficit hyperactivity disorder (ADHD) and conduct disorders, as well as subclinical problems. This article reports findings from two longitudinal studies to test (a) whether language ability has an independent effect on behavior problems, and (b) the direction of effect between language ability and behavior problems. In Study 1 (N = 585), language ability was measured annually from ages 7 to 13 years by language subtests of standardized academic achievement tests administered at the children’s schools. Inattentive-hyperactive (I-H) and externalizing (EXT) problems were reported annually by teachers and mothers. In Study 2 (N = 11,506), language ability (receptive vocabulary) and mother-rated I-H and EXT problems were measured biannually from ages 4 to 12 years. Analyses in both studies showed that language ability predicted within-individual variability in the development of I-H and EXT problems over and above the effects of sex, ethnicity, socioeconomic status (SES), and performance in other academic and intellectual domains (e.g., math, reading comprehension, reading recognition, and short-term memory [STM]). Even after controls for prior levels of behavior problems, language ability predicted later behavior problems more strongly than behavior problems predicted later language ability, suggesting that the direction of effect may be from language ability to behavior problems. The findings suggest that language ability may be a useful target for the prevention or even treatment of attention deficits and EXT problems in children. PMID:23713507

  1. Video Modeling of SBIRT for Alcohol Use Disorders Increases Student Empathy in Standardized Patient Encounters.

    PubMed

    Crisafio, Anthony; Anderson, Victoria; Frank, Julia

    2018-04-01

    The purpose of this study was to assess the usefulness of adding video models of brief alcohol assessment and counseling to a standardized patient (SP) curriculum that covers and tests acquisition of this skill. The authors conducted a single-center, retrospective cohort study of third- and fourth-year medical students between 2013 and 2015. All students completed a standardized patient (SP) encounter illustrating the diagnosis of alcohol use disorder, followed by an SP exam on the same topic. Beginning in August 2014, the authors supplemented the existing formative SP exercise on problem drinking with one of two 5-min videos demonstrating screening, brief intervention, and referral for treatment (SBIRT). P values and Z tests were used to evaluate differences in knowledge and skills related to alcohol use disorders between students who did and did not see the video. One hundred ninety-four students were included in this analysis. Compared to controls, subjects did not differ in their ability to uncover and accurately characterize an alcohol problem during a standardized encounter (mean exam score 41.29 vs 40.93, subject vs control, p = 0.539). However, the SPs' ratings of students' expressions of empathy were significantly higher for the group who saw the video (81.63 vs 69.79%, p < 0.05). The findings did not confirm the original hypothesis that the videos would improve students' recognition and knowledge of alcohol-related conditions. However, feedback from the SPs produced the serendipitous finding that the communication skills demonstrated in the videos had a sustained effect in enhancing students' professional behavior.

  2. Neurobehavioral effects among inhabitants around mobile phone base stations.

    PubMed

    Abdel-Rassoul, G; El-Fateh, O Abou; Salem, M Abou; Michael, A; Farahat, F; El-Batanouny, M; Salem, E

    2007-03-01

    There is general concern about the possible hazardous health effects of exposure to radiofrequency electromagnetic radiation (RFR) emitted from mobile phone base station antennas on the human nervous system. The aim was to identify possible neurobehavioral deficits among inhabitants living near mobile phone base stations. A cross-sectional study was conducted on 85 inhabitants living near the first mobile phone base station antenna in Menoufiya governorate, Egypt; 37 were living in a building under the station antenna and 48 opposite the station. A control group of 80 participants was matched with the exposed group for age, sex, occupation and educational level. All participants completed a structured questionnaire containing: personal, educational and medical histories; general and neurological examinations; a neurobehavioral test battery (NBTB) [involving tests for visuomotor speed, problem solving, attention and memory]; in addition to the Eysenck personality questionnaire (EPQ). The prevalence of neuropsychiatric complaints such as headache (23.5%), memory changes (28.2%), dizziness (18.8%), tremors (9.4%), depressive symptoms (21.7%), and sleep disturbance (23.5%) was significantly higher among exposed inhabitants than controls: (10%), (5%), (5%), (0%), (8.8%) and (10%), respectively (P<0.05). The NBTB indicated that the exposed inhabitants exhibited a significantly lower performance than controls in one of the tests of attention and short-term auditory memory [Paced Auditory Serial Addition Test (PASAT)]. Also, the inhabitants opposite the station exhibited a lower performance in the problem solving test (block design) than those under the station. All inhabitants exhibited a better performance in the two tests of visuomotor speed (Digit symbol and Trailmaking B) and one test of attention (Trailmaking A) than controls. The last available measures of RFR emitted from the first mobile phone base station antennas in Menoufiya governorate were less than the allowable standard level. Inhabitants living near mobile phone base stations are at risk of developing neuropsychiatric problems and some changes in the performance of neurobehavioral functions, either by facilitation or inhibition. Therefore, revision of standard guidelines for public exposure to RFR from mobile phone base station antennas and the use of the NBTB for regular assessment and early detection of biological effects among inhabitants around the stations are recommended.

  3. Inner ear problems of Thai priest at Priest Hospital.

    PubMed

    Karnchanakas, Taweporn; Tantanavat, Are; Sinsakontavat, Jamjan

    2008-01-01

    The inner ear problems of Thai priests at Priest Hospital had never been reported previously, so the Department of Ear Nose Throat tried to correlate metabolic disorders with inner ear problems. The objectives were: 1) to study fasting blood sugar (FBS), total cholesterol (T. Chol), low density lipoprotein (LDL), and triglyceride (TG), the factors expected to be involved in the inner ear problems of priests at Priest Hospital; 2) to compare the FBS, T. Chol, HDL, LDL, and TG of priests with inner ear problems at Priest Hospital; and 3) to find the percentage of abnormal values for FBS, T. Chol, LDL, and TG. The study used a sample of 83 priests with inner ear problems and 107 priests as a control group. The research instrument used to collect data was a questionnaire composed of general information; physical, ear-nose-throat and neurological examination; pure tone audiometry; brainstem evoked response audiometry (BERA); and the blood tests FBS, T. Chol, TG, and LDL. The inner ear problems comprised: 1) dizziness, 2) hearing loss, and 3) tinnitus aurium. Descriptive statistics (frequency, percentage, and standard deviation (S.D.)) and the t-test were used to analyze the data from the questionnaires. Middle-aged and elderly priests with inner ear problems had greater FBS and TG than the expected values of the control group. The middle-aged and elderly priests who had greater FBS and TG than the expected values had inner ear problems causing dizziness, hearing loss, and tinnitus aurium.

  4. A minority perspective in the diagnosis of child language disorders.

    PubMed

    Seymour, H N; Bland, L

    1991-01-01

    The effective diagnosis and treatment of persons from diverse minority language backgrounds has become an important issue in the field of speech and language pathology. Yet many SLPs have had little or no formal training in minority languages, there is a paucity of normative data on language acquisition in minority groups, and there are few standardized speech and language tests appropriate for these groups. We describe a diagnostic process that addresses these problems. The diagnostic protocol we propose for a child from a Black English-speaking background characterizes many of the major issues in treating minority children. In summary, we propose four assessment strategies: gathering referral source data; making direct observations; using standardized tests of non-speech and language behavior (cognition, perception, motor, etc.); and eliciting language samples and probes.

  5. A High-Order Low-Order Algorithm with Exponentially Convergent Monte Carlo for Thermal Radiative Transfer

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.

    In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.

  6. A High-Order Low-Order Algorithm with Exponentially Convergent Monte Carlo for Thermal Radiative Transfer

    DOE PAGES

    Bolding, Simon R.; Cleveland, Mathew Allen; Morel, Jim E.

    2016-10-21

    In this paper, we have implemented a new high-order low-order (HOLO) algorithm for solving thermal radiative transfer problems. The low-order (LO) system is based on the spatial and angular moments of the transport equation and a linear-discontinuous finite-element spatial representation, producing equations similar to the standard S2 equations. The LO solver is fully implicit in time and efficiently resolves the nonlinear temperature dependence at each time step. The high-order (HO) solver utilizes exponentially convergent Monte Carlo (ECMC) to give a globally accurate solution for the angular intensity to a fixed-source pure-absorber transport problem. This global solution is used to compute consistency terms, which require the HO and LO solutions to converge toward the same solution. The use of ECMC allows for the efficient reduction of statistical noise in the Monte Carlo solution, reducing inaccuracies introduced through the LO consistency terms. Finally, we compare results with an implicit Monte Carlo code for one-dimensional gray test problems and demonstrate the efficiency of ECMC over standard Monte Carlo in this HOLO algorithm.

  7. Strategies to promote a climate of academic integrity and minimize student cheating and plagiarism.

    PubMed

    Scanlan, Craig L

    2006-01-01

    Student academic misconduct is a growing problem for colleges and universities, including those responsible for preparing health professionals. Although the implementation of honor codes has had a positive impact on this problem, further reduction in student cheating and plagiarism can be achieved only via a comprehensive strategy that promotes an institutional culture of academic integrity. Such a strategy must combine efforts both to deter and detect academic misconduct, along with fair but rigorous application of sanctions against such behaviors. Methods useful in preventing or deterring dishonest behaviors among students include early integrity training complemented with course-level reinforcement, faculty role-modeling, and the application of selected testing/assignment preventive strategies, including honor pledges and honesty declarations. Giving students more responsibility for oversight of academic integrity also may help address this problem and better promote the culture needed to uphold its principles. Successful enforcement requires that academic administration provide strong and visible support for upholding academic integrity standards, including the provision of a clear and fair process and the consistent application of appropriate sanctions against those whose conduct is found to violate these standards.

  8. Memory and accurate processing brain rehabilitation for the elderly: LEGO robot and iPad case study.

    PubMed

    Lopez-Samaniego, Leire; Garcia-Zapirain, Begonya; Mendez-Zorrilla, Amaia

    2014-01-01

    This paper presents the results of research that applies cognitive therapies associated with memory and mathematical problem-solving in elderly people. The exercises are programmed on an iPad and can be performed both from the tablet and in an interactive format with a LEGO robot. The system has been tested with 2 men and 7 women over the age of 65 who have slight physical and cognitive impairment. Evaluation with the System Usability Scale (SUS) resulted in a mean of 48.45 with a standard deviation of 5.82. The score for overall satisfaction was 84.37 with a standard deviation of 18.6. Interaction with the touch screen caused some usability problems due to the elderly people's visual difficulties and clicking accuracy. Future versions will include visualization with more color contrast and less use of the keyboard.

  9. Improved Bat Algorithm Applied to Multilevel Image Thresholding

    PubMed Central

    2014-01-01

    Multilevel image thresholding is a very important image processing technique that is used as a basis for image segmentation and further higher-level processing. However, the required computational time for exhaustive search grows exponentially with the number of desired thresholds. Swarm intelligence metaheuristics are well known as successful and efficient optimization methods for intractable problems. In this paper, we adjusted one of the latest swarm intelligence algorithms, the bat algorithm, for the multilevel image thresholding problem. The results of testing on standard benchmark images show that the bat algorithm is comparable with other state-of-the-art algorithms. We then improved the standard bat algorithm, adding elements from differential evolution and from the artificial bee colony algorithm. The proposed improved bat algorithm proved to be better than five other state-of-the-art algorithms, improving the quality of results in all cases and significantly improving convergence speed. PMID:25165733
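
    To make the problem concrete, the sketch below writes out the between-class-variance objective that multilevel (Otsu-style) thresholding maximises and evaluates it by exhaustive search for just two thresholds on a synthetic histogram. It only illustrates the objective and why exhaustive search becomes intractable as the number of thresholds grows; it is not an implementation of the improved bat algorithm, and the image is randomly generated.

      # Sketch of the multilevel Otsu objective with brute-force search over two thresholds.
      import numpy as np
      from itertools import combinations

      rng = np.random.default_rng(0)
      image = rng.integers(0, 256, size=(128, 128))          # hypothetical 8-bit image
      hist = np.bincount(image.ravel(), minlength=256) / image.size

      def between_class_variance(thresholds, hist):
          levels = np.arange(256)
          edges = [0, *sorted(thresholds), 256]
          total_mean = np.sum(levels * hist)
          var = 0.0
          for lo, hi in zip(edges[:-1], edges[1:]):
              w = hist[lo:hi].sum()                          # class probability
              if w > 0:
                  mu = np.sum(levels[lo:hi] * hist[lo:hi]) / w
                  var += w * (mu - total_mean) ** 2
          return var

      best = max(combinations(range(1, 256), 2),             # ~32k candidate pairs already
                 key=lambda t: between_class_variance(t, hist))
      print("best two thresholds:", best)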

  10. Feasibility of using the Omaha System to represent public health nurse manager interventions.

    PubMed

    Monsen, Karen A; Newsom, Eric T

    2011-01-01

    To test the feasibility of representing public health nurse (PHN) manager interventions using a recognized standardized nursing terminology. A nurse manager in a Midwest local public health agency documented nurse manager interventions using the Omaha System for 5 months. ANALYTIC STRATEGY: The data were analyzed and the results were compared with the results from a parallel analysis of existing PHN intervention data. Interventions for 79 "clients" (projects, teams, or individuals) captured 76% of recorded work hours, and addressed 43% of Omaha System problems. Most problems were addressed at the "community" level (87.1%) versus the "individual" level (12.9%). Nursing practice differed between the 2 knowledge domains of public health family home visiting nursing and public health nursing management. Standardized nursing terminologies have the potential to represent, describe, and quantify nurse manager interventions for future evaluation and research. © 2011 Wiley Periodicals, Inc.

  11. Study of the Alsys implementation of the Catalogue of Interface Features and Options for the Ada language for 80386 Unix

    NASA Technical Reports Server (NTRS)

    Gibson, James S.; Barnes, Michael J.; Ostermiller, Daniel L.

    1993-01-01

    A set of programs was written to test the functionality and performance of the Alsys Ada implementation of the Catalogue of Interface Features and Options (CIFO), a set of optional Ada packages for real-time applications. No problems were found with the task id, preemption control, or shared-data packages. Minor problems were found with the dispatching control, dynamic priority, events, non-waiting entry call, semaphore, and scheduling packages. The Alsys implementation is derived mostly from Release 2 of the CIFO standard, but includes some of the features of Release 3 and some modifications unique to Alsys. Performance measurements show that the semaphore and shared-data features are an order of magnitude faster than the same mechanisms using an Ada rendezvous. The non-waiting entry call is slightly faster than a standard rendezvous. The existence of errors in the implementation and the incompleteness of the documentation relative to the published standard impair the usefulness of this implementation. Despite these shortcomings, the Alsys CIFO implementation might be of value in the development of real-time applications.

  12. Effects of work-related stress on work ability index among refinery workers

    PubMed Central

    Habibi, Ehsanollah; Dehghan, Habibollah; Safari, Shahram; Mahaki, Behzad; Hassanzadeh, Akbar

    2014-01-01

    Introduction: Work-related stress is one of the basic problems in industry and among the top 10 work-related health problems; it is increasingly implicated in the development of a number of problems such as cardiovascular disease, musculoskeletal disease, and early retirement of employees. On the other hand, early retirement of employees from the workplace has added to the problems of today's industries. Hence, improving work ability is one of the most effective ways to enhance ability and to prevent disability and early retirement. The aim of this study was to determine the relationship between job stress score and work ability index (WAI) among refinery workers. Materials and Methods: This is a cross-sectional study in which 171 workers from a refinery in Isfahan in 2012, working in different occupational groups, participated. Based on appropriate assignment sampling, 33 office workers, 69 operational workers, and 69 maintenance workers were invited to participate in this study. Two questionnaires, covering work-related stress and the WAI, were filled in. Finally, the information was analyzed using SPSS-20 and statistical tests, namely analysis of covariance, the Kruskal-Wallis test, the Pearson correlation coefficient, ANOVA, and the t-test. Results: Data analysis revealed that 86% and 14% of participants had moderate and severe stress, respectively. The mean stress score and standard deviation were 158.7 ± 17.3, which is in the extreme stress range. The mean score and standard deviation of the WAI questionnaire were 37.18 and 3.86, respectively, which is in the good range. The Pearson correlation coefficient showed that the WAI score had a significant inverse relationship with the stress score. Conclusion: According to the results, the mean stress score among refinery workers was high, and one factor that affected work ability was high stress; hence, training on communication skills and a safe working environment, in order to decrease stress, would enhance the work ability of workers. PMID:24741658

  13. Effects of work-related stress on work ability index among refinery workers.

    PubMed

    Habibi, Ehsanollah; Dehghan, Habibollah; Safari, Shahram; Mahaki, Behzad; Hassanzadeh, Akbar

    2014-01-01

    Work-related stress is one of the basic problems in industry and among the top 10 work-related health problems; it is increasingly implicated in the development of a number of problems such as cardiovascular disease, musculoskeletal disease, and early retirement of employees. On the other hand, early retirement of employees from the workplace has added to the problems of today's industries. Hence, improving work ability is one of the most effective ways to enhance ability and to prevent disability and early retirement. The aim of this study was to determine the relationship between job stress score and work ability index (WAI) among refinery workers. This is a cross-sectional study in which 171 workers from a refinery in Isfahan in 2012, working in different occupational groups, participated. Based on appropriate assignment sampling, 33 office workers, 69 operational workers, and 69 maintenance workers were invited to participate in this study. Two questionnaires, covering work-related stress and the WAI, were filled in. Finally, the information was analyzed using SPSS-20 and statistical tests, namely analysis of covariance, the Kruskal-Wallis test, the Pearson correlation coefficient, ANOVA, and the t-test. Data analysis revealed that 86% and 14% of participants had moderate and severe stress, respectively. The mean stress score and standard deviation were 158.7 ± 17.3, which is in the extreme stress range. The mean score and standard deviation of the WAI questionnaire were 37.18 and 3.86, respectively, which is in the good range. The Pearson correlation coefficient showed that the WAI score had a significant inverse relationship with the stress score. According to the results, the mean stress score among refinery workers was high, and one factor that affected work ability was high stress; hence, training on communication skills and a safe working environment, in order to decrease stress, would enhance the work ability of workers.

  14. Accuracy of self-report in detecting taste dysfunction.

    PubMed

    Soter, Ana; Kim, John; Jackman, Alexis; Tourbier, Isabelle; Kaul, Arti; Doty, Richard L

    2008-04-01

    To determine the sensitivity, specificity, and positive and negative predictive value of responses to the following questionnaire statements in detecting taste loss: "I can detect salt in chips, pretzels, or salted nuts," "I can detect sourness in vinegar, pickles, or lemon," "I can detect sweetness in soda, cookies, or ice cream," and "I can detect bitterness in coffee, beer, or tonic water." Responses to an additional item, "I can detect chocolate in cocoa, cake or candy," were examined to determine whether patients clearly differentiate between taste loss and flavor loss secondary to olfactory dysfunction. A total of 469 patients (207 men, mean age = 54 years, standard deviation = 15 years; and 262 women, mean age = 54 years, standard deviation = 14 years) were administered a questionnaire containing these questions with the response categories of "easily," "somewhat," and "not at all," followed by a comprehensive taste and smell test battery. The questionnaire items poorly detected bona fide taste problems. However, they were sensitive in detecting persons without such problems (i.e., they exhibited low positive but high negative predictive value). Dysfunction categories of the University of Pennsylvania Smell Identification Test (UPSIT) were not meaningfully related to subjects' responses to the questionnaire statements. Both sex and age influenced performance on most of the taste tests, with older persons performing more poorly than younger ones and women typically outperforming men. Although it is commonly assumed that straightforward questions concerning taste may be useful in detecting taste disorders, this study suggests this is not the case. However, patients who specifically report having no problems with taste perception usually do not exhibit taste dysfunction. The difficulty in detecting true taste problems by focused questionnaire items likely reflects a combination of factors. These include the relatively low prevalence of taste deficits in the general population and the tendency of patients to confuse loss of olfaction-related flavor sensations with taste-bud mediated deficits.

  15. apGA: An adaptive parallel genetic algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liepins, G.E.; Baluja, S.

    1991-01-01

    We develop apGA, a parallel variant of the standard generational GA, that combines aggressive search with perpetual novelty, yet is able to preserve enough genetic structure to optimally solve variably scaled, non-uniform block deceptive and hierarchical deceptive problems. apGA combines elitism, adaptive mutation, adaptive exponential scaling, and temporal memory. We present empirical results for six classes of problems, including the DeJong test suite. Although we have not investigated hybrids, we note that apGA could be incorporated into other recent GA variants such as GENITOR, CHC, and the recombination stage of mGA. 12 refs., 2 figs., 2 tabs.

  16. Surmounting a PCR challenge using a Contradictory matrix from the Theory of Inventive Problem Solving (TRIZ).

    PubMed

    Drábek, Jiří

    2016-01-01

    In this paper I tested whether the Contradictory Matrix with 40 Inventive Principles, the simplest instrument from the Theory of Inventive Problem Solving (TRIZ), is a useful approach to a real-life PCR scenario. The PCR challenge consisted of standardizing fluorescence melting curve measurements in Competitive Amplification of Differentially Melting Amplicons (CADMA) PCR for multiple targets. Here I describe how I used the TRIZ Matrix to generate seven alternative solutions, from which I chose the successful one, consisting of repeated cycles of amplification and melting in a single PCR run.

  17. Multitasking TORT under UNICOS: Parallel performance models and measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Barnett, A.; Azmy, Y.Y.

    1999-09-27

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were compared against measurements from applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  18. Multitasking TORT Under UNICOS: Parallel Performance Models and Measurements

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Azmy, Y.Y.; Barnett, D.A.

    1999-09-27

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The parallel performance models were compared against measurements from applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  19. The Miller Assessment for Preschoolers: A Longitudinal and Predictive Study. Final Report.

    ERIC Educational Resources Information Center

    Foundation for Knowledge in Development, Littleton, CO.

    The study reported here sought to establish the predictive validity of the Miller Assessment for Preschoolers (MAP), an instrument designed to identify preschool children at risk for school-related problems in the primary years. Children (N=338) in 11 states who were originally tested in 1980 as part of the MAP standardization project were given a…

  20. Module Validity of Peer Counselor Character Service in State University of Medan

    ERIC Educational Resources Information Center

    Dewi, Rosmala; Rahmadana, Muhammad Fitri; Dalimunthe, Muhammad Bukhori

    2016-01-01

    Many ways can be used to address the problems of students, one of them involving the students themselves (peer counselors). This requires a standard model that can be applied by students as a guideline for the implementation of the guidance. Validation of the module must be carried out according to the rules of various scientific tests. State University of…

  1. Improving First Grade Academic Skills through the Integration of Music into the First Grade Curriculum.

    ERIC Educational Resources Information Center

    Hart-Davis, Charity

    This study designed a music program for improving academic skills of first grade students after the teaching staff found the students doing average work in the classroom. The school involved in the study was located in an urban, middle class community in Northern Illinois. Results of standardized tests showed the extent of the academic problems of…

  2. Learning Styles, Learning Abilities and Learning Problems in College: An Exploration of Learning Disabilities in College Students. Final Report.

    ERIC Educational Resources Information Center

    Goldberg, Renee L.; Zern, David S.

    The study examined differences between 57 learning disabled (LD) and 24 non-LD college students on measures of psychoeducational assessment. In addition, differences between LD students with good and poor academic performance were studied, and coping strategies were identified for both sub-groups. A variety of standardized tests were administered…

  3. A Case Study of Middle School Teachers' Preparations for High-Stakes Assessments

    ERIC Educational Resources Information Center

    Yeary, David Lee

    2017-01-01

    Students, educators, and schools across the country have been presented with challenges as a result of rigorous standards and high-complexity tests. The problem addressed in this case study was that teachers in a rural middle school in a southeastern state were preparing students to take a new high-stakes state-mandated assessment in English…

  4. An Analysis of Governance Policies and Practices in One School District Regarding English Learners

    ERIC Educational Resources Information Center

    Lysko, V. Lynn

    2012-01-01

    In a large, urban, high school district, secondary English-learning students are not achieving at the same rates as other identified subgroups on state and local standardized tests. This gap compounds economic and social inequities in the region. A solution to the problem is important to educators and policy makers in providing an equitable…

  5. Integration of Reading and Writing Strategies in Primary Level Special Education Resource Students To Improve Reading Performance.

    ERIC Educational Resources Information Center

    Peterson, Kathy S.

    A program was developed for improving the reading level of primary special education resource students in a progressive suburban community in the midwest. The problem was originally noted by an increase in the need for support services and low standardized test scores. Analysis of probable cause data revealed that students lacked knowledge of the…

  6. A new standard model for milk yield in dairy cows based on udder physiology at the milking-session level.

    PubMed

    Gasqui, Patrick; Trommenschlager, Jean-Marie

    2017-08-21

    Milk production in dairy cow udders is a complex and dynamic physiological process that has resisted explanatory modelling thus far. The current standard model, Wood's model, is empirical in nature, represents yield in daily terms, and was published in 1967. Here, we have developed a dynamic and integrated explanatory model that describes milk yield at the scale of the milking session. Our approach allowed us to formally represent and mathematically relate biological features of known relevance while accounting for stochasticity and conditional elements in the form of explicit hypotheses, which could then be tested and validated using real-life data. Using an explanatory mathematical and biological model to explore a physiological process and pinpoint potential problems (i.e., "problem finding"), it is possible to filter out unimportant variables that can be ignored, retaining only those essential to generating the most realistic model possible. Such modelling efforts are multidisciplinary by necessity. It is also helpful downstream because model results can be compared with observed data, via parameter estimation using maximum likelihood and statistical testing using model residuals. The process in its entirety yields a coherent, robust, and thus repeatable, model.

  7. The Airbag as a Supplement to Standard Restraint Systems in the AH-1 and AH-64 Attack Helicopters and Its Role in Reducing Head Strikes of the Copilot/Gunner. Volume 1

    DTIC Science & Technology

    1991-01-01

    …shelf and locally available automotive airbag system was selected for the tests. The system was a driver’s side airbag designed by Honda Motor Company… allowance for hardware redesign or modification. Despite these limitations, the study succeeded in demonstrating a problem exists and a supplemental airbag…

  8. Archival-grade optical disc design and international standards

    NASA Astrophysics Data System (ADS)

    Fujii, Toru; Kojyo, Shinichi; Endo, Akihisa; Kodaira, Takuo; Mori, Fumi; Shimizu, Atsuo

    2015-09-01

    Optical discs currently on the market exhibit large variations in life span among discs, making them unsuitable for certain business applications. To assess and potentially mitigate this problem, we performed accelerated degradation testing under standard ISO conditions, determined the probable disc failure mechanisms, and identified the essential criteria necessary for a stable disc composition. With these criteria as necessary conditions, we analyzed the physical and chemical changes that occur in the disc components, on the basis of which we determined technological measures to reduce these degradation processes. By applying these measures to disc fabrication, we were able to develop highly stable optical discs.

  9. NGSS, disposability, and the ambivalence of science in/under neoliberalism

    NASA Astrophysics Data System (ADS)

    Weinstein, Matthew

    2017-12-01

    This paper explores the ambivalence of the Next Generation Science Standards (NGSS) and its Framework towards neoliberal governance. The paper examines the ways that the NGSS serves as a mechanism within neoliberal governance: in its production of disposable populations through testing and through the infusion of engineering throughout the NGSS to resolve social problems through technical fixes. However, the NGSS, like earlier standards, is reactionary to forces diminishing the power of institutional science (e.g., the AAAS), including neoliberalism's prioritizing of market value over evidence. The NGSS explicitly takes on neoliberal junk science such as the anti-global-warming Heartland Institute.

  10. Hardware synthesis from DDL description. [simulating a digital system for computerized design of large scale integrated circuits

    NASA Technical Reports Server (NTRS)

    Shiva, S. G.; Shah, A. M.

    1980-01-01

    The details of digital systems can be conveniently input into the design automation system by means of hardware description language (HDL). The computer aided design and test (CADAT) system at NASA MSFC is used for the LSI design. The digital design language (DDL) was selected as HDL for the CADAT System. DDL translator output can be used for the hardware implementation of the digital design. Problems of selecting the standard cells from the CADAT standard cell library to realize the logic implied by the DDL description of the system are addressed.

  11. The laboratory diagnosis of bacterial vaginosis

    PubMed Central

    Money, Deborah

    2005-01-01

    Bacterial vaginosis (BV) is an extremely common health problem for women. In addition to the troublesome symptoms often associated with a disruption in the balance of vaginal flora, BV is associated with adverse gynecological and pregnancy outcomes. Although not technically a sexually transmitted infection, BV is a sexually associated condition. Diagnostic tests include real-time clinical/microbiological diagnosis, and the current gold standard, the standardized evaluation of morphotypes on Gram stain analysis. The inappropriate use of vaginal culture can be misleading. Future developments into molecular-based diagnostics will be important to further understand this complex endogenous flora disruption. PMID:18159532

  12. Solving the vehicle routing problem by a hybrid meta-heuristic algorithm

    NASA Astrophysics Data System (ADS)

    Yousefikhoshbakht, Majid; Khorram, Esmaile

    2012-08-01

    The vehicle routing problem (VRP) is one of the most important combinatorial optimization problems and has received much attention because of its real applications in industrial and service problems. The VRP involves routing a fleet of vehicles, each of which visits a set of nodes such that every node is visited by exactly one vehicle, only once. The objective is to minimize the total distance traveled by all the vehicles. This paper presents a hybrid two-phase algorithm, sweep algorithm (SW) + ant colony system (ACS), for the classical VRP. In the first stage, the VRP is solved by the SW, and in the second stage, the ACS and a 3-opt local search are used to improve the solutions. Extensive computational tests on standard instances from the literature confirm the effectiveness of the presented approach.
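
    The first phase of the hybrid is the classical sweep construction: customers are sorted by polar angle around the depot and grouped into routes until vehicle capacity is exhausted. The sketch below illustrates only that clustering step, under assumed data structures (coordinates, demands, and a single capacity); the names and toy data are hypothetical, and the ACS and 3-opt improvement phases of the paper are not reproduced.

        import math

        def sweep_routes(depot, coords, demands, capacity):
            """Group customers into capacity-feasible routes by polar angle around the depot."""
            order = sorted(coords, key=lambda c: math.atan2(coords[c][1] - depot[1],
                                                            coords[c][0] - depot[0]))
            routes, current, load = [], [], 0.0
            for c in order:
                if load + demands[c] > capacity:           # capacity reached: start a new vehicle
                    routes.append(current)
                    current, load = [], 0.0
                current.append(c)
                load += demands[c]
            if current:
                routes.append(current)
            return routes

        # usage with toy data (coordinates and demands are invented, not benchmark instances)
        depot = (0.0, 0.0)
        coords = {1: (2, 1), 2: (-1, 3), 3: (-2, -2), 4: (1, -3), 5: (3, 2)}
        demands = {1: 4, 2: 3, 3: 5, 4: 2, 5: 6}
        print(sweep_routes(depot, coords, demands, capacity=10))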

  13. Experimental metaphysics2 : The double standard in the quantum-information approach to the foundations of quantum theory

    NASA Astrophysics Data System (ADS)

    Hagar, Amit

    Among the alternatives of non-relativistic quantum mechanics (NRQM) there are those that give different predictions than quantum mechanics in yet-untested circumstances, while remaining compatible with current empirical findings. In order to test these predictions, one must isolate one's system from environmental induced decoherence, which, on the standard view of NRQM, is the dynamical mechanism that is responsible for the 'apparent' collapse in open quantum systems. But while recent advances in condensed-matter physics may lead in the near future to experimental setups that will allow one to test the two hypotheses, namely genuine collapse vs. decoherence, hence make progress toward a solution to the quantum measurement problem, those philosophers and physicists who are advocating an information-theoretic approach to the foundations of quantum mechanics are still unwilling to acknowledge the empirical character of the issue at stake. Here I argue that in doing so they are displaying an unwarranted double standard.

  14. Replacement of the International Standard for Tetanus Antitoxin and the Use of the Standard in the Flocculation Test

    PubMed Central

    Spaun, J.; Lyng, J.

    1970-01-01

    Since 1935 the International Unit for Tetanus Antitoxin has been defined as the activity contained in a certain weight of the first International Standard for Tetanus Antitoxin. As stocks of this standard had become depleted, 11 laboratories in 8 countries were requested to participate in a collaborative assay of a preparation proposed as a replacement. The assay results were analysed and presented to the WHO Expert Committee on Biological Standardization in 1969 which established the preparation studied as the second International Standard for Tetanus Antitoxin and defined the International Unit for Tetanus Antitoxin as the activity contained in 0.03384 mg of the second International Standard for Tetanus Antitoxin. This definition would ensure the continuity of the size of this international unit. The analysis of the collaborative studies also showed that the second International Standard for Tetanus Antitoxin has suitable properties for use in the flocculation test for the determination of the antigen content of tetanus toxoids in Lf values. The designation Lf-equivalent is described and the problems relating to the use of this term for the expression of results of in vitro assays are analysed in relation to the use of international units for expressing results of in vivo assays. As the second International Standard for Tetanus Antitoxin has an in vivo/in vitro ratio of 1.4, the Lf-equivalent of this antitoxin is 1.4 times less than its unitage. PMID:5310949

  15. Analysis of mathematical literacy ability based on self-efficacy in model eliciting activities using metaphorical thinking approach

    NASA Astrophysics Data System (ADS)

    Setiani, C.; Waluya, S. B.; Wardono

    2018-03-01

    The purposes of this research are: (1) to identify learning quality in Model Eliciting Activities (MEAs) using a Metaphorical Thinking (MT) approach, both qualitatively and quantitatively; (2) to analyze the mathematical literacy of students based on Self-Efficacy (SE). This research is a mixed-methods concurrent embedded design with qualitative research as the primary method. The quantitative part used a quasi-experimental, non-equivalent control group design. The population is VIII grade students of SMP Negeri 3 Semarang, Indonesia. Quantitative data are examined by conducting a completeness mean test, standard completeness test, mean differentiation test, and proportional differentiation test. Qualitative data are analyzed descriptively. The results of this research show that MEAs learning using the MT approach meets good criteria both quantitatively and qualitatively. Students with low self-efficacy can identify problems, but lack the ability to arrange a problem-solving strategy for mathematical literacy questions. Students with medium self-efficacy can identify the information provided in problems, but find it difficult to use mathematical symbols in making a representation. Students with high self-efficacy are excellent at representing problems as mathematical models as well as figures, using appropriate symbols and tools, so they can easily arrange a strategy to solve mathematical literacy questions.

  16. Data-driven non-linear elasticity: constitutive manifold construction and problem discretization

    NASA Astrophysics Data System (ADS)

    Ibañez, Ruben; Borzacchiello, Domenico; Aguado, Jose Vicente; Abisset-Chavanne, Emmanuelle; Cueto, Elias; Ladeveze, Pierre; Chinesta, Francisco

    2017-11-01

    The use of constitutive equations calibrated from data has been implemented into standard numerical solvers for successfully addressing a variety of problems encountered in simulation-based engineering sciences (SBES). However, the complexity keeps increasing due to the need for increasingly detailed models as well as the use of engineered materials. Data-driven simulation constitutes a potential change of paradigm in SBES. Standard simulation in computational mechanics is based on the use of two very different types of equations. The first one, of axiomatic character, is related to balance laws (momentum, mass, energy, ...), whereas the second one consists of models that scientists have extracted from collected data, either natural or synthetic. Data-driven (or data-intensive) simulation consists of directly linking experimental data to computers in order to perform numerical simulations. These simulations will employ laws, universally recognized as epistemic, while minimizing the need for explicit, often phenomenological, models. The main drawback of such an approach is the large amount of required data, some of them inaccessible to present-day testing facilities. Such a difficulty can be circumvented in many cases, and in any case alleviated, by considering complex tests, collecting as many data as possible, and then using a data-driven inverse approach in order to generate the whole constitutive manifold from a few complex experimental tests, as discussed in the present work.

  17. Long Life Nickel Electrodes for Nickel-Hydrogen Cells: Fiber Substrates Nickel Electrodes

    NASA Technical Reports Server (NTRS)

    Rogers, Howard H.

    2000-01-01

    Samples of nickel fiber mat electrodes were investigated over a wide range of fiber diameters, electrode thicknesses, porosities and active material loading levels. Thicknesses were 0.040, 0.060 and 0.080 inches for the plaque; fiber diameters were primarily 2, 4, and 8 micron and porosity was 85, 90, and 95%. Capacities of 3.5 in. diameter electrodes were determined in the flooded condition with both 26 and 31% potassium hydroxide solution. These capacity tests indicated that the highest capacities per unit weight were obtained at the 90% porosity level with a 4 micron diameter fiber plaque. It appeared that the thinner electrodes had somewhat better performance, consistent with sintered electrode history. Limited testing with two-positive-electrode boiler plate cells was also carried out. Considerable difficulty with constructing the cells was encountered, with short circuits the major problem. Nevertheless, four cells were tested. The cell with 95% porosity electrodes failed during conditioning cycling due to high voltage during charge. Discharge showed that this cell had lost nearly all of its capacity. The other three cells after 20 conditioning cycles showed capacities consistent with the flooded capacities of the electrodes. Positive electrodes made from fiber substrates may well show a weight advantage over standard sintered electrodes, but considerably more work is needed to prove this statement. A major problem to be investigated is the lower strength of the substrate compared to standard sintered electrodes. Problems with welding of leads were significant, and implications that the electrodes would expand more than sintered electrodes need to be investigated. Loading levels were lower than had been expected based on sintered electrode experience, and the lower loading led to lower capacity values. However, lower loading causes less expansion and contraction during cycling, so that stress on the substrate is reduced.

  18. Building Large Collections of Chinese and English Medical Terms from Semi-Structured and Encyclopedia Websites

    PubMed Central

    Xu, Yan; Wang, Yining; Sun, Jian-Tao; Zhang, Jianwen; Tsujii, Junichi; Chang, Eric

    2013-01-01

    To build large collections of medical terms from semi-structured information sources (e.g. tables, lists, etc.) and encyclopedia sites on the web. The terms are classified into the three semantic categories, Medical Problems, Medications, and Medical Tests, which were used in i2b2 challenge tasks. We developed two systems, one for Chinese and another for English terms. The two systems share the same methodology and use the same software with minimum language dependent parts. We produced large collections of terms by exploiting billions of semi-structured information sources and encyclopedia sites on the Web. The standard performance metric of recall (R) is extended to three different types of Recall to take the surface variability of terms into consideration. They are Surface Recall (R(S)), Object Recall (R(O)), and Surface Head recall (R(H)). We use two test sets for Chinese. For English, we use a collection of terms in the 2010 i2b2 text. Two collections of terms, one for English and the other for Chinese, have been created. The terms in these collections are classified as one of Medical Problems, Medications, or Medical Tests in the i2b2 challenge tasks. The English collection contains 49,249 (Problems), 89,591 (Medications) and 25,107 (Tests) terms, while the Chinese one contains 66,780 (Problems), 101,025 (Medications), and 15,032 (Tests) terms. The proposed method of constructing a large collection of medical terms is both efficient and effective, and, most of all, independent of language. The collections will be made publicly available. PMID:23874426

  19. Building large collections of Chinese and English medical terms from semi-structured and encyclopedia websites.

    PubMed

    Xu, Yan; Wang, Yining; Sun, Jian-Tao; Zhang, Jianwen; Tsujii, Junichi; Chang, Eric

    2013-01-01

    To build large collections of medical terms from semi-structured information sources (e.g. tables, lists, etc.) and encyclopedia sites on the web. The terms are classified into the three semantic categories, Medical Problems, Medications, and Medical Tests, which were used in i2b2 challenge tasks. We developed two systems, one for Chinese and another for English terms. The two systems share the same methodology and use the same software with minimum language dependent parts. We produced large collections of terms by exploiting billions of semi-structured information sources and encyclopedia sites on the Web. The standard performance metric of recall (R) is extended to three different types of Recall to take the surface variability of terms into consideration. They are Surface Recall (R(S)), Object Recall (R(O)), and Surface Head recall (R(H)). We use two test sets for Chinese. For English, we use a collection of terms in the 2010 i2b2 text. Two collections of terms, one for English and the other for Chinese, have been created. The terms in these collections are classified as one of Medical Problems, Medications, or Medical Tests in the i2b2 challenge tasks. The English collection contains 49,249 (Problems), 89,591 (Medications) and 25,107 (Tests) terms, while the Chinese one contains 66,780 (Problems), 101,025 (Medications), and 15,032 (Tests) terms. The proposed method of constructing a large collection of medical terms is both efficient and effective, and, most of all, independent of language. The collections will be made publicly available.
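
    The abstracts above name three recall variants without giving formal definitions, so the following sketch encodes only one plausible reading: surface recall counts exact string matches, object recall credits a gold concept if any of its synonyms is extracted, and surface head recall requires only the head word to match. The functions, the last-token head convention, and the toy terms are assumptions for illustration, not the authors' definitions or the i2b2 data.

        def surface_recall(gold_terms, extracted):
            """Fraction of gold surface forms matched exactly (one reading of R(S))."""
            return sum(1 for t in gold_terms if t in extracted) / len(gold_terms)

        def object_recall(gold_concepts, extracted):
            """Fraction of gold concepts with at least one synonym extracted (one reading of R(O))."""
            return sum(1 for syns in gold_concepts.values()
                       if any(s in extracted for s in syns)) / len(gold_concepts)

        def head_recall(gold_terms, extracted_heads):
            """Fraction of gold terms whose assumed head (last token) was extracted (one reading of R(H))."""
            return sum(1 for t in gold_terms if t.split()[-1] in extracted_heads) / len(gold_terms)

        # toy usage with invented terms
        extracted = {"chest pain", "aspirin", "blood glucose test"}
        print(surface_recall({"chest pain", "type 2 diabetes"}, extracted))                      # 0.5
        print(object_recall({"diabetes": {"type 2 diabetes", "diabetes mellitus"}}, extracted))  # 0.0
        print(head_recall({"acute chest pain"}, {t.split()[-1] for t in extracted}))             # 1.0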

  20. Basic problems of serological laboratory diagnosis.

    PubMed

    Fierz, Walter

    2004-01-01

    Serological laboratory diagnosis of infectious diseases is afflicted by several kinds of basic problems. One difficulty relates to the fact that the serological diagnosis of infectious diseases is doubly indirect: the first indirect aim in diagnosing an infectious disease is to identify the microbial agent that caused the disease; the second indirect aim is to identify this infectious agent by measuring the patient's immune response to the potential agent. Thus, the serological test measures neither the disease directly nor the cause of the disease, but the patient's immune system. The latter poses another type of problem, because each person's immune system is unique. The immune response to an infectious agent is usually of polyclonal nature, and the exact physicochemical properties of antibodies are unique for each antibody clone. The clonal makeup and composition, and therefore the way an individual's immune system sees an infectious agent, depend not only on the genetic background of the person but also on the individual experience from former encounters with various infectious agents. In consequence, the reaction of a patient's serum in an analytical system is not precisely predictable. Also, the antigenic makeup of an infectious agent is not always foreseeable; antigenic variation leading to different serotypes is a quite common phenomenon. Altogether, these biological problems lead to complexities in selecting the appropriate tests and strategies for testing, in interpreting the results, and in standardizing serological test systems. For that reason, a close collaboration of the laboratory with the clinic is mandatory to avoid erroneous conclusions from serological test results, which might lead to wrong decisions in patient care.

  1. The validity of three tests of temperament in guppies (Poecilia reticulata).

    PubMed

    Burns, James G

    2008-11-01

    Differences in temperament (consistent differences among individuals in behavior) can have important effects on fitness-related activities such as dispersal and competition. However, evolutionary ecologists have put limited effort into validating their tests of temperament. This article attempts to validate three standard tests of temperament in guppies: the open-field test, emergence test, and novel-object test. Through multiple reliability trials, and comparison of results between different types of test, this study establishes the confidence that can be placed in these temperament tests. The open-field test is shown to be a good test of boldness and exploratory behavior; the open-field test was reliable when tested in multiple ways. There were problems with the emergence test and novel-object test, which leads one to conclude that the protocols used in this study should not be considered valid tests for this species. (PsycINFO Database Record (c) 2008 APA, all rights reserved).

  2. Pre-service mathematics teachers’ ability in solving well-structured problem

    NASA Astrophysics Data System (ADS)

    Paradesa, R.

    2018-01-01

    This study aimed to describe the mathematical problem-solving ability of undergraduate students of mathematics education in solving well-structured problems. The study was qualitative descriptive. The subjects were 100 undergraduate students of Mathematics Education at one of the private universities in Palembang city. The data were collected through two essay-form test items. The results showed that, on the first problem, only 8% of students could solve it, but they did not check back to validate the process; based on a scoring rubric that follows the Polya strategy, their answers fit a 2-4-2-0 pattern. On the second problem, 45% of students succeeded, because the second problem imitated an example given during the learning process. The average score of the students' mathematical problem-solving ability on well-structured problems was 56.00 with a standard deviation of 13.22. This means that, on a 0-100 scale, the students' mathematical problem-solving ability can be categorized as low. From this result, the conclusion is that undergraduate students of mathematics education in Palembang still have problems in solving well-structured mathematics problems.

  3. Performance metrics for the evaluation of hyperspectral chemical identification systems

    NASA Astrophysics Data System (ADS)

    Truslow, Eric; Golowich, Steven; Manolakis, Dimitris; Ingle, Vinay

    2016-02-01

    Remote sensing of chemical vapor plumes is a difficult but important task for many military and civilian applications. Hyperspectral sensors operating in the long-wave infrared regime have well-demonstrated detection capabilities. However, the identification of a plume's chemical constituents, based on a chemical library, is a multiple hypothesis testing problem which standard detection metrics do not fully describe. We propose using an additional performance metric for identification based on the so-called Dice index. Our approach partitions and weights a confusion matrix to develop both the standard detection metrics and identification metric. Using the proposed metrics, we demonstrate that the intuitive system design of a detector bank followed by an identifier is indeed justified when incorporating performance information beyond the standard detection metrics.
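
    The Dice index mentioned above is twice the overlap between the reported chemical set and the true plume constituents divided by the sum of the two set sizes. A minimal sketch of that computation follows; the chemical names are invented, and the partitioning and weighting of the confusion matrix described in the abstract are not reproduced.

        def dice_index(reported, truth):
            """Dice coefficient between the reported chemical set and the true plume constituents."""
            reported, truth = set(reported), set(truth)
            if not reported and not truth:
                return 1.0                    # nothing present and nothing reported counts as perfect
            return 2 * len(reported & truth) / (len(reported) + len(truth))

        # usage: the identifier finds two of three true constituents plus one false alarm
        print(dice_index({"SF6", "NH3", "acetone"}, {"SF6", "NH3", "TEP"}))   # 2*2/(3+3) = 0.667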

  4. Chemical and engineering properties of fired bricks containing 50 weight percent of class F fly ash

    USGS Publications Warehouse

    Chou, I.-Ming; Patel, V.; Laird, C.J.; Ho, K.K.

    2001-01-01

    The generation of fly ash during coal combustion represents a considerable solid waste disposal problem in the state of Illinois and nationwide. In fact, the majority of the three million tons of fly ash produced from burning Illinois bituminous coals is disposed of in landfills. The purpose of this study was to obtain a preliminary assessment of the technical feasibility of mitigating this solid waste problem by making fired bricks with the large volume of fly ash generated from burning Illinois coals. Test bricks were produced by the extrusion method with increasing amounts (20-50% by weight) of fly ash as a replacement for conventional raw materials. The chemical characteristics and engineering properties of the test bricks produced with and without 50 wt% of fly ash substitutions were analyzed and compared. The properties of the test bricks containing fly ash were at least comparable to, if not better than, those of standard test bricks made without fly ash and met the commercial specifications for fired bricks. The positive results of this study suggest that further study on test bricks with fly ash substitutions of greater than 50 wt% is warranted. Successful results could have an important impact in reducing the waste disposal problem related to class F fly ash while providing the brick industry with a new low cost raw material. Copyright © 2001 Taylor & Francis.

  5. High order methods for the integration of the Bateman equations and other problems of the form of y′ = F(y,t)y

    NASA Astrophysics Data System (ADS)

    Josey, C.; Forget, B.; Smith, K.

    2017-12-01

    This paper introduces two families of A-stable algorithms for the integration of y′ = F(y, t)y: the extended predictor-corrector (EPC) and the exponential-linear (EL) methods. The structure of the algorithm families is described, and the method of derivation of the coefficients presented. The new algorithms are then tested on a simple deterministic problem and a Monte Carlo isotopic evolution problem. The EPC family is shown to be only second order for systems of ODEs. However, the EPC-RK45 algorithm had the highest accuracy on the Monte Carlo test, requiring at least a factor of 2 fewer function evaluations to achieve a given accuracy than a second order predictor-corrector method (center extrapolation / center midpoint method) with regard to Gd-157 concentration. Members of the EL family can be derived to at least fourth order. The EL3 and EL4 algorithms presented are shown to be third and fourth order, respectively, on the systems-of-ODEs test. In the Monte Carlo test, these methods did not overtake the accuracy of the EPC methods before statistical uncertainty dominated the error. The statistical properties of the algorithms were also analyzed during the Monte Carlo problem. The new methods are shown to yield smaller standard deviations on final quantities than the reference predictor-corrector method, by up to a factor of 1.4.
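
    The EPC and EL coefficient families are not reproduced in the abstract, so the sketch below shows only the generic building block that exponential-type integrators for y′ = F(y, t)y share: advancing the state with a matrix exponential of the locally evaluated F. It is a first-order exponential step for illustration, assuming SciPy's expm; it is not the paper's third- or fourth-order schemes, and the toy decay matrix is invented.

        import numpy as np
        from scipy.linalg import expm

        def exponential_euler_step(F, y, t, dt):
            """One first-order step y_{n+1} = exp(dt * F(y_n, t_n)) @ y_n for y' = F(y, t) y."""
            return expm(dt * F(y, t)) @ y

        # usage on a toy two-nuclide chain (decay constants are invented)
        def F(y, t):
            return np.array([[-0.10, 0.00],
                             [ 0.10, -0.05]])   # parent decays into daughter, daughter decays further

        y, t, dt = np.array([1.0, 0.0]), 0.0, 1.0
        for _ in range(10):
            y = exponential_euler_step(F, y, t, dt)
            t += dt
        print(y)   # approximate inventories after 10 time units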

  6. Testing stellar evolution models with detached eclipsing binaries

    NASA Astrophysics Data System (ADS)

    Higl, J.; Weiss, A.

    2017-12-01

    Stellar evolution codes, as all other numerical tools, need to be verified. One of the standard stellar objects that allow stringent tests of stellar evolution theory and models, are detached eclipsing binaries. We have used 19 such objects to test our stellar evolution code, in order to see whether standard methods and assumptions suffice to reproduce the observed global properties. In this paper we concentrate on three effects that contain a specific uncertainty: atomic diffusion as used for standard solar model calculations, overshooting from convective regions, and a simple model for the effect of stellar spots on stellar radius, which is one of the possible solutions for the radius problem of M dwarfs. We find that in general old systems need diffusion to allow for, or at least improve, an acceptable fit, and that systems with convective cores indeed need overshooting. Only one system (AI Phe) requires the absence of it for a successful fit. To match stellar radii for very low-mass stars, the spot model proved to be an effective approach, but depending on model details, requires a high percentage of the surface being covered by spots. We briefly discuss improvements needed to further reduce the freedom in modelling and to allow an even more restrictive test by using these objects.

  7. Problems in diagnosis and treatment of tuberculosis infection.

    PubMed

    Tsara, V; Serasli, E; Christaki, P

    2009-01-01

    Tuberculosis is still a major health problem in industrialized countries due to specific socioeconomic factors, and there is a growing need for new, rapid, and accurate diagnostic methods in order to achieve higher sensitivity and specificity compared to the traditional methods of microscopic sputum examination and culture. Such methods, recently introduced, are nucleic acid amplification (NAA) tests, used directly on clinical specimens, and blood tests (QuantiFERON-TB, T-SPOT.TB test) measuring the IFN-gamma released by stimulated T cells. Furthermore, new drugs for the disease need to be developed, aiming at better treatment results and at prevention of Multiple Drug Resistance (MDR) cases. Critical aspects in the management of drug-resistant cases should be the careful choice of the drug combination and the close follow-up of the patients, alongside the patients' adherence to therapy. The role of national and international tuberculosis programs is invaluable in TB control and therapy, as is the collaboration of all the health system departments. However, most of the clinical problems that may arise are addressed by the International Standards for Tuberculosis Care (ISTC), and these guidelines should be taken into consideration, at least until future research provides more promising diagnostic and therapeutic modalities for control of the disease.

  8. TRAC posttest calculations of Semiscale Test S-06-3. [PWR

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ireland, J.R.; Bleiweis, P.B.

    A comparison of Transient Reactor Analysis Code (TRAC) steady-state and transient results with Semiscale Test S-06-3 (US Standard Problem 8) experimental data is discussed. The TRAC model used employs fewer mesh cells than normal data comparison models so that TRAC's ability to obtain reasonable results with less computer time can be assessed. In general, the TRAC results are in good agreement with the data and the major phenomena found in the experiment are reproduced by the code with a substantial reduction in computing times.

  9. Do post-trauma symptoms mediate the relation between neurobiological stress parameters and conduct problems in girls?

    PubMed

    Babel, Kimberly A; Jambroes, Tijs; Oostermeijer, Sanne; van de Ven, Peter M; Popma, Arne; Vermeiren, Robert R J M; Doreleijers, Theo A H; Jansen, Lucres M C

    2016-01-01

    Attenuated activity of stress-regulating systems has consistently been reported in boys with conduct problems. Results in studies of girls are inconsistent, which may result from the high prevalence of comorbid post-trauma symptoms. Therefore, the aim of the present study is to investigate post-trauma symptoms as a potential mediator in the relation between the functioning of stress-regulation systems and conduct problems in female adolescents. The sample consisted of 78 female adolescents (mean age 15.4; SD 1.1) admitted to a closed treatment institution. The diagnosis of disruptive behaviour disorder (DBD) was assessed by a structured interview, the diagnostic interview schedule for children version IV (DISC-IV). To assess post-trauma symptoms and externalizing behaviour problems, the self-report questionnaires youth self report (YSR) and the trauma symptom checklist for children (TSCC) were used. The cortisol awakening response (CAR) measured hypothalamic-pituitary-adrenal (HPA) axis activity, whereas autonomic nervous system (ANS) activity was assessed by heart rate (HR), pre-ejection period (PEP) and respiratory sinus arrhythmia (RSA). Independent t-tests were used to compare girls with and without DBD, while path analyses tested for the mediating role of post-trauma symptoms in the relation between stress-regulating systems and externalizing behaviour. Females with DBD (n = 37) reported significantly higher rates of post-trauma symptoms and externalizing behaviour problems than girls without DBD (n = 39). Path analysis found no relation between CAR and externalizing behaviour problems. With regard to ANS activity, positive direct effects on externalizing behaviour problems were present for HR (standardized β = 0.306, p = 0.020) and PEP (standardized β = -0.323, p = 0.031), though not for RSA. Furthermore, no direct or indirect relation could be determined for post-trauma symptoms. The present findings demonstrate that the neurobiological characteristics of female externalizing behaviour differ from those of males, since girls showed heightened instead of attenuated ANS activity. While the prevalence of post-trauma symptoms was high in girls with DBD, it did not mediate the relation between stress parameters and externalizing behaviour. Clinical implications and future directions are discussed.

  10. Case finding of lifestyle and mental health disorders in primary care: validation of the ‘CHAT’ tool

    PubMed Central

    Goodyear-Smith, Felicity; Coupe, Nicole M; Arroll, Bruce; Elley, C Raina; Sullivan, Sean; McGill, Anne-Thea

    2008-01-01

    Background: Primary care is accessible and ideally placed for case finding of patients with lifestyle and mental health risk factors and subsequent intervention. The short self-administered Case-finding and Help Assessment Tool (CHAT) was developed for lifestyle and mental health assessment of adult patients in primary health care. This tool checks for tobacco use, alcohol and other drug misuse, problem gambling, depression, anxiety and stress, abuse, anger problems, inactivity, and eating disorders. It is well accepted by patients, GPs and nurses. Aim: To assess criterion-based validity of CHAT against a composite gold standard. Design of study: Conducted according to the Standards for Reporting of Diagnostic Accuracy statement for diagnostic tests. Setting: Primary care practices in Auckland, New Zealand. Method: One thousand consecutive adult patients completed CHAT and a composite gold standard. Sensitivities, specificities, positive and negative predictive values, and likelihood ratios were calculated. Results: Response rates for each item ranged from 79.6 to 99.8%. CHAT was sensitive and specific for almost all issues screened, except exercise and eating disorders. Sensitivity ranged from 96% (95% confidence interval [CI] = 87 to 99%) for major depression to 26% (95% CI = 22 to 30%) for exercise. Specificity ranged from 97% (95% CI = 96 to 98%) for problem gambling and problem drug use to 40% (95% CI = 36 to 45%) for exercise. All had high likelihood ratios (3–30), except exercise and eating disorders. Conclusion: CHAT is a valid and acceptable case-finding tool for most common lifestyle and mental health conditions. PMID:18186993
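
    For readers unfamiliar with the reported quantities, sensitivity, specificity, predictive values, and the positive likelihood ratio all derive from the 2x2 table of CHAT responses against the composite gold standard. A generic sketch follows; the counts in the usage line are invented and are not the study's data.

        def screening_metrics(tp, fp, fn, tn):
            """Standard case-finding metrics from a 2x2 table of test result versus gold standard."""
            sensitivity = tp / (tp + fn)
            specificity = tn / (tn + fp)
            return {
                "sensitivity": sensitivity,
                "specificity": specificity,
                "ppv": tp / (tp + fp),
                "npv": tn / (tn + fn),
                "lr_positive": sensitivity / (1 - specificity),
            }

        # usage with invented counts for one hypothetical CHAT item
        print(screening_metrics(tp=48, fp=30, fn=2, tn=920))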

  11. Beta-Test Data On An Assessment Of Textbook Problem Solving Ability: An Argument For Right/Wrong Grading?

    NASA Astrophysics Data System (ADS)

    Cummings, Karen; Marx, Jeffrey D.

    2010-10-01

    We have developed an assessment of students' ability to solve standard textbook style problems and are currently engaged in the validation and revision process. The assessment covers the topics of force and motion, conservation of momentum and conservation of energy at a level consistent with most calculus-based, introductory physics courses. This tool is discussed in more detail in an accompanying paper by Marx and Cummings. [1] Here we present preliminary beta-test data collected at four schools during the 2009/2010 academic year. Data include both pre- and post-instruction results for introductory physics courses as well as results for physics majors in later years. In addition, we present evidence that right/wrong grading may well be a perfectly acceptable grading procedure for a course-level assessment of this type.

  12. Perform qualify reliability-power tests by shooting common mistakes: practical problems and standard answers per Telcordia/Bellcore requests

    NASA Astrophysics Data System (ADS)

    Yu, Zheng

    2002-08-01

    Facing the new demands of the optical fiber communications market, almost all of the performance and reliability of an optical network system depends on the qualification of its fiber optic components. To comply with system requirements, Telcordia/Bellcore reliability and high-power testing has therefore become the key issue for fiber optic component manufacturers. Qualification under Telcordia/Bellcore reliability or high-power testing is crucial for manufacturers, since it determines who stands out in an intensely competitive market. The tests themselves also need maintenance and optimization, and a way is needed to reach the 'triple-win' goal expected by component makers, reliability testers and system users. For those facing practical problems in testing, the following seven topics deal with how to avoid the common mistakes and perform qualified reliability and high-power testing: (1) qualification maintenance requirements for reliability testing; (2) lot control in preparing for reliability testing; (3) sample selection for reliability testing; (4) interim measurements during reliability testing; (5) basic reference factors relating to high-power testing; (6) the necessity of re-qualification testing when production changes; (7) understanding the similarity for a product family by the definitions.

  13. Advanced Computational Methods for Security Constrained Financial Transmission Rights

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Kalsi, Karanjit; Elbert, Stephen T.; Vlachopoulou, Maria

    Financial Transmission Rights (FTRs) are financial insurance tools to help power market participants reduce price risks associated with transmission congestion. FTRs are issued based on a process of solving a constrained optimization problem with the objective of maximizing the FTR social welfare under power flow security constraints. Security constraints for different FTR categories (monthly, seasonal or annual) are usually coupled, and the number of constraints increases exponentially with the number of categories. Commercial software for FTR calculation can only provide limited categories of FTRs due to the inherent computational challenges mentioned above. In this paper, first an innovative mathematical reformulation of the FTR problem is presented which dramatically improves the computational efficiency of the optimization problem. After having re-formulated the problem, a novel non-linear dynamic system (NDS) approach is proposed to solve the optimization problem. The new formulation and the performance of the NDS solver are benchmarked against widely used linear programming (LP) solvers such as CPLEX™ and tested on both standard IEEE test systems and large-scale systems using data from the Western Electricity Coordinating Council (WECC). The performance of the NDS is demonstrated to be comparable to, and in some cases to outperform, the widely used CPLEX algorithms. The proposed formulation and NDS-based solver are also easily parallelizable, enabling further computational improvement.
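
    As a point of reference for the LP benchmark mentioned above, the following toy formulation awards FTRs by maximizing bid value subject to line-flow limits expressed through power transfer distribution factors, solved with scipy.optimize.linprog. The three-bid, two-constraint network and all numbers are invented, and the NDS solver itself is not reproduced.

        import numpy as np
        from scipy.optimize import linprog

        # Toy FTR auction: maximize sum(bid_price * awarded_MW) subject to |PTDF @ x| <= line limits.
        bid_price = np.array([12.0, 8.0, 5.0])       # $/MW offered per FTR bid (invented)
        bid_mw = np.array([100.0, 150.0, 80.0])      # MW requested per bid (upper bounds)
        ptdf = np.array([[0.4, -0.3, 0.2],           # power transfer distribution factors (invented)
                         [0.1, 0.5, -0.4]])
        line_limit = np.array([60.0, 70.0])

        # linprog minimizes, so negate the objective; enforce the flow limits in both directions.
        res = linprog(c=-bid_price,
                      A_ub=np.vstack([ptdf, -ptdf]),
                      b_ub=np.concatenate([line_limit, line_limit]),
                      bounds=[(0.0, b) for b in bid_mw],
                      method="highs")
        print(res.x, -res.fun)   # awarded MW per bid and the resulting FTR social welfare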

  14. Highly Parallel Alternating Directions Algorithm for Time Dependent Problems

    NASA Astrophysics Data System (ADS)

    Ganzha, M.; Georgiev, K.; Lirkov, I.; Margenov, S.; Paprzycki, M.

    2011-11-01

    In our work, we consider the time dependent Stokes equation on a finite time interval and on a uniform rectangular mesh, written in terms of velocity and pressure. For this problem, a parallel algorithm based on a novel direction splitting approach is developed. Here, the pressure equation is derived from a perturbed form of the continuity equation, in which the incompressibility constraint is penalized in a negative norm induced by the direction splitting. The scheme used in the algorithm is composed of two parts: (i) velocity prediction, and (ii) pressure correction. This is a Crank-Nicolson-type two-stage time integration scheme for two and three dimensional parabolic problems in which the second-order derivative with respect to each space variable is treated implicitly while the other variable is made explicit at each time sub-step. In order to achieve good parallel performance, the solution of the Poisson problem for the pressure correction is replaced by solving a sequence of one-dimensional second order elliptic boundary value problems in each spatial direction. The parallel code is implemented using standard MPI functions and tested on two modern parallel computer systems. The performed numerical tests demonstrate a good level of parallel efficiency and scalability of the studied direction-splitting-based algorithm.
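
    The computational pattern behind direction splitting is replacing one multidimensional implicit solve with independent one-dimensional tridiagonal solves, one family per spatial direction. The sketch below shows that pattern for a heat-equation-like operator using SciPy's banded solver; it illustrates the splitting idea only and is not the paper's Stokes pressure-correction algorithm, and the grid, time step, and boundary treatment are assumptions.

        import numpy as np
        from scipy.linalg import solve_banded

        def banded_operator(n, dt, h):
            """Banded storage of the tridiagonal matrix (I - dt/2 * D_xx) for an interior 1D Laplacian."""
            r = dt / (2.0 * h * h)
            ab = np.zeros((3, n))
            ab[0, 1:] = -r             # super-diagonal
            ab[1, :] = 1.0 + 2.0 * r   # main diagonal
            ab[2, :-1] = -r            # sub-diagonal
            return ab

        def split_solve(u, dt, h):
            """Apply (I - dt/2*D_xx)^(-1) and then (I - dt/2*D_yy)^(-1) via independent 1D banded solves."""
            ab_x = banded_operator(u.shape[0], dt, h)
            ab_y = banded_operator(u.shape[1], dt, h)
            half = np.empty_like(u)
            for j in range(u.shape[1]):     # x-direction sweeps: one tridiagonal solve per column
                half[:, j] = solve_banded((1, 1), ab_x, u[:, j])
            out = np.empty_like(u)
            for i in range(u.shape[0]):     # y-direction sweeps: one tridiagonal solve per row
                out[i, :] = solve_banded((1, 1), ab_y, half[i, :])
            return out

        # each family of 1D solves is independent, which is what an MPI implementation can exploit
        u = np.random.rand(64, 64)
        print(split_solve(u, dt=1e-3, h=1.0 / 65.0).shape)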

  15. High School Class for Gifted Pupils in Physics and Sciences and Pupils' Skills Measured by Standard and Pisa Test

    NASA Astrophysics Data System (ADS)

    Djordjevic, G. S.; Pavlovic-Babic, D.

    2010-01-01

    The "High school class for students with special abilities in physics" was founded in Nis, Serbia (www.pmf.ni.ac.yu/f_odeljenje) in 2003. The basic aim of this project has been introducing a broadened curriculum of physics, mathematics, computer science, as well as chemistry and biology. Now, six years after establishing of this specialized class, and 3 years after the previous report, we present analyses of the pupils' skills in solving rather problem oriented test, as PISA test, and compare their results with the results of pupils who study under standard curricula. More precisely results are compared to the progress results of the pupils in a standard Grammar School and the corresponding classes of the Mathematical Gymnasiums in Nis. Analysis of achievement data should clarify what are benefits of introducing in school system track for gifted students. Additionally, item analysis helps in understanding and improvement of learning strategies' efficacy. We make some conclusions and remarks that may be useful for the future work that aims to increase pupils' intrinsic and instrumental motivation for physics and sciences, as well as to increase the efficacy of teaching physics and science.

  16. A new exact and more powerful unconditional test of no treatment effect from binary matched pairs.

    PubMed

    Lloyd, Chris J

    2008-09-01

    We consider the problem of testing for a difference in the probability of success from matched binary pairs. Starting with three standard inexact tests, the nuisance parameter is first estimated and then the residual dependence is eliminated by maximization, producing what I call an E+M P-value. The E+M P-value based on McNemar's statistic is shown numerically to dominate previous suggestions, including partially maximized P-values as described in Berger and Sidik (2003, Statistical Methods in Medical Research 12, 91-108). The latter method, however, may have computational advantages for large samples.
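
    For reference, a minimal sketch of the standard exact (conditional) McNemar test for matched binary pairs, the baseline against which constructions like the E+M P-value are compared; it is not the E+M procedure itself, which maximizes over the nuisance parameter rather than conditioning. The discordant-pair counts are hypothetical.

```python
# Exact (conditional) McNemar test for matched binary pairs; counts are hypothetical.
from scipy.stats import binom

b, c = 9, 3          # discordant pairs: (success, failure) and (failure, success)
n = b + c

# Under H0 (no treatment effect) each discordant pair is success/failure with
# probability 1/2, so b ~ Binomial(n, 0.5); two-sided exact P-value:
p_two_sided = min(1.0, 2.0 * min(binom.cdf(b, n, 0.5), binom.sf(b - 1, n, 0.5)))
print("exact McNemar P-value:", round(p_two_sided, 4))
```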

  17. A multicenter study to standardize reporting and analyses of fluorescence-activated cell-sorted murine intestinal epithelial cells

    PubMed Central

    Magness, Scott T.; Puthoff, Brent J.; Crissey, Mary Ann; Dunn, James; Henning, Susan J.; Houchen, Courtney; Kaddis, John S.; Kuo, Calvin J.; Li, Linheng; Lynch, John; Martin, Martin G.; May, Randal; Niland, Joyce C.; Olack, Barbara; Qian, Dajun; Stelzner, Matthias; Swain, John R.; Wang, Fengchao; Wang, Jiafang; Wang, Xinwei; Yan, Kelley; Yu, Jian

    2013-01-01

    Fluorescence-activated cell sorting (FACS) is an essential tool for studies requiring isolation of distinct intestinal epithelial cell populations. Inconsistent or lack of reporting of the critical parameters associated with FACS methodologies has complicated interpretation, comparison, and reproduction of important findings. To address this problem, a comprehensive multicenter study was designed to develop guidelines that limit experimental and data reporting variability and provide a foundation for accurate comparison of data between studies. Common methodologies and data reporting protocols for tissue dissociation, cell yield, cell viability, FACS, and postsort purity were established. Seven centers tested the standardized methods by FACS-isolating a specific crypt-based epithelial population (EpCAM+/CD44+) from murine small intestine. Genetic biomarkers for stem/progenitor (Lgr5 and Atoh1) and differentiated cell lineages (lysozyme, mucin2, chromogranin A, and sucrase isomaltase) were interrogated in target and control populations to assess intra- and intercenter variability. Wilcoxon's rank sum test on gene expression levels showed limited intracenter variability between biological replicates. Principal component analysis demonstrated significant intercenter reproducibility among four centers. Analysis of data collected by standardized cell isolation methods and data reporting requirements readily identified methodological problems, indicating that standard reporting parameters facilitate post hoc error identification. These results indicate that the complexity of FACS isolation of target intestinal epithelial populations can be highly reproducible between biological replicates and different institutions by adherence to common cell isolation methods and FACS gating strategies. This study can be considered a foundation for continued method development and a starting point for investigators that are developing cell isolation expertise to study physiology and pathophysiology of the intestinal epithelium. PMID:23928185
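
    A minimal sketch of the kind of variability analysis described above, run on hypothetical log-expression values for a handful of marker genes: a Wilcoxon rank-sum test between two biological replicates within one center, and a PCA (via SVD) of per-center mean profiles. It is not the study's actual pipeline, and all numbers are made up.

```python
# Intracenter rank-sum check and intercenter PCA on hypothetical expression data.
import numpy as np
from scipy.stats import ranksums

rng = np.random.default_rng(0)
n_genes = 6                                   # stand-ins for the marker genes
replicate_a = rng.normal(5.0, 0.3, n_genes)   # log expression, biological replicate 1
replicate_b = rng.normal(5.0, 0.3, n_genes)   # log expression, biological replicate 2

stat, p = ranksums(replicate_a, replicate_b)
print(f"intracenter rank-sum p-value: {p:.3f}")   # a large p suggests limited variability

# One mean expression profile per center (seven centers, as in the study design).
centers = rng.normal(5.0, 0.3, size=(7, n_genes))
X = centers - centers.mean(axis=0)
U, S, _ = np.linalg.svd(X, full_matrices=False)
scores = U[:, :2] * S[:2]                     # per-center scores on PC1/PC2
print("per-center PC1/PC2 scores:\n", np.round(scores, 3))
```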

  18. UV scale calibration transfer from an improved pyroelectric detector standard to field UV-A meters and 365 nm excitation sources

    NASA Astrophysics Data System (ADS)

    Eppeldauer, G. P.; Podobedov, V. B.; Cooksey, C. C.

    2017-05-01

    Calibration of the emitted radiation from UV sources peaking at 365 nm is necessary to achieve the ASTM-required 1 mW/cm2 minimum irradiance in certain military material (ships, airplanes, etc.) tests. These UV "black lights" are used for crack recognition in fluorescent liquid penetrant inspection. At present, these nondestructive tests are performed using Hg lamps. The lack of a proper standard and the different spectral responsivities of the available UV meters cause significant measurement errors even when the same UV-365 source is measured. A pyroelectric radiometer standard with spectrally flat (constant) response in the UV-VIS range has been developed to solve the problem. The response curve of this standard, determined from spectral reflectance measurements, is converted into spectral irradiance responsivity with <0.5% (k=2) uncertainty as a result of using an absolute tie point from a Si-trap detector traceable to the primary standard cryogenic radiometer. The flat pyroelectric radiometer standard can be used to perform uniform integrated irradiance measurements of all kinds of UV sources (with different peaks and distributions) without using any source standard. Using this broadband calibration method, yearly spectral calibrations of the reference UV (LED) sources and irradiance meters are not needed. Field UV sources and meters can be calibrated against the pyroelectric radiometer standard for broadband (integrated) irradiance and integrated responsivity. Using the broadband measurement procedure, UV measurements give uniform results with significantly decreased uncertainties.
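
    A minimal numerical sketch of the broadband (integrated) irradiance idea: with a spectrally flat responsivity, the detector signal is proportional to the integral of spectral irradiance over wavelength, independent of the source's peak or shape. The Gaussian "365 nm LED" spectrum and all numbers below are hypothetical.

```python
# Integrated irradiance of a hypothetical 365 nm source, checked against the
# ASTM 1 mW/cm^2 minimum mentioned above.
import numpy as np

wavelength = np.arange(330.0, 401.0, 1.0)        # nm
peak, width = 365.0, 9.0                         # nm (hypothetical LED spectrum)
spectral_irradiance = 1.2e-4 * np.exp(-0.5 * ((wavelength - peak) / width) ** 2)
# units: W cm^-2 nm^-1 (hypothetical)

# Trapezoidal integration over wavelength gives the broadband irradiance.
integrated = np.sum(0.5 * (spectral_irradiance[1:] + spectral_irradiance[:-1])
                    * np.diff(wavelength))       # W/cm^2
print(f"integrated irradiance: {integrated * 1e3:.2f} mW/cm^2")
print("meets 1 mW/cm^2 ASTM minimum:", integrated * 1e3 >= 1.0)
```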

  19. Astrophysical tests for radiative decay of neutrinos and fundamental physics implications

    NASA Technical Reports Server (NTRS)

    Stecker, F. W.; Brown, R. W.

    1981-01-01

    The radiative lifetime tau for the decay of massive neutrinos was calculated using various physical models for neutrino decay. The results were then related to the astrophysical problem of the detectability of the decay photons from cosmic neutrinos. Conversely, the astrophysical data were used to place lower limits on tau. These limits are all well below predicted values. However, an observed feature at approximately 1700 A in the ultraviolet background radiation at high galactic latitudes may be from the decay of neutrinos with mass approximately 14 eV. This would require a decay rate much larger than the predictions of standard models but could be indicative of a decay rate possible in composite models or other new physics. Thus an important test for substructure in leptons and quarks or other physics beyond the standard electroweak model may have been found.

  20. A Brief Assessment of Learning for Orphaned and Abandoned Children in Low and Middle Income Countries

    PubMed Central

    O’Donnell, Karen; Murphy, Robert; Ostermann, Jan; Masnick, Max; Whetten, Rachel A.; Madden, Elisabeth; Thielman, Nathan M.; Whetten, Kathryn

    2013-01-01

    Assessment of children’s learning and performance in low and middle income countries has been critiqued as lacking a gold standard, an appropriate norm reference group, and demonstrated applicability of assessment tasks to the context. This study was designed to examine the performance of three nonverbal and one adapted verbal measure of children’s problem solving, memory, motivation, and attention across five culturally diverse sites. The goal was to evaluate the tests as indicators of individual differences affected by life events and care circumstances for vulnerable children. We conclude that the measures can be successfully employed with fidelity in non-standard settings in LMICs, and are associated with child age and educational experience across the settings. The tests can be useful in evaluating variability in vulnerable child outcomes. PMID:21538088

  1. Improvements to the ejector expansion refrigeration cycle

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Menegay, P.; Kornhauser, A.A.

    1996-12-31

    The ejector expansion refrigeration cycle (EERC) is a variant of the standard vapor compression cycle in which an ejector is used to recover part of the work that would otherwise be lost in the expansion valve. In initial testing EERC performance was poor, mainly due to thermodynamic non-equilibrium conditions in the ejector motive nozzle. Modifications were made to correct this problem, and significant performance improvements were found.

  2. Fire Technology Abstracts, volume 4, issue 1, August, 1981

    NASA Astrophysics Data System (ADS)

    Holtschlag, L. J.; Kuvshinoff, B. W.; Jernigan, J. B.

    This bibliography contains over 400 citations with abstracts addressing various aspects of fire technology. Subjects cover the dynamics of fire, behavior and properties of materials, fire modeling and test burns, fire protection, fire safety, fire service organization, apparatus and equipment, fire prevention, suppression, planning, human behavior, medical problems, codes and standards, hazard identification, safe handling of materials, insurance, economics of loss and prevention, and more.

  3. Experiments to Generate New Data about School Choice: Commentary on "Defining Continuous Improvement and Cost Minimization Possibilities through School Choice Experiments" and Merrifield's Reply

    ERIC Educational Resources Information Center

    Berg, Nathan; Merrifield, John

    2009-01-01

    Benefiting from new data provided by experimental economists, behavioral economics is now moving beyond empirical tests of standard behavioral assumptions to the problem of designing improved institutions that are tuned to fit real-world behavior. It is therefore worthwhile to consider the potential for new experiments to advance school choice…

  4. Evaluation of Advanced Stirling Convertor Net Heat Input Correlation Methods Using a Thermal Standard

    NASA Technical Reports Server (NTRS)

    Briggs, Maxwell; Schifer, Nicholas

    2011-01-01

    Test hardware was used to validate net heat prediction models. Problem: net heat input cannot be measured directly during operation, yet it is a key parameter in predicting convertor efficiency. Efficiency = electrical power output (measured) divided by net heat input (calculated). Efficiency is used to compare convertor designs and to trade technology advantages for mission planning.

  5. Concepts for on-board satellite image registration. Volume 2: IAS prototype performance evaluation standard definition

    NASA Astrophysics Data System (ADS)

    Daluge, D. R.; Ruedger, W. H.

    1981-06-01

    Problems encountered in testing onboard signal processing hardware designed to achieve radiometric and geometric correction of satellite imaging data are considered. These include obtaining representative image and ancillary data for simulation and the transfer and storage of a large quantity of image data at very high speed. The high resolution, high speed preprocessing of LANDSAT-D imagery is considered.

  6. To Determine the Percentage of Copper in a Brass Screw Using a Computer Interface with a Colorimeter.

    ERIC Educational Resources Information Center

    Horgan, Joan; Hedge, Robyn

    1997-01-01

    Year 11 students investigated the real-world problem of whether screws are really brass. It allowed them to use the colorimeter and computer interface in a way that was easily understood and modeled normal practice in testing laboratories. Screws were dissolved in nitric acid and their absorbance of red light was compared with a standard curve.…
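
    A minimal sketch of the standard-curve step described here: fit absorbance against copper concentration for known standards (assuming Beer-Lambert linearity), then read off the unknown screw solution. All concentrations and absorbance readings are hypothetical.

```python
# Linear standard-curve calibration for a colorimetric copper determination.
import numpy as np

conc_standards = np.array([0.00, 0.02, 0.04, 0.06, 0.08])   # mol/L Cu2+ standards
absorbance     = np.array([0.00, 0.11, 0.23, 0.34, 0.46])   # absorbance of red light

slope, intercept = np.polyfit(conc_standards, absorbance, 1)  # fit A = m*c + b

abs_unknown = 0.29                                            # screw solution reading
conc_unknown = (abs_unknown - intercept) / slope
print(f"estimated Cu concentration: {conc_unknown:.3f} mol/L")

# From this concentration, the solution volume, and the screw mass one would
# then work back to percent copper in the screw (depends on the dilution used).
```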

  7. Problem Space Matters: Evaluation of a German Enrichment Program for Gifted Children.

    PubMed

    Welter, Marisete M; Jaarsveld, Saskia; Lachmann, Thomas

    2018-01-01

    We studied the development of cognitive abilities related to intelligence and creativity ( N = 48, 6-10 years old), using a longitudinal design (over one school year), in order to evaluate an Enrichment Program for gifted primary school children initiated by the government of the German federal state of Rhineland-Palatinate ( Entdeckertag Rheinland Pfalz , Germany; ET; Day of Discoverers). A group of German primary school children ( N = 24), identified earlier as intellectually gifted and selected to join the ET program was compared to a gender-, class- and IQ- matched group of control children that did not participate in this program. All participants performed the Standard Progressive Matrices (SPM) test, which measures intelligence in well-defined problem space; the Creative Reasoning Task (CRT), which measures intelligence in ill-defined problem space; and the test of creative thinking-drawing production (TCT-DP), which measures creativity, also in ill-defined problem space. Results revealed that problem space matters: the ET program is effective only for the improvement of intelligence operating in well-defined problem space. An effect was found for intelligence as measured by SPM only, but neither for intelligence operating in ill-defined problem space (CRT) nor for creativity (TCT-DP). This suggests that, depending on the type of problem spaces presented, different cognitive abilities are elicited in the same child. Therefore, enrichment programs for gifted, but also for children attending traditional schools, should provide opportunities to develop cognitive abilities related to intelligence, operating in both well- and ill-defined problem spaces, and to creativity, in parallel, using an interactive approach.

  8. Problem Space Matters: Evaluation of a German Enrichment Program for Gifted Children

    PubMed Central

    Welter, Marisete M.; Jaarsveld, Saskia; Lachmann, Thomas

    2018-01-01

    We studied the development of cognitive abilities related to intelligence and creativity (N = 48, 6–10 years old), using a longitudinal design (over one school year), in order to evaluate an Enrichment Program for gifted primary school children initiated by the government of the German federal state of Rhineland-Palatinate (Entdeckertag Rheinland Pfalz, Germany; ET; Day of Discoverers). A group of German primary school children (N = 24), identified earlier as intellectually gifted and selected to join the ET program was compared to a gender-, class- and IQ- matched group of control children that did not participate in this program. All participants performed the Standard Progressive Matrices (SPM) test, which measures intelligence in well-defined problem space; the Creative Reasoning Task (CRT), which measures intelligence in ill-defined problem space; and the test of creative thinking-drawing production (TCT-DP), which measures creativity, also in ill-defined problem space. Results revealed that problem space matters: the ET program is effective only for the improvement of intelligence operating in well-defined problem space. An effect was found for intelligence as measured by SPM only, but neither for intelligence operating in ill-defined problem space (CRT) nor for creativity (TCT-DP). This suggests that, depending on the type of problem spaces presented, different cognitive abilities are elicited in the same child. Therefore, enrichment programs for gifted, but also for children attending traditional schools, should provide opportunities to develop cognitive abilities related to intelligence, operating in both well- and ill-defined problem spaces, and to creativity, in parallel, using an interactive approach. PMID:29740367

  9. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David.

    When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
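
    A minimal sketch of the kind of benchmarking described: time an open-source LP solver on a small feasible, bounded test problem. SciPy's HiGHS backend is used here purely as a stand-in; CLP, GLPK, lp_solve, MINOS, and CPLEX are not called, and the problem data are random toy values.

```python
# Time an open-source LP solver (SciPy/HiGHS) on a small random test problem.
import time
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(1)
m, n = 200, 400                          # constraints x variables (toy size)
A = rng.uniform(0.0, 1.0, size=(m, n))
b = A @ rng.uniform(0.0, 1.0, size=n)    # guarantees a feasible point in [0, 1]^n
c = rng.uniform(-1.0, 1.0, size=n)

t0 = time.perf_counter()
res = linprog(c, A_ub=A, b_ub=b, bounds=(0.0, 1.0), method="highs")
elapsed = time.perf_counter() - t0

print("status:", res.message)
print(f"objective: {res.fun:.4f}   solve time: {elapsed:.3f} s")
```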

  10. Applying Quantum Monte Carlo to the Electronic Structure Problem

    NASA Astrophysics Data System (ADS)

    Powell, Andrew D.; Dawes, Richard

    2016-06-01

    Two distinct types of Quantum Monte Carlo (QMC) calculations are applied to electronic structure problems such as calculating potential energy curves and producing benchmark values for reaction barriers. First, Variational and Diffusion Monte Carlo (VMC and DMC) methods using a trial wavefunction subject to the fixed node approximation were tested using the CASINO code.[1] Next, Full Configuration Interaction Quantum Monte Carlo (FCIQMC), along with its initiator extension (i-FCIQMC) were tested using the NECI code.[2] FCIQMC seeks the FCI energy for a specific basis set. At a reduced cost, the efficient i-FCIQMC method can be applied to systems in which the standard FCIQMC approach proves to be too costly. Since all of these methods are statistical approaches, uncertainties (error-bars) are introduced for each calculated energy. This study tests the performance of the methods relative to traditional quantum chemistry for some benchmark systems. References: [1] R. J. Needs et al., J. Phys.: Condensed Matter 22, 023201 (2010). [2] G. H. Booth et al., J. Chem. Phys. 131, 054106 (2009).
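
    A minimal variational Monte Carlo (VMC) sketch for the hydrogen atom with trial wavefunction psi(r) = exp(-alpha*r), using simple Metropolis sampling, to illustrate the statistical uncertainties (error bars) mentioned above. It is only a toy in atomic units, not the CASINO or NECI workflow, and the error bar is the naive standard error (autocorrelation ignored).

```python
# Toy VMC for hydrogen: Metropolis sampling of |psi|^2 with psi = exp(-alpha*r).
import numpy as np

def local_energy(r, alpha):
    """E_L = -alpha^2/2 + alpha/r - 1/r for psi = exp(-alpha*r) (atomic units)."""
    return -0.5 * alpha**2 + alpha / r - 1.0 / r

def vmc_energy(alpha, n_steps=50_000, step=0.5, seed=0):
    rng = np.random.default_rng(seed)
    x = np.array([1.0, 0.0, 0.0])                 # electron position (Bohr)
    energies = []
    for i in range(n_steps):
        x_new = x + rng.uniform(-step, step, 3)
        r, r_new = np.linalg.norm(x), np.linalg.norm(x_new)
        # Metropolis acceptance with probability |psi_new/psi_old|^2
        if rng.random() < np.exp(-2.0 * alpha * (r_new - r)):
            x, r = x_new, r_new
        if i > n_steps // 10:                     # discard equilibration steps
            energies.append(local_energy(r, alpha))
    e = np.array(energies)
    return e.mean(), e.std(ddof=1) / np.sqrt(e.size)   # energy and naive error bar

for alpha in (0.8, 1.0, 1.2):
    mean, err = vmc_energy(alpha)
    print(f"alpha={alpha:.1f}  E = {mean:.4f} +/- {err:.4f} Ha")
```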

  11. Fault Management Practice: A Roadmap for Improvement

    NASA Technical Reports Server (NTRS)

    Fesq, Lorraine M.; Oberhettinger, David

    2010-01-01

    Autonomous fault management (FM) is critical for deep space and planetary missions where the limited communication opportunities may prevent timely intervention by ground control. Evidence of pervasive architecture, design, and verification/validation problems with NASA FM engineering has been revealed both during technical reviews of spaceflight missions and in flight. These problems include FM design changes required late in the life-cycle, insufficient project insight into the extent of FM testing required, unexpected test results that require resolution, spacecraft operational limitations because certain functions were not tested, and in-flight anomalies and mission failures attributable to fault management. A recent NASA initiative has characterized the FM state-of-practice throughout the spacecraft development community and identified common NASA, DoD, and commercial concerns that can be addressed in the near term through the development of a FM Practitioner's Handbook and the formation of a FM Working Group. Initial efforts will focus on standardizing FM terminology, establishing engineering processes and tools, and training.

  12. Experimental testing and numerical simulation on natural composite for aerospace applications

    NASA Astrophysics Data System (ADS)

    Kumar, G. Raj; Vijayanandh, R.; Kumar, M. Senthil; Kumar, S. Sathish

    2018-05-01

    Nowadays polymers are commonly used in various applications, which makes it difficult to avoid their usage even though they cause environmental problems. Natural fibers are the best alternative for overcoming these polymer-related environmental issues. Natural fibers play an important role in developing high-performing, fully biodegradable green composites, which will be a key material for solving environmental problems in the future. This paper deals with the property analysis of banana fiber combined with epoxy resin to create a natural composite that has special characteristics for aerospace applications. The objective of this paper is to investigate the characteristics of the failure modes and the strength of the natural composite using experimental and numerical methods. Test specimens of the natural composite were fabricated as per ASTM standards and subjected to tensile and compression tests using a Tinius Olsen UTM in order to determine mechanical and physical properties. The reference model was designed in CATIA, and numerical simulation was then carried out in Ansys Workbench 16.2 for the given boundary conditions.

  13. Heavy drinking is associated with deficient response inhibition in women but not in men.

    PubMed

    Nederkoorn, Chantal; Baltus, Marcus; Guerrieri, Ramona; Wiers, Reinout W

    2009-09-01

    Poor response inhibition has been associated with a wide range of problem behaviours, including addictive behaviours, and could represent a general vulnerability factor. Standard tests of response inhibition have used neutral stimuli. Here we tested whether a deficit in response inhibition in heavy drinkers would be stronger for stimuli related to their problem (alcohol) or not. Response inhibition was assessed with a stop signal task, using four classes of pictures: alcohol-related, soft drinks, erotic (control appetitive categories) and neutral pictures. Participants were 32 heavy and 32 light drinkers. Equal numbers of men and women were tested in both drinking groups, in view of recent studies reporting that response disinhibition may be most pronounced in heavy drinking women. The main results were, first, that no domain-specific differences in response inhibition were found in either group. Second, heavy drinking females showed stronger response inhibition deficits than the other groups. Results are discussed in light of a possible gender difference in response inhibition as a risk factor for addictive behaviours.

  14. [Methodologic and clinical comparison of four different ergospirometry systems].

    PubMed

    Winter, U J; Fritsch, J; Gitt, A K; Pothoff, G; Berge, P G; Hilger, H H

    1994-01-01

    Clinicians who use cardio-pulmonary exercise testing (CPX) systems rely on the technical information provided by the device producers. In this paper, the practicability, accuracy and safety of four different commercially available CPX systems are compared in the clinical setting, using clinically oriented criteria. The exercise tests were performed in healthy subjects, in patients with cardiac and/or pulmonary disease, and in young and old people. The comparison study showed that there were in some cases large differences in device design and measurement accuracy. Furthermore, our investigation demonstrated that, in addition to repeated calibrations of the CPX systems, frequent validation of the devices by means of a metabolic simulator is necessary. Problems in calibration can be caused by inadequate performance or by impure calibration gases. Problems in validation can be due to incompatibility between the CPX device and the validator. The comparison of the four different systems showed that standards for CPX testing should be defined in the future.

  15. The role of competing knowledge structures in undermining learning: Newton's second and third laws

    NASA Astrophysics Data System (ADS)

    Low, David J.; Wilson, Kate F.

    2017-01-01

    We investigate the development of student understanding of Newton's laws using a pre-instruction test (the Force Concept Inventory), followed by a series of post-instruction tests and interviews. While some students' somewhat naive, pre-existing models of Newton's third law are largely eliminated following a semester of teaching, we find that a particular inconsistent model is highly resilient to, and may even be strengthened by, instruction. If test items contain words that cue students to think of Newton's second law, then students are more likely to apply a "net force" approach to solving problems, even if it is inappropriate to do so. Additional instruction, reinforcing physical concepts in multiple settings and from multiple sources, appears to help students develop a more connected and consistent level of understanding. We recommend explicitly encouraging students to check their work for consistency with physical principles, along with the standard checks for dimensionality and order of magnitude, to encourage reflective and rigorous problem solving.

  16. Implementation and effect of life space crisis intervention in special schools with residential treatment for students with emotional and behavioral disorders (EBD).

    PubMed

    D'Oosterlinck, Franky; Goethals, Ilse; Broekaert, Eric; Schuyten, Gilberte; De Maeyer, Jessica

    2008-03-01

    The increase of violence in present-day society calls for adequate crisis interventions for students with behavioral problems. Life Space Crisis Intervention (LSCI) is a systematic and formatted response to a student's crisis, based on cognitive, behavioral, psychodynamic and developmental theory. This article evaluates an LSCI program with students referred to special schools with residential treatment because of severe behavioral problems. The evaluation was conducted using a quasi-experimental pretest-posttest control group design. Thirty-one matched pairs of students were pretested before the interventions started and posttested after a period of 11 months. Five standardized questionnaires were used to assess the effectiveness of the LSCI program. A General Linear Model (GLM) with repeated measures was used to analyze all data. For the total group of subjects (n = 62), it was found that students' perception of their athletic competence decreased significantly after 11 months in residential care. A positive effect of LSCI was found on direct aggression and social desirability.

  17. Firefly algorithm for cardinality constrained mean-variance portfolio optimization problem with entropy diversity constraint.

    PubMed

    Bacanin, Nebojsa; Tuba, Milan

    2014-01-01

    Portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved the results.

  18. Firefly Algorithm for Cardinality Constrained Mean-Variance Portfolio Optimization Problem with Entropy Diversity Constraint

    PubMed Central

    2014-01-01

    Portfolio optimization (selection) problem is an important and hard optimization problem that, with the addition of necessary realistic constraints, becomes computationally intractable. Nature-inspired metaheuristics are appropriate for solving such problems; however, a literature review shows that there are very few applications of nature-inspired metaheuristics to the portfolio optimization problem. This is especially true for swarm intelligence algorithms, which represent the newer branch of nature-inspired algorithms. No application of any swarm intelligence metaheuristic to the cardinality constrained mean-variance (CCMV) portfolio problem with entropy constraint was found in the literature. This paper introduces a modified firefly algorithm (FA) for the CCMV portfolio model with entropy constraint. The firefly algorithm is one of the latest and most successful swarm intelligence algorithms; however, it exhibits some deficiencies when applied to constrained problems. To overcome the lack of exploration power during early iterations, we modified the algorithm and tested it on standard portfolio benchmark data sets used in the literature. Our proposed modified firefly algorithm proved to be better than other state-of-the-art algorithms, while the introduction of the entropy diversity constraint further improved the results. PMID:24991645
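
    A minimal sketch of the basic (unmodified) firefly update applied to a simple unconstrained mean-variance portfolio objective. It is not the paper's modified FA, omits the cardinality and entropy constraints, and uses hypothetical asset data and parameters; it only illustrates the attraction-plus-random-walk move that defines the algorithm.

```python
# Basic firefly algorithm on an unconstrained mean-variance objective (toy data).
import numpy as np

rng = np.random.default_rng(42)
n_assets, n_fireflies, n_iters = 5, 20, 200
mu = rng.uniform(0.02, 0.12, n_assets)        # hypothetical expected returns
A = rng.normal(size=(n_assets, n_assets))
cov = A @ A.T / n_assets                      # positive-definite covariance
lam = 0.5                                     # risk-aversion weight

def normalize(w):                             # keep weights nonnegative, summing to 1
    w = np.clip(w, 0.0, None)
    return w / w.sum() if w.sum() > 0 else np.full_like(w, 1.0 / w.size)

def cost(w):                                  # lower is better (brighter firefly)
    return lam * w @ cov @ w - (1.0 - lam) * mu @ w

beta0, gamma, alpha = 1.0, 1.0, 0.05          # attraction, absorption, random step
x = np.array([normalize(rng.random(n_assets)) for _ in range(n_fireflies)])

for _ in range(n_iters):
    f = np.array([cost(w) for w in x])
    for i in range(n_fireflies):
        for j in range(n_fireflies):
            if f[j] < f[i]:                   # j is brighter: move i toward j
                r2 = np.sum((x[i] - x[j]) ** 2)
                x[i] = (x[i] + beta0 * np.exp(-gamma * r2) * (x[j] - x[i])
                        + alpha * (rng.random(n_assets) - 0.5))
                x[i] = normalize(x[i])
                f[i] = cost(x[i])

best = x[np.argmin([cost(w) for w in x])]
print("best weights:", np.round(best, 3), " cost:", round(float(cost(best)), 5))
```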

  19. [Quality Management and Quality Specifications of Laboratory Tests in Clinical Studies--Challenges in Pre-Analytical Processes in Clinical Laboratories].

    PubMed

    Ishibashi, Midori

    2015-01-01

    Cost, speed, and quality are the three important factors recently indicated by the Ministry of Health, Labour and Welfare (MHLW) for the purpose of accelerating clinical studies. Against this background, the importance of laboratory tests is increasing, especially in the evaluation of clinical study participants' entry and safety, and of drug efficacy. To assure the quality of laboratory tests, providing high-quality laboratory tests is mandatory. For adequate quality assurance in laboratory tests, quality control in the three fields of pre-analytical, analytical, and post-analytical processes is extremely important. There are, however, no detailed written requirements concerning specimen collection, handling, preparation, storage, and shipping. Most laboratory tests for clinical studies are performed onsite in a local laboratory; however, some laboratory tests are done in offsite central laboratories after specimen shipping. Individual and inter-individual variations are well-known factors affecting laboratory tests. Besides these factors, standardizing specimen collection, handling, preparation, storage, and shipping may improve and maintain the high quality of clinical studies in general. Furthermore, the analytical method, units, and reference interval are also important factors. It is concluded that, to overcome the problems derived from pre-analytical processes, it is necessary to standardize specimen handling in a broad sense.

  20. Astronomy Village: Innovative Uses of Planetary Astronomy Images and Data

    NASA Astrophysics Data System (ADS)

    Croft, S. K.; Pompea, S. M.

    2008-06-01

    Teaching and learning science is best done by hands-on experience with real scientific data and real scientific problems. Getting such experiences into public and home-schooling classrooms is a challenge. Here we describe two award-winning multimedia products that embody one successful solution to the problem: Astronomy Village: Investigating the Universe, and Astronomy Village: Investigating the Solar System. Each Village provides a virtual environment for inquiry-based scientific exploration of ten planetary and astronomical problems such as ``Mission to Pluto'' and ``Search for a Supernova.'' Both Villages are standards-based and classroom tested. Investigating the Solar System is designed for middle and early high school students, while Investigating the Universe is at the high school and introductory college level. The objective of both Villages is to engage students in scientific inquiry by having them acquire, explore, and analyze real scientific data and images drawn from real scientific problems.

  1. The Dysexecutive Questionnaire advanced: item and test score characteristics, 4-factor solution, and severity classification.

    PubMed

    Bodenburg, Sebastian; Dopslaff, Nina

    2008-01-01

    The Dysexecutive Questionnaire (DEX, Behavioral Assessment of the Dysexecutive Syndrome, 1996) is a standardized instrument for measuring possible behavioral changes resulting from the dysexecutive syndrome. Although initially intended only as a qualitative instrument, the DEX has increasingly been used to address quantitative problems. Until now there have been no more fundamental statistical analyses of the questionnaire's testing quality. The present study is based on an unselected sample of 191 patients with acquired brain injury and reports data on the quality of the items, the reliability and the factorial structure of the DEX. Item 3 displayed too great an item difficulty, whereas item 11 was not sufficiently discriminating. The DEX's reliability in self-rating is r = 0.85. In addition to presenting the test statistics, a clinical severity classification of the overall scores of the four factors found and of the questionnaire as a whole is carried out on the basis of quartile standards.

  2. Dealing with Processing Chapter 10 Files from Multiple Vendors

    NASA Technical Reports Server (NTRS)

    Knudtson, Kevin Mark

    2011-01-01

    This presentation discusses the experiences of the NASA Dryden Flight Research Center's (DFRC) Western Aeronautical Test Range (WATR) in dealing with the problems encountered while performing post-flight data processing, using the WATR's data collection/processing system, on Chapter 10 files from different Chapter 10 recorders. The transition to Chapter 10 recorders has brought with it an assortment of issues that must be addressed: the ambiguities of language in the Chapter 10 standard, the unrealistic near-term expectations of the Chapter 10 standard, the incompatibility of data products generated from Chapter 10 recorders, and the unavailability of mature Chapter 10 applications. Some of these issues properly belong to the users of Chapter 10 recorders, some to the manufacturers, and some to the flight test community at large. The goal of this presentation is to share the WATR's lessons learned in processing data products from various Chapter 10 recorder vendors. The WATR could benefit greatly from open-forum lessons-learned discussions with other members of the flight test community.

  3. The use of subjective expert opinions in cost optimum design of aerospace structures. [probabilistic failure models

    NASA Technical Reports Server (NTRS)

    Thomas, J. M.; Hanagud, S.

    1975-01-01

    The results of two questionnaires sent to engineering experts are statistically analyzed and compared with objective data from Saturn V design and testing. Engineers were asked how likely it was for structural failure to occur at load increments above and below analysts' stress limit predictions. They were requested to estimate the relative probabilities of different failure causes, and of failure at each load increment given a specific cause. Three mathematical models are constructed based on the experts' assessment of causes. The experts' overall assessment of prediction strength fits the Saturn V data better than the models do, but a model test option (T-3) based on the overall assessment gives more design change likelihood to overstrength structures than does an older standard test option. T-3 compares unfavorably with the standard option in a cost optimum structural design problem. The report reflects a need for subjective data when objective data are unavailable.

  4. A Natural Fit: Problem-based Learning and Technology Standards.

    ERIC Educational Resources Information Center

    Sage, Sara M.

    2000-01-01

    Discusses the use of problem-based learning to meet technology standards. Highlights include technology as a tool for locating and organizing information; the Wolf Wars problem for elementary and secondary school students that provides resources, including Web sites, for information; Web-based problems; and technology as assessment and as a…

  5. Study of twenty preparations of human albumin solution which failed in quality control testing due to elevated sodium content, a poor internal quality control at manufacturing unit.

    PubMed

    Prasad, J P; Madhu, Y; Singh, Surinder; Soni, G R; Agnihotri, N; Singh, Varsha; Kumar, Pradeep; Jain, Nidhi; Prakash, Anu; Singh, Varun

    2016-11-01

    The current study was conducted in our laboratory following the failure in quality control testing of twenty batches of human albumin solution in which the sodium content was higher than the prescribed limit. These batches were received within a short period from an indigenous manufacturer, and this was the first incident of a human albumin preparation from this manufacturer failing on sodium content. At the manufacturer's request, a study was conducted to determine the cause. Repeat testing of each out-of-specification batch was conducted, and a trend analysis was drawn comparing our findings with the manufacturer's results, together with a review of the manufacturer's trend analysis for the last one year. The trend analysis data indicated poor consistency of batches, with major shifts at various time intervals in the sodium content of the human albumin preparation. Further analysis identified the non-traceable quality of the standard used in the manufacturer's internal quality control testing as the root cause of the problem. Copyright © 2016 International Alliance for Biological Standardization. Published by Elsevier Ltd. All rights reserved.

  6. Visuomotor Performance in KCNJ11-Related Neonatal Diabetes Is Impaired in Children With DEND-Associated Mutations and May Be Improved by Early Treatment With Sulfonylureas

    PubMed Central

    Shah, Reshma P.; Spruyt, Karen; Kragie, Brigette C.; Greeley, Siri Atma W.; Msall, Michael E.

    2012-01-01

    OBJECTIVE To assess performance on an age-standardized neuromotor coordination task among sulfonylurea-treated KCNJ11-related neonatal diabetic patients. RESEARCH DESIGN AND METHODS Nineteen children carrying KCNJ11 mutations associated with isolated diabetes (R201H; n = 8), diabetes with neurodevelopmental impairment (V59M or V59A [V59M/A]; n = 8), or diabetes not consistently associated with neurodevelopmental disability (Y330C, E322K, or R201C; n = 3) were studied using the age-standardized Beery-Buktenica Developmental Test of Visual-Motor Integration (VMI). RESULTS Although R201H subjects tested in the normal range (median standard score = 107), children with V59M/A mutations had significantly lower than expected VMI standard scores (median = 49). The scores for all three groups were significantly different from each other (P = 0.0017). The age of sulfonylurea initiation was inversely correlated with VMI scores in the V59M/A group (P < 0.05). CONCLUSIONS Neurodevelopmental disability in KCNJ11-related diabetes includes visuomotor problems that may be ameliorated by early sulfonylurea treatment. Comprehensive longitudinal assessment on larger samples will be imperative. PMID:22855734

  7. Minimum Information about T Regulatory Cells: A Step toward Reproducibility and Standardization.

    PubMed

    Fuchs, Anke; Gliwiński, Mateusz; Grageda, Nathali; Spiering, Rachel; Abbas, Abul K; Appel, Silke; Bacchetta, Rosa; Battaglia, Manuela; Berglund, David; Blazar, Bruce; Bluestone, Jeffrey A; Bornhäuser, Martin; Ten Brinke, Anja; Brusko, Todd M; Cools, Nathalie; Cuturi, Maria Cristina; Geissler, Edward; Giannoukakis, Nick; Gołab, Karolina; Hafler, David A; van Ham, S Marieke; Hester, Joanna; Hippen, Keli; Di Ianni, Mauro; Ilic, Natasa; Isaacs, John; Issa, Fadi; Iwaszkiewicz-Grześ, Dorota; Jaeckel, Elmar; Joosten, Irma; Klatzmann, David; Koenen, Hans; van Kooten, Cees; Korsgren, Olle; Kretschmer, Karsten; Levings, Megan; Marek-Trzonkowska, Natalia Maria; Martinez-Llordella, Marc; Miljkovic, Djordje; Mills, Kingston H G; Miranda, Joana P; Piccirillo, Ciriaco A; Putnam, Amy L; Ritter, Thomas; Roncarolo, Maria Grazia; Sakaguchi, Shimon; Sánchez-Ramón, Silvia; Sawitzki, Birgit; Sofronic-Milosavljevic, Ljiljana; Sykes, Megan; Tang, Qizhi; Vives-Pi, Marta; Waldmann, Herman; Witkowski, Piotr; Wood, Kathryn J; Gregori, Silvia; Hilkens, Catharien M U; Lombardi, Giovanna; Lord, Phillip; Martinez-Caceres, Eva M; Trzonkowski, Piotr

    2017-01-01

    Cellular therapies with CD4+ T regulatory cells (Tregs) hold promise of efficacious treatment for a variety of autoimmune and allergic diseases as well as posttransplant complications. Nevertheless, current manufacturing of Tregs as a cellular medicinal product varies between different laboratories, which in turn hampers precise comparison of the results between the studies performed. While the number of clinical trials testing Tregs is already substantial, it seems crucial to provide some standardized characteristics of Treg products in order to minimize this problem. We have previously developed reporting guidelines called minimum information about tolerogenic antigen-presenting cells, which allow comparison between different preparations of tolerance-inducing antigen-presenting cells. Building on this experience, here we describe another set of minimum information, about Tregs (MITREG). It is important to note that MITREG does not dictate how investigators should generate or characterize Tregs, but it does require investigators to report their Treg data in a consistent and transparent manner. We hope this will, therefore, be a useful tool facilitating standardized reporting on the manufacturing of Tregs, either for research purposes or for clinical application. In this way MITREG might also be an important step toward more standardized and reproducible testing of Treg preparations in clinical applications.

  8. Cognitive impairment in heart failure: issues of measurement and etiology.

    PubMed

    Riegel, Barbara; Bennett, Jill A; Davis, Andra; Carlson, Beverly; Montague, John; Robin, Howard; Glaser, Dale

    2002-11-01

    Clinicians need easy methods of screening for cognitive impairment in patients with heart failure. If correlates of cognitive impairment could be identified, more patients with early cognitive impairment could be treated before the problem interfered with adherence to treatment. To describe cognitive impairment in patients with heart failure, to explore the usefulness of 4 measures of cognitive impairment, and to assess correlates of cognitive impairment. A descriptive, correlational design was used. Four screening measures of cognition were assessed in 42 patients with heart failure: Commands subtest and Complex Ideational Material subtest of the Boston Diagnostic Aphasia Examination, Mini-Mental State Examination, and Draw-a-Clock Test. Cognitive impairment was defined as performance less than the standardized (T-score) cutoff point on at least 1 of the 4 measures. Possible correlates of cognitive impairment included age, education, hypotension, fluid overload (serum osmolality < 269 mOsm/kg), and dehydration (serum osmolality > or = 295 mOsm/kg). Cognitive impairment was detected in 12 (28.6%) of 42 participants. The 4 screening tests varied in effectiveness, but the Draw-a-Clock Test indicated impairment in 50% of the 12 impaired patients. A summed standardized score for the 4 measures was not significantly associated with age, education, hypotension, fluid overload, or dehydration in this sample. Cognitive impairment is relatively common in patients with heart failure. The Draw-a-Clock Test was most useful in detecting cognitive impairment, although it cannot be used to detect problems with verbal learning or delayed recall and should not be used as the sole screening method for patients with heart failure. Correlates of cognitive impairment require further study.

  9. Methodology and technical requirements of the galectin-3 test for the preoperative characterization of thyroid nodules.

    PubMed

    Bartolazzi, Armando; Bellotti, Carlo; Sciacchitano, Salvatore

    2012-01-01

    In the last decade, the β-galactosyl binding protein galectin-3 has been the object of extensive molecular, structural, and functional studies aimed to clarify its biological role in cancer. Multicenter studies also contributed to discovering the potential clinical value of galectin-3 expression analysis in distinguishing, preoperatively, benign from malignant thyroid nodules. As a consequence galectin-3 is receiving significant attention as a tumor marker for thyroid cancer diagnosis, but some conflicting results, mostly owing to methodological problems, have been published. The possibility to apply preoperatively a reliable galectin-3 test method on fine needle aspiration biopsy (FNA)-derived thyroid cells represents an important achievement. When correctly applied, the method consistently reduces the gray area of thyroid FNA cytology, helping to avoid unnecessary thyroid surgery. Although the efficacy and reliability of the galectin-3 test method have been extensively proved in several studies, its translation to the clinical setting requires well-standardized reagents and procedures. After a decade of experimental work on galectin-3-related basic and translational research projects, the major methodological problems that may potentially impair the diagnostic performance of galectin-3 immunotargeting are highlighted and discussed in detail. A standardized protocol for a reliable galectin-3 expression analysis is finally provided. The aim of this contribution is to improve the clinical management of patients with thyroid nodules, promoting the preoperative use of a reliable galectin-3 test method as an ancillary technique to conventional thyroid FNA cytology. The final goal is to decrease unnecessary thyroid surgery and its related social costs.

  10. Performance evaluation of OpenFOAM on many-core architectures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brzobohatý, Tomáš; Říha, Lubomír; Karásek, Tomáš, E-mail: tomas.karasek@vsb.cz

    In this article, the application of the Open Source Field Operation and Manipulation (OpenFOAM) C++ libraries to solving engineering problems on many-core architectures is presented. The objective is to present the scalability of OpenFOAM on parallel platforms for real engineering problems in fluid dynamics. Scalability tests of OpenFOAM are performed using various hardware and different implementations of the standard PCG and PBiCG Krylov iterative methods. Speedups of various linear solver implementations using GPU and MIC accelerators are presented in this paper. Numerical experiments on 3D lid-driven cavity flow are presented for several cases with various numbers of cells.
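
    A minimal serial sketch of the preconditioned conjugate gradient (PCG) kernel named above, with a Jacobi (diagonal) preconditioner and a toy SPD test matrix. It is plain NumPy, not OpenFOAM's implementation or its GPU/MIC ports, and the 1-D Laplacian stands in for a CFD pressure system.

```python
# Jacobi-preconditioned conjugate gradient on a toy SPD system.
import numpy as np

def pcg(A, b, tol=1e-8, max_iter=500):
    M_inv = 1.0 / np.diag(A)                 # Jacobi preconditioner
    x = np.zeros_like(b)
    r = b - A @ x                            # initial residual
    z = M_inv * r
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M_inv * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p            # new search direction
        rz = rz_new
    return x, max_iter

# Toy SPD matrix: 1-D Laplacian, a stand-in for a discretized pressure equation.
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = pcg(A, b)
print("iterations:", iters, " residual:", np.linalg.norm(b - A @ x))
```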

  11. Asymmetrically dominated choice problems, the isolation hypothesis and random incentive mechanisms.

    PubMed

    Cox, James C; Sadiraj, Vjollca; Schmidt, Ulrich

    2014-01-01

    This paper presents an experimental study of the random incentive mechanisms which are a standard procedure in economic and psychological experiments. Random incentive mechanisms have several advantages but are incentive-compatible only if responses to the single tasks are independent. This is true if either the independence axiom of expected utility theory or the isolation hypothesis of prospect theory holds. We present a simple test of this in the context of choice under risk. In the baseline (one task) treatment we observe risk behavior in a given choice problem. We show that by integrating a second, asymmetrically dominated choice problem in a random incentive mechanism risk behavior can be manipulated systematically. This implies that the isolation hypothesis is violated and the random incentive mechanism does not elicit true preferences in our example.

  12. Pedagogy and/or technology: Making difference in improving students' problem solving skills

    NASA Astrophysics Data System (ADS)

    Hrepic, Zdeslav; Lodder, Katherine; Shaw, Kimberly A.

    2013-01-01

    Pen input computers combined with interactive software may have substantial potential for promoting active instructional methodologies and for facilitating students' problem solving ability. An excellent example is a study in which introductory physics students improved retention, conceptual understanding and problem solving abilities when one of three weekly lectures was replaced with group problem solving sessions facilitated with Tablet PCs and DyKnow software [1,2]. The research goal of the present study was to isolate the effect of the methodology itself (using additional time to teach problem solving) from that of the involved technology. In Fall 2011 we compared the performance of students taking the same introductory physics lecture course while enrolled in two separate problem-solving sections. One section used pen-based computing to facilitate group problem solving while the other section used low-tech methods for one third of the semester (covering Kinematics), and then traded technologies for the middle third of the term (covering Dynamics). Analysis of quiz, exam and standardized pre-post test results indicated no significant difference in scores of the two groups. Combining this result with those of previous studies implies primacy of pedagogy (collaborative problem solving itself) over technology for student learning in problem solving recitations.

  13. The weaker points of fish acute toxicity tests and how tests on embryos can solve some issues.

    PubMed

    Wedekind, Claus; von Siebenthal, Beat; Gingold, Ruth

    2007-07-01

    Fish acute toxicity tests play an important role in environmental risk assessment and hazard classification because they allow for first estimates of the relative toxicity of various chemicals in various species. However, such tests need to be carefully interpreted. Here we briefly summarize the main issues, which are linked to the genetics and condition of the test animals, the standardized test situations, the uncertainty about whether a given test species can be seen as representative of a given fish fauna, the often missing knowledge about possible interaction effects, especially with micropathogens, and statistical problems such as small sample sizes and, in some cases, pseudoreplication. We suggest that multi-factorial embryo tests on ecologically relevant species solve many of these issues, and we briefly explain how such tests could be done to avoid the weaker points of fish acute toxicity tests.

  14. Benefits of a Pharmacology Antimalarial Reference Standard and Proficiency Testing Program Provided by the Worldwide Antimalarial Resistance Network (WWARN)

    PubMed Central

    Lourens, Chris; Lindegardh, Niklas; Barnes, Karen I.; Guerin, Philippe J.; Sibley, Carol H.; White, Nicholas J.

    2014-01-01

    Comprehensive assessment of antimalarial drug resistance should include measurements of antimalarial blood or plasma concentrations in clinical trials and in individual assessments of treatment failure so that true resistance can be differentiated from inadequate drug exposure. Pharmacometric modeling is necessary to assess pharmacokinetic-pharmacodynamic relationships in different populations to optimize dosing. To accomplish both effectively and to allow comparison of data from different laboratories, it is essential that drug concentration measurement is accurate. Proficiency testing (PT) of laboratory procedures is necessary for verification of assay results. Within the Worldwide Antimalarial Resistance Network (WWARN), the goal of the quality assurance/quality control (QA/QC) program is to facilitate and sustain high-quality antimalarial assays. The QA/QC program consists of an international PT program for pharmacology laboratories and a reference material (RM) program for the provision of antimalarial drug standards, metabolites, and internal standards for laboratory use. The RM program currently distributes accurately weighed quantities of antimalarial drug standards, metabolites, and internal standards to 44 pharmacology, in vitro, and drug quality testing laboratories. The pharmacology PT program has sent samples to eight laboratories in four rounds of testing. WWARN technical experts have provided advice for correcting identified problems to improve performance of subsequent analysis and ultimately improved the quality of data. Many participants have demonstrated substantial improvements over subsequent rounds of PT. The WWARN QA/QC program has improved the quality and value of antimalarial drug measurement in laboratories globally. It is a model that has potential to be applied to strengthening laboratories more widely and improving the therapeutics of other infectious diseases. PMID:24777099

  15. Motorcycle pollution control in Taiwan, Republic of China

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, H.W.; Hsiao, H.C.; Walsh, M.P.

    1998-12-31

    The Taiwan EPA has developed a comprehensive approach to motor vehicle pollution control. Building on its early adoption of US '83 standards for light duty vehicles (starting July 1, 1990) it recently moved to US '87 requirements, which include the 0.2 gram per mile particulate standard, as of July 1, 1995. Heavy duty diesel particulate standards almost as stringent as US '90, 6.0 grams per brake horsepower hour NO{sub x} and 0.7 particulate, using the US transient test procedure, went into effect on July 1, 1993. It is intended that US '94 standards, 5.0 NO{sub x} and 0.25 particulate, will be adopted soon. Clearly the most distinctive feature of the Taiwan program, however, is its motorcycle control effort, reflecting the fact that motorcycles dominate the vehicle fleet and are a substantial source of emissions. This paper will summarize Taiwan's extensive efforts to address this problem.

  16. The Effect of Metacognitive Instruction on Problem Solving Skills in Iranian Students of Health Sciences

    PubMed Central

    Safari, Yahya; Meskini, Habibeh

    2016-01-01

    Background: Learning requires application of such processes as planning, supervision, monitoring and reflection that are included in the metacognition. Studies have shown that metacognition is associated with problem solving skills. The current research was conducted to investigate the impact of metacognitive instruction on students’ problem solving skills. Methods: The study sample included 40 students studying in the second semester at Kermanshah University of Medical Sciences, 2013-2014. They were selected through convenience sampling technique and were randomly assigned into two equal groups of experimental and control. For the experimental group, problem solving skills were taught through metacognitive instruction during ten two-hour sessions and for the control group, problem solving skills were taught via conventional teaching method. The instrument for data collection included problem solving inventory (Heppner, 1988), which was administered before and after instruction. The validity and reliability of the questionnaire had been previously confirmed. The collected data were analyzed by descriptive statistics, mean and standard deviation and the hypotheses were tested by t-test and ANCOVA. Results: The findings of the posttest showed that the total mean scores of problem solving skills in the experimental and control groups were 151.90 and 101.65, respectively, indicating a significant difference between them (p<0.001). This difference was also reported to be statistically significant between problem solving skills and its components, including problem solving confidence, orientation-avoidance coping style and personal control (p<0.001). No significant difference, however, was found between the students’ mean scores in terms of gender and major. Conclusion: Since metacognitive instruction has positive effects on students’ problem solving skills and is required to enhance academic achievement, metacognitive strategies are recommended to be taught to the students. PMID:26234970

  17. The Effect of Metacognitive Instruction on Problem Solving Skills in Iranian Students of Health Sciences.

    PubMed

    Safari, Yahya; Meskini, Habibeh

    2015-05-17

    Learning requires application of such processes as planning, supervision, monitoring and reflection that are included in the metacognition. Studies have shown that metacognition is associated with problem solving skills. The current research was conducted to investigate the impact of metacognitive instruction on students' problem solving skills. The study sample included 40 students studying in the second semester at Kermanshah University of Medical Sciences, 2013-2014. They were selected through convenience sampling technique and were randomly assigned into two equal groups of experimental and control. For the experimental group, problem solving skills were taught through metacognitive instruction during ten two-hour sessions and for the control group, problem solving skills were taught via conventional teaching method. The instrument for data collection included problem solving inventory (Heppner, 1988), which was administered before and after instruction. The validity and reliability of the questionnaire had been previously confirmed. The collected data were analyzed by descriptive statistics, mean and standard deviation and the hypotheses were tested by t-test and ANCOVA. The findings of the posttest showed that the total mean scores of problem solving skills in the experimental and control groups were 151.90 and 101.65, respectively, indicating a significant difference between them (p<0.001). This difference was also reported to be statistically significant between problem solving skills and its components, including problem solving confidence, orientation-avoidance coping style and personal control (p<0.001). No significant difference, however, was found between the students' mean scores in terms of gender and major. Since metacognitive instruction has positive effects on students' problem solving skills and is required to enhance academic achievement, metacognitive strategies are recommended to be taught to the students.

  18. Methodology for sensitivity analysis, approximate analysis, and design optimization in CFD for multidisciplinary applications

    NASA Technical Reports Server (NTRS)

    Taylor, Arthur C., III; Hou, Gene W.

    1993-01-01

    In this study involving advanced fluid flow codes, an incremental iterative formulation (also known as the delta or correction form), together with the well-known spatially-split approximate factorization algorithm, is presented for solving the very large sparse systems of linear equations which are associated with aerodynamic sensitivity analysis. For smaller 2D problems, a direct method can be applied to solve these linear equations in either the standard or the incremental form, in which case the two are equivalent. Iterative methods are needed for larger 2D and future 3D applications, however, because direct methods require much more computer memory than is currently available. Iterative methods for solving these equations in the standard form are generally unsatisfactory due to an ill-conditioning of the coefficient matrix; this problem can be overcome when these equations are cast in the incremental form. These and other benefits are discussed. The methodology is successfully implemented and tested in 2D using an upwind, cell-centered, finite volume formulation applied to the thin-layer Navier-Stokes equations. Results are presented for two sample airfoil problems: (1) subsonic low Reynolds number laminar flow; and (2) transonic high Reynolds number turbulent flow.
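
    As a reading aid (our notation, not taken from the paper), the distinction between the standard and the incremental (delta or correction) forms of such a linear sensitivity system can be written, with A the large sparse coefficient matrix, x the vector of sensitivity unknowns, and b the right-hand side, as

      \text{standard form: } A\,x = b, \qquad
      \text{incremental form: } \tilde{A}\,\Delta x^{(k+1)} = b - A\,x^{(k)}, \quad
      x^{(k+1)} = x^{(k)} + \Delta x^{(k+1)},

    where \tilde{A} \approx A (for example, its spatially-split approximate factorization). The iteration drives the residual b - A\,x^{(k)} to zero, and if \tilde{A} = A and a direct solver is used, a single pass recovers the standard-form solution, which is the equivalence noted above.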

  19. Quality Test of Flexible Flat Cable (FFC) With Short Open Test Using Law Ohm Approach through Embedded Fuzzy Logic Based On Open Source Arduino Data Logger

    NASA Astrophysics Data System (ADS)

    Rohmanu, Ajar; Everhard, Yan

    2017-04-01

    Technological development, especially in the field of electronics, is very fast. One such development in electronics hardware is the Flexible Flat Cable (FFC), which serves as a connecting medium between the main board and other hardware parts. Production of Flexible Flat Cable (FFC) goes through a process of testing and measurement of FFC quality. Currently, this testing and measurement is still done manually, with an operator observing a Light Emitting Diode (LED), which leads to many problems. In this study, a computational FFC quality test was built using an open-source embedded system. The method used is a short/open test based on a 4-wire (Kelvin) Ohm's-law measurement, with fuzzy logic as the decision maker for the measurement results, implemented on an open-source Arduino data logger. The system uses an INA219 current sensor to read voltage values, from which the resistance of the Flexible Flat Cable (FFC) is obtained. To validate the system, black-box testing was performed along with accuracy and precision testing using the standard deviation method. Testing with three sample models gave standard deviations of 1.921 for the first model, 4.567 for the second model and 6.300 for the third model, with corresponding values of the Standard Error of the Mean (SEM) of 0.304, 0.736 and 0.996. The average measurement tolerances of the resistance values relative to the standard resistance measurement were -3.50%, 4.45% and 5.18% for the first, second and third models, and productivity improved to 118.33%. Based on these results, the testing system is expected to improve quality and productivity in the process of testing Flexible Flat Cable (FFC).
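
    For reference, the standard deviation and standard error of the mean (SEM) quoted above are the usual sample statistics over repeated measurements; a minimal sketch (with made-up readings, not the study's data) is:

      # Standard deviation and SEM of repeated FFC resistance measurements
      # (the values below are illustrative, not measured data).
      import statistics

      resistance_ohm = [1.02, 0.98, 1.05, 0.97, 1.01, 1.03]   # hypothetical readings

      sd = statistics.stdev(resistance_ohm)        # sample standard deviation
      sem = sd / len(resistance_ohm) ** 0.5        # standard error of the mean

      print(f"standard deviation = {sd:.3f} ohm, SEM = {sem:.3f} ohm")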

  20. A method of semi-quantifying β-AP in brain PET-CT 11C-PiB images.

    PubMed

    Jiang, Jiehui; Lin, Xiaoman; Wen, Junlin; Huang, Zhemin; Yan, Zhuangzhi

    2014-01-01

    Alzheimer's disease (AD) is a common health problem for elderly populations. Positron emission tomography-computed tomography (PET-CT) 11C-PiB imaging of the amyloid-β peptide (β-AP) is an advanced method for diagnosing AD at an early stage. However, in practice radiologists lack a standardized value with which to semi-quantify β-AP. This paper proposes such a standardized value, SVβ-AP, which measures the mean ratio between the dimensions of β-AP areas in PET and CT images. A computer-aided diagnosis (CAD) approach for obtaining SVβ-AP is also proposed. A simulation experiment was carried out to pre-test the technical feasibility of the CAD approach and SVβ-AP. The experimental results showed that it is technically feasible.
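
    The record defines SVβ-AP only as the mean ratio between the dimensions of β-AP areas in PET and CT images; written out under that description (our notation, not the paper's), it would take a form such as

      SV_{\beta\text{-AP}} = \frac{1}{N}\sum_{i=1}^{N}\frac{A^{\mathrm{PET}}_{i}}{A^{\mathrm{CT}}_{i}},

    where A^{PET}_i and A^{CT}_i denote the measured β-AP region areas in the i-th matched PET and CT image and N is the number of image pairs considered.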

  1. Using a flipped classroom in an algebra-based physics course

    NASA Astrophysics Data System (ADS)

    Smith, Leigh

    2013-03-01

    The algebra-based physics course is taken by biology, pre-pharmacy, pre-medical, and other health-related majors such as medical imaging, physical therapy, and so on. Nearly 500 students take the course each semester. Student learning is adversely impacted by poor math backgrounds as well as extensive work schedules outside of the classroom. We have been researching the use of an intensive flipped-classroom approach where students spend one to two hours each week preparing for class by reading the book, completing a series of conceptual problems, and viewing videos which describe the material. In class, the new response system Learning Catalytics is used, which allows much richer problems to be posed in class, including sketching figures, numerical or symbolic entries, short answers, and highlighting text, in addition to the standard multiple-choice questions. We make a direct comparison of student learning for 1200 students who have taken the same tests, 25% of whom used the flipped-classroom approach and 75% of whom took a more standard lecture. There is significant evidence of improvements in student learning for students taking the flipped-classroom approach over standard lectures. These benefits appear to impact students at all levels of math background.

  2. Finite element modeling of ROPS in static testing and rear overturns.

    PubMed

    Harris, J R; Mucino, V H; Etherton, J R; Snyder, K A; Means, K H

    2000-08-01

    Even with the technological advances of the last several decades, agricultural production remains one of the most hazardous occupations in the United States. Death due to tractor rollover is a prime contributor to this hazard. Standards for rollover protective structures (ROPS) performance and certification have been developed by groups such as the Society of Automotive Engineers (SAE) and the American Society of Agricultural Engineers (ASAE) to combat these problems. The current ROPS certification standard, SAE J2194, requires either a dynamic or static testing sequence or both. Although some ROPS manufacturers perform both the dynamic and static phases of SAE J2194 testing, it is possible for a ROPS to be certified for field operation using static testing alone. This research compared ROPS deformation response from a simulated SAE J2194 static loading sequence to ROPS deformation response as a result of a simulated rearward tractor rollover. Finite element analysis techniques for plastic deformation were used to simulate both the static and dynamic rear rollover scenarios. Stress results from the rear rollover model were compared to results from simulated static testing per SAE J2194. Maximum stress values from simulated rear rollovers exceeded maximum stress values recorded during simulated static testing for half of the elements comprising the uprights. In the worst case, the static model underpredicts dynamic model results by approximately 7%. In the best case, the static model overpredicts dynamic model results by approximately 32%. These results suggest the need for additional experimental work to characterize ROPS stress levels during staged overturns and during testing according to the SAE standard.

  3. Standardizing clinical laboratory data for secondary use.

    PubMed

    Abhyankar, Swapna; Demner-Fushman, Dina; McDonald, Clement J

    2012-08-01

    Clinical databases provide a rich source of data for answering clinical research questions. However, the variables recorded in clinical data systems are often identified by local, idiosyncratic, and sometimes redundant and/or ambiguous names (or codes) rather than unique, well-organized codes from standard code systems. This reality discourages research use of such databases, because researchers must invest considerable time in cleaning up the data before they can ask their first research question. Researchers at MIT developed MIMIC-II, a nearly complete collection of clinical data about intensive care patients. Because its data are drawn from existing clinical systems, it has many of the problems described above. In collaboration with the MIT researchers, we have begun a process of cleaning up the data and mapping the variable names and codes to LOINC codes. Our first step, which we describe here, was to map all of the laboratory test observations to LOINC codes. We were able to map 87% of the unique laboratory tests that cover 94% of the total number of laboratory test results. Of the 13% of tests that we could not map, nearly 60% were due to test names whose real meaning could not be discerned and 29% represented tests that were not yet included in the LOINC table. These results suggest that LOINC codes cover most of the laboratory tests used in critical care. We have delivered this work to the MIMIC-II researchers, who have included it in their standard MIMIC-II database release so that researchers who use this database in the future will not have to do this work. Published by Elsevier Inc.

  4. The Impact of Group Drumming on Social-Emotional Behavior in Low-Income Children

    PubMed Central

    Ho, Ping; Tsao, Jennie C. I.; Bloch, Lian; Zeltzer, Lonnie K.

    2011-01-01

    Low-income youth experience social-emotional problems linked to chronic stress that are exacerbated by lack of access to care. Drumming is a non-verbal, universal activity that builds upon a collectivistic aspect of diverse cultures and does not bear the stigma of therapy. A pretest-post-test non-equivalent control group design was used to assess the effects of 12 weeks of school counselor-led drumming on social-emotional behavior in two fifth-grade intervention classrooms versus two standard education control classrooms. The weekly intervention integrated rhythmic and group counseling activities to build skills, such as emotion management, focus and listening. The Teacher's Report Form was used to assess each of 101 participants (n = 54 experimental, n = 47 control, 90% Latino, 53.5% female, mean age 10.5 years, range 10–12 years). There was 100% retention. ANOVA testing showed that intervention classrooms improved significantly compared to the control group in broad-band scales (total problems (P < .01), internalizing problems (P < .02)), narrow-band syndrome scales (withdrawn/depression (P < .02), attention problems (P < .01), inattention subscale (P < .001)), Diagnostic and Statistical Manual of Mental Disorders-oriented scales (anxiety problems (P < .01), attention deficit/hyperactivity problems (P < .01), inattention subscale (P < .001), oppositional defiant problems (P < .03)), and other scales (post-traumatic stress problems (P < .01), sluggish cognitive tempo (P < .001)). Participation in group drumming led to significant improvements in multiple domains of social-emotional behavior. This sustainable intervention can foster positive youth development and increase student-counselor interaction. These findings underscore the potential value of the arts as a therapeutic tool. PMID:21660091

  5. A Radiation Transfer Solver for Athena Using Short Characteristics

    NASA Astrophysics Data System (ADS)

    Davis, Shane W.; Stone, James M.; Jiang, Yan-Fei

    2012-03-01

    We describe the implementation of a module for the Athena magnetohydrodynamics (MHD) code that solves the time-independent, multi-frequency radiative transfer (RT) equation on multidimensional Cartesian simulation domains, including scattering and non-local thermodynamic equilibrium (LTE) effects. The module is based on well-known and well-tested algorithms developed for modeling stellar atmospheres, including the method of short characteristics to solve the RT equation, accelerated Lambda iteration to handle scattering and non-LTE effects, and parallelization via domain decomposition. The module serves several purposes: it can be used to generate spectra and images, to compute a variable Eddington tensor (VET) for full radiation MHD simulations, and to calculate the heating and cooling source terms in the MHD equations in flows where radiation pressure is small compared with gas pressure. For the latter case, the module is combined with the standard MHD integrators using operator splitting: we describe this approach in detail, including a new constraint on the time step for stability due to radiation diffusion modes. Implementation of the VET method for radiation pressure dominated flows is described in a companion paper. We present results from a suite of test problems for both the RT solver itself and for dynamical problems that include radiative heating and cooling. These tests demonstrate that the radiative transfer solution is accurate and confirm that the operator split method is stable, convergent, and efficient for problems of interest. We demonstrate there is no need to adopt ad hoc assumptions of questionable accuracy to solve RT problems in concert with MHD: the computational cost for our general-purpose module for simple (e.g., LTE gray) problems can be comparable to or less than a single time step of Athena's MHD integrators, and only a few times more expensive than that for more general (non-LTE) problems.

  6. Influence of valproate on language functions in children with epilepsy.

    PubMed

    Doo, Jin Woong; Kim, Soon Chul; Kim, Sun Jun

    2018-01-01

    The aim of the current study was to assess the influence of valproate (VPA) on language functions in newly diagnosed pediatric patients with epilepsy. We reviewed medical records of 53 newly diagnosed patients with epilepsy, who were being treated with VPA monotherapy (n=53; 22 male patients and 31 female patients). The subjects underwent standardized language tests, at least twice, before and after the initiation of VPA. The standardized language tests used were the Test of Language Problem Solving Abilities, a Korean version of the Expressive/Receptive Language Function Test, and the Urimal Test of Articulation and Phonology. Since all the patients analyzed spoke Korean as their first language, we used Korean language tests to reduce the bias within the data. All the language parameters of the Test of Language Problem Solving Abilities slightly improved after the initiation of VPA in the 53 pediatric patients with epilepsy (mean age: 11.6±3.2 years), but only "prediction" was statistically significant (determining cause, 14.9±5.1 to 15.5±4.3; making inference, 16.1±5.8 to 16.9±5.6; prediction, 11.1±4.9 to 11.9±4.2; total score of TOPS, 42.0±14.4 to 44.2±12.5). The patients treated with VPA also exhibited a small extension in mean length of utterance in words (MLU-w) when responding, but this was not statistically significant (determining cause, 5.4±2.0 to 5.7±1.6; making inference, 5.8±2.2 to 6.0±1.8; prediction, 5.9±2.5 to 5.9±2.1; total, 5.7±2.1 to 5.9±1.7). The administration of VPA led to a slight, but not statistically significant, improvement in the receptive language function (range: 144.7±41.1 to 148.2±39.7). Finally, there were no statistically significant changes in the percentage of articulation performance after taking VPA. Therefore, our data suggested that VPA did not have a negative impact on language function, but rather slightly improved problem-solving abilities. Copyright © 2017 Elsevier Inc. All rights reserved.

  7. Mechanical waves conceptual survey: Its modification and conversion to a standard multiple-choice test

    NASA Astrophysics Data System (ADS)

    Barniol, Pablo; Zavala, Genaro

    2016-06-01

    In this article we present several modifications of the mechanical waves conceptual survey, the most important test to date that has been designed to evaluate university students' understanding of four main topics in mechanical waves: propagation, superposition, reflection, and standing waves. The most significant changes are (i) modification of several test questions that had some problems in their original design, (ii) standardization of the number of options for each question to five, (iii) conversion of the two-tier questions to multiple-choice questions, and (iv) modification of some questions to make them independent of others. To obtain a final version of the test, we administered both the original and modified versions several times to students at a large private university in Mexico. These students were completing a course that covers the topics tested by the survey. The final modified version of the test was administered to 234 students. In this study we present the modifications for each question, and discuss the reasons behind them. We also analyze the results obtained by the final modified version and offer a comparison between the original and modified versions. In the Supplemental Material we present the final modified version of the test. It can be used by teachers and researchers to assess students' understanding of, and learning about, mechanical waves.

  8. Advancing MODFLOW Applying the Derived Vector Space Method

    NASA Astrophysics Data System (ADS)

    Herrera, G. S.; Herrera, I.; Lemus-García, M.; Hernandez-Garcia, G. D.

    2015-12-01

    The most effective domain decomposition methods (DDM) are non-overlapping DDMs. Recently a new approach, the DVS-framework, based on an innovative discretization method that uses a non-overlapping system of nodes (the derived-nodes), was introduced and developed by I. Herrera et al. [1, 2]. Using the DVS-approach, a group of four algorithms, referred to as the 'DVS-algorithms', which fulfill the DDM-paradigm (i.e. the solution of global problems is obtained by resolution of local problems exclusively) has been derived. Such procedures are applicable to any boundary-value problem, or system of such equations, for which a standard discretization method is available, and then software with a high degree of parallelization can be constructed. In a parallel talk at this AGU Fall Meeting, Ismael Herrera will introduce the general DVS methodology. The application of the DVS-algorithms has been demonstrated in the solution of several boundary-value problems of interest in Geophysics. Numerical examples for a single equation, for the cases of symmetric, non-symmetric and indefinite problems, were demonstrated before [1,2]. For these problems DVS-algorithms exhibited significantly improved numerical performance with respect to standard versions of DDM algorithms. In view of these results, our research group is in the process of applying the DVS method to a widely used simulator for the first time; here we present the advances in the application of this method to the parallelization of MODFLOW. Efficiency results for a group of tests will be presented. References [1] I. Herrera, L.M. de la Cruz and A. Rosas-Medina. Non overlapping discretization methods for partial differential equations, Numer Meth Part D E, (2013). [2] Herrera, I., & Contreras Iván "An Innovative Tool for Effectively Applying Highly Parallelized Software To Problems of Elasticity". Geofísica Internacional, 2015 (In press)

  9. A Standard Platform for Testing and Comparison of MDAO Architectures

    NASA Technical Reports Server (NTRS)

    Gray, Justin S.; Moore, Kenneth T.; Hearn, Tristan A.; Naylor, Bret A.

    2012-01-01

    The Multidisciplinary Design Analysis and Optimization (MDAO) community has developed a multitude of algorithms and techniques, called architectures, for performing optimizations on complex engineering systems which involve coupling between multiple discipline analyses. These architectures seek to efficiently handle optimizations with computationally expensive analyses including multiple disciplines. We propose a new testing procedure that can provide a quantitative and qualitative means of comparison among architectures. The proposed test procedure is implemented within the open source framework, OpenMDAO, and comparative results are presented for five well-known architectures: MDF, IDF, CO, BLISS, and BLISS-2000. We also demonstrate how using open source software development methods can allow the MDAO community to submit new problems and architectures to keep the test suite relevant.
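
    As a plain-Python illustration of the core step in an architecture such as MDF (a generic sketch, not code from OpenMDAO or from the proposed test procedure), each optimizer iterate first converges the coupled discipline analyses by fixed-point iteration before objectives and constraints are evaluated:

      # Generic two-discipline coupled analysis converged by Gauss-Seidel fixed-point
      # iteration -- the multidisciplinary analysis that MDF performs at every design
      # point.  The discipline equations below are made up for illustration.
      def discipline1(x, y2):
          return x ** 2 + 0.2 * y2            # y1 depends on the design variable and y2

      def discipline2(x, y1):
          return abs(y1) ** 0.5 + 0.1 * x     # y2 depends on the design variable and y1

      def mda(x, tol=1e-10, max_iter=100):
          y1, y2 = 1.0, 1.0
          for _ in range(max_iter):
              y1_new = discipline1(x, y2)
              y2_new = discipline2(x, y1_new)
              if abs(y1_new - y1) + abs(y2_new - y2) < tol:
                  break
              y1, y2 = y1_new, y2_new
          return y1_new, y2_new

      print(mda(2.0))   # converged coupled state for one design point

    By contrast, an architecture such as IDF treats the coupling variables as additional optimization unknowns and enforces consistency through constraints rather than an inner fixed-point loop.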

  10. Report on JANNAF panel on shotgun/relative quickness testing

    NASA Technical Reports Server (NTRS)

    Gould, R. A.

    1980-01-01

    As the need for more energetic solid propellants continues, a number of problems arise. One of these is the tendency of high energy propellants to transition from burning (deflagration) to detonation in regions where the propellant is present in small particle sizes; e.g., in case bonding areas of a motor after a rapid depressurization causes a shear zone at the bond interface as the stressed propellant and motor case relax at different rates. In an effort to determine the susceptibility of propellants to high strain rate break up (friability), and subsequent DDT, the propulsion community uses the shotgun/relative quickness test as one of a number of screening tests for new propellant formulations. Efforts to standardize test techniques and equipment are described.

  11. Cross-reactivity among epoxy acrylates and bisphenol F epoxy resins in patients with bisphenol A epoxy resin sensitivity.

    PubMed

    Lee, Han N; Pokorny, Christopher D; Law, Sandra; Pratt, Melanie; Sasseville, Denis; Storrs, Frances J

    2002-09-01

    The study's objective was twofold: first, to evaluate the potential cross-reactivity between Bis-A epoxy resins and epoxy acrylates, and second, to study the cross-reactivity between Bis-A epoxy resins and newer Bis-F epoxy resins in patients who had allergic contact dermatitis to epoxy resins and a positive patch test to the standard epoxy resin based on bisphenol A. Forty-one patients were patch tested to 23 chemicals including epoxy acrylates, Bis-A epoxy resins, and Bis-F epoxy resins, as well as reactive diluents and nonbisphenol epoxy resins. Questions concerning exposure to epoxy resins, occupational history, and problems with dental work were completed. All patients included in the study had positive reactions to the standard Bis-A epoxy resin. Twenty percent (8 of 41) of the patients reacted to at least one of the epoxy acrylates; the most common reaction was to Bis-GMA. Five of 8 patients who reacted to the epoxy acrylates had dental work, but only one patient had problems from her dental work. Six of 8 patients (75%) who reacted to epoxy resins and epoxy acrylates did not react to aliphatic acrylates. Thirty-two percent (13 of 41) reacted to tosylamide epoxy resin, and none reacted to triglycidyl isocyanurate resin. In addition, all patients (100%) had positive reactions to at least one of the Bis-F epoxy resins that were tested. Most patients with sensitivity to Bis-A epoxy resins do not cross-react with epoxy acrylates. Patients with positive patch test reactions to epoxy acrylates used in dentistry usually do not have symptoms from their dental work. To our knowledge, this is the largest series of patients with sensitivity to the standard Bis-A epoxy resin that have been patch tested with the more recently introduced Bis-F epoxy resins. There is significant cross-reactivity between Bis-A and Bis-F epoxy resins, which can be explained by their structural similarity. Copyright 2002, Elsevier Science (USA). All rights reserved.

  12. ASRDI oxygen technology survey. Volume 6: Flow measurement instrumentation

    NASA Technical Reports Server (NTRS)

    Mann, D. B.

    1974-01-01

    A summary is provided of information available on liquid and gaseous oxygen flowmetering including an evaluation of commercial meters. The instrument types, physical principles of measurement, and performance characteristics are described. Problems concerning flow measurements of less than plus or minus two percent uncertainty are reviewed. Recommendations concerning work on flow reference systems, the use of surrogate fluids, and standard tests for oxygen flow measurements are also presented.

  13. Machine Learning with Distances

    DTIC Science & Technology

    2015-02-16

    For the fitting of training class-wise densities p(x|y) to the test input density p′(x), i.e., for the fitting of qπ to p′, the Kullback-Leibler (KL) distance may be used; the KL distance is the de-facto standard distance measure in statistics and machine learning.

  14. F-100A on lakebed

    NASA Technical Reports Server (NTRS)

    1959-01-01

    North American F-100A (52-5778) Super Sabre sitting on the Rogers dry lakebed, 1959. Pitch-up could be overcome by several 'fixes', but the problems and cost outweighed the potential benefits. The obvious conclusion from NACA High-Speed Flight Station testing as to the desirability of the low horizontal tail surface led to that configuration's becoming standard on the first-generation supersonic sweptwing fighters such as the F-100 Super Sabre.

  15. Fire technology abstracts, volume 4. Cumulative indexes

    NASA Astrophysics Data System (ADS)

    1982-03-01

    Cumulative subject, author, publisher, and report number indexes referencing articles, books, reports, and patents are provided. The dynamics of fire, behavior and properties of materials, fire modeling and test burns, fire protection, fire safety, fire service organization, apparatus and equipment, fire prevention and suppression, planning, human behavior, medical problems, codes and standards, hazard identification, safe handling of materials, and insurance economics of loss and prevention are among the subjects covered.

  16. Military Operations in Built-Up Areas (MOBA).

    DTIC Science & Technology

    1979-01-01

    ...(the interim report dated 18 April 1977) "the most fundamental problem with Army C is the lack of an enforced Systems Architecture/Systems Engineering..." ...materiel developer and the combat developer, data has not been collected or evaluated to adequately address this spectrum of system performance. Testing... within the MOBA environment should be institutionalized for all systems as a standard, routine requirement. Training for MOBA has been cursory at best.

  17. Concepts for on-board satellite image registration. Volume 2: IAS prototype performance evaluation standard definition. [NEEDS Information Adaptive System

    NASA Technical Reports Server (NTRS)

    Daluge, D. R.; Ruedger, W. H.

    1981-01-01

    Problems encountered in testing onboard signal processing hardware designed to achieve radiometric and geometric correction of satellite imaging data are considered. These include obtaining representative image and ancillary data for simulation and the transfer and storage of a large quantity of image data at very high speed. The high resolution, high speed preprocessing of LANDSAT-D imagery is considered.

  18. A cognitive neuropsychological approach to the study of delusions in late-onset schizophrenia.

    PubMed

    Phillips, M L; Howard, R; David, A S

    1997-09-01

    Hypotheses to explain delusion formation include distorted perceptual processing of meaningful stimuli (e.g. faces), abnormal reasoning, or a combination of both. The study investigated these hypotheses using standardized neuropsychological tests. The design was a three-patient case study compared with a small group (n = 8) of age-matched normal control subjects. The patients were hospital in- and outpatients; the age-matched normal controls were recruited from local residential homes. The three subjects had late-onset schizophrenia, two currently deluded and one in remission. Both deluded subjects had persecutory beliefs, and one had a delusion of misidentification. All subjects were administered standardized neuropsychological tests of facial processing and tests of verbal reasoning. The test scores of the three patients were compared with published normal values and the age-matched control data. The tests demonstrated impaired matching of unfamiliar faces in deluded subjects, particularly in the subject with delusional misidentification. Increasing the emotional content of logical reasoning problems had a significant effect on the deluded subjects' reasoning but not that of the normal controls. The findings suggest impaired visual processing plus abnormal reasoning in deluded subjects. However, these impairments are relatively subtle given the severity of psychiatric disorder in the patients studied.

  19. Comparative Study of High-Order Positivity-Preserving WENO Schemes

    NASA Technical Reports Server (NTRS)

    Kotov, D. V.; Yee, H. C.; Sjogreen, B.

    2014-01-01

    In gas dynamics and magnetohydrodynamics flows, physically, the density ρ and the pressure p should both be positive. In a standard conservative numerical scheme, however, the computed internal energy is obtained by subtracting the kinetic energy from the total energy, resulting in a computed p that may be negative. Examples are problems in which the dominant energy is kinetic. Negative ρ may often emerge in computing blast waves. In such situations the computed eigenvalues of the Jacobian will become imaginary. Consequently, the initial value problem for the linearized system will be ill posed. This explains why failure to preserve positivity of density or pressure may cause blow-ups of the numerical algorithm. Ad hoc numerical strategies which modify the computed negative density and/or the computed negative pressure to be positive are neither a conservative cure nor a stable solution. Conservative positivity-preserving schemes are more appropriate for such flow problems. The ideas of Zhang & Shu (2012) and Hu et al. (2012) precisely address the aforementioned issue. Zhang & Shu constructed a new conservative positivity-preserving procedure to preserve positive density and pressure for high-order Weighted Essentially Non-Oscillatory (WENO) schemes using the Lax-Friedrichs flux (WENO/LLF). In general, however, WENO/LLF is too dissipative for flows such as turbulence with strong shocks computed in direct numerical simulations (DNS) and large eddy simulations (LES). The new conservative positivity-preserving procedure proposed in Hu et al. (2012) can be used with any high-order shock-capturing scheme, including high-order WENO schemes using Roe's flux (WENO/Roe). The goal of this study is to compare the results obtained by non-positivity-preserving methods with the recently developed positivity-preserving schemes for representative test cases. In particular the more difficult 3D Noh and Sedov problems are considered. These test cases are chosen because of the negative pressure/density most often exhibited by standard high-order shock-capturing schemes. The simulation of a hypersonic nonequilibrium viscous shock tube related to the NASA Electric Arc Shock Tube (EAST) is also included. EAST is a high-temperature, high Mach number viscous nonequilibrium flow consisting of 13 species. In addition, as most common shock-capturing schemes have been developed for problems without source terms, these methods can result in spurious solutions when applied to problems with nonlinear and/or stiff source terms, even when solving a conservative system of equations with a conservative scheme. This kind of behavior can be observed even for a scalar case as well as for the case consisting of two species and one reaction. The EAST example indicated that standard high-order shock-capturing methods exhibit instability of density/pressure in addition to grid-dependent discontinuity locations when insufficient grid points are used. The evaluation of these test cases is based on the stability of the numerical schemes together with the accuracy of the obtained solutions.
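
    The positivity issue described above arises when the pressure is recovered from the conserved variables; for an ideal-gas equation of state (quoted here only to illustrate the mechanism, not as the thermodynamics used in the EAST computations) the computed pressure is

      p = (\gamma - 1)\left(E - \tfrac{1}{2}\,\rho\,\lvert\mathbf{u}\rvert^{2}\right),

    so that when the kinetic energy \tfrac{1}{2}\rho\lvert\mathbf{u}\rvert^{2} dominates the total energy E, discretization and round-off errors can drive the computed p (and hence the sound speed \sqrt{\gamma p/\rho}) negative or imaginary, which is the blow-up mechanism the positivity-preserving limiters are designed to prevent.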

  20. Optimisation of flight dynamic control based on many-objectives meta-heuristic: a comparative study

    NASA Astrophysics Data System (ADS)

    Bureerat, Sujin; Pholdee, Nantiwat; Radpukdee, Thana

    2018-05-01

    Development of many-objective meta-heuristics (MnMHs) is currently an interesting topic, as they are suited to real applications of optimisation problems, which usually involve many objectives. However, most MnMHs have been developed and tested on standard testing functions, while the use of MnMHs in real applications is rare. Therefore, in this work, MnMHs are applied to the optimisation design of flight dynamic control. The design problem is posed to find control gains minimising the control effort, the spiral root, the damping-in-roll root, and the sideslip angle deviation, and maximising the damping ratio of the Dutch-roll complex pair, the Dutch-roll frequency, and the bank angle at the pre-specified times of 1 second and 2.8 seconds, subject to several constraints based on the Military Specifications (1969) requirements. Several established many-objective meta-heuristics (MnMHs) are used to solve the problem and their performances are compared. With this research work, the performance of several MnMHs for flight control is investigated. The results obtained will be the baseline for future development of flight dynamics and control.

  1. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Singleton, Jr., Robert; Israel, Daniel M.; Doebling, Scott William

    For code verification, one compares the code output against known exact solutions. There are many standard test problems used in this capacity, such as the Noh and Sedov problems. ExactPack is a utility that integrates many of these exact solution codes into a common API (application program interface), and can be used as a stand-alone code or as a python package. ExactPack consists of python driver scripts that access a library of exact solutions written in Fortran or Python. The spatial profiles of the relevant physical quantities, such as the density, fluid velocity, sound speed, or internal energy, are returned at a time specified by the user. The solution profiles can be viewed and examined by a command line interface or a graphical user interface, and a number of analysis tools and unit tests are also provided. We have documented the physics of each problem in the solution library, and provided complete documentation on how to extend the library to include additional exact solutions. ExactPack’s code architecture makes it easy to extend the solution-code library to include additional exact solutions in a robust, reliable, and maintainable manner.
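
    A usage sketch in the spirit of the description above follows; since the record does not spell out the API, the import path and call signature are commented out as assumptions, and the comparison is shown with placeholder arrays (consult www.github.com/lanl/exactpack for the actual interface).

      # Sketch of the code-verification comparison ExactPack supports: compare a
      # simulation's density profile against an exact profile at the same points
      # and time.  The import below is an assumed path, and both arrays are
      # placeholders, not real code output or an ExactPack solution.
      import numpy as np
      # from exactpack.solvers.noh import Noh         # assumed import path
      # density_exact = Noh(gamma=5.0/3.0)(r, t=0.6)  # assumed call signature

      r = np.linspace(0.0, 1.0, 200)
      density_code = np.exp(-r)                       # placeholder: simulation output
      density_exact = np.exp(-r) + 1.0e-3 * r         # placeholder: exact-solution profile

      l1_error = np.trapz(np.abs(density_code - density_exact), r)
      print(f"L1 error = {l1_error:.3e}")   # decay of this error under mesh refinement
                                            # is the quantity verification studies track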

  2. Payload and General Support Computer (PGSC) Detailed Test Objective (DTO) number 795 postflight report: STS-41

    NASA Technical Reports Server (NTRS)

    Adolf, Jurine A.; Beberness, Benjamin J.; Holden, Kritina L.

    1991-01-01

    Since 1983, the Space Transportation System (STS) had routinely flown the GRiD 1139 (80286) laptop computer as a portable onboard computing resource. In the spring of 1988, the GRiD 1530, an 80386 based machine, was chosen to replace the GRiD 1139. Human factors ground evaluations and detailed test objectives (DTO) examined the usability of the available display types under different lighting conditions and various angle deviations. All proved unsuitable due to either flight qualification or usability problems. In 1990, an Electroluminescent (EL) display for the GRiD 1530 became flight qualified and another DTO was undertaken to examine this display on-orbit. Under conditions of indirect sunlight and low ambient light, the readability of the text and graphics was only limited by the observer's distance from the display. Although a problem of direct sunlight viewing still existed, there were no problems with large angular deviations or dark adaptation. No further evaluations were deemed necessary. The GRiD 1530 with the EL display was accepted by the STS program as the new standard for the PGSC.

  3. Lower bound on the time complexity of local adiabatic evolution

    NASA Astrophysics Data System (ADS)

    Chen, Zhenghao; Koh, Pang Wei; Zhao, Yan

    2006-11-01

    The adiabatic theorem of quantum physics has been, in recent times, utilized in the design of local search quantum algorithms, and has been proven to be equivalent to standard quantum computation, that is, the use of unitary operators [D. Aharonov in Proceedings of the 45th Annual Symposium on the Foundations of Computer Science, 2004, Rome, Italy (IEEE Computer Society Press, New York, 2004), pp. 42-51]. Hence, the study of the time complexity of adiabatic evolution algorithms gives insight into the computational power of quantum algorithms. In this paper, we present two different approaches of evaluating the time complexity for local adiabatic evolution using time-independent parameters, thus providing effective tests (not requiring the evaluation of the entire time-dependent gap function) for the time complexity of newly developed algorithms. We further illustrate our tests by displaying results from the numerical simulation of some problems, viz. specially modified instances of the Hamming weight problem.

  4. LLNL Small-Scale Friction sensitivity (BAM) Test

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Simpson, L.R.; Foltz, M.F.

    1996-06-01

    Small-scale safety testing of explosives, propellants and other energetic materials is done to determine their sensitivity to various stimuli including friction, static spark, and impact. Testing is done to discover potential handling problems for either newly synthesized materials of unknown behavior, or materials that have been stored for long periods of time. This report describes the existing "BAM" Small-Scale Friction Test, and the methods used to determine the friction sensitivity pertinent to handling energetic materials. The accumulated data for the materials tested is not listed here - that information is in a database. Included, however, is a short list of (1) materials that had an unusual response, and (2) a few "standard" materials representing the range of typical responses usually seen.

  5. Quality of life in patients with Parkinson's disease: development of a questionnaire.

    PubMed Central

    de Boer, A G; Wijker, W; Speelman, J D; de Haes, J C

    1996-01-01

    OBJECTIVES--To develop and test a questionnaire for measuring quality of life in patients with Parkinson's disease. METHODS--An item pool was developed based on the experience of patients with Parkinson's disease and of neurologists; medical literature on the problems of patients with Parkinson's disease; and other quality of life questionnaires. To reduce the item pool, 13 patients identified items that were a problem to them and rated their importance. Items which were most often chosen and rated most important were included in the Parkinson's disease quality of life questionnaire (PDQL). The PDQL consists of 37 items. To evaluate the discriminant validity of the PDQL three groups of severity of disease were compared. To test for convergent validity, the scores of the PDQL were tested for correlation with standard indices of quality of life. RESULTS--The PDQL was filled out by 384 patients with Parkinson's disease. It consisted of four subscales: parkinsonian symptoms, systemic symptoms, emotional functioning, and social functioning. The internal-consistency reliability coefficients of the PDQL subscales were high (0.80-0.87). Patients with higher disease severity had significantly lower quality of life on all PDQL subscales (P < 0.05). Almost all PDQL subscales correlated highly (P < 0.001) with the corresponding scales of the standard quality of life indices. CONCLUSION--The PDQL is a relevant, reliable, and valid measure of the quality of life of patients with Parkinson's disease. PMID:8676165

  6. Preventing Behavioral Disorders via Supporting Social and Emotional Competence at Preschool Age.

    PubMed

    Schell, Annika; Albers, Lucia; von Kries, Rüdiger; Hillenbrand, Clemens; Hennemann, Thomas

    2015-09-25

    13-18% of all preschool children have severe behavioral problems at least transiently, sometimes with long-term adverse consequences. In this study, the social training program "Lubo aus dem All! - Vorschulalter" (Lubo from Outer Space, Preschool Version) was evaluated in a kindergarten setting. 15 kindergartens were randomly assigned to either an intervention group or a control group, in a 2:1 ratio. The intervention was designed to strengthen emotional knowledge and regulation, the ability to take another person's point of view, communication skills, and social problem solving. The control group continued with conventional kindergarten activities. The primary endpoint was improvement in social-cognitive problem solving strategies, as assessed with the Wally Social Skills and Problem Solving Game (Wally). Secondary endpoints were improvement in prosocial behavior and reduction in problematic behavior, as assessed with the Preschool Social Behavior Questionnaire (PSBQ) and the Caregiver-Teacher Report Form (C-TRF). Data were collected before and after the intervention and also 5 months later. Mixed models were calculated with random effects to take account of the cluster design and for adjustment for confounding variables. 221 children in kindergarten, aged 5-6 years, were included in the study. Randomization was unsuccessful: the children in the intervention group performed markedly worse on the tests carried out before the intervention. Five months after the end of the intervention, the social-cognitive problem solving strategies of the children in the intervention group had improved more than those of the children in the control group: the intergroup difference in improvement was 0.79 standard deviations of the Wally test (95% confidence interval [CI] 0.13-1.46). This effect was just as marked 5 months later (0.63, 95% CI 0.03-1.23). Prosocial behavior, as measured by the PSBQ, also improved more in the intervention group, with an intergroup difference of 0.37 standard deviations (95% CI 0.05-0.71). An age-appropriate program to prevent behavioral disorders among kindergarten children improved both the children's knowledge of prosocial problem solving strategies and their prosocial behavior.

  7. A spectral mimetic least-squares method for the Stokes equations with no-slip boundary condition

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gerritsma, Marc; Bochev, Pavel

    Formulation of locally conservative least-squares finite element methods (LSFEMs) for the Stokes equations with the no-slip boundary condition has been a long standing problem. Existing LSFEMs that yield exactly divergence free velocities require non-standard boundary conditions (Bochev and Gunzburger, 2009 [3]), while methods that admit the no-slip condition satisfy the incompressibility equation only approximately (Bochev and Gunzburger, 2009 [4, Chapter 7]). Here we address this problem by proving a new non-standard stability bound for the velocity–vorticity–pressure Stokes system augmented with a no-slip boundary condition. This bound gives rise to a norm-equivalent least-squares functional in which the velocity can be approximated by div-conforming finite element spaces, thereby enabling a locally-conservative approximation of this variable. Here, we also provide a practical realization of the new LSFEM using high-order spectral mimetic finite element spaces (Kreeft et al., 2011) and report several numerical tests, which confirm its mimetic properties.

  8. TOUGH2_MP: A parallel version of TOUGH2

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Zhang, Keni; Wu, Yu-Shu; Ding, Chris

    2003-04-09

    TOUGH2_MP is a massively parallel version of TOUGH2. It was developed for running on distributed-memory parallel computers to simulate large simulation problems that may not be solved by the standard, single-CPU TOUGH2 code. The new code implements an efficient massively parallel scheme, while preserving the full capacity and flexibility of the original TOUGH2 code. The new software uses the METIS software package for grid partitioning and the AZTEC software package for linear-equation solving. The standard message-passing interface is adopted for communication among processors. Numerical performance of the current version of the code has been tested on CRAY-T3E and IBM RS/6000 SP platforms. In addition, the parallel code has been successfully applied to real field problems of multi-million-cell simulations for three-dimensional multiphase and multicomponent fluid and heat flow, as well as solute transport. In this paper, we will review the development of TOUGH2_MP, and discuss the basic features, modules, and their applications.

  9. A spectral mimetic least-squares method for the Stokes equations with no-slip boundary condition

    DOE PAGES

    Gerritsma, Marc; Bochev, Pavel

    2016-03-22

    Formulation of locally conservative least-squares finite element methods (LSFEMs) for the Stokes equations with the no-slip boundary condition has been a long standing problem. Existing LSFEMs that yield exactly divergence free velocities require non-standard boundary conditions (Bochev and Gunzburger, 2009 [3]), while methods that admit the no-slip condition satisfy the incompressibility equation only approximately (Bochev and Gunzburger, 2009 [4, Chapter 7]). Here we address this problem by proving a new non-standard stability bound for the velocity–vorticity–pressure Stokes system augmented with a no-slip boundary condition. This bound gives rise to a norm-equivalent least-squares functional in which the velocity can be approximated by div-conforming finite element spaces, thereby enabling a locally-conservative approximation of this variable. Here, we also provide a practical realization of the new LSFEM using high-order spectral mimetic finite element spaces (Kreeft et al., 2011) and report several numerical tests, which confirm its mimetic properties.

  10. Accelerated forgetting? An evaluation on the use of long-term forgetting rates in patients with memory problems

    PubMed Central

    Geurts, Sofie; van der Werf, Sieberen P.; Kessels, Roy P. C.

    2015-01-01

    The main focus of this review was to evaluate whether long-term forgetting rates (delayed tests, days to weeks after initial learning) are more sensitive measures than standard delayed recall measures for detecting memory problems in various patient groups. It has been suggested that accelerated forgetting might be characteristic of epilepsy patients, but little research has been performed in other populations. Here, we identified eleven studies in a wide range of brain-injured patient groups, whose long-term forgetting patterns were compared to those of healthy controls. Signs of accelerated forgetting were found in three studies. The results of eight studies showed normal forgetting over time for the patient groups. However, most of the studies used only a recognition procedure, after optimizing initial learning. Based on these results, we recommend the use of a combined recall and recognition procedure to examine accelerated forgetting and we discuss the relevance of standard and optimized learning procedures in clinical practice. PMID:26106343

  11. Validation of dilution of plasma samples with phosphate buffered saline to eliminate the problem of small volumes associated with children infected with HIV-1 for viral load testing using Cobas AmpliPrep/COBAS TaqMan HIV-1 test, version 2.0 (CAP CTM HIV v2.0).

    PubMed

    Mine, Madisa; Nkoane, Tapologo; Sebetso, Gaseene; Sakyi, Bright; Makhaola, Kgomotso; Gaolathe, Tendani

    2013-12-01

    The sample requirement of 1 mL for the Roche COBAS AmpliPrep/COBAS TaqMan HIV-1 test, version 2.0 (CAP CTM HIV v2.0) limits its utility in measuring plasma HIV-1 RNA levels for small volume samples from children infected with HIV-1. Viral load monitoring is the standard of care for HIV-1-infected patients on antiretroviral therapy in Botswana. The study aimed to validate the dilution of small volume samples with phosphate buffered saline (1× PBS) when quantifying HIV-1 RNA in patient plasma. HIV RNA concentrations were determined in undiluted and diluted pairs of samples comprising panels of quality assessment standards (n=52) as well as patient samples (n=325). There was a strong correlation (R² of 0.98 and 0.95) within the dynamic range of the CAP CTM HIV v2.0 test between undiluted and diluted samples from quality assessment standards and patients, respectively. The Altman-Bland comparison of viral load measurements from diluted and undiluted pairs of quality assessment standards and patient samples showed that the 95% limits of agreement were between -0.40 log10 and 0.49 log10. This difference was within the 0.5 log10 that is generally considered normal assay variation in plasma RNA levels. Dilution of samples with 1× PBS produced viral load measurements comparable to those of undiluted samples. Copyright © 2013 Elsevier B.V. All rights reserved.
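
    The 95% limits of agreement quoted above are, in the Altman-Bland approach, computed from the mean and standard deviation of the paired differences of the log10 viral loads; a minimal sketch with made-up values (not the study data) is:

      # Limits-of-agreement sketch for paired log10 viral loads; each difference is
      # diluted minus undiluted for one sample.  Values are illustrative only.
      import statistics

      log10_undiluted = [3.2, 4.1, 5.0, 2.8, 4.6]
      log10_diluted   = [3.3, 4.0, 5.1, 2.7, 4.7]

      diffs = [d - u for d, u in zip(log10_diluted, log10_undiluted)]
      bias = statistics.mean(diffs)
      sd = statistics.stdev(diffs)

      lower, upper = bias - 1.96 * sd, bias + 1.96 * sd
      print(f"bias = {bias:.2f} log10; 95% limits of agreement = ({lower:.2f}, {upper:.2f})")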

  12. Effects of a Clown-Nurse Educational Intervention on the Reduction of Postoperative Anxiety and Pain Among Preschool Children and Their Accompanying Parents in South Korea.

    PubMed

    Yun, O Bok; Kim, Shin-Jeong; Jung, Dukyoo

    2015-01-01

    This study examined the effects of a clown-nurse educational intervention on children undergoing day surgery for strabismus. This was a quasi-experimental study, using a nonequivalent control group, non-synchronized design. Fifty preschool children and their parents were invited to participate. The children in the intervention group (n=23) received clown therapy and subsequently reported significantly lower states of physiological anxiety, which was evidenced by systolic blood pressure, standardized behavioral anxiety tests, and post-surgery pain, than the control group (n=27). In addition, the parents in the experimental group showed a low state of physiological anxiety, evidenced by systolic blood pressure, pulse rates, standardized behavioral anxiety tests, and state-trait anxiety. The use of preoperative clown intervention may alleviate postoperative problems, not only for children, but also for their parents. Copyright © 2015 Elsevier Inc. All rights reserved.

  13. The advanced orbiting systems testbed program: Results to date

    NASA Technical Reports Server (NTRS)

    Newsome, Penny A.; Otranto, John F.

    1993-01-01

    The Consultative Committee for Space Data Systems Recommendations for Packet Telemetry and Advanced Orbiting Systems (AOS) propose standard solutions to data handling problems common to many types of space missions. The Recommendations address only space/ground and space/space data handling systems. Goddard Space Flight Center's AOS Testbed (AOST) Program was initiated to better understand the Recommendations and their impact on real-world systems, and to examine the extended domain of ground/ground data handling systems. Central to the AOST Program are the development of an end-to-end Testbed and its use in a comprehensive testing program. Other Program activities include flight-qualifiable component development, supporting studies, and knowledge dissemination. The results and products of the Program will reduce the uncertainties associated with the development of operational space and ground systems that implement the Recommendations. The results presented in this paper include architectural issues, a draft proposed standardized test suite and flight-qualifiable components.

  14. The use of cognitive ability measures as explanatory variables in regression analysis.

    PubMed

    Junker, Brian; Schofield, Lynne Steuerle; Taylor, Lowell J

    2012-12-01

    Cognitive ability measures are often taken as explanatory variables in regression analysis, e.g., as a factor affecting a market outcome such as an individual's wage, or a decision such as an individual's education acquisition. Cognitive ability is a latent construct; its true value is unobserved. Nonetheless, researchers often assume that a test score, constructed via standard psychometric practice from individuals' responses to test items, can be safely used in regression analysis. We examine problems that can arise, and suggest that an alternative approach, a "mixed effects structural equations" (MESE) model, may be more appropriate in many circumstances.

  15. Implementation and verification of global optimization benchmark problems

    NASA Astrophysics Data System (ADS)

    Posypkin, Mikhail; Usov, Alexander

    2017-12-01

    The paper considers the implementation and verification of a test suite containing 150 benchmarks for global deterministic box-constrained optimization. A C++ library for describing standard mathematical expressions was developed for this purpose. The library automates the generation of the value of a function and its gradient at a given point, and of the interval estimates of a function and its gradient on a given box, from a single description. Based on this functionality, we have developed a collection of tests for automatic verification of the proposed benchmarks. The verification has shown that the source literature contains mistakes in the benchmark descriptions. The library and the test suite are available for download and can be used freely.
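
    One simple form of the automatic verification described above is to check, at random points inside the box, that the analytic gradient generated from the benchmark description agrees with a finite-difference estimate; a minimal sketch (in Python rather than the authors' C++ library, using a made-up benchmark function) is:

      # Finite-difference check of an analytic gradient at random points in a box
      # (illustrative benchmark function; the original test suite is written in C++).
      import random

      def f(x, y):                  # made-up benchmark function
          return (x - 1.0) ** 2 + 10.0 * (y - x ** 2) ** 2

      def grad_f(x, y):             # its analytic gradient
          return (2.0 * (x - 1.0) - 40.0 * x * (y - x ** 2),
                  20.0 * (y - x ** 2))

      def fd_grad(x, y, h=1e-6):    # central finite differences
          return ((f(x + h, y) - f(x - h, y)) / (2.0 * h),
                  (f(x, y + h) - f(x, y - h)) / (2.0 * h))

      box = ((-2.0, 2.0), (-2.0, 2.0))
      for _ in range(5):
          x, y = random.uniform(*box[0]), random.uniform(*box[1])
          exact, approx = grad_f(x, y), fd_grad(x, y)
          assert all(abs(a - b) < 1e-4 * (1.0 + abs(a)) for a, b in zip(exact, approx))
      print("analytic gradient matches finite differences at sampled points")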

  16. The role of physical content in piagetian spatial tasks: Sex differences in spatial knowledge?

    NASA Astrophysics Data System (ADS)

    Golbeck, Susan L.

    Sex-related differences on Piagetian horizontality (water level) and verticality (plumb line) tasks were examined in 64 college students. It was hypothesized that females' difficulties on these Euclidean spatial problems are due not to differences in underlying spatial competence, but rather to differences in knowledge of task specific information about the physical properties of water levels and plumb lines. This was tested by presenting subjects with the standard water level and plumb line problems and also modified problems not requiring knowledge of physical principles (i.e., drawing straight up and down or straight across lines inside tipped rectangles). While males were expected to outperform females on the standard tasks, no sex differences were expected on the modified tasks. Results of an ANOVA on scores for horizontality and verticality each showed main effects for sex and task version but failed to reveal the hypothesized interaction. However, performance on the Euclidean spatial tasks was also considered in terms of overall success versus failure. While males were more successful than females in the standard format, males and females were equally successful in the modified, nonphysical, format. Hence, college aged males and females generally do not differ in spatial competence although they may be differentially influenced by task content. Findings are discussed in terms of their implications for theory and practice. It is emphasized that science educators must be especially aware of such task influences for females so that performance deficits are not mistaken for competence deficits.

  17. Effectiveness of the Treatment Readiness and Induction Program for Increasing Adolescent Motivation for Change

    PubMed Central

    Becan, Jennifer E.; Knight, Danica K.; Crawley, Rachel D.; Joe, George W.; Flynn, Patrick M.

    2014-01-01

    Success in substance abuse treatment is improved by problem recognition, desire to seek help, and readiness to engage in treatment, all of which are important aspects of motivation. Interventions that facilitate these at treatment induction for adolescents are especially needed. The purpose of this study is to assess the effectiveness of TRIP (Treatment Readiness and Induction Program) in promoting treatment motivation. Data represent 519 adolescents from 6 residential programs who completed assessments at treatment intake (Time 1) and 35 days after admission (Time 2). The design consisted of a comparison sample (n = 281) that had enrolled in treatment prior to implementation of TRIP (standard operating practice) and a sample of clients that had entered treatment after TRIP began and received standard operating practice enhanced by TRIP (n = 238). Repeated measures ANCOVAs were conducted using each Time 2 motivation scale as a dependent measure. Motivation scales were conceptualized as representing sequential stages of change. LISREL was used to test a structural model involving TRIP participation, gender, drug use severity, juvenile justice involvement, age, race-ethnicity, prior treatment, and urgency as predictors of the stages of treatment motivation. Compared to standard practice, adolescents receiving TRIP demonstrated greater gains in problem recognition, even after controlling for the other variables in the model. The model fit was adequate, with TRIP directly affecting problem recognition and indirectly affecting later stages of change (desire for help and treatment readiness). Future studies should examine which specific components of TRIP affect change in motivation. PMID:25456094

  18. Ill-defined problem solving in amnestic mild cognitive impairment: linking episodic memory to effective solution generation.

    PubMed

    Sheldon, S; Vandermorris, S; Al-Haj, M; Cohen, S; Winocur, G; Moscovitch, M

    2015-02-01

    It is well accepted that the medial temporal lobes (MTL), and the hippocampus specifically, support episodic memory processes. Emerging evidence suggests that these processes also support the ability to effectively solve ill-defined problems, that is, problems that do not have a set routine or solution. To test the relation between episodic memory and problem solving, we examined the ability of individuals with single-domain amnestic mild cognitive impairment (aMCI), a condition characterized by episodic memory impairment, to solve ill-defined social problems. Participants with aMCI and age- and education-matched controls were given a battery of tests that included standardized neuropsychological measures, the Autobiographical Interview (Levine et al., 2002), which was scored for episodic content in descriptions of past personal events, and a measure of ill-defined social problem solving. Corroborating previous findings, the aMCI group generated less episodically rich narratives when describing past events. Individuals with aMCI also generated less effective solutions to ill-defined problems than the control participants. Correlation analyses demonstrated that the ability to recall episodic elements from autobiographical memories was positively related to the ability to effectively solve ill-defined problems, and the ability to solve these ill-defined problems was in turn related to measures of activities of daily living. In conjunction with previous reports, the results of the present study point to a new functional role of episodic memory in ill-defined goal-directed behavior and other non-memory tasks that require flexible thinking. Our findings also have implications for the cognitive and behavioural profile of aMCI by suggesting that the ability to effectively solve ill-defined problems is related to sustained functional independence. Copyright © 2015 Elsevier Ltd. All rights reserved.
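
    A minimal sketch of the kind of correlation analysis reported above, relating episodic detail in narratives to rated solution effectiveness, is shown below. The arrays and variable names are hypothetical placeholders, not study data.

```python
# Hypothetical sketch of the reported correlation analysis: episodic richness
# of autobiographical narratives vs. rated effectiveness of solutions to
# ill-defined problems. Values are illustrative placeholders only.
from scipy.stats import pearsonr

episodic_details = [12, 18, 9, 22, 15, 7, 25, 14]            # internal details per narrative
solution_quality = [3.0, 4.2, 2.5, 4.8, 3.6, 2.1, 5.0, 3.3]  # rated solution effectiveness

r, p = pearsonr(episodic_details, solution_quality)
print(f"r = {r:.2f}, p = {p:.3f}")  # a positive r mirrors the reported relation
```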

  19. Science, politics, and health in the brave new world of pharmaceutical carcinogenic risk assessment: Technical progress or cycle of regulatory capture?

    PubMed Central

    Abraham, John; Ballinger, Rachel

    2012-01-01

    The carcinogenicity (cancer-inducing potential) of pharmaceuticals is an important risk factor for health when considering whether thousands of patients on drug trials or millions/billions of consumers in the marketplace should be exposed to a new drug. Drawing on fieldwork involving over 50 interviews and documentary research spanning 2002–2010 in Europe and the US, and on regulatory capture theory, this article investigates how the techno-regulatory standards for carcinogenicity testing of pharmaceuticals have altered since 1998. It focuses on the replacement of long-term carcinogenicity tests in rodents (especially mice) with shorter-term tests involving genetically-engineered mice (GEM). Based on evidence regarding financial/organizational control, methodological design, and interpretation of the validation and application of these new GEM tests, it is argued that regulatory agencies permitted the drug industry to shape such validation and application in ways that prioritized commercial interests over the need to protect public health. Boundary-work enabling industry scientists to define some standards of public-health policy facilitated such capture. However, as the scientific credibility of GEM tests as tools to protect public health by screening out carcinogens became inescapably problematic, a regulatory resurgence, impelled by reputational concerns, exercised more control over industry's construction and use of the tests. The extensive problems with GEM tests as public-health-protective regulatory science raise the spectre that alterations to pharmaceutical carcinogenicity-testing standards since the 1990s may have been boundary-work in which the political project of decreasing the chance that companies' products are defined as carcinogenic has masqueraded as techno-science. PMID:22784375

  20. Radiated radiofrequency immunity testing of automated external defibrillators - modifications of applicable standards are needed

    PubMed Central

    2011-01-01

    Background We studied the worst-case radiated radiofrequency (RF) susceptibility of automated external defibrillators (AEDs) based on the electromagnetic compatibility (EMC) requirements of a current standard for cardiac defibrillators, IEC 60601-2-4. Square wave modulation was used to mimic cardiac physiological frequencies of 1 - 3 Hz. Deviations from the IEC standard were a lower frequency limit of 30 MHz, chosen to explore frequencies where the patient-connected leads could resonate, and testing at field strengths up to 20 V/m. We tested AEDs with ventricular fibrillation (V-Fib) and normal sinus rhythm signals on the patient leads to enable testing for false negatives (an inappropriate "no shock advised" by the AED). Methods We performed radiated exposures in a 10-meter anechoic chamber using two broadband antennas to generate E fields in the 30 - 2500 MHz frequency range at 1% frequency steps. An AED patient simulator was housed in a shielded box and delivered normal and fibrillation waveforms to the AED's patient leads. We developed a technique to screen ECG waveforms stored in each AED for electromagnetic interference at all frequencies without waiting for the long cycle times between analyses (normally 20 to over 200 s). Results Five of the seven AEDs tested were susceptible to RF interference, primarily at frequencies below 80 MHz. Some induced errors could cause AEDs to malfunction and effectively inhibit operator prompts to deliver a shock to a patient experiencing lethal fibrillation. Failures occurred in some AEDs exposed to E fields between 3 V/m and 20 V/m in the 38 - 50 MHz range, and only when the patient simulator was delivering a V-Fib waveform to the AED. We also found that it is not possible to test modern battery-only-operated AEDs for EMI with a patient simulator if the IEC 60601-2-4 defibrillator standard's simulated patient load is used. Conclusions AEDs experienced potentially life-threatening false-negative failures from radiated RF, primarily below the lower frequency limit of present AED standards. Field strengths causing failures were as low as 3 V/m at frequencies below 80 MHz, where resonance of the patient leads and the AED input circuitry occurred. This, together with the problems with the standard's prescribed patient load, makes changes to the standard necessary. PMID:21801368
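
    As an illustration of the sweep described in the Methods, the sketch below generates a 30 - 2500 MHz frequency list in 1% steps. It assumes a geometric stepping scheme (each test frequency 1% above the previous one), which is an interpretation of the abstract rather than the authors' published procedure.

```python
# Sketch of a 30 - 2500 MHz sweep in 1% frequency steps, as described above.
# Assumes each test frequency is 1% above the previous one (geometric sweep);
# the exact stepping scheme is an assumption, not taken from the paper.
def rf_sweep(start_mhz=30.0, stop_mhz=2500.0, step_fraction=0.01):
    freqs = []
    f = start_mhz
    while f <= stop_mhz:
        freqs.append(round(f, 3))
        f *= 1.0 + step_fraction
    return freqs

frequencies = rf_sweep()
print(len(frequencies), "test frequencies")       # roughly 445 steps for this range
print(frequencies[:3], "...", frequencies[-3:])   # first and last few frequencies (MHz)
```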
