Sample records for validation test cases

  1. Validation Test Report for the CRWMS Analysis and Logistics Visually Interactive Model CALVIN Version 3.0, 10074-VTR-3.0-00

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    S. Gillespie

    2000-07-27

    This report describes the tests performed to validate the CRWMS "Analysis and Logistics Visually Interactive" Model (CALVIN) Version 3.0 (V3.0) computer code (STN: 10074-3.0-00). To validate the code, a series of test cases was developed in the CALVIN V3.0 Validation Test Plan (CRWMS M&O 1999a) that exercises the principal calculation models and options of CALVIN V3.0. Twenty-five test cases were developed: 18 logistics test cases and 7 cost test cases. These cases test the features of CALVIN in a sequential manner, so that the validation of each test case is used to demonstrate the accuracy of the input to subsequent calculations. Where necessary, the test cases utilize reduced-size data tables to make the hand calculations used to verify the results more tractable, while still adequately testing the code's capabilities. Acceptance criteria were established for the logistics and cost test cases in the Validation Test Plan (CRWMS M&O 1999a). The logistics test cases were developed to test the following CALVIN calculation models: Spent nuclear fuel (SNF) and reactivity calculations; Options for altering reactor life; Adjustment of commercial SNF (CSNF) acceptance rates for fiscal year calculations and mid-year acceptance start; Fuel selection, transportation cask loading, and shipping to the Monitored Geologic Repository (MGR); Transportation cask shipping to and storage at an Interim Storage Facility (ISF); Reactor pool allocation options; and Disposal options at the MGR. Two types of cost test cases were developed: cases to validate the detailed transportation costs, and cases to validate the costs associated with the Civilian Radioactive Waste Management System (CRWMS) Management and Operating Contractor (M&O) and Regional Servicing Contractors (RSCs). For each test case, values calculated using Microsoft Excel 97 worksheets were compared to CALVIN V3.0 scenarios with the same input data and assumptions. All of the test case results agree with the CALVIN V3.0 results within the bounds of the acceptance criteria. Therefore, it is concluded that the CALVIN V3.0 calculation models and options tested in this report are validated.

  2. Assessing Discriminative Performance at External Validation of Clinical Prediction Models

    PubMed Central

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W.

    2016-01-01

    Introduction: External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. Methods: We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated it in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. Results: The permutation test indicated that the validation and development set were homogeneous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. Conclusion: The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients. PMID:26881753
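
    The quantities in this record can be made concrete with a small sketch. The c-statistic is the fraction of case/non-case pairs the model ranks correctly, and a permutation test of transportability repeatedly reshuffles observations between the development and validation sets. The function names, pooling scheme, and toy data below are illustrative assumptions, not the authors' implementation.

```python
import random

def c_statistic(scores, labels):
    """Concordance (c-statistic / AUC): fraction of case/non-case pairs
    ranked correctly by the predicted scores; ties count as 1/2."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    if not pos or not neg:
        return 0.5  # undefined for a degenerate sample; treat as chance level
    concordant = 0.0
    for p in pos:
        for n in neg:
            if p > n:
                concordant += 1.0
            elif p == n:
                concordant += 0.5
    return concordant / (len(pos) * len(neg))

def permutation_test(dev, val, n_perm=2000, seed=0):
    """Permutation test on the drop in c-statistic from development to
    validation. dev/val are lists of (score, label) pairs; under the null,
    set membership is exchangeable, so we reshuffle and recompute."""
    rng = random.Random(seed)
    observed = c_statistic(*zip(*dev)) - c_statistic(*zip(*val))
    pooled = dev + val
    count = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        d, v = pooled[:len(dev)], pooled[len(dev):]
        if abs(c_statistic(*zip(*d)) - c_statistic(*zip(*v))) >= abs(observed):
            count += 1
    return observed, count / n_perm  # observed difference and its p-value
```

    As the record notes, a small p-value here can reflect case-mix differences rather than miscalibrated coefficients, which is why the benchmark-value approach disentangles the two.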

  3. Assessing Discriminative Performance at External Validation of Clinical Prediction Models.

    PubMed

    Nieboer, Daan; van der Ploeg, Tjeerd; Steyerberg, Ewout W

    2016-01-01

    External validation studies are essential to study the generalizability of prediction models. Recently a permutation test, focusing on discrimination as quantified by the c-statistic, was proposed to judge whether a prediction model is transportable to a new setting. We aimed to evaluate this test and compare it to previously proposed procedures to judge any changes in c-statistic from development to external validation setting. We compared the use of the permutation test to the use of benchmark values of the c-statistic following from a previously proposed framework to judge transportability of a prediction model. In a simulation study we developed a prediction model with logistic regression on a development set and validated them in the validation set. We concentrated on two scenarios: 1) the case-mix was more heterogeneous and predictor effects were weaker in the validation set compared to the development set, and 2) the case-mix was less heterogeneous in the validation set and predictor effects were identical in the validation and development set. Furthermore we illustrated the methods in a case study using 15 datasets of patients suffering from traumatic brain injury. The permutation test indicated that the validation and development set were homogenous in scenario 1 (in almost all simulated samples) and heterogeneous in scenario 2 (in 17%-39% of simulated samples). Previously proposed benchmark values of the c-statistic and the standard deviation of the linear predictors correctly pointed at the more heterogeneous case-mix in scenario 1 and the less heterogeneous case-mix in scenario 2. The recently proposed permutation test may provide misleading results when externally validating prediction models in the presence of case-mix differences between the development and validation population. To correctly interpret the c-statistic found at external validation it is crucial to disentangle case-mix differences from incorrect regression coefficients.

  4. Directed Design of Experiments for Validating Probability of Detection Capability of a Testing System

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R. (Inventor)

    2012-01-01

    A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
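
    Hit/miss input data of the kind described here is conventionally summarized by fitting a probability-of-detection (POD) curve against flaw size. The sketch below fits a simple log-logistic POD model by maximum likelihood and reads off a90, the flaw size detected with 90% probability. The data, the crude grid-search fit, and the model form are illustrative assumptions, not the patented directed-DOE procedure.

```python
import math

# Hypothetical hit/miss data: (flaw size, 1 = detected, 0 = missed).
data = [(0.5, 0), (0.8, 0), (1.0, 0), (1.2, 1), (1.5, 0),
        (1.8, 1), (2.0, 1), (2.5, 1), (3.0, 1), (4.0, 1)]

def log_likelihood(b0, b1):
    """Bernoulli log-likelihood of a log-logistic POD curve:
    POD(a) = 1 / (1 + exp(-(b0 + b1 * ln a)))."""
    ll = 0.0
    for a, hit in data:
        p = 1.0 / (1.0 + math.exp(-(b0 + b1 * math.log(a))))
        p = min(max(p, 1e-12), 1 - 1e-12)  # guard against log(0)
        ll += math.log(p) if hit else math.log(1.0 - p)
    return ll

# Crude grid-search MLE (a real analysis would use proper optimization).
best = max(((log_likelihood(i / 10, j / 10), i / 10, j / 10)
            for i in range(-100, 101) for j in range(1, 101)),
           key=lambda t: t[0])
_, b0, b1 = best

# Flaw size detected with 90% probability: solve POD(a90) = 0.9.
a90 = math.exp((math.log(0.9 / 0.1) - b0) / b1)
print(f"b0={b0:.2f}, b1={b1:.2f}, a90={a90:.2f}")
```

    The "optimal class width" step in the patent concerns how such observations are binned before analysis; the fit above simply shows what the downstream POD calculation consumes.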

  5. Validating an artificial intelligence human proximity operations system with test cases

    NASA Astrophysics Data System (ADS)

    Huber, Justin; Straub, Jeremy

    2013-05-01

    An artificial intelligence-controlled robot (AICR) operating in close proximity to humans poses risk to these humans. Validating the performance of an AICR is an ill-posed problem due to the complexity introduced by erratic (non-computer) actors. In order to prove the AICR's usefulness, test cases must be generated to simulate the actions of these actors. This paper discusses AICR performance validation in the context of a common human activity, moving through a crowded corridor, using test cases created by an AI use case producer. This test is a two-dimensional simplification relevant to autonomous UAV navigation in the national airspace.

  6. Testing expert systems

    NASA Technical Reports Server (NTRS)

    Chang, C. L.; Stachowitz, R. A.

    1988-01-01

    Software quality is of primary concern in all large-scale expert system development efforts. Building appropriate validation and test tools for ensuring software reliability of expert systems is therefore required. The Expert Systems Validation Associate (EVA) is a validation system under development at the Lockheed Artificial Intelligence Center. EVA provides a wide range of validation and test tools to check the correctness, consistency, and completeness of an expert system. Testing is a major function of EVA: it means executing an expert system with test cases with the intent of finding errors. In this paper, we describe many different types of testing, such as function-based testing, structure-based testing, and data-based testing. We describe how appropriate test cases may be selected in order to perform good and thorough testing of an expert system.

  7. Fatigue after stroke: the development and evaluation of a case definition.

    PubMed

    Lynch, Joanna; Mead, Gillian; Greig, Carolyn; Young, Archie; Lewis, Susan; Sharpe, Michael

    2007-11-01

    While fatigue after stroke is a common problem, it has no generally accepted definition. Our aim was to develop a case definition for post-stroke fatigue and to test its psychometric properties. A case definition with face validity and an associated structured interview was constructed. After initial piloting, the feasibility, reliability (test-retest and inter-rater) and concurrent validity (in relation to four fatigue severity scales) were determined in 55 patients with stroke. All participating patients provided satisfactory answers to all the case definition probe questions, demonstrating its feasibility. For test-retest reliability, kappa was 0.78 (95% CI, 0.57-0.94, P<.01) and for inter-rater reliability kappa was 0.80 (95% CI, 0.62-0.99, P<.01). Patients fulfilling the case definition also had substantially higher fatigue scores on four fatigue severity scales (P<.001), indicating concurrent validity. The proposed case definition is feasible to administer and reliable in practice, and there is evidence of concurrent validity. It requires further evaluation in different settings.
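
    The test-retest and inter-rater figures quoted here are Cohen's kappa, a chance-corrected agreement statistic. A minimal computation over hypothetical repeated ratings (the data are illustrative, not the study's):

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa: chance-corrected agreement between two sets of
    categorical ratings of the same cases."""
    assert len(ratings_a) == len(ratings_b)
    n = len(ratings_a)
    # Observed proportion of agreement:
    observed = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Agreement expected by chance from the marginal rating frequencies:
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    categories = set(ratings_a) | set(ratings_b)
    expected = sum(freq_a[c] / n * freq_b[c] / n for c in categories)
    return (observed - expected) / (1.0 - expected)

# Case definition met (1) or not met (0) at two interview sittings:
print(cohens_kappa([1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 1, 0]))
```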

  8. A case of misdiagnosis of mild cognitive impairment: The utility of symptom validity testing in an outpatient memory clinic.

    PubMed

    Roor, Jeroen J; Dandachi-FitzGerald, Brechje; Ponds, Rudolf W H M

    2016-01-01

    Noncredible symptom reports hinder the diagnostic process. This fact is especially the case for medical conditions that rely on subjective report of symptoms instead of objective measures. Mild cognitive impairment (MCI) primarily relies on subjective report, which makes it potentially susceptible to erroneous diagnosis. In this case report, we describe a 59-year-old female patient diagnosed with MCI 10 years previously. The patient was referred to the neurology department for reexamination by her general practitioner because of cognitive complaints and persistent fatigue. This case study used information from the medical file, a new magnetic resonance imaging brain scan, and neuropsychological assessment. Current neuropsychological assessment, including symptom validity tests, clearly indicated noncredible test performance, thereby invalidating the obtained neuropsychological test data. We conclude that a blind spot for noncredible symptom reports existed in the previous diagnostic assessments. This case highlights the usefulness of formal symptom validity testing in the diagnostic assessment of MCI.

  9. Alternative Vocabularies in the Test Validity Literature

    ERIC Educational Resources Information Center

    Markus, Keith A.

    2016-01-01

    Justification of testing practice involves moving from one state of knowledge about the test to another. Theories of test validity can (a) focus on the beginning of the process, (b) focus on the end, or (c) encompass the entire process. Analyses of four case studies test and illustrate three claims: (a) restrictions on validity entail a supplement…

  10. Validation of software for calculating the likelihood ratio for parentage and kinship.

    PubMed

    Drábek, J

    2009-03-01

    Although the likelihood ratio is a well-known statistical technique, commercial off-the-shelf (COTS) software products for its calculation are not sufficiently validated to suit general requirements for the competence of testing and calibration laboratories (EN/ISO/IEC 17025:2005 norm) per se. The software in question can be considered critical as it directly weighs the forensic evidence allowing judges to decide on guilt or innocence or to identify person or kin (i.e.: in mass fatalities). For these reasons, accredited laboratories shall validate likelihood ratio software in accordance with the above norm. To validate software for calculating the likelihood ratio in parentage/kinship scenarios I assessed available vendors, chose two programs (Paternity Index and familias) for testing, and finally validated them using tests derived from elaboration of the available guidelines for the field of forensics, biomedicine, and software engineering. MS Excel calculation using known likelihood ratio formulas or peer-reviewed results of difficult paternity cases were used as a reference. Using seven testing cases, it was found that both programs satisfied the requirements for basic paternity cases. However, only a combination of two software programs fulfills the criteria needed for our purpose in the whole spectrum of functions under validation with the exceptions of providing algebraic formulas in cases of mutation and/or silent allele.
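
    For the basic paternity cases mentioned, the per-locus paternity index has a closed form that is easy to replicate in a spreadsheet, which is how such software is typically cross-checked. A minimal sketch for the simple trio case with an unambiguous obligate paternal allele; the genotypes and allele frequencies below are illustrative assumptions:

```python
def paternity_index(af_genotype, paternal_allele, freq):
    """Per-locus paternity index for the simple trio case:
    PI = P(obligate allele | alleged father) / P(obligate allele | random man)."""
    # Probability the alleged father transmits the obligate allele
    # (1.0 if homozygous for it, 0.5 if heterozygous):
    x = af_genotype.count(paternal_allele) / 2.0
    # A random man contributes it with its population frequency:
    return x / freq

# Hypothetical three-locus case (allele frequencies are illustrative):
loci = [
    (("A", "A"), "A", 0.10),   # homozygous AF:   PI = 1.0 / 0.10 = 10.0
    (("B", "C"), "B", 0.20),   # heterozygous AF: PI = 0.5 / 0.20 = 2.5
    (("D", "E"), "D", 0.05),   # heterozygous AF: PI = 0.5 / 0.05 = 10.0
]
combined_pi = 1.0
for genotype, allele, freq in loci:
    combined_pi *= paternity_index(genotype, allele, freq)
print(combined_pi)  # product over independent loci
```

    Mutation and silent-allele scenarios, the cases the record flags as lacking algebraic formulas in the tested programs, require more elaborate per-locus expressions than this sketch.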

  11. A Human Proximity Operations System test case validation approach

    NASA Astrophysics Data System (ADS)

    Huber, Justin; Straub, Jeremy

    A Human Proximity Operations System (HPOS) poses numerous risks in a real world environment. These risks range from mundane tasks such as avoiding walls and fixed obstacles to the critical need to keep people and processes safe in the context of the HPOS's situation-specific decision making. Validating the performance of an HPOS, which must operate in a real-world environment, is an ill-posed problem due to the complexity that is introduced by erratic (non-computer) actors. In order to prove the HPOS's usefulness, test cases must be generated to simulate possible actions of these actors, so the HPOS can be shown to be able to perform safely in environments where it will be operated. The HPOS must demonstrate its ability to be as safe as a human, across a wide range of foreseeable circumstances. This paper evaluates the use of test cases to validate HPOS performance and utility. It considers an HPOS's safe performance in the context of a common human activity, moving through a crowded corridor, and extrapolates (based on this) to the suitability of using test cases for AI validation in other areas of prospective application.

  12. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.

  13. Radiant Energy Measurements from a Scaled Jet Engine Axisymmetric Exhaust Nozzle for a Baseline Code Validation Case

    NASA Technical Reports Server (NTRS)

    Baumeister, Joseph F.

    1994-01-01

    A non-flowing, electrically heated test rig was developed to verify computer codes that calculate radiant energy propagation from nozzle geometries that represent aircraft propulsion nozzle systems. Since there are a variety of analysis tools used to evaluate thermal radiation propagation from partially enclosed nozzle surfaces, an experimental benchmark test case was developed for code comparison. This paper briefly describes the nozzle test rig and the developed analytical nozzle geometry used to compare the experimental and predicted thermal radiation results. A major objective of this effort was to make available the experimental results and the analytical model in a format to facilitate conversion to existing computer code formats. For code validation purposes this nozzle geometry represents one validation case for one set of analysis conditions. Since each computer code has advantages and disadvantages based on scope, requirements, and desired accuracy, the usefulness of this single nozzle baseline validation case can be limited for some code comparisons.

  14. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE PAGES

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir; ...

    2017-08-19

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.

  15. Five Data Validation Cases

    ERIC Educational Resources Information Center

    Simkin, Mark G.

    2008-01-01

    Data-validation routines enable computer applications to test data to ensure their accuracy, completeness, and conformance to industry or proprietary standards. This paper presents five programming cases that require students to validate five different types of data: (1) simple user data entries, (2) UPC codes, (3) passwords, (4) ISBN numbers, and…
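
    Two of the five case types, UPC codes and ISBN numbers, validate with standard check-digit rules, which make compact programming exercises. A minimal sketch (the function names are our own):

```python
def valid_upc_a(code: str) -> bool:
    """UPC-A: 12 digits; 3x the digits in odd positions plus the digits in
    even positions (1-indexed, check digit included) must be 0 mod 10."""
    if len(code) != 12 or not code.isdigit():
        return False
    digits = [int(c) for c in code]
    return (3 * sum(digits[0::2]) + sum(digits[1::2])) % 10 == 0

def valid_isbn10(code: str) -> bool:
    """ISBN-10: weighted sum 10*d1 + 9*d2 + ... + 1*d10 must be 0 mod 11,
    where a trailing 'X' stands for the value 10."""
    if len(code) != 10:
        return False
    total = 0
    for weight, ch in zip(range(10, 0, -1), code):
        if ch == "X" and weight == 1:
            value = 10          # 'X' is only legal as the check digit
        elif ch.isdigit():
            value = int(ch)
        else:
            return False
        total += weight * value
    return total % 11 == 0
```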

  16. Parameterization of Model Validating Sets for Uncertainty Bound Optimizations. Revised

    NASA Technical Reports Server (NTRS)

    Lim, K. B.; Giesy, D. P.

    2000-01-01

    Given measurement data, a nominal model and a linear fractional transformation uncertainty structure with an allowance on unknown but bounded exogenous disturbances, easily computable tests for the existence of a model validating uncertainty set are given. Under mild conditions, these tests are necessary and sufficient for the case of complex, nonrepeated, block-diagonal structure. For the more general case which includes repeated and/or real scalar uncertainties, the tests are only necessary but become sufficient if a collinearity condition is also satisfied. With the satisfaction of these tests, it is shown that a parameterization of all model validating sets of plant models is possible. The new parameterization is used as a basis for a systematic way to construct or perform uncertainty tradeoff with model validating uncertainty sets which have specific linear fractional transformation structure for use in robust control design and analysis. An illustrative example which includes a comparison of candidate model validating sets is given.

  17. Author Response to Sabour (2018), "Comment on Hall et al. (2017), 'How to Choose Between Measures of Tinnitus Loudness for Clinical Research? A Report on the Reliability and Validity of an Investigator-Administered Test and a Patient-Reported Measure Using Baseline Data Collected in a Phase IIa Drug Trial'".

    PubMed

    Hall, Deborah A; Mehta, Rajnikant L; Fackrell, Kathryn

    2018-03-08

    The authors respond to a letter to the editor (Sabour, 2018) concerning the interpretation of validity in the context of evaluating treatment-related change in tinnitus loudness over time. The authors refer to several landmark methodological publications and an international standard concerning the validity of patient-reported outcome measurement instruments. The tinnitus loudness rating performed better against our reported acceptability criteria for (face and convergent) validity than did the tinnitus loudness matching test. It is important to distinguish between tests that evaluate the validity of measuring treatment-related change over time and tests that quantify the accuracy of diagnosing tinnitus as a case and non-case.

  18. Validation of asthma recording in electronic health records: a systematic review

    PubMed Central

    Nissen, Francis; Quint, Jennifer K; Wilkinson, Samantha; Mullerova, Hana; Smeeth, Liam; Douglas, Ian J

    2017-01-01

    Objective: To describe the methods used to validate asthma diagnoses in electronic health records and summarize the results of the validation studies. Background: Electronic health records are increasingly being used for research on asthma to inform health services and health policy. Validation of the recording of asthma diagnoses in electronic health records is essential to use these databases for credible epidemiological asthma research. Methods: We searched EMBASE and MEDLINE databases for studies that validated asthma diagnoses detected in electronic health records up to October 2016. Two reviewers independently assessed the full text against the predetermined inclusion criteria. Key data including author, year, data source, case definitions, reference standard, and validation statistics (including sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]) were summarized in two tables. Results: Thirteen studies met the inclusion criteria. Most studies demonstrated a high validity using at least one case definition (PPV >80%). Ten studies used a manual validation as the reference standard; each had at least one case definition with a PPV of at least 63%, up to 100%. We also found two studies using a second independent database to validate asthma diagnoses. The PPVs of the best performing case definitions ranged from 46% to 58%. We found one study which used a questionnaire as the reference standard to validate a database case definition; the PPV of the case definition algorithm in this study was 89%. Conclusion: Attaining high PPVs (>80%) is possible using each of the discussed validation methods. Identifying asthma cases in electronic health records is possible with high sensitivity, specificity, or PPV, by combining multiple data sources, or by focusing on specific test measures. Studies testing a range of case definitions show wide variation in the validity of each definition, suggesting this may be important for obtaining asthma definitions with optimal validity. PMID:29238227
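
    The validation statistics this review tabulates (sensitivity, specificity, PPV, NPV) all derive from one 2x2 table of the case definition against the reference standard. A minimal sketch with hypothetical counts:

```python
def validation_metrics(tp, fp, fn, tn):
    """Standard 2x2 validation statistics for a case definition judged
    against a reference standard (e.g., manual record review)."""
    return {
        "sensitivity": tp / (tp + fn),  # true cases correctly flagged
        "specificity": tn / (tn + fp),  # non-cases correctly unflagged
        "ppv": tp / (tp + fp),          # flagged records that are true cases
        "npv": tn / (tn + fn),          # unflagged records that are non-cases
    }

# Hypothetical counts: 90 true asthma cases flagged, 10 false positives,
# 15 missed cases, 885 correctly unflagged records.
m = validation_metrics(tp=90, fp=10, fn=15, tn=885)
print({k: round(v, 3) for k, v in m.items()})
```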

  19. A Framework for Testing Scientific Software: A Case Study of Testing Amsterdam Discrete Dipole Approximation Software

    NASA Astrophysics Data System (ADS)

    Shao, Hongbing

    Software testing of scientific software systems often suffers from the test oracle problem, i.e., the lack of a test oracle. The Amsterdam discrete dipole approximation code (ADDA) is a scientific software system that can be used to simulate light scattering of scatterers of various types, and its testing suffers from this problem. In this thesis work, I established a testing framework for scientific software systems and evaluated this framework using ADDA as a case study. To test ADDA, I first used the CMMIE code as a pseudo-oracle for simulating light scattering of a homogeneous sphere scatterer. Comparable results were obtained between ADDA and the CMMIE code, validating ADDA for use with homogeneous sphere scatterers. Then I used an experimental result obtained for light scattering of a homogeneous sphere to validate the use of ADDA with sphere scatterers. ADDA produced a light scattering simulation comparable to the experimentally measured result, further validating its use for simulating light scattering of sphere scatterers. Then I used metamorphic testing to generate test cases covering scatterers of various geometries, orientations, and homogeneity or non-homogeneity. ADDA was tested under each of these test cases and all tests passed. The use of statistical analysis together with metamorphic testing is discussed as a future direction. In short, using ADDA as a case study, I established a testing framework, including the use of pseudo-oracles, experimental results, and metamorphic testing techniques, to test scientific software systems that suffer from test oracle problems. Each of these techniques is necessary and contributes to the testing of the software under test.
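
    Metamorphic testing, the final stage of the framework above, checks relations between pairs of runs instead of comparing a single run against an exact oracle. The sketch below uses a stand-in for the scattering code and two hypothetical relations (geometric similarity and monotonicity in scatterer size); it illustrates the technique, not ADDA's actual physics.

```python
import math
import random

def total_cross_section(radius, wavelength):
    """Stand-in for the system under test (a real study would call the
    scattering code). Any smooth positive function works for the demo."""
    x = 2 * math.pi * radius / wavelength        # size parameter
    return math.pi * radius ** 2 * (1 - math.exp(-x))

def check_metamorphic_relations(trials=100, seed=1):
    """Metamorphic relations need no exact oracle, only properties that
    must hold between follow-up runs and a source run.
    MR1: jointly scaling radius and wavelength by k scales the
         cross-section by k^2 (geometric similarity).
    MR2: at fixed wavelength, a larger scatterer never has a smaller
         total cross-section (monotonicity)."""
    rng = random.Random(seed)
    for _ in range(trials):
        r = rng.uniform(0.1, 5.0)
        w = rng.uniform(0.1, 5.0)
        k = rng.uniform(0.5, 3.0)
        base = total_cross_section(r, w)
        # MR1: similarity scaling
        assert math.isclose(total_cross_section(k * r, k * w),
                            k ** 2 * base, rel_tol=1e-9)
        # MR2: monotonicity in size
        assert total_cross_section(r * 1.1, w) >= base
    return True
```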

  20. Analytic Validation of Immunohistochemistry Assays: New Benchmark Data From a Survey of 1085 Laboratories.

    PubMed

    Stuart, Lauren N; Volmar, Keith E; Nowak, Jan A; Fatheree, Lisa A; Souers, Rhona J; Fitzgibbons, Patrick L; Goldsmith, Jeffrey D; Astles, J Rex; Nakhleh, Raouf E

    2017-09-01

    A cooperative agreement between the College of American Pathologists (CAP) and the United States Centers for Disease Control and Prevention was undertaken to measure laboratories' awareness and implementation of an evidence-based laboratory practice guideline (LPG) on immunohistochemical (IHC) validation practices published in 2014, and to establish new benchmark data on IHC laboratory practices. A 2015 survey on IHC assay validation practices was sent to laboratories subscribed to specific CAP proficiency testing programs and to additional nonsubscribing laboratories that perform IHC testing. Specific questions were designed to capture laboratory practices not addressed in a 2010 survey. The analysis was based on responses from 1085 laboratories that perform IHC staining. Ninety-six percent (809 of 844) always documented validation of IHC assays. Sixty percent (648 of 1078) had separate procedures for predictive and nonpredictive markers, 42.7% (220 of 515) had procedures for laboratory-developed tests, 50% (349 of 697) had procedures for testing cytologic specimens, and 46.2% (363 of 785) had procedures for testing decalcified specimens. Minimum case numbers were specified by 85.9% (720 of 838) of laboratories for nonpredictive markers and 76% (584 of 768) for predictive markers. Median concordance requirements were 95% for both types. For initial validation, 75.4% (538 of 714) of laboratories adopted the 20-case minimum for nonpredictive markers and 45.9% (266 of 579) adopted the 40-case minimum for predictive markers as outlined in the 2014 LPG. The most common method for validation was correlation with morphology and expected results. Laboratories also reported which assay changes necessitated revalidation and their minimum case requirements. Benchmark data on current IHC validation practices and procedures may help laboratories understand the issues and influence further refinement of LPG recommendations.

  1. Predictive validity of the Biomedical Admissions Test: an evaluation and case study.

    PubMed

    McManus, I C; Ferguson, Eamonn; Wakeford, Richard; Powis, David; James, David

    2011-01-01

    There has been an increase in the use of pre-admission selection tests for medicine. Such tests need to show good psychometric properties. Here, we use a paper by Emery and Bell [2009. The predictive validity of the Biomedical Admissions Test for pre-clinical examination performance. Med Educ 43:557-564] as a case study to evaluate and comment on the reporting of psychometric data in the field of medical student selection (and the comments apply to many papers in the field). We highlight pitfalls when reliability data are not presented, how simple zero-order associations can lead to inaccurate conclusions about the predictive validity of a test, and how biases need to be explored and reported. We show with BMAT that it is the knowledge part of the test which does all the predictive work. We show that without evidence of incremental validity it is difficult to assess the value of any selection tests for medicine.

  2. Minimizing false positive error with multiple performance validity tests: response to Bilder, Sugar, and Hellemann (2014 this issue).

    PubMed

    Larrabee, Glenn J

    2014-01-01

    Bilder, Sugar, and Hellemann (2014 this issue) contend that empirical support is lacking for use of multiple performance validity tests (PVTs) in evaluation of the individual case, differing from the conclusions of Davis and Millis (2014), and Larrabee (2014), who found no substantial increase in false positive rates using a criterion of failure of ≥ 2 PVTs and/or Symptom Validity Tests (SVTs) out of multiple tests administered. Reconsideration of data presented in Larrabee (2014) supports a criterion of ≥ 2 out of up to 7 PVTs/SVTs, as keeping false positive rates close to and in most cases below 10% in cases with bona fide neurologic, psychiatric, and developmental disorders. Strategies to minimize risk of false positive error are discussed, including (1) adjusting individual PVT cutoffs or criterion for number of PVTs failed, for examinees who have clinical histories placing them at risk for false positive identification (e.g., severe TBI, schizophrenia), (2) using the history of the individual case to rule out conditions known to result in false positive errors, (3) using normal performance in domains mimicked by PVTs to show that sufficient native ability exists for valid performance on the PVT(s) that have been failed, and (4) recognizing that as the number of PVTs/SVTs failed increases, the likelihood of valid clinical presentation decreases, with a corresponding increase in the likelihood of invalid test performance and symptom report.
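
    The effect of requiring failure on two or more of several PVTs can be illustrated with a binomial calculation. The independence assumption below is a simplification that tends to overstate the joint false positive rate, since validity test performances are correlated in credible patients; the empirical rates cited in the record run close to or below 10%.

```python
from math import comb

def p_fail_at_least(k, n, p):
    """Probability of failing at least k of n validity tests by chance,
    assuming (unrealistically) independent tests with per-test false
    positive rate p."""
    return sum(comb(n, j) * p ** j * (1 - p) ** (n - j)
               for j in range(k, n + 1))

# Per-test false positive rate of 10%, seven tests administered,
# criterion of >= 2 failures:
print(round(p_fail_at_least(2, 7, 0.10), 4))
```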

  3. Real-Time Sensor Validation, Signal Reconstruction, and Feature Detection for an RLV Propulsion Testbed

    NASA Technical Reports Server (NTRS)

    Jankovsky, Amy L.; Fulton, Christopher E.; Binder, Michael P.; Maul, William A., III; Meyer, Claudia M.

    1998-01-01

    A real-time system for validating sensor health has been developed in support of the reusable launch vehicle program. This system was designed for use in a propulsion testbed as part of an overall effort to improve the safety, diagnostic capability, and cost of operation of the testbed. The sensor validation system was designed and developed at the NASA Lewis Research Center and integrated into a propulsion checkout and control system as part of an industry-NASA partnership, led by Rockwell International for the Marshall Space Flight Center. The system includes modules for sensor validation, signal reconstruction, and feature detection and was designed to maximize portability to other applications. Review of test data from initial integration testing verified real-time operation and showed the system to perform correctly on both hard and soft sensor failure test cases. This paper discusses the design of the sensor validation and supporting modules developed at LeRC and reviews results obtained from initial test cases.

  4. Validation of 2D flood models with insurance claims

    NASA Astrophysics Data System (ADS)

    Zischg, Andreas Paul; Mosimann, Markus; Bernet, Daniel Benjamin; Röthlisberger, Veronika

    2018-02-01

    Flood impact modelling requires reliable models for the simulation of flood processes. In recent years, flood inundation models have been remarkably improved and widely used for flood hazard simulation and flood exposure and loss analyses. In this study, we validate a 2D inundation model for the purpose of flood exposure analysis at the river reach scale. We validate the BASEMENT simulation model against insurance claims using conventional validation metrics. The flood model is established on the basis of available topographic data at high spatial resolution for four test cases. The validation metrics were calculated with two different datasets: a dataset of event documentations reporting flooded areas and a dataset of insurance claims. The model fit relating to insurance claims is, in three out of four test cases, slightly lower than the model fit computed on the basis of the observed inundation areas. This comparison between two independent validation datasets suggests that validation metrics using insurance claims are comparable to conventional validation data, such as the flooded area. However, a validation on the basis of insurance claims might be more conservative in cases where model errors are more pronounced in areas with a high density of values at risk.

  5. Assessing reliability and validity measures in managed care studies.

    PubMed

    Montoya, Isaac D

    2003-01-01

    To review the reliability and validity literature and develop an understanding of these concepts as applied to managed care studies. Reliability is a test of how well an instrument measures the same input at varying times and under varying conditions. Validity is a test of how accurately an instrument measures what one believes is being measured. A review of reliability and validity instructional material was conducted. Studies of managed care practices and programs abound. However, many of these studies utilize measurement instruments that were developed for other purposes or for a population other than the one being sampled. In other cases, instruments have been developed without any testing of the instrument's performance. The lack of reliability and validity information may limit the value of these studies. This is particularly true when data are collected for one purpose and used for another. The usefulness of certain studies without reliability and validity measures is questionable, especially in cases where the literature contradicts itself.

  6. WE-DE-201-04: Cross Validation of Knowledge-Based Treatment Planning for Prostate LDR Brachytherapy Using Principal Component Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Roper, J; Ghavidel, B; Godette, K

    Purpose: To validate a knowledge-based algorithm for prostate LDR brachytherapy treatment planning. Methods: A dataset of 100 cases was compiled from an active prostate seed implant service. Cases were randomized into 10 subsets. For each subset, the 90 remaining library cases were registered to a common reference frame and then characterized on a point-by-point basis using principal component analysis (PCA). Each test case was converted to PCA vectors using the same process and compared with each library case using a Mahalanobis distance to evaluate similarity. Rank-order PCA scores were used to select the best-matched library case. The seed arrangement was extracted from the best-matched case and used as a starting point for planning the test case. Any subsequent modifications were recorded that required input from a treatment planner to achieve V100 > 95%, V150 < 60%, V200 < 20%. To simulate operating-room planning constraints, seed activity was held constant, and the seed count could not increase. Results: The computational time required to register test-case contours and evaluate PCA similarity across the library was 10 s. Preliminary analysis of 2 subsets shows that 9 of 20 test cases did not require any seed modifications to obtain an acceptable plan. Five test cases required fewer than 10 seed modifications or a grid shift. Another 5 test cases required approximately 20 seed modifications. An acceptable plan was not achieved for 1 outlier, which was substantially larger than its best match. Modifications took between 5 s and 6 min. Conclusion: A knowledge-based treatment planning algorithm for prostate LDR brachytherapy is being cross-validated using 100 prior cases. Preliminary results suggest that for this size of library, acceptable plans can be achieved without planner input in about half of the cases, while varying amounts of planner input are needed in the remaining cases. Computational time and planning time are compatible with clinical practice.
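    The matching step described above, PCA feature vectors scored against each library case with a Mahalanobis distance, can be sketched as follows. The array shapes, feature counts, and function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pca_scores(X, n_components=2):
    """Project mean-centered rows of X onto the top principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T, Vt[:n_components]

def best_match(library, test_case, n_components=2):
    """Index of the library case most similar to test_case: compare PCA
    vectors using the Mahalanobis distance under the library covariance."""
    scores, components = pca_scores(library, n_components)
    t = (test_case - library.mean(axis=0)) @ components.T
    cov_inv = np.linalg.inv(np.cov(scores, rowvar=False))
    dists = [float((s - t) @ cov_inv @ (s - t)) for s in scores]
    return int(np.argmin(dists))

rng = np.random.default_rng(0)
library = rng.normal(size=(90, 5))       # 90 library cases, 5 features each
print(best_match(library, library[42]))  # a case matches itself: prints 42
```

    The ranked distances would then drive seed-arrangement reuse; here only the top-1 match is returned.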

  7. Predictive value of autoantibody testing for validating self-reported diagnoses of rheumatoid arthritis in the Women's Health Initiative.

    PubMed

    Walitt, Brian; Mackey, Rachel; Kuller, Lewis; Deane, Kevin D; Robinson, William; Holers, V Michael; Chang, Yue-Fang; Moreland, Larry

    2013-05-01

    Rheumatoid arthritis (RA) research using large databases is limited by insufficient case validity. Of 161,808 postmenopausal women in the Women's Health Initiative, 15,691 (10.2%) reported having RA, far higher than the expected 1% population prevalence. Since chart review for confirmation of an RA diagnosis is impractical in large cohort studies, the current study (2009-2011) tested the ability of baseline serum measurements of rheumatoid factor and anti-cyclic citrullinated peptide antibodies, second-generation assay (anti-CCP2), to identify physician-validated RA among the chart-review study participants with self-reported RA (n = 286). Anti-CCP2 positivity had the highest positive predictive value (PPV) (80.0%), and rheumatoid factor positivity the lowest (44.6%). Together, use of disease-modifying antirheumatic drugs and anti-CCP2 positivity increased PPV to 100% but excluded all seronegative cases (approximately 15% of all RA cases). Case definitions inclusive of seronegative cases had PPVs between 59.6% and 63.6%. False-negative results were minimized in these test definitions, as evidenced by negative predictive values of approximately 90%. Serological measurements, particularly measurement of anti-CCP2, improved the test characteristics of RA case definitions in the Women's Health Initiative.

  8. 75 FR 53371 - Liquefied Natural Gas Facilities: Obtaining Approval of Alternative Vapor-Gas Dispersion Models

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-08-31

    ... factors as the approved models, are validated by experimental test data, and receive the Administrator's... stage of the MEP involves applying the model against a database of experimental test cases including..., particularly the requirement for validation by experimental test data. That guidance is based on the MEP's...

  9. Performance validity testing in neuropsychology: a clinical guide, critical review, and update on a rapidly evolving literature.

    PubMed

    Lippa, Sara M

    2018-04-01

    Over the past two decades, there has been much research on measures of response bias and myriad measures have been validated in a variety of clinical and research samples. This critical review aims to guide clinicians through the use of performance validity tests (PVTs) from test selection and administration through test interpretation and feedback. Recommended cutoffs and relevant test operating characteristics are presented. Other important issues to consider during test selection, administration, interpretation, and feedback are discussed including order effects, coaching, impact on test data, and methods to combine measures and improve predictive power. When interpreting performance validity measures, neuropsychologists must use particular caution in cases of dementia, low intelligence, English as a second language/minority cultures, or low education. PVTs provide valuable information regarding response bias and, under the right circumstances, can provide excellent evidence of response bias. Only after consideration of the entire clinical picture, including validity test performance, can concrete determinations regarding the validity of test data be made.

  10. Cross-cultural adaptation and validation of the sino-nasal outcome test (SNOT-22) for Spanish-speaking patients.

    PubMed

    de los Santos, Gonzalo; Reyes, Pablo; del Castillo, Raúl; Fragola, Claudio; Royuela, Ana

    2015-11-01

    Our objective was to perform translation, cross-cultural adaptation, and validation of the sino-nasal outcome test 22 (SNOT-22) into Spanish. The SNOT-22 was translated and back-translated, and a pretest trial was performed. The study included 119 individuals divided into 60 cases, who met the diagnostic criteria for chronic rhinosinusitis according to the European Position Paper on Rhinosinusitis 2012, and 59 controls, who reported no sino-nasal disease. Internal consistency was evaluated with Cronbach's alpha, reproducibility with the Kappa coefficient, reliability with the intraclass correlation coefficient (ICC), validity with the Mann-Whitney U test, and responsiveness with the Wilcoxon test. In cases, Cronbach's alpha was 0.91 both before and after treatment, while for controls it was 0.90 at the first assessment and 0.88 at 3 weeks. The Kappa coefficient was calculated for each item, with an average score of 0.69. The ICC was also calculated for each item, with a score of 0.87 for the overall score and an average of 0.71 across items. The median score was 47 for cases and 2 for controls, a highly significant difference (Mann-Whitney U test, p < 0.001). Clinical changes were observed among treated patients, with median scores of 47 and 13.5 before and after treatment, respectively (Wilcoxon test, p < 0.001). The effect size was 0.14 in treated patients whose status at 3 weeks was unchanged, 1.03 in those who were better, and 1.89 in the much-better group. All controls were unchanged, with an effect size of 0.05. The Spanish version of the SNOT-22 has the internal consistency, reliability, reproducibility, validity, and responsiveness necessary to be a valid instrument for use in clinical practice.
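    Cronbach's alpha, the internal-consistency statistic reported above, can be computed directly from a respondents-by-items score matrix; a minimal sketch with made-up responses (not SNOT-22 data):

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha for a (respondents x items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Toy data: 4 respondents x 3 items with consistent response patterns
data = [[4, 5, 4],
        [2, 2, 3],
        [5, 5, 5],
        [1, 2, 1]]
print(round(cronbach_alpha(data), 2))  # prints 0.97
```

    Values near 0.9, as reported for the SNOT-22, indicate that the items move together strongly across respondents.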

  11. Validation of the 'United Registries for Clinical Assessment and Research' [UR-CARE], a European Online Registry for Clinical Care and Research in Inflammatory Bowel Disease.

    PubMed

    Burisch, Johan; Gisbert, Javier P; Siegmund, Britta; Bettenworth, Dominik; Thomsen, Sandra Bohn; Cleynen, Isabelle; Cremer, Anneline; Ding, Nik John Sheng; Furfaro, Federica; Galanopoulos, Michail; Grunert, Philip Christian; Hanzel, Jurij; Ivanovski, Tamara Knezevic; Krustins, Eduards; Noor, Nurulamin; O'Morain, Neil; Rodríguez-Lago, Iago; Scharl, Michael; Tua, Julia; Uzzan, Mathieu; Ali Yassin, Nuha; Baert, Filip; Langholz, Ebbe

    2018-04-27

    The 'United Registries for Clinical Assessment and Research' [UR-CARE] database is an initiative of the European Crohn's and Colitis Organisation [ECCO] to facilitate daily patient care and research studies in inflammatory bowel disease [IBD]. Herein, we sought to validate the database by using fictional case histories of patients with IBD that were to be entered by observers of varying experience in IBD. Nineteen observers entered five patient case histories into the database. After 6 weeks, all observers entered the same case histories again. For each case history, 20 key variables were selected to calculate the accuracy for each observer. We set as the validity target that ≥ 90% of the entered data would be correct. The overall proportion of correctly entered data was calculated using a beta-binomial regression model to account for inter-observer variation and compared with the expected level of validity. Re-test reliability was assessed using McNemar's test. For all case histories, the overall proportion of correctly entered items and their confidence intervals included the target of 90% (Case 1: 92% [88-94%]; Case 2: 87% [83-91%]; Case 3: 93% [90-95%]; Case 4: 97% [94-99%]; Case 5: 91% [87-93%]). These numbers did not differ significantly from those found 6 weeks later [McNemar's test, p > 0.05]. The UR-CARE database appears to be feasible, valid, and reliable as a tool, and easy to use regardless of prior user experience and level of clinical IBD experience. UR-CARE has the potential to enhance future European collaborations in clinical research in IBD.
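    The re-test comparison above relies on McNemar's test, which looks only at the discordant pairs (items entered correctly at one session but not the other); an exact version fits in a few lines. The counts below are illustrative, not the UR-CARE data:

```python
from math import comb

def mcnemar_exact(b, c):
    """Exact McNemar test: b and c are the discordant counts
    (correct-then-wrong and wrong-then-correct). Returns the two-sided
    p-value from a Binomial(b + c, 0.5) distribution."""
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    # two-sided: double the tail probability of a result at least this extreme
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2 ** n
    return min(p, 1.0)

# e.g. 3 items flipped correct->wrong, 5 flipped wrong->correct at re-entry
print(mcnemar_exact(3, 5) > 0.05)  # prints True: no significant change
```

    A non-significant result, as in the abstract, means the two entry sessions disagree no more than chance would predict.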

  12. Validity threats: overcoming interference with proposed interpretations of assessment data.

    PubMed

    Downing, Steven M; Haladyna, Thomas M

    2004-03-01

    Factors that interfere with the ability to interpret assessment scores or ratings in the proposed manner threaten validity. To be interpreted in a meaningful manner, all assessments in medical education require sound, scientific evidence of validity. The purpose of this essay is to discuss 2 major threats to validity: construct under-representation (CU) and construct-irrelevant variance (CIV). Examples of each type of threat for written, performance and clinical performance examinations are provided. The CU threat to validity refers to undersampling the content domain. Using too few items, cases or clinical performance observations to adequately generalise to the domain represents CU. Variables that systematically (rather than randomly) interfere with the ability to meaningfully interpret scores or ratings represent CIV. Issues such as flawed test items written at inappropriate reading levels or statistically biased questions represent CIV in written tests. For performance examinations, such as standardised patient examinations, flawed cases or cases that are too difficult for student ability contribute CIV to the assessment. For clinical performance data, systematic rater error, such as halo or central tendency error, represents CIV. The term face validity is rejected as representative of any type of legitimate validity evidence, although the fact that the appearance of the assessment may be an important characteristic other than validity is acknowledged. There are multiple threats to validity in all types of assessment in medical education. Methods to eliminate or control validity threats are suggested.

  13. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundstrom, Blake; Chakraborty, Sudipta; Lauss, Georg

    This paper presents a concise description of state-of-the-art real-time simulation-based testing methods and demonstrates how they can be used independently and/or in combination as an integrated development and validation approach for smart grid DERs and systems. A three-part case study demonstrating the application of this integrated approach at the different stages of development and validation of a system-integrated smart photovoltaic (PV) inverter is also presented. Laboratory testing results and perspectives from two international research laboratories are included in the case study.

  14. The Chinese version of the Child and Adolescent Scale of Environment (CASE-C): validity and reliability for children with disabilities in Taiwan.

    PubMed

    Kang, Lin-Ju; Yen, Chia-Feng; Bedell, Gary; Simeonsson, Rune J; Liou, Tsan-Hon; Chi, Wen-Chou; Liu, Shu-Wen; Liao, Hua-Fang; Hwang, Ai-Wen

    2015-03-01

    Measurement of children's participation and environmental factors is a key component of the assessment in the new Disability Evaluation System (DES) in Taiwan. The Child and Adolescent Scale of Environment (CASE) was translated into Traditional Chinese (CASE-C) and used for assessing environmental factors affecting the participation of children and youth with disabilities in the DES. The aim of this study was to validate the CASE-C. Participants were 614 children and youth aged 6.0-17.9 years with disabilities, with the largest condition group comprised of children with intellectual disability (61%). Internal structure, internal consistency, test-retest reliability, convergent validity, and discriminant (known group) validity were examined using exploratory factor analyses, Cronbach's α coefficient, intra-class correlation coefficients (ICC), correlation analyses, and univariate ANOVAs. A three-factor structure (Family/Community Resources, Assistance/Attitude Supports, and Physical Design Access) of the CASE-C was produced with 38% variance explained. The CASE-C had adequate internal consistency (Cronbach's α=.74-.86) and test-retest reliability (ICCs=.73-.90). Children and youth with disabilities who had higher levels of severity of impairment encountered more environmental barriers and those experiencing more environmental problems also had greater restrictions in participation. The CASE-C scores were found to distinguish children on the basis of disability condition and impairment severity, but not on the basis of age or sex. The CASE-C is valid for assessing environmental problems experienced by children and youth with disabilities in Taiwan.

  15. Investigation of different modeling approaches for computational fluid dynamics simulation of high-pressure rocket combustors

    NASA Astrophysics Data System (ADS)

    Ivancic, B.; Riedmann, H.; Frey, M.; Knab, O.; Karl, S.; Hannemann, K.

    2016-07-01

    The paper summarizes technical results and first highlights of the cooperation between DLR and Airbus Defence and Space (DS) within the work package "CFD Modeling of Combustion Chamber Processes" conducted in the frame of the Propulsion 2020 Project. Within the addressed work package, DLR Göttingen and Airbus DS Ottobrunn have identified several test cases where adequate test data are available and which can be used for proper validation of the computational fluid dynamics (CFD) tools. In this paper, the first test case, the Penn State chamber (RCM1), is discussed. Presenting the simulation results from three different tools, it is shown that the test case can be computed properly with steady-state Reynolds-averaged Navier-Stokes (RANS) approaches. The achieved simulation results reproduce the measured wall heat flux as an important validation parameter very well but also reveal some inconsistencies in the test data which are addressed in this paper.

  16. Zig-zag tape influence in NREL Phase VI wind turbine

    NASA Astrophysics Data System (ADS)

    Gomez-Iradi, Sugoi; Munduate, Xabier

    2014-06-01

    A two-bladed, 10-metre-diameter wind turbine was tested in the 24.4 m × 36.6 m NASA-Ames wind tunnel (Phase VI). These experiments have been extensively used for validation of CFD and other engineering tools. The free-transition case (S) has been, and remains, the one most employed for validation purposes; it consists of a 3° pitch case at a rotational speed of 72 rpm in upwind configuration, with and without yaw misalignment. However, there is another, less visited case (M) in which an identical configuration was tested but with the inclusion of a zig-zag tape; this was called the fixed-transition sequence. This paper shows the differences between the free- and fixed-transition cases, the latter being more appropriate for comparison with fully turbulent simulations. Steady k-ω SST fully turbulent computations performed with the WMB CFD method are compared with the experiments, showing better predictions in the attached-flow region when compared with the fixed-transition experiments. This work aims to demonstrate the utility of the M case (fixed transition) and to show its differences with respect to the S case (free transition) for validation purposes.

  17. Validation of the Hwalek-Sengstock Elder Abuse Screening Test.

    ERIC Educational Resources Information Center

    Neale, Anne Victoria; And Others

    Elder abuse is recognized as an under-detected and under-reported social problem. Difficulties in detecting elder abuse are compounded by the lack of a standardized, psychometrically valid instrument for case finding. The development of the Hwalek-Sengstock Elder Abuse Screening Test (H-S/EAST) followed a larger effort to identify indicators and…

  18. Validating a UAV artificial intelligence control system using an autonomous test case generator

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy; Huber, Justin

    2013-05-01

    The validation of safety-critical applications, such as autonomous UAV operations in an environment that may include human actors, is an ill-posed problem. To gain confidence in the autonomous control technology, numerous scenarios must be considered. This paper expands upon previous work, related to autonomous testing of robotic control algorithms in a two-dimensional plane, to evaluate the suitability of similar techniques for validating artificial intelligence control in three dimensions, where a minimum level of airspeed must be maintained. The results of human-conducted testing are compared to this automated testing in terms of error detection, speed, and testing cost.
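    An autonomous test-case generator of the kind described can be as simple as random scenario sampling under the stated airspeed constraint; the parameter names, ranges, and scenario structure below are illustrative assumptions only, not the paper's generator:

```python
import random

def generate_test_case(rng, min_airspeed=15.0, max_airspeed=60.0, bound=1000.0):
    """Sample one 3D flight scenario: start/goal positions plus an initial
    airspeed that already respects the minimum-airspeed constraint."""
    def point():
        # x, y anywhere in the test volume; altitude kept positive
        return [rng.uniform(-bound, bound), rng.uniform(-bound, bound),
                rng.uniform(50.0, 500.0)]
    return {"start": point(), "goal": point(),
            "airspeed": rng.uniform(min_airspeed, max_airspeed)}

rng = random.Random(7)
cases = [generate_test_case(rng) for _ in range(100)]
print(all(c["airspeed"] >= 15.0 for c in cases))  # prints True
```

    Each generated scenario would then be fed to the control algorithm under test, with failures logged for comparison against human-designed test suites.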

  19. Recent Developments in Language Assessment and the Case of Four Large-Scale Tests of ESOL Ability

    ERIC Educational Resources Information Center

    Stoynoff, Stephen

    2009-01-01

    This review article surveys recent developments and validation activities related to four large-scale tests of L2 English ability: the iBT TOEFL, the IELTS, the FCE, and the TOEIC. In addition to describing recent changes to these tests, the paper reports on validation activities that were conducted on the measures. The results of this research…

  20. Analytic Validation of Immunohistochemical Assays: A Comparison of Laboratory Practices Before and After Introduction of an Evidence-Based Guideline.

    PubMed

    Fitzgibbons, Patrick L; Goldsmith, Jeffrey D; Souers, Rhona J; Fatheree, Lisa A; Volmar, Keith E; Stuart, Lauren N; Nowak, Jan A; Astles, J Rex; Nakhleh, Raouf E

    2017-09-01

    Laboratories must demonstrate analytic validity before any test can be used clinically, but studies have shown inconsistent practices in immunohistochemical assay validation. To assess changes in immunohistochemistry analytic validation practices after publication of an evidence-based laboratory practice guideline, a survey on current immunohistochemistry assay validation practices, and on the awareness and adoption of the recently published guideline, was sent to subscribers enrolled in one of 3 relevant College of American Pathologists proficiency testing programs and to additional nonsubscribing laboratories that perform immunohistochemical testing. The results were compared with an earlier survey of validation practices. Analysis was based on responses from 1085 laboratories that perform immunohistochemical staining. Of 1057 responses, 65.4% (691) were aware of the guideline recommendations before this survey was sent, and 79.9% (550 of 688) of those had already adopted some or all of the recommendations. Compared with the 2010 survey, a significant number of laboratories now have written validation procedures for both predictive and nonpredictive marker assays and specifications for the minimum numbers of cases needed for validation. There was also significant improvement in compliance with validation requirements, with 99% (100 of 102) having validated their most recently introduced predictive marker assay, compared with 74.9% (326 of 435) in 2010. The difficulty in finding validation cases for rare antigens and resource limitations were cited as the biggest challenges in implementing the guideline. Dissemination of the 2014 evidence-based guideline had a positive impact on laboratory validation practices; some or all of the recommendations have been adopted by nearly 80% of respondents.

  1. Valid statistical inference methods for a case-control study with missing data.

    PubMed

    Tian, Guo-Liang; Zhang, Chi; Jiang, Xuejun

    2018-04-01

    The main objective of this paper is to derive the valid sampling distribution of the observed counts in a case-control study with missing data, under the assumption of missing at random, by employing the conditional sampling method and the mechanism augmentation method. The proposed sampling distribution, called the case-control sampling distribution, can be used to calculate the standard errors of the maximum likelihood estimates of parameters via the Fisher information matrix and to generate independent samples for constructing small-sample bootstrap confidence intervals. Theoretical comparisons of the new case-control sampling distribution with two existing sampling distributions exhibit a large difference. Simulations are conducted to investigate the influence of the three different sampling distributions on statistical inference. One finding is that the conclusion of the Wald test for testing independence under the two existing sampling distributions can be completely different (even contradictory) from that of the Wald test for testing the equality of the success probabilities in the control and case groups under the proposed distribution. A real cervical cancer data set is used to illustrate the proposed statistical methods.
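    The Wald test for equality of the success probabilities in the case and control groups, referenced above, has a simple closed form; a sketch with toy counts (not the cervical cancer data, and not the paper's missing-data machinery):

```python
from math import sqrt, erf

def wald_test_two_proportions(x1, n1, x2, n2):
    """Wald test of H0: p1 == p2 using unpooled standard errors.
    Returns (z statistic, two-sided p-value)."""
    p1, p2 = x1 / n1, x2 / n2
    se = sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    z = (p1 - p2) / se
    # two-sided p-value from the standard normal CDF
    p = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p

z, p = wald_test_two_proportions(40, 100, 25, 100)  # 40% vs 25% success
print(round(z, 2), p < 0.05)  # prints 2.29 True
```

    The paper's point is that the standard errors entering `se` depend on the assumed sampling distribution, so the same counts can yield different conclusions.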

  2. The development and testing of a skin tear risk assessment tool.

    PubMed

    Newall, Nelly; Lewin, Gill F; Bulsara, Max K; Carville, Keryln J; Leslie, Gavin D; Roberts, Pam A

    2017-02-01

    The aim of the present study is to develop a reliable and valid skin tear risk assessment tool. The six characteristics identified in a previous case-control study as constituting the best risk model for skin tear development were used to construct a risk assessment tool. The ability of the tool to predict skin tear development was then tested in a prospective study. Between August 2012 and September 2013, 1466 tertiary hospital patients were assessed at admission and followed up for 10 days to see if they developed a skin tear. The predictive validity of the tool was assessed using receiver operating characteristic (ROC) analysis. When the tool was found not to have performed as well as hoped, secondary analyses were performed to determine whether a potentially better performing risk model could be identified. The tool was found to have high sensitivity but low specificity, and therefore inadequate predictive validity. Secondary analysis of the combined data from this and the previous case-control study identified an alternative, better performing risk model. The tool developed and tested in this study was found to have inadequate predictive validity. The predictive validity of an alternative, more parsimonious model now needs to be tested.
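    The pattern reported above, high sensitivity but low specificity, falls directly out of the tool's 2x2 confusion table; a minimal sketch of those metrics with invented counts (the total of 1466 mirrors the cohort size, nothing else is from the study):

```python
def predictive_validity(tp, fn, fp, tn):
    """Sensitivity, specificity, and Youden's J from a 2x2 confusion table:
    tp = flagged & tore, fn = missed & tore, fp = flagged & intact, tn = rest."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity, sensitivity + specificity - 1.0

# A tool that flags very broadly: catches most skin tears (high sensitivity)
# but also flags many patients who never tear (low specificity).
sens, spec, j = predictive_validity(tp=18, fn=2, fp=600, tn=846)
print(round(sens, 2), round(spec, 2), round(j, 2))  # prints 0.9 0.59 0.49
```

    A low Youden's J despite high sensitivity is exactly the "inadequate predictive validity" conclusion: the tool misses little but over-flags.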

  3. Application of validity theory and methodology to patient-reported outcome measures (PROMs): building an argument for validity.

    PubMed

    Hawkins, Melanie; Elsworth, Gerald R; Osborne, Richard H

    2018-07-01

    Data from subjective patient-reported outcome measures (PROMs) are now being used in the health sector to make or support decisions about individuals, groups and populations. Contemporary validity theorists define validity not as a statistical property of the test but as the extent to which empirical evidence supports the interpretation of test scores for an intended use. However, validity testing theory and methodology are rarely evident in the PROM validation literature. Application of this theory and methodology would provide structure for comprehensive validation planning to support improved PROM development and sound arguments for the validity of PROM score interpretation and use in each new context. This paper proposes the application of contemporary validity theory and methodology to PROM validity testing. The validity testing principles will be applied to a hypothetical case study with a focus on the interpretation and use of scores from a translated PROM that measures health literacy (the Health Literacy Questionnaire or HLQ). Although robust psychometric properties of a PROM are a pre-condition to its use, a PROM's validity lies in the sound argument that a network of empirical evidence supports the intended interpretation and use of PROM scores for decision making in a particular context. The health sector is yet to apply contemporary theory and methodology to PROM development and validation. The theoretical and methodological processes in this paper are offered as an advancement of the theory and practice of PROM validity testing in the health sector.

  4. Red flags in the clinical interview may forecast invalid neuropsychological testing.

    PubMed

    Keesler, Michael E; McClung, Kirstie; Meredith-Duliba, Tawny; Williams, Kelli; Swirsky-Sacchetti, Thomas

    2017-04-01

    Evaluating assessment validity is expected in neuropsychological evaluation, particularly in cases with identified secondary gain, where malingering or somatization may be present. While performance validity is typically assessed with standalone measures and embedded indices within the testing portion of the examination, research on the validity of self-report in the clinical interview is limited. Based on experience with litigation-involved examinees recovering from mild traumatic brain injury (mTBI), it was hypothesized that an inconsistently reported date of injury (DOI) and/or loss of consciousness (LOC) might predict invalid performance on neurocognitive testing. This archival study examined cases of litigation-involved mTBI patients seen at an outpatient neuropsychological practice in Philadelphia, PA. Coded data included demographic variables, performance validity measures, and consistency between self-report and medicolegal records. A significant relationship was found between the consistency of examinees' self-report with records and their scores on performance validity testing, χ2(1, N = 84) = 24.18, p < .01, Φ = .49. Post hoc testing revealed significant between-group differences in three of four comparisons, with medium to large effect sizes. A final post hoc analysis found a significant association between the number of performance validity tests (PVTs) failed and the extent to which an examinee incorrectly reported the DOI, r(83) = .49, p < .01. Using an inconsistently reported LOC and/or DOI to predict an examinee's performance as invalid had 75% sensitivity and 75% specificity. Examinees whose reported DOI or LOC differs from records may be more likely to fail one or more PVTs, suggesting possible symptom exaggeration and/or underperformance on cognitive testing.
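    The χ2-and-Φ pair reported above can be reproduced for any 2x2 table; a sketch with made-up counts (N = 84 matches the sample size, the cell values do not):

```python
from math import sqrt

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square (no continuity correction) and the phi effect
    size for the 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    chi2 = n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
    return chi2, sqrt(chi2 / n)

# Rows: self-report consistent / inconsistent; columns: passed / failed PVTs
chi2, phi = chi_square_2x2(a=40, b=8, c=12, d=24)
print(round(chi2, 1), round(phi, 2))  # prints 21.8 0.51
```

    Φ near 0.5, as in the abstract, corresponds to a large association for a 2x2 table.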

  5. Sterilization validation for medical devices at IRASM microbiological laboratory—Practical approaches

    NASA Astrophysics Data System (ADS)

    Trandafir, Laura; Alexandru, Mioara; Constantin, Mihai; Ioniţă, Anca; Zorilă, Florina; Moise, Valentin

    2012-09-01

    EN ISO 11137 establishes requirements for setting or substantiating the dose needed to achieve the desired sterility assurance level. Validation studies can be designed specifically for different types of products, and each product needs distinct protocols for bioburden determination and sterility testing. The Microbiological Laboratory of the Irradiation Processing Center (IRASM) deals with different types of products, mainly using the VDmax25 method. The most challenging product for microbiological evaluation was cotton gauze. A special situation in establishing the sterilization validation method arises for cotton packed in large quantities: the VDmax25 method cannot be applied to items with an average bioburden of more than 1000 CFU per pack, irrespective of the weight of the package. This is a limitation of the method and implies increased costs for the manufacturer when choosing other methods. For the microbiological tests, culture conditions must be selected for both bioburden determination and sterility testing. Details of the selection criteria are given.

  6. FAST Model Calibration and Validation of the OC5-DeepCwind Floating Offshore Wind System Against Wave Tank Test Data

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason

    During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.

  7. International Energy Agency Ocean Energy Systems Task 10 Wave Energy Converter Modeling Verification and Validation: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Yu, Yi-Hsiang; Nielsen, Kim

    This is the first joint reference paper for the Ocean Energy Systems (OES) Task 10 Wave Energy Converter modeling verification and validation group. The group is established under the OES Energy Technology Network program under the International Energy Agency. OES was founded in 2001; Task 10 was proposed by Bob Thresher (National Renewable Energy Laboratory) in 2015 and approved by the OES Executive Committee (EXCO) in 2016. The kickoff workshop took place in September 2016, wherein the initial baseline task was defined. Experience from similar offshore wind validation/verification projects (OC3-OC5, conducted within International Energy Agency Wind Task 30) [1], [2] showed that a simple test case would help the initial cooperation to present results in a comparable way. A heaving sphere was chosen as the first test case. The team of project participants simulated different numerical experiments, such as heave decay tests and regular and irregular wave cases. The simulation results are presented and discussed in this paper.

  8. What Different Kinds of Stratification Can Reveal about the Generalizability of Data-Mined Skill Assessment Models

    ERIC Educational Resources Information Center

    Sao Pedro, Michael A.; Baker, Ryan S. J. d.; Gobert, Janice D.

    2013-01-01

    When validating assessment models built with data mining, generalization is typically tested at the student-level, where models are tested on new students. This approach, though, may fail to find cases where model performance suffers if other aspects of those cases relevant to prediction are not well represented. We explore this here by testing if…

  9. Establishing the Validity and Reliability of Course Evaluation Questionnaires

    ERIC Educational Resources Information Center

    Kember, David; Leung, Doris Y. P.

    2008-01-01

    This article uses the case of designing a new course questionnaire to discuss the issues of validity, reliability and diagnostic power in good questionnaire design. Validity is often not well addressed in course questionnaire design as there are no straightforward tests that can be applied to an individual instrument. The authors propose the…

  10. Truth and Evidence in Validity Theory

    ERIC Educational Resources Information Center

    Borsboom, Denny; Markus, Keith A.

    2013-01-01

    According to Kane (this issue), "the validity of a proposed interpretation or use depends on how well the evidence supports" the claims being made. Because truth and evidence are distinct, this means that the validity of a test score interpretation could be high even though the interpretation is false. As an illustration, we discuss the case of…

  11. The Validity of the Comparative Interrupted Time Series Design for Evaluating the Effect of School-Level Interventions.

    PubMed

    Jacob, Robin; Somers, Marie-Andree; Zhu, Pei; Bloom, Howard

    2016-06-01

    In this article, we examine whether a well-executed comparative interrupted time series (CITS) design can produce valid inferences about the effectiveness of a school-level intervention. This article also explores the trade-off between bias reduction and precision loss across different methods of selecting comparison groups for the CITS design and assesses whether choosing matched comparison schools based only on preintervention test scores is sufficient to produce internally valid impact estimates. We conduct a validation study of the CITS design based on the federal Reading First program as implemented in one state using results from a regression discontinuity design as a causal benchmark. Our results contribute to the growing base of evidence regarding the validity of nonexperimental designs. We demonstrate that the CITS design can, in our example, produce internally valid estimates of program impacts when multiple years of preintervention outcome data (test scores in the present case) are available and when a set of reasonable criteria are used to select comparison organizations (schools in the present case).
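The core CITS logic is to project each group's pre-intervention trend into the post period and difference the two groups' deviations from their own trends. A minimal sketch on synthetic data (the function names and numbers below are illustrative, not from the Reading First study):

```python
def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def cits_impact(pre_x, post_x, treat_pre, treat_post, comp_pre, comp_post):
    """CITS impact estimate: treated group's mean post-period deviation from its
    projected pre-trend, minus the comparison group's deviation from its own trend."""
    def deviation(pre_y, post_y):
        slope, intercept = fit_line(pre_x, pre_y)
        projected = [slope * x + intercept for x in post_x]
        return sum(a - p for a, p in zip(post_y, projected)) / len(post_x)
    return deviation(treat_pre, treat_post) - deviation(comp_pre, comp_post)

# Synthetic example: both groups trend upward; treated jumps +5 post-intervention.
impact = cits_impact([0, 1, 2, 3], [4, 5],
                     [10, 11, 12, 13], [19, 20],  # treated: trend projects 14, 15
                     [8, 9, 10, 11], [12, 13])    # comparison: stays on trend
print(impact)  # 5.0
```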

  12. Summary of EASM Turbulence Models in CFL3D With Validation Test Cases

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.; Gatski, Thomas B.

    2003-01-01

    This paper summarizes the Explicit Algebraic Stress Model in k-omega form (EASM-ko) and in k-epsilon form (EASM-ke) in the Reynolds-averaged Navier-Stokes code CFL3D. These models have been actively used over the last several years in CFL3D, and have undergone some minor modifications during that time. Details of the equations and method for coding the latest versions of the models are given, and numerous validation cases are presented. This paper serves as a validation archive for these models.

  13. Proceedings of the 2004 Workshop on CFD Validation of Synthetic Jets and Turbulent Separation Control

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L. (Compiler)

    2007-01-01

    The papers presented here are from the Langley Research Center Workshop on Computational Fluid Dynamics (CFD) Validation of Synthetic Jets and Turbulent Separation Control (nicknamed "CFDVAL2004"), held March 2004 in Williamsburg, Virginia. The goal of the workshop was to bring together an international group of CFD practitioners to assess the current capabilities of different classes of turbulent flow solution methodologies to predict flow fields induced by synthetic jets and separation control geometries. The workshop consisted of three flow-control test cases of varying complexity, and participants could contribute to any number of the cases. Along with their workshop submissions, each participant included a short write-up describing their method for computing the particular case(s). These write-ups are presented as received from the authors with no editing. Descriptions of each of the test cases and experiments are also included.

  14. SU-E-T-131: Artificial Neural Networks Applied to Overall Survival Prediction for Patients with Periampullary Carcinoma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gong, Y; Yu, J; Yeung, V

    Purpose: Artificial neural networks (ANN) can be used to discover complex relations within datasets to help with medical decision making. This study aimed to develop an ANN method to predict two-year overall survival of patients with peri-ampullary cancer (PAC) following resection. Methods: Data were collected from 334 patients with PAC following resection treated in our institutional pancreatic tumor registry between 2006 and 2012. The dataset contains 14 variables including age, gender, T-stage, tumor differentiation, positive-lymph-node ratio, positive resection margins, chemotherapy, radiation therapy, and tumor histology. After censoring for two-year survival analysis, 309 patients remained, of which 44 patients (∼15%) were randomly selected to form the testing set. The remaining 265 cases were randomly divided 20 times into a training set (211 cases, ∼80% of 265) and a validation set (54 cases, ∼20% of 265) to build 20 ANN models. Each ANN has one hidden layer with 5 units. The 20 ANN models were ranked according to their concordance index (c-index) of prediction on the validation sets. To further improve prediction, the top 10% of ANN models were selected and their outputs averaged for prediction on the testing set. Results: By random division, the 44 cases in the testing set and the remaining 265 cases have approximately equal two-year survival rates, 36.4% and 35.5% respectively. The 20 ANN models, which were trained and validated on the 265 cases, yielded mean c-indexes of 0.59 and 0.63 on the validation sets and the testing set, respectively. The c-index was 0.72 when the two best ANN models (top 10%) were used for prediction on the testing set. The c-index of Cox regression analysis was 0.63. Conclusion: ANN improved survival prediction for patients with PAC. More patient data and further analysis of additional factors may be needed for a more robust model, which will help guide physicians in providing optimal post-operative care.
    This project was supported by a PA CURE Grant.
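The concordance index (c-index) used to rank the ANN models measures how often a model's risk scores correctly order comparable patient pairs. A minimal Harrell-style sketch (the data below are invented for illustration):

```python
def c_index(times, events, risks):
    """Fraction of comparable pairs (earlier event vs. longer survivor) whose
    risk scores are ordered correctly; ties in risk count as 0.5."""
    concordant, comparable = 0.0, 0
    n = len(times)
    for i in range(n):
        for j in range(n):
            # A pair is comparable if subject i had the event before time j.
            if events[i] == 1 and times[i] < times[j]:
                comparable += 1
                if risks[i] > risks[j]:
                    concordant += 1.0
                elif risks[i] == risks[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfect ranking: higher predicted risk corresponds to earlier event.
print(c_index([1, 2, 3, 4], [1, 1, 0, 1], [4, 3, 2, 1]))  # 1.0
```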

  15. Advanced information processing system: Fault injection study and results

    NASA Technical Reports Server (NTRS)

    Burkhardt, Laura F.; Masotto, Thomas K.; Lala, Jaynarayan H.

    1992-01-01

    The objective of the AIPS program is to achieve a validated fault tolerant distributed computer system. The goals of the AIPS fault injection study were: (1) to present the fault injection study components addressing the AIPS validation objective; (2) to obtain feedback for fault removal from the design implementation; (3) to obtain statistical data regarding fault detection, isolation, and reconfiguration responses; and (4) to obtain data regarding the effects of faults on system performance. The parameters are described that must be varied to create a comprehensive set of fault injection tests, the subset of test cases selected, the test case measurements, and the test case execution. Both pin level hardware faults using a hardware fault injector and software injected memory mutations were used to test the system. An overview is provided of the hardware fault injector and the associated software used to carry out the experiments. Detailed specifications are given of fault and test results for the I/O Network and the AIPS Fault Tolerant Processor, respectively. The results are summarized and conclusions are given.

  16. Military Justice Study Guide

    DTIC Science & Technology

    1990-07-01

    Intrusions for valid medical purposes … Inspections and inventories: general considerations; inspections; inventories … valid medical purpose; fitness-for-duty testing: command-directed testing; aftercare and surveillance testing … that the convening authority assign a medical, scientific or other expert to assist in the preparation of the defense case. Once assigned, the expert

  17. Strain Gage Load Calibration of the Wing Interface Fittings for the Adaptive Compliant Trailing Edge Flap Flight Test

    NASA Technical Reports Server (NTRS)

    Miller, Eric J.; Holguin, Andrew C.; Cruz, Josue; Lokos, William A.

    2014-01-01

    The safety-of-flight parameters for the Adaptive Compliant Trailing Edge (ACTE) flap experiment require that flap-to-wing interface loads be sensed and monitored in real time to ensure that the structural load limits of the wing are not exceeded. This paper discusses the strain gage load calibration testing and load equation derivation methodology for the ACTE interface fittings. Both the left and right wing flap interfaces were monitored; each contained four uniquely designed and instrumented flap interface fittings. The interface hardware design and instrumentation layout are discussed. Twenty-one applied test load cases were developed using the predicted in-flight loads. Pre-test predictions of strain gage responses were produced using finite element method models of the interface fittings. Predicted and measured test strains are presented. A load testing rig and three hydraulic jacks were used to apply combinations of shear, bending, and axial loads to the interface fittings. Hardware deflections under load were measured using photogrammetry and transducers. Due to deflections in the interface fitting hardware and test rig, finite element model techniques were used to calculate the reaction loads throughout the applied load range, taking into account the elastically-deformed geometry. The primary load equations were selected based on multiple calibration metrics. An independent set of validation cases was used to validate each derived equation. The 2-sigma residual errors for the shear loads were less than eight percent of the full-scale calibration load; the 2-sigma residual errors for the bending moment loads were less than three percent of the full-scale calibration load. The derived load equations for shear, bending, and axial loads are presented, with the calculated errors for both the calibration cases and the independent validation load cases.
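At its core, deriving a load equation from strain gage responses is a least-squares fit of applied calibration loads against measured strains. A sketch with two hypothetical gauges, solving the 2×2 normal equations directly (gauge readings and coefficients are invented, not the ACTE calibration data):

```python
def fit_load_equation(strains, loads):
    """Fit load = c1*g1 + c2*g2 by solving the normal equations (X^T X) c = X^T y
    for two gauge channels g1, g2."""
    s11 = sum(g1 * g1 for g1, _ in strains)
    s12 = sum(g1 * g2 for g1, g2 in strains)
    s22 = sum(g2 * g2 for _, g2 in strains)
    b1 = sum(g1 * y for (g1, _), y in zip(strains, loads))
    b2 = sum(g2 * y for (_, g2), y in zip(strains, loads))
    det = s11 * s22 - s12 * s12  # must be nonzero for a unique solution
    return (s22 * b1 - s12 * b2) / det, (s11 * b2 - s12 * b1) / det

# Synthetic calibration cases generated from a "true" equation 2.0*g1 - 0.5*g2.
strains = [(1, 2), (2, 1), (3, 5), (4, 2), (0, 1)]
loads = [2.0 * g1 - 0.5 * g2 for g1, g2 in strains]
c1, c2 = fit_load_equation(strains, loads)
print(round(c1, 6), round(c2, 6))  # 2.0 -0.5
```

An independent validation case, as in the paper, would then compare `c1*g1 + c2*g2` against a load not used in the fit.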

  18. Combining Advanced Turbulent Mixing and Combustion Models with Advanced Multi-Phase CFD Code to Simulate Detonation and Post-Detonation Bio-Agent Mixing and Destruction

    DTIC Science & Technology

    2017-10-01

    perturbations in the energetic material to study their effects on the blast wave formation. The last case also makes use of the same PBX, however, the...configuration, Case A: Spore cloud located on the top of the charge at an angle 45 degree, Case B: Spore cloud located at an angle 45 degree from the charge...theoretical validation. The first is the Sedov case where the pressure decay and blast wave front are validated based on analytical solutions. In this test

  19. Practical Results from the Application of Model Checking and Test Generation from UML/SysML Models of On-Board Space Applications

    NASA Astrophysics Data System (ADS)

    Faria, J. M.; Mahomad, S.; Silva, N.

    2009-05-01

    The deployment of complex safety-critical applications requires rigorous techniques and powerful tools for both the development and V&V stages. Model-based technologies are increasingly being used to develop safety-critical software, and arguably, turning to them can bring significant benefits to such processes, though along with new challenges. This paper presents the results of a research project in which we tried to extend current V&V methodologies to apply to UML/SysML models, aiming to answer the demands related to validation issues. Two quite different but complementary approaches were investigated: (i) model checking and (ii) the extraction of robustness test cases from the same models. These two approaches do not overlap, and when combined they provide a wider-reaching model/design validation ability than either one alone, thus offering improved safety assurance. Results are very encouraging, even though they either fell short of the desired outcome, as shown for model checking, or do not yet appear fully mature, as shown for robustness test case extraction. In the case of model checking, it was verified that the automatic model validation process can become fully operational, and even expanded in scope, once tool vendors help (inevitably) to improve the XMI standard interoperability situation. For the robustness test case extraction methodology, the early approach produced interesting results but needs further systematisation and consolidation effort in order to produce results in a more predictable fashion and to reduce reliance on experts' heuristics. Finally, further improvements and innovation research projects were immediately apparent for both investigated approaches, which point, on one hand, to circumventing current limitations in XMI interoperability and, on the other, to bringing test case specification onto the same graphical level as the models themselves and then attempting to automate the generation of executable test cases from their standard UML notation.

  20. The influence of validity criteria on Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) test-retest reliability among high school athletes.

    PubMed

    Brett, Benjamin L; Solomon, Gary S

    2017-04-01

    Research findings to date on the stability of Immediate Post-Concussion Assessment and Cognitive Testing (ImPACT) Composite scores have been inconsistent, requiring further investigation. The use of test validity criteria across these studies also has been inconsistent. Using multiple measures of stability, we examined test-retest reliability of repeated ImPACT baseline assessments in high school athletes across various validity criteria reported in previous studies. A total of 1146 high school athletes completed baseline cognitive testing using the online ImPACT test battery at two time points separated by approximately two years. No participant sustained a concussion between assessments. Five forms of validity criteria used in previous test-retest studies were applied to the data, and differences in reliability were compared. Intraclass correlation coefficients (ICCs) for composite scores ranged from .47 (95% confidence interval, CI [.38, .54]) to .83 (95% CI [.81, .85]) and showed little change across the two-year interval for all five sets of validity criteria. Regression-based methods (RBMs) examining test-retest stability demonstrated a lack of significant change in composite scores across the two-year interval for all forms of validity criteria, with no cases falling outside the expected range of 90% confidence intervals. The application of more stringent validity criteria does not alter test-retest reliability, nor does it account for some of the variation observed across previously performed studies. As such, the ImPACT manual validity criteria should be utilized in the determination of test validity and in the individualized approach to concussion management. Potential future efforts to improve test-retest reliability are discussed.
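The ICCs reported in records like this one come from a two-way ANOVA decomposition of the subject × session score matrix. A minimal sketch of ICC(3,1) (consistency, single measures) for two sessions, on invented scores:

```python
def icc_consistency(scores):
    """ICC(3,1): consistency of single scores across k sessions for n subjects.
    `scores` is a list of per-subject lists, one score per session."""
    n, k = len(scores), len(scores[0])
    grand = sum(sum(row) for row in scores) / (n * k)
    row_means = [sum(row) / k for row in scores]
    col_means = [sum(row[j] for row in scores) / n for j in range(k)]
    ss_total = sum((x - grand) ** 2 for row in scores for x in row)
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)
    ms_rows = ss_rows / (n - 1)
    ms_error = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_error) / (ms_rows + (k - 1) * ms_error)

# A uniform practice effect (+2 at retest) leaves consistency perfect:
print(icc_consistency([[10, 12], [20, 22], [30, 32]]))  # 1.0
```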

  1. eLearning to facilitate the education and implementation of the Chelsea Critical Care Physical Assessment: a novel measure of function in critical illness

    PubMed Central

    Corner, Evelyn J; Handy, Jonathan M; Brett, Stephen J

    2016-01-01

    Objective To evaluate the efficacy of eLearning in the widespread standardised teaching, distribution and implementation of the Chelsea Critical Care Physical Assessment (CPAx) tool—a validated tool to assess physical function in critically ill patients. Design Prospective educational study. An eLearning module was developed through a conceptual framework, using the four-stage technique for skills teaching to teach clinicians how to use the CPAx. Example and test video case studies of CPAx assessments were embedded within the module. The CPAx scores for the test case studies and demographic data were recorded in a secure area of the website. Data were analysed for inter-rater reliability using intraclass correlation coefficients (ICCs) to see if an eLearning educational package facilitated consistent use of the tool. A utility and content validity questionnaire was distributed after 1 year to eLearning module registrants (n=971). This was to evaluate uptake of the CPAx in clinical practice and content validity of the CPAx from the perspective of clinical users. Setting The module was distributed for use via professional forums (n=2) and direct contacts (n=95). Participants Critical care clinicians. Primary outcome measure ICC of the test case studies. Results Between July and October 2014, 421 candidates from 15 countries registered for the eLearning module. The ICC for case one was 0.996 (95% CI 0.990 to 0.999; n=207). The ICC for case two was 0.988 (0.996 to 1.000; n=184). The CPAx has a strong total scale content validity index (s-CVI) of 0.94 and is well used. Conclusions eLearning is a useful and reliable way of teaching psychomotor skills, such as the CPAx. The CPAx is a well-used measure with high content validity rated by clinicians. PMID:27067895

  2. eLearning to facilitate the education and implementation of the Chelsea Critical Care Physical Assessment: a novel measure of function in critical illness.

    PubMed

    Corner, Evelyn J; Handy, Jonathan M; Brett, Stephen J

    2016-04-11

    To evaluate the efficacy of eLearning in the widespread standardised teaching, distribution and implementation of the Chelsea Critical Care Physical Assessment (CPAx) tool-a validated tool to assess physical function in critically ill patients. Prospective educational study. An eLearning module was developed through a conceptual framework, using the four-stage technique for skills teaching to teach clinicians how to use the CPAx. Example and test video case studies of CPAx assessments were embedded within the module. The CPAx scores for the test case studies and demographic data were recorded in a secure area of the website. Data were analysed for inter-rater reliability using intraclass correlation coefficients (ICCs) to see if an eLearning educational package facilitated consistent use of the tool. A utility and content validity questionnaire was distributed after 1 year to eLearning module registrants (n=971). This was to evaluate uptake of the CPAx in clinical practice and content validity of the CPAx from the perspective of clinical users. The module was distributed for use via professional forums (n=2) and direct contacts (n=95). Critical care clinicians. ICC of the test case studies. Between July and October 2014, 421 candidates from 15 countries registered for the eLearning module. The ICC for case one was 0.996 (95% CI 0.990 to 0.999; n=207). The ICC for case two was 0.988 (0.996 to 1.000; n=184). The CPAx has a strong total scale content validity index (s-CVI) of 0.94 and is well used. eLearning is a useful and reliable way of teaching psychomotor skills, such as the CPAx. The CPAx is a well-used measure with high content validity rated by clinicians.

  3. Effort, symptom validity testing, performance validity testing and traumatic brain injury.

    PubMed

    Bigler, Erin D

    2014-01-01

    To understand the neurocognitive effects of brain injury, valid neuropsychological test findings are paramount. This review examines the research on what has been referred to as symptom validity testing (SVT). Performance above a designated cut-score signifies a 'passing' SVT result, which is likely the best indicator of valid neuropsychological test findings. Likewise, performance substantially below the cut-point, at or near chance, signifies invalid test performance. Performance significantly below chance is the sine qua non neuropsychological indicator of malingering. However, the interpretative problems with SVT performance below the cut-point yet far above chance are substantial, as pointed out in this review. This intermediate, border-zone performance on SVT measures is where substantial interpretative challenges exist. Case studies are used to highlight the many areas where additional research is needed. Historical perspectives are reviewed along with the neurobiology of effort. Reasons why performance validity testing (PVT) may be a better term than SVT are reviewed. Advances in neuroimaging techniques may be key to better understanding the meaning of border-zone SVT failure. The review demonstrates the problems with rigid interpretation of established cut-scores. A better understanding is needed of how certain neurological, neuropsychiatric and/or even test conditions may affect SVT performance.

  4. Iridology: A systematic review.

    PubMed

    Ernst, E

    1999-02-01

    Iridologists claim to be able to diagnose medical conditions through abnormalities of pigmentation in the iris. This technique is popular in many countries. Therefore it is relevant to ask whether it is valid. To systematically review all interpretable tests of the validity of iridology as a diagnostic tool. DATA SOURCE AND EXTRACTION: Three independent literature searches were performed to identify all blinded tests. Data were extracted in a predefined, standardized fashion. Four case control studies were found. The majority of these investigations suggests that iridology is not a valid diagnostic method. The validity of iridology as a diagnostic tool is not supported by scientific evaluations. Patients and therapists should be discouraged from using this method.

  5. Laboratory compliance with the American Society of Clinical Oncology/College of American Pathologists human epidermal growth factor receptor 2 testing guidelines: a 3-year comparison of validation procedures.

    PubMed

    Dyhdalo, Kathryn S; Fitzgibbons, Patrick L; Goldsmith, Jeffery D; Souers, Rhona J; Nakhleh, Raouf E

    2014-07-01

    The American Society of Clinical Oncology/College of American Pathologists (ASCO/CAP) published guidelines in 2007 regarding testing accuracy, interpretation, and reporting of results for HER2 studies. A 2008 survey identified areas needing improved compliance. To reassess laboratory response to those guidelines following a full accreditation cycle for an updated snapshot of laboratory practices regarding ASCO/CAP guidelines. In 2011, a survey was distributed with the HER2 immunohistochemistry (IHC) proficiency testing program identical to the 2008 survey. Of the 1150 surveys sent, 977 (85.0%) were returned, comparable to the original survey response in 2008 (757 of 907; 83.5%). New participants submitted 124 of 977 (12.7%) surveys. The median laboratory accession rate was 14,788 cases with 211 HER2 tests performed annually. Testing was validated with fluorescence in situ hybridization in 49.1% (443 of 902) of the laboratories; 26.3% (224 of 853) of the laboratories used another IHC assay. The median number of cases to validate fluorescence in situ hybridization (n = 40) and IHC (n = 27) was similar to those in 2008. Ninety-five percent concordance with fluorescence in situ hybridization was achieved by 76.5% (254 of 332) of laboratories for IHC(-) findings and 70.4% (233 of 331) for IHC(+) cases. Ninety-five percent concordance with another IHC assay was achieved by 71.1% (118 of 168) of the laboratories for negative findings and 69.6% (112 of 161) of the laboratories for positive cases. The proportion of laboratories interpreting HER2 IHC using ASCO/CAP guidelines (86.6% [798 of 921] in 2011; 83.8% [605 of 722] in 2008) remains similar. Although fixation time improvements have been made, assay validation deficiencies still exist. The results of this survey were shared within the CAP, including the Laboratory Accreditation Program and the ASCO/CAP panel revising the HER2 guidelines published in October 2013. 
The Laboratory Accreditation Program checklist was changed to strengthen HER2 validation practices.

  6. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable.

    PubMed

    Korjus, Kristjan; Hebart, Martin N; Vicente, Raul

    2016-01-01

    Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier's generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term "Cross-validation and cross-testing" improving this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do.
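The conventional scheme the authors improve on — k-fold cross-validation for parameter selection followed by a held-out test set — can be sketched as an index partitioner. This is a generic illustration of that standard baseline, not the paper's "cross-validation and cross-testing" algorithm itself:

```python
def kfold_indices(n, k):
    """Split indices 0..n-1 into k folds; yield (train, validation) index lists."""
    folds = [list(range(i, n, k)) for i in range(k)]
    for held_out in range(k):
        val = folds[held_out]
        train = [i for f, fold in enumerate(folds) if f != held_out for i in fold]
        yield train, val

# Reserve the last 20% of samples as a test set, cross-validate on the rest.
n_total, n_test = 100, 20
test_idx = list(range(n_total - n_test, n_total))
for train, val in kfold_indices(n_total - n_test, 5):
    assert not set(train) & set(val)  # train and validation never overlap
```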

  7. Towards a conceptual framework demonstrating the effectiveness of audiovisual patient descriptions (patient video cases): a review of the current literature

    PubMed Central

    2012-01-01

    Background Technological advances have enabled the widespread use of video cases via web-streaming and online download as an educational medium. The use of real subjects to demonstrate acute pathology should aid the education of health care professionals. However, the methodology by which this effect may be tested is not clear. Methods We undertook a review of major databases, identified articles relevant to using patient video cases as educational interventions, extracted the methodologies used and assessed these methods for internal and construct validity. Results A review of 2532 abstracts revealed 23 studies meeting the inclusion criteria and a final review of 18 of relevance. Medical students were the most commonly studied group (10 articles), with a spread of learner satisfaction, knowledge and behaviour tested. Only two of the studies fulfilled defined criteria for achieving internal and construct validity. The heterogeneity of the articles meant it was not possible to perform any meta-analysis. Conclusions Previous studies did not clearly classify which facet of training or educational outcome they aimed to explore, and had poor internal and construct validity. Future research should aim to validate a particular outcome measure, preferably by reproducing previous work rather than adopting new methods. In particular, cognitive processing enhancement, demonstrated in a number of the medical student studies, should be tested at a postgraduate level. PMID:23256787

  8. FAST Model Calibration and Validation of the OC5- DeepCwind Floating Offshore Wind System Against Wave Tank Test Data: Preprint

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Wendt, Fabian F; Robertson, Amy N; Jonkman, Jason

    During the course of the Offshore Code Comparison Collaboration, Continued, with Correlation (OC5) project, which focused on the validation of numerical methods through comparison against tank test data, the authors created a numerical FAST model of the 1:50-scale DeepCwind semisubmersible system that was tested at the Maritime Research Institute Netherlands ocean basin in 2013. This paper discusses several model calibration studies that were conducted to identify model adjustments that improve the agreement between the numerical simulations and the experimental test data. These calibration studies cover wind-field-specific parameters (coherence, turbulence), hydrodynamic and aerodynamic modeling approaches, as well as rotor model (blade-pitch and blade-mass imbalances) and tower model (structural tower damping coefficient) adjustments. These calibration studies were conducted based on relatively simple calibration load cases (wave only/wind only). The agreement between the final FAST model and experimental measurements is then assessed based on more-complex combined wind and wave validation cases.

  9. A verification library for multibody simulation software

    NASA Technical Reports Server (NTRS)

    Kim, Sung-Soo; Haug, Edward J.; Frisch, Harold P.

    1989-01-01

    A multibody dynamics verification library that maintains and manages test and validation data is proposed, based on RRC Robot arm and CASE backhoe validation and a comparative study of DADS, DISCOS, and CONTOPS, which are existing public-domain and commercial multibody dynamics simulation programs. Using simple representative problems, simulation results from each program are cross-checked, and the validation results are presented. Functionalities of the verification library are defined in order to automate the validation procedure.

  10. EG-09 EPIGENETIC PROFILING REVEALS A CpG HYPERMETHYLATION PHENOTYPE (CIMP) ASSOCIATED WITH WORSE PROGRESSION-FREE SURVIVAL IN MENINGIOMA

    PubMed Central

    Olar, Adriana; Wani, Khalida; Mansouri, Alireza; Zadeh, Gelareh; Wilson, Charmaine; DeMonte, Franco; Fuller, Gregory; Jones, David; Pfister, Stefan; von Deimling, Andreas; Sulman, Erik; Aldape, Kenneth

    2014-01-01

    BACKGROUND: Methylation profiling of solid tumors has revealed biologic subtypes, often with clinical implications. Methylation profiles of meningioma and their clinical implications are not well understood. METHODS: Ninety-two meningioma samples (n = 44 test set and n = 48 validation set) were profiled using the Illumina HumanMethylation450 BeadChip. Unsupervised clustering and analyses for recurrence-free survival (RFS) were performed. RESULTS: Unsupervised clustering of the test set using approximately 900 highly variable markers identified two clearly defined methylation subgroups. One of the groups (n = 19) showed global hypermethylation of a set of markers, analogous to CpG island methylator phenotype (CIMP). These findings were reproducible in the validation set, with 18/48 samples showing the CIMP-positive phenotype. Importantly, of 347 highly variable markers common to both the test and validation set analyses, 107 defined CIMP in the test set and 94 defined CIMP in the validation set, with an overlap of 83 markers between the two datasets. This number is much greater than expected by chance, indicating reproducibility of the hypermethylated markers that define CIMP in meningioma. With respect to clinical correlation, the 37 CIMP-positive cases displayed significantly shorter RFS compared to the 55 non-CIMP cases (hazard ratio 2.9, p = 0.013). In an effort to develop a preliminary outcome predictor, a 155-marker subset correlated with RFS was identified in the test dataset. When interrogated in the validation dataset, this 155-marker subset showed a statistical trend (p < 0.1) towards distinguishing survival groups. CONCLUSIONS: This study defines the existence of a CIMP phenotype in meningioma, which involves a substantial proportion (37/92, 40%) of samples with clinical implications. 
Ongoing work will expand this cohort and examine identification of additional biologic differences (mutational and DNA copy number analysis) to further characterize the aberrant methylation subtype in meningioma. CIMP-positivity with aberrant methylation in recurrent/malignant meningioma suggests a potential therapeutic target for clinically aggressive cases.

  11. Soil moisture mapping using Sentinel 1 images: the proposed approach and its preliminary validation carried out in view of an operational product

    NASA Astrophysics Data System (ADS)

    Paloscia, S.; Pettinato, S.; Santi, E.; Pierdicca, N.; Pulvirenti, L.; Notarnicola, C.; Pace, G.; Reppucci, A.

    2011-11-01

    The main objective of this research is to develop, test and validate a soil moisture content (SMC) algorithm for the GMES Sentinel-1 characteristics, within the framework of an ESA project. The SMC product, to be generated from Sentinel-1 data, requires an algorithm able to run operationally in near-real-time and deliver the product to the GMES services within 3 hours of observation. Two complementary approaches have been proposed: an Artificial Neural Network (ANN), which represents the best compromise between retrieval accuracy and processing time, thus allowing compliance with the timeliness requirements; and a Bayesian multi-temporal approach, which increases retrieval accuracy, especially in cases where little ancillary data are available, at the cost of computational efficiency, taking advantage of the frequent revisit time achieved by Sentinel-1. The algorithm was validated in several test areas in Italy, the US and Australia, and finally in Spain with a 'blind' validation. The multi-temporal Bayesian algorithm was validated in Central Italy. The validation results are in all cases very much in line with the requirements. However, the blind validation results were penalized by the availability of only VV-polarization SAR images and low-resolution MODIS NDVI, although the RMS error is only slightly above 4%.

  12. Content validity and reliability of test of gross motor development in Chilean children

    PubMed Central

    Cano-Cappellacci, Marcelo; Leyton, Fernanda Aleitte; Carreño, Joshua Durán

    2016-01-01

    ABSTRACT OBJECTIVE To validate a Spanish version of the Test of Gross Motor Development (TGMD-2) for the Chilean population. METHODS Descriptive, transversal, non-experimental validity and reliability study. Four translators, three experts, and 92 Chilean children aged five to 10 years, students at a primary school in Santiago, Chile, participated. The Committee of Experts carried out translation, back-translation and revision processes to determine the translinguistic equivalence and content validity of the test, using the content validity index in 2013. In addition, a pilot implementation was carried out to determine the reliability of the test in Spanish, using the intraclass correlation coefficient and the Bland-Altman method. We evaluated whether the results presented significant differences when the bat was replaced with a racket, using a t-test. RESULTS We obtained a content validity index higher than 0.80 for language clarity and relevance of the TGMD-2 for children. There were significant differences in the object control subtest when comparing the results with bat and racket. The intraclass correlation coefficient for inter-rater, intra-rater and test-retest reliability was greater than 0.80 in all cases. CONCLUSIONS The TGMD-2 has appropriate content validity to be applied in the Chilean population. The reliability of this test is within the appropriate parameters and its use could be recommended in this population after the establishment of normative data, setting a further precedent for validation in other Latin American countries. PMID:26815160
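
The Bland-Altman method named above summarizes agreement between two raters as a mean difference (bias) with limits of agreement at ±1.96 standard deviations of the differences. A minimal sketch with toy ratings (not the study's data):

```python
# Bland-Altman limits of agreement between two raters.
# The ratings below are toy values for illustration only.
import statistics

rater_a = [12, 15, 14, 10, 13, 16, 11, 14]
rater_b = [11, 16, 13, 10, 14, 15, 12, 13]

diffs = [a - b for a, b in zip(rater_a, rater_b)]
bias = statistics.mean(diffs)          # systematic difference between raters
sd = statistics.stdev(diffs)           # spread of the differences
lower, upper = bias - 1.96 * sd, bias + 1.96 * sd  # limits of agreement

print(round(bias, 3), round(lower, 3), round(upper, 3))
```

If most differences fall within the limits and the bias is near zero, the two raters can be treated as interchangeable for practical purposes.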

  13. Comment on Hall et al. (2017), "How to Choose Between Measures of Tinnitus Loudness for Clinical Research? A Report on the Reliability and Validity of an Investigator-Administered Test and a Patient-Reported Measure Using Baseline Data Collected in a Phase IIa Drug Trial".

    PubMed

    Sabour, Siamak

    2018-03-08

    The purpose of this letter, in response to Hall, Mehta, and Fackrell (2017), is to provide important knowledge about methodology and statistical issues in assessing the reliability and validity of an audiologist-administered tinnitus loudness matching test and a patient-reported tinnitus loudness rating. The author uses reference textbooks and published articles regarding scientific assessment of the validity and reliability of a clinical test to discuss the statistical test and the methodological approach in assessing validity and reliability in clinical research. Depending on the type of the variable (qualitative or quantitative), well-known statistical tests can be applied to assess reliability and validity. The qualitative variables of sensitivity, specificity, positive predictive value, negative predictive value, false positive and false negative rates, likelihood ratio positive and likelihood ratio negative, as well as odds ratio (i.e., ratio of true to false results), are the most appropriate estimates to evaluate validity of a test compared to a gold standard. In the case of quantitative variables, depending on distribution of the variable, Pearson r or Spearman rho can be applied. Diagnostic accuracy (validity) and diagnostic precision (reliability or agreement) are two completely different methodological issues. Depending on the type of the variable (qualitative or quantitative), well-known statistical tests can be applied to assess validity.
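
The validity estimates the letter enumerates all derive from a single 2×2 table of test results against the gold standard. A minimal sketch in Python, using hypothetical counts rather than data from the trial:

```python
# Validity statistics for a diagnostic test versus a gold standard,
# computed from a 2x2 table (hypothetical counts for illustration).
tp, fp, fn, tn = 40, 10, 5, 45  # true/false positives and negatives

sensitivity = tp / (tp + fn)              # P(test+ | condition present)
specificity = tn / (tn + fp)              # P(test- | condition absent)
ppv = tp / (tp + fp)                      # positive predictive value
npv = tn / (tn + fn)                      # negative predictive value
lr_pos = sensitivity / (1 - specificity)  # likelihood ratio positive
lr_neg = (1 - sensitivity) / specificity  # likelihood ratio negative
odds_ratio = (tp * tn) / (fp * fn)        # ratio of true to false results

print(round(sensitivity, 3), round(specificity, 3), round(odds_ratio, 1))
```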

  14. Comprehensive Genomic Profiling Identifies Frequent Drug-Sensitive EGFR Exon 19 Deletions in NSCLC not Identified by Prior Molecular Testing.

    PubMed

    Schrock, Alexa B; Frampton, Garrett M; Herndon, Dana; Greenbowe, Joel R; Wang, Kai; Lipson, Doron; Yelensky, Roman; Chalmers, Zachary R; Chmielecki, Juliann; Elvin, Julia A; Wollner, Mira; Dvir, Addie; Soussan-Gutman, Lior; Bordoni, Rodolfo; Peled, Nir; Braiteh, Fadi; Raez, Luis; Erlich, Rachel; Ou, Sai-Hong Ignatius; Mohamed, Mohamed; Ross, Jeffrey S; Stephens, Philip J; Ali, Siraj M; Miller, Vincent A

    2016-07-01

    Reliable detection of drug-sensitive activating EGFR mutations is critical in the care of advanced non-small cell lung cancer (NSCLC), but such testing is commonly performed using a wide variety of platforms, many of which lack rigorous analytic validation. A large pool of NSCLC cases was assayed with well-validated, hybrid capture-based comprehensive genomic profiling (CGP) at the request of the individual treating physicians in the course of clinical care for the purpose of making therapy decisions. From these, 400 cases harboring EGFR exon 19 deletions (Δex19) were identified, and available clinical history was reviewed. Pathology reports were available for 250 consecutive cases with classical EGFR Δex19 (amino acids 743-754) and were reviewed to assess previous non-hybrid capture-based EGFR testing. Twelve of 71 (17%) cases with EGFR testing results available were negative by previous testing, including 8 of 46 (17%) cases for which the same biopsy was analyzed. Independently, five of six (83%) cases harboring C-helical EGFR Δex19 were previously negative. In a subset of these patients with available clinical outcome information, robust benefit from treatment with EGFR inhibitors was observed. CGP identifies drug-sensitive EGFR Δex19 in NSCLC cases that have undergone prior EGFR testing and returned negative results. Given the proven benefit in progression-free survival conferred by EGFR tyrosine kinase inhibitors in patients with these alterations, CGP should be considered in the initial presentation of advanced NSCLC and when previous testing for EGFR mutations or other driver alterations is negative. Clin Cancer Res; 22(13); 3281-5. ©2016 American Association for Cancer Research.

  15. Phase 2 STS new user development program. Volume 1: Executive summary

    NASA Technical Reports Server (NTRS)

    Mcdowell, J. R.

    1976-01-01

    A methodology was developed for cultivating new STS users other than NASA and DoD, thereby maximizing use of the STS system. The approach to user development, reflected in the implementation plan, and the attendant informational material were evaluated by conducting a series of test cases with selected user organizations. These test case organizations were, in effect, used as consultants to evaluate the effectiveness, the needs, the completeness, and the adequacy of the user development approach and informational material. The selection of the test cases provided a variety of potential STS users covering industry, other government agencies, and the educational sector. The test cases covered various use areas and provided a mix of user organization types. A summary of the actual test cases conducted is given. The conduct of the test cases verified the general approach of the implementation plan, the validity of the user development strategy prepared for each test case organization, and the effectiveness of the STS basic and user-customized informational material.

  16. Are the major risk/need factors predictive of both female and male reoffending?: a test with the eight domains of the level of service/case management inventory.

    PubMed

    Andrews, Donald A; Guzzo, Lina; Raynor, Peter; Rowe, Robert C; Rettinger, L Jill; Brews, Albert; Wormith, J Stephen

    2012-02-01

    The Level of Service/Case Management Inventory (LS/CMI) and the Youth version (YLS/CMI) generate an assessment of risk/need across eight domains that are considered to be relevant for girls and boys and for women and men. Aggregated across five data sets, the predictive validity of each of the eight domains was gender-neutral. The composite total score (LS/CMI total risk/need) was strongly associated with the recidivism of males (mean r = .39, mean AUC = .746) and very strongly associated with the recidivism of females (mean r = .53, mean AUC = .827). The enhanced validity of LS total risk/need with females was traced to the exceptional validity of Substance Abuse with females. The intra-data set conclusions survived the introduction of two very large samples composed of female offenders exclusively. Finally, the mean incremental contributions of gender and the gender-by-risk level interactions in the prediction of criminal recidivism were minimal compared to the relatively strong validity of the LS/CMI risk level. Although the variance explained by gender was minimal and although high-risk cases were high-risk cases regardless of gender, the recidivism rates of lower risk females were lower than the recidivism rates of lower risk males, suggesting possible implications for test interpretation and policy.
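
The AUC values reported above have a direct probabilistic reading: the chance that a randomly chosen recidivist scores higher than a randomly chosen non-recidivist. A small sketch of that Mann-Whitney formulation, using hypothetical LS/CMI totals (not the study's data):

```python
# AUC as the probability that a random positive case outscores a random
# negative case (Mann-Whitney formulation), with ties counting half.
def auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

recidivists = [22, 30, 18, 27]      # hypothetical LS/CMI total scores
non_recidivists = [10, 15, 22, 8]
print(auc(recidivists, non_recidivists))
```

An AUC of 0.5 means the score carries no discriminating information; values such as the .746 and .827 reported above indicate substantial predictive validity.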

  17. An Efficient Data Partitioning to Improve Classification Performance While Keeping Parameters Interpretable

    PubMed Central

    Korjus, Kristjan; Hebart, Martin N.; Vicente, Raul

    2016-01-01

    Supervised machine learning methods typically require splitting data into multiple chunks for training, validating, and finally testing classifiers. For finding the best parameters of a classifier, training and validation are usually carried out with cross-validation. This is followed by application of the classifier with optimized parameters to a separate test set for estimating the classifier’s generalization performance. With limited data, this separation of test data creates a difficult trade-off between having more statistical power in estimating generalization performance versus choosing better parameters and fitting a better model. We propose a novel approach that we term “cross-validation and cross-testing,” which improves this trade-off by re-using test data without biasing classifier performance. The novel approach is validated using simulated data and electrophysiological recordings in humans and rodents. The results demonstrate that the approach has a higher probability of discovering significant results than the standard approach of cross-validation and testing, while maintaining the nominal alpha level. In contrast to nested cross-validation, which is maximally efficient in re-using data, the proposed approach additionally maintains the interpretability of individual parameters. Taken together, we suggest an addition to currently used machine learning approaches which may be particularly useful in cases where model weights do not require interpretation, but parameters do. PMID:27564393
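
For contrast, the standard workflow the authors improve on (cross-validation on the training portion to choose a parameter, then a single held-out test set) can be sketched as follows. The data and the scoring function here are dummies standing in for a real classifier and metric:

```python
# Standard "cross-validation then test" split: choose a parameter by
# K-fold CV on the training data, then evaluate once on held-out data.
# Toy data and a dummy scorer for illustration only.
import random

def dummy_eval(fit, val, param):
    # Placeholder for training on `fit` and scoring on `val`;
    # pretends param == 3 is optimal.
    return 1.0 / (1 + abs(param - 3))

def cv_score(samples, param, k=5):
    """Mean validation score of `param` over k folds."""
    fold = len(samples) // k
    scores = []
    for i in range(k):
        val = samples[i * fold:(i + 1) * fold]
        fit = samples[:i * fold] + samples[(i + 1) * fold:]
        scores.append(dummy_eval(fit, val, param))
    return sum(scores) / k

random.seed(0)
data = list(range(100))            # stand-in for labeled samples
random.shuffle(data)
test, train = data[:20], data[20:]  # hold out 20% for final testing

best_param = max([1, 2, 3, 4, 5], key=lambda p: cv_score(train, p))
print(best_param)
```

The paper's point is that with limited data, the 20 held-out samples are "wasted" for model fitting; their cross-testing scheme re-uses them without biasing the performance estimate.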

  18. FUEL ASSEMBLY SHAKER TEST SIMULATION

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Klymyshyn, Nicholas A.; Sanborn, Scott E.; Adkins, Harold E.

    This report describes the modeling of a PWR fuel assembly under dynamic shock loading in support of the Sandia National Laboratories (SNL) shaker test campaign. The focus of the test campaign is on evaluating the response of used fuel to shock and vibration loads that can occur during highway transport. Modeling began in 2012 using an LS-DYNA fuel assembly model that was first created for modeling impact scenarios. SNL’s proposed test scenario was simulated through analysis, and the calculated results helped guide the instrumentation and other aspects of the testing. During FY 2013, the fuel assembly model was refined to better represent the test surrogate. Analysis of the proposed loads suggested the frequency band needed to be lowered to attempt to excite the lower natural frequencies of the fuel assembly. Despite SNL’s expansion of lower frequency components in their five shock realizations, pretest predictions suggested a very mild dynamic response to the test loading. After testing was completed, one specific shock case was modeled, using recorded accelerometer data to excite the model. Direct comparison of predicted strain in the cladding was made to the recorded strain gauge data. The magnitudes of both sets of strain (calculated and recorded) are very low compared to the expected yield strength of the Zircaloy-4 material. The model was accurate enough to predict that no yielding of the cladding was expected, but its precision at predicting micro strains is questionable. The SNL test data offers some opportunity for validation of the finite element model, but the specific loading conditions of the testing only excite the fuel assembly to respond in a limited manner. For example, the test accelerations were not strong enough to substantially drive the fuel assembly out of contact with the basket. 
Under this test scenario, the fuel assembly model does a reasonable job of approximating actual fuel assembly response, a claim that can be verified through direct comparison of model results to recorded test results. This does not validate the fuel assembly model in all conceivable cases, such as high-kinetic-energy shock cases where the fuel assembly might lift off the basket floor and strike the basket ceiling. This type of nonlinear behavior was not witnessed in testing, so the model does not have a basis for validation in cases that substantially alter the fuel assembly response range. This leads to a gap in knowledge that is identified through this modeling study. The SNL shaker testing loaded a surrogate fuel assembly with a certain set of artificially generated time histories. One thing all the shock cases had in common was an elimination of low-frequency components, which reduces the rigid-body dynamic response of the system. It is not known whether the SNL test cases effectively bound all highway transportation scenarios, or whether significantly greater rigid-body motion than was tested is credible. This knowledge gap could be filled by modeling the vehicle dynamics of a used fuel conveyance, or by collecting acceleration time-history data from an actual conveyance under highway conditions.

  19. Review of the Reported Measures of Clinical Validity and Clinical Utility as Arguments for the Implementation of Pharmacogenetic Testing: A Case Study of Statin-Induced Muscle Toxicity.

    PubMed

    Jansen, Marleen E; Rigter, T; Rodenburg, W; Fleur, T M C; Houwink, E J F; Weda, M; Cornel, Martina C

    2017-01-01

    Advances from pharmacogenetics (PGx) have not been implemented into health care to the expected extent. One gap that will be addressed in this study is a lack of reporting on clinical validity and clinical utility of PGx-tests. A systematic review of current reporting in scientific literature was conducted on publications addressing PGx in the context of statins and muscle toxicity. Eighty-nine publications were included and information was selected on reported measures of effect, arguments, and accompanying conclusions. Most authors report associations to quantify the relationship between a genetic variation and an outcome, such as adverse drug responses. Conclusions on the implementation of a PGx-test are generally based on these associations, without explicit mention of other measures relevant to evaluate the test's clinical validity and clinical utility. To gain insight in the clinical impact and select useful tests, additional outcomes are needed to estimate the clinical validity and utility, such as cost-effectiveness.

  20. Validating an Asthma Case Detection Instrument in a Head Start Sample

    ERIC Educational Resources Information Center

    Bonner, Sebastian; Matte, Thomas; Rubin, Mitchell; Sheares, Beverley J.; Fagan, Joanne K.; Evans, David; Mellins, Robert B.

    2006-01-01

    Although specific tests screen children in preschool programs for vision, hearing, and dental conditions, there are no published validated instruments to detect preschool-age children with asthma, one of the most common pediatric chronic conditions affecting children in economically disadvantaged communities of color. As part of an asthma…

  1. DSMC Simulations of Hypersonic Flows and Comparison With Experiments

    NASA Technical Reports Server (NTRS)

    Moss, James N.; Bird, Graeme A.; Markelov, Gennady N.

    2004-01-01

    This paper presents computational results obtained with the direct simulation Monte Carlo (DSMC) method for several biconic test cases in which shock interactions and flow separation-reattachment are key features of the flow. Recent ground-based experiments have been performed for several biconic configurations, and surface heating rate and pressure measurements have been proposed for code validation studies. The present focus is to expand on the current validating activities for a relatively new DSMC code called DS2V that Bird (second author) has developed. Comparisons with experiments and other computations help clarify the agreement currently being achieved between computations and experiments and to identify the range of measurement variability of the proposed validation data when benchmarked with respect to the current computations. For the test cases with significant vibrational nonequilibrium, the effect of the vibrational energy surface accommodation on heating and other quantities is demonstrated.

  2. Injector Design Tool Improvements: User's manual for FDNS V.4.5

    NASA Technical Reports Server (NTRS)

    Chen, Yen-Sen; Shang, Huan-Min; Wei, Hong; Liu, Jiwen

    1998-01-01

    The major emphasis of the current effort is the development and validation of an efficient parallel-machine computational model, based on the FDNS code, to analyze the fluid dynamics of a wide variety of liquid jet configurations for general liquid rocket engine injection system applications. This model includes physical models for droplet atomization, breakup/coalescence, evaporation, turbulence mixing and gas-phase combustion. Benchmark validation cases for liquid rocket engine chamber combustion conditions will be performed for model validation purposes. Test cases may include shear-coaxial, swirl-coaxial and impinging injection systems with combinations of LOX/H2 or LOX/RP-1 propellant injector elements used in rocket engine designs. As a final goal of this project, a well-tested parallel CFD performance methodology, together with a user's operation description, will be reported in a final technical report at the end of the proposed research effort.

  3. Helping Crisis Managers Protect Reputational Assets: Initial Tests of the Situational Crisis Communication Theory.

    ERIC Educational Resources Information Center

    Coombs, W. Timothy; Holladay, Sherry J.

    2002-01-01

    Explains a comprehensive, prescriptive, situational approach for responding to crises and protecting organizational reputation: the situational crisis communication theory (SCCT). Notes undergraduate students read two crisis case studies from a set of 13 cases and responded to questions following the case. Validates a key assumption in SCCT and…

  4. Digital Fly-By-Wire Flight Control Validation Experience

    NASA Technical Reports Server (NTRS)

    Szalai, K. J.; Jarvis, C. R.; Krier, G. E.; Megna, V. A.; Brock, L. D.; Odonnell, R. N.

    1978-01-01

    The experience gained in digital fly-by-wire technology through a flight test program being conducted by the NASA Dryden Flight Research Center in an F-8C aircraft is described. The system requirements are outlined, along with the requirements for flight qualification. The system is described, including the hardware components, the aircraft installation, and the system operation. The flight qualification experience is emphasized. The qualification process included the theoretical validation of the basic design, laboratory testing of the hardware and software elements, systems level testing, and flight testing. The most productive testing was performed on an iron bird aircraft, which used the actual electronic and hydraulic hardware and a simulation of the F-8 characteristics to provide the flight environment. The iron bird was used for sensor and system redundancy management testing, failure modes and effects testing, and stress testing in many cases with the pilot in the loop. The flight test program confirmed the quality of the validation process by achieving 50 flights without a known undetected failure and with no false alarms.

  5. Initial Teacher Certification Testing in Massachusetts: A Case of the Tail Wagging the Dog.

    ERIC Educational Resources Information Center

    Flippo, Rona F.; Riccards, Michael P.

    2000-01-01

    An evaluation of the Massachusetts Educator Certification Test has revealed unforeseen, counterproductive consequences. Teacher preparation colleges are adjusting curricular emphases to teach to a test of dubious validity and are inadvertently excluding substantial portions of enrollees to boost test scores. Minorities are failing at higher rates.…

  6. Simulation verification techniques study: Simulation performance validation techniques document. [for the space shuttle system

    NASA Technical Reports Server (NTRS)

    Duncan, L. M.; Reddell, J. P.; Schoonmaker, P. B.

    1975-01-01

    Techniques and support software for the efficient performance of simulation validation are discussed. Overall validation software structure, the performance of validation at various levels of simulation integration, guidelines for check case formulation, methods for real-time acquisition and formatting of data from an all-up operational simulator, and methods and criteria for comparison and evaluation of simulation data are included. Vehicle subsystems modules, module integration, special test requirements, and reference data formats are also described.

  7. The validity of upper-limb neurodynamic tests for detecting peripheral neuropathic pain.

    PubMed

    Nee, Robert J; Jull, Gwendolen A; Vicenzino, Bill; Coppieters, Michel W

    2012-05-01

    The validity of upper-limb neurodynamic tests (ULNTs) for detecting peripheral neuropathic pain (PNP) was assessed by reviewing the evidence on plausibility, the definition of a positive test, reliability, and concurrent validity. Evidence was identified by a structured search for peer-reviewed articles published in English before May 2011. The quality of concurrent validity studies was assessed with the Quality Assessment of Diagnostic Accuracy Studies tool, where appropriate. Biomechanical and experimental pain data support the plausibility of ULNTs. Evidence suggests that a positive ULNT should at least partially reproduce the patient's symptoms and that structural differentiation should change these symptoms. Data indicate that this definition of a positive ULNT is reliable when used clinically. Limited evidence suggests that the median nerve test, but not the radial nerve test, helps determine whether a patient has cervical radiculopathy. The median nerve test does not help diagnose carpal tunnel syndrome. These findings should be interpreted cautiously, because diagnostic accuracy might have been distorted by the investigators' definitions of a positive ULNT. Furthermore, patients with PNP who presented with increased nerve mechanosensitivity rather than conduction loss might have been incorrectly classified by electrophysiological reference standards as not having PNP. The only evidence for concurrent validity of the ulnar nerve test was a case study on cubital tunnel syndrome. We recommend that researchers develop more comprehensive reference standards for PNP to accurately assess the concurrent validity of ULNTs and continue investigating the predictive validity of ULNTs for prognosis or treatment response.

  8. Validation of On-board Cloud Cover Assessment Using EO-1

    NASA Technical Reports Server (NTRS)

    Mandl, Dan; Miller, Jerry; Griffin, Michael; Burke, Hsiao-hua

    2003-01-01

    The purpose of this NASA Earth Science Technology Office funded effort was to flight validate an on-board cloud detection algorithm and to determine the performance that can be achieved with a Mongoose V flight computer. This validation was performed on the EO-1 satellite, which is operational, by uploading new flight code to perform the cloud detection. The algorithm was developed by MIT/Lincoln Lab and is based on the use of the Hyperion hyperspectral instrument using selected spectral bands from 0.4 to 2.5 microns. The Technology Readiness Level (TRL) of this technology at the beginning of the task was level 5 and was TRL 6 upon completion. In the final validation, an 8 second (0.75 Gbytes) Hyperion image was processed on-board and assessed for percentage cloud cover within 30 minutes. It was expected to take many hours and perhaps a day considering that the Mongoose V is only a 6-8 MIP machine in performance. To accomplish this test, the image taken had to have level 0 and level 1 processing performed on-board before the cloud algorithm was applied. For almost all of the ground test cases and all of the flight cases, the cloud assessment was within 5% of the correct value and in most cases within 1-2%.

  9. Identification of patients at high risk for Clostridium difficile infection: development and validation of a risk prediction model in hospitalized patients treated with antibiotics.

    PubMed

    van Werkhoven, C H; van der Tempel, J; Jajou, R; Thijsen, S F T; Diepersloot, R J A; Bonten, M J M; Postma, D F; Oosterheert, J J

    2015-08-01

    To develop and validate a prediction model for Clostridium difficile infection (CDI) in hospitalized patients treated with systemic antibiotics, we performed a case-cohort study in a tertiary (derivation) and secondary care hospital (validation). Cases had a positive Clostridium test and were treated with systemic antibiotics before suspicion of CDI. Controls were randomly selected from hospitalized patients treated with systemic antibiotics. Potential predictors were selected from the literature. Logistic regression was used to derive the model. Discrimination and calibration of the model were tested in internal and external validation. A total of 180 cases and 330 controls were included for derivation. Age >65 years, recent hospitalization, CDI history, malignancy, chronic renal failure, use of immunosuppressants, receipt of antibiotics before admission, nonsurgical admission, admission to the intensive care unit, gastric tube feeding, treatment with cephalosporins and presence of an underlying infection were independent predictors of CDI. The area under the receiver operating characteristic curve of the model in the derivation cohort was 0.84 (95% confidence interval 0.80-0.87), and was reduced to 0.81 after internal validation. In external validation, consisting of 97 cases and 417 controls, the model area under the curve was 0.81 (95% confidence interval 0.77-0.85) and model calibration was adequate (Brier score 0.004). A simplified risk score was derived. Using a cutoff of 7 points, the positive predictive value, sensitivity and specificity were 1.0%, 72% and 73%, respectively. In conclusion, a risk prediction model was developed and validated, with good discrimination and calibration, that can be used to target preventive interventions in patients with increased risk of CDI. Copyright © 2015 European Society of Clinical Microbiology and Infectious Diseases. Published by Elsevier Ltd. All rights reserved.
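
Dichotomizing a risk score at a cutoff, as done above at 7 points, and reading off the resulting sensitivity and specificity can be sketched as follows (synthetic patients, not the study's data):

```python
# Applying a cutoff to a risk score and measuring screening performance.
# The (score, has_CDI) pairs are synthetic, for illustration only.
cutoff = 7
patients = [(9, True), (8, True), (5, True), (10, False),
            (3, False), (6, False), (2, False), (8, False)]

flagged = [score >= cutoff for score, _ in patients]
truth = [has_cdi for _, has_cdi in patients]

tp = sum(f and d for f, d in zip(flagged, truth))
fn = sum((not f) and d for f, d in zip(flagged, truth))
tn = sum((not f) and (not d) for f, d in zip(flagged, truth))
fp = sum(f and (not d) for f, d in zip(flagged, truth))

sensitivity = tp / (tp + fn)   # flagged fraction of true cases
specificity = tn / (tn + fp)   # unflagged fraction of non-cases
print(sensitivity, specificity)
```

Sliding the cutoff trades sensitivity against specificity, which is why the authors report all three operating characteristics at the chosen threshold.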

  10. A Selection of Experimental Test Cases for the Validation of CFD Codes (Recueil de cas d’essai experimentaux pour la validation des codes de l’aerodynamique numerique). Volume 1

    DTIC Science & Technology

    1994-08-01

    Volume II. The report is accompanied by a set of diskettes containing the data appropriate to all the test cases; these diskettes are available ... GERMANY. PURPOSE OF THE TEST: The tests are part of a larger effort to establish a database of experimental measurements for missile configurations

  11. Group Sequential Testing of the Predictive Accuracy of a Continuous Biomarker with Unknown Prevalence

    PubMed Central

    Koopmeiners, Joseph S.; Feng, Ziding

    2015-01-01

    Group sequential testing procedures have been proposed as an approach to conserving resources in biomarker validation studies. Previously, Koopmeiners and Feng (2011) derived the asymptotic properties of the sequential empirical positive predictive value (PPV) and negative predictive value curves, which summarize the predictive accuracy of a continuous marker, under case-control sampling. A limitation of their approach is that the prevalence cannot be estimated from a case-control study and must be assumed known. In this manuscript, we consider group sequential testing of the predictive accuracy of a continuous biomarker with unknown prevalence. First, we develop asymptotic theory for the sequential empirical PPV and NPV curves when the prevalence must be estimated, rather than assumed known, in a case-control study. We then discuss how our results can be combined with standard group sequential methods to develop group sequential testing procedures and bias-adjusted estimators for the PPV and NPV curves. The small-sample properties of the proposed group sequential testing procedures and estimators are evaluated by simulation, and we illustrate our approach in the context of a study to validate a novel biomarker for prostate cancer. PMID:26537180
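    The empirical PPV and NPV discussed above combine the marker's empirical true/false positive rates with a prevalence estimate. A minimal plug-in sketch of that combination (the function names and estimator form are illustrative, not the authors' sequential procedure):

```python
import numpy as np

def empirical_ppv(case_markers, control_markers, prevalence, threshold):
    """Plug-in PPV at a threshold: empirical TPR/FPR from case-control
    samples, weighted by an externally supplied prevalence estimate."""
    tpr = np.mean(np.asarray(case_markers) > threshold)
    fpr = np.mean(np.asarray(control_markers) > threshold)
    denom = prevalence * tpr + (1.0 - prevalence) * fpr
    return prevalence * tpr / denom if denom > 0 else float("nan")

def empirical_npv(case_markers, control_markers, prevalence, threshold):
    """Plug-in NPV at the same threshold, from the complementary rates."""
    fnr = np.mean(np.asarray(case_markers) <= threshold)
    tnr = np.mean(np.asarray(control_markers) <= threshold)
    denom = (1.0 - prevalence) * tnr + prevalence * fnr
    return (1.0 - prevalence) * tnr / denom if denom > 0 else float("nan")
```

    Sweeping the threshold over the observed marker range traces out the PPV and NPV curves; under case-control sampling the prevalence cannot come from the data themselves, which is precisely the limitation the record addresses.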

  12. [Development and testing of a preparedness and response capacity questionnaire in public health emergency for Chinese provincial and municipal governments].

    PubMed

    Hu, Guo-Qing; Rao, Ke-Qin; Sun, Zhen-Qiu

    2008-12-01

    To develop a capacity questionnaire in public health emergency for Chinese local governments. Literature reviews, conceptual modelling, stakeholder analysis, focus groups, interviews, and the Delphi technique were employed together to develop the questionnaire. Classical test theory and case study were used to assess the reliability and validity. (1) A two-dimensional conceptual model was built. A preparedness and response capacity questionnaire in public health emergency, with 10 dimensions and 204 items, was developed. (2) Reliability and validity results. Internal consistency: except for dimensions 3 and 8, the Cronbach's alpha coefficient of the other dimensions was higher than 0.60; the alpha coefficients of dimension 3 and dimension 8 were 0.59 and 0.39, respectively. Content validity: the questionnaire content was affirmed by the respondents. Construct validity: the Spearman correlation coefficients among the 10 dimensions fluctuated around 0.50, ranging from 0.26 to 0.75 (P<0.05). Discrimination validity: comparisons of the 10 dimensions among 4 provinces did not show statistical significance using one-way analysis of variance (P>0.05). Criterion-related validity: case study showed significant differences among the 10 dimensions in Beijing between February 2003 (before the SARS event) and November 2005 (after the SARS event). The preparedness and response capacity questionnaire in public health emergency is a reliable and valid tool, which can be used in all provinces and municipalities in China.
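    The internal-consistency figures above are Cronbach's alpha computed per dimension. A minimal sketch of the standard formula (the toy matrices are illustrative, not the questionnaire's item data):

```python
import numpy as np

def cronbach_alpha(item_scores: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, k_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    n, k = item_scores.shape
    item_var_sum = item_scores.var(axis=0, ddof=1).sum()
    total_var = item_scores.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1.0 - item_var_sum / total_var)
```

    Perfectly correlated items give alpha = 1; the 0.60 cutoff used in the record is a common rule of thumb for acceptable internal consistency, which dimensions 3 and 8 failed to reach.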

  13. Psychometric instrumentation: reliability and validity of instruments used for clinical practice, evidence-based practice projects and research studies.

    PubMed

    Mayo, Ann M

    2015-01-01

    It is important for CNSs and other APNs to consider the reliability and validity of instruments chosen for clinical practice, evidence-based practice projects, or research studies. Psychometric testing uses specific research methods to evaluate the amount of error associated with any particular instrument. Reliability estimates explain more about how well the instrument is designed, whereas validity estimates explain more about scores that are produced by the instrument. An instrument may be architecturally sound overall (reliable), but the same instrument may not be valid. For example, if a specific group does not understand certain well-constructed items, then the instrument does not produce valid scores when used with that group. Many instrument developers may conduct reliability testing only once, yet continue validity testing in different populations over many years. All CNSs should be advocating for the use of reliable instruments that produce valid results. Clinical nurse specialists may find themselves in situations where reliability and validity estimates for some instruments that are being utilized are unknown. In such cases, CNSs should engage key stakeholders to sponsor nursing researchers to pursue this most important work.

  14. Clinical Functional Capacity Testing in Patients With Facioscapulohumeral Muscular Dystrophy: Construct Validity and Interrater Reliability of Antigravity Tests.

    PubMed

    Rijken, Noortje H; van Engelen, Baziel G; Weerdesteyn, Vivian; Geurts, Alexander C

    2015-12-01

    To evaluate the construct validity and interrater reliability of 4 simple antigravity tests in a small group of patients with facioscapulohumeral muscular dystrophy (FSHD). Case-control study. University medical center. Patients with various severity levels of FSHD (n=9) and healthy control subjects (n=10) were included (N=19). Not applicable. A 4-point ordinal scale was designed to grade performance on the following 4 antigravity tests: sit to stance, stance to sit, step up, and step down. In addition, the 6-minute walk test, 10-m walking test, Berg Balance Scale, and timed Up and Go test were administered as conventional tests. Construct validity was determined by linear regression analysis using the Clinical Severity Score (CSS) as the dependent variable. Interrater agreement was tested using a κ analysis. Patients with FSHD performed worse on all 4 antigravity tests compared with the controls. Stronger correlations were found within than between test categories (antigravity vs conventional). The antigravity tests revealed the highest explained variance with regard to the CSS (R(2)=.86, P=.014). Interrater agreement was generally good. The results of this exploratory study support the construct validity and interrater reliability of the proposed antigravity tests for the assessment of functional capacity in patients with FSHD taking into account the use of compensatory strategies. Future research should further validate these results in a larger sample of patients with FSHD. Copyright © 2015 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  15. Certification of highly complex safety-related systems.

    PubMed

    Reinert, D; Schaefer, M

    1999-01-01

    The BIA has now 15 years of experience with the certification of complex electronic systems for safety-related applications in the machinery sector. Using the example of machining centres this presentation will show the systematic procedure for verifying and validating control systems using Application Specific Integrated Circuits (ASICs) and microcomputers for safety functions. One section will describe the control structure of machining centres with control systems using "integrated safety." A diverse redundant architecture combined with crossmonitoring and forced dynamization is explained. In the main section the steps of the systematic certification procedure are explained showing some results of the certification of drilling machines. Specification reviews, design reviews with test case specification, statistical analysis, and walk-throughs are the analytical measures in the testing process. Systematic tests based on the test case specification, Electro Magnetic Interference (EMI), and environmental testing, and site acceptance tests on the machines are the testing measures for validation. A complex software driven system is always undergoing modification. Most of the changes are not safety-relevant but this has to be proven. A systematic procedure for certifying software modifications is presented in the last section of the paper.

  16. Results from an Independent View on The Validation of Safety-Critical Space Systems

    NASA Astrophysics Data System (ADS)

    Silva, N.; Lopes, R.; Esper, A.; Barbosa, R.

    2013-08-01

    Independent verification and validation (IV&V) has been a key process for decades and is considered in several international standards. One of the activities described in the “ESA ISVV Guide” is independent test verification (stated as Integration/Unit Test Procedures and Test Data Verification). This activity is commonly overlooked, since customers do not really see the added value of thoroughly checking the validation team's work (it could be seen as testing the tester's work). This article presents the consolidated results of a large set of independent test verification activities, including the main difficulties, the results obtained, and the advantages/disadvantages of these activities for industry. This study will support customers in opting in or out of this task in future IV&V contracts, since we provide concrete results from real case studies in the space embedded systems domain.

  17. NLP based congestive heart failure case finding: A prospective analysis on statewide electronic medical records.

    PubMed

    Wang, Yue; Luo, Jin; Hao, Shiying; Xu, Haihua; Shin, Andrew Young; Jin, Bo; Liu, Rui; Deng, Xiaohong; Wang, Lijuan; Zheng, Le; Zhao, Yifan; Zhu, Chunqing; Hu, Zhongkai; Fu, Changlin; Hao, Yanpeng; Zhao, Yingzhen; Jiang, Yunliang; Dai, Dorothy; Culver, Devore S; Alfreds, Shaun T; Todd, Rogow; Stearns, Frank; Sylvester, Karl G; Widen, Eric; Ling, Xuefeng B

    2015-12-01

    In order to proactively manage congestive heart failure (CHF) patients, an effective CHF case finding algorithm is required to process both structured and unstructured electronic medical records (EMR) to allow complementary and cost-efficient identification of CHF patients. We set out to identify CHF cases from both EMR-codified and natural language processing (NLP) found cases. Using narrative clinical notes from all Maine Health Information Exchange (HIE) patients, the NLP case finding algorithm was retrospectively (July 1, 2012-June 30, 2013) developed with a random subset of HIE-associated facilities, and blind-tested with the remaining facilities. The NLP-based method was integrated into a live HIE population exploration system and validated prospectively (July 1, 2013-June 30, 2014). A total of 18,295 codified CHF patients were included in the Maine HIE. Among the 253,803 subjects without CHF codings, our case finding algorithm prospectively identified 2411 uncodified CHF cases. The positive predictive value (PPV) was 0.914, and 70.1% of these 2411 cases were found to have CHF histories in the clinical notes. A CHF case finding algorithm was developed, tested and prospectively validated. The successful integration of the CHF case finding algorithm into the Maine HIE live system is expected to improve CHF care in Maine. Copyright © 2015. Published by Elsevier Ireland Ltd.
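    The record does not describe the NLP algorithm's internals. As a heavily simplified illustration only, case finding over free-text notes can start from pattern matching, with PPV then estimated against chart review; the patterns and helpers below are hypothetical and fall far short of a production pipeline (no negation handling, no context disambiguation):

```python
import re

# Hypothetical patterns; a real clinical NLP pipeline must also handle
# negation ("no evidence of CHF"), abbreviations, and section context.
CHF_PATTERNS = [
    re.compile(r"congestive heart failure", re.IGNORECASE),
    re.compile(r"\bCHF\b"),
]

def flag_chf(note: str) -> bool:
    """Flag a clinical note that mentions CHF by keyword match."""
    return any(p.search(note) for p in CHF_PATTERNS)

def positive_predictive_value(flags, chart_review):
    """PPV of the flags against chart-review labels (1 = confirmed CHF)."""
    flagged = [truth for f, truth in zip(flags, chart_review) if f]
    return sum(flagged) / len(flagged) if flagged else float("nan")
```

    In the study's design, the NLP arm complements the codified arm: notes flagged among patients with no CHF billing codes are the candidate "uncodified" cases that chart review then confirms or rejects.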

  18. Feasibility study for remote assessment of cognitive function in multiple sclerosis.

    PubMed

    George, Michaela F; Holingue, Calliope B; Briggs, Farren B S; Shao, Xiaorong; Bellesis, Kalliope H; Whitmer, Rachel A; Schaefer, Catherine; Benedict, Ralph Hb; Barcellos, Lisa F

    2016-01-01

    Cognitive impairment is common in multiple sclerosis (MS), and affects employment and quality of life. Large studies are needed to identify risk factors for cognitive decline. Currently, an MS-validated remote assessment for cognitive function does not exist. Studies to determine feasibility of large remote cognitive function investigations in MS have not been published. To determine whether MS patients would participate in remote cognitive studies. We utilized the Modified Telephone Interview for Cognitive Status (TICS-M), a previously validated phone assessment for cognitive function in healthy elderly populations, to detect mild cognitive impairment. We identified factors that influenced participation rates. We investigated the relationship between MS risk factors and TICS-M score in cases, and score differences between cases and control individuals. The TICS-M was administered to MS cases and controls. Linear and logistic regression models were utilized. 11.5% of eligible study participants did not participate in cognitive testing. MS cases, females and individuals with lower educational status were more likely to refuse (p<0.001). Cases who did complete testing did not differ in terms of perceived cognitive deficit compared to cases that did participate. More severe disease, smoking, and being male were associated with a lower TICS-M score among cases (p<0.001). The TICS-M score was significantly lower in cases compared to controls (p=0.007). Our results demonstrate convincingly that a remotely administered cognitive assessment is quite feasible for conducting large epidemiologic studies in MS, and lay the much-needed foundation for future work that will utilize MS-validated cognitive measures.

  19. Feasibility study for remote assessment of cognitive function in multiple sclerosis

    PubMed Central

    George, Michaela F.; Holingue, Calliope B.; Briggs, Farren B.S.; Shao, Xiaorong; Bellesis, Kalliope H.; Whitmer, Rachel A.; Schaefer, Catherine; Benedict, Ralph HB; Barcellos, Lisa F.

    2017-01-01

    Background Cognitive impairment is common in multiple sclerosis (MS), and affects employment and quality of life. Large studies are needed to identify risk factors for cognitive decline. Currently, an MS-validated remote assessment for cognitive function does not exist. Studies to determine feasibility of large remote cognitive function investigations in MS have not been published. Objective To determine whether MS patients would participate in remote cognitive studies. We utilized the Modified Telephone Interview for Cognitive Status (TICS-M), a previously validated phone assessment for cognitive function in healthy elderly populations, to detect mild cognitive impairment. We identified factors that influenced participation rates. We investigated the relationship between MS risk factors and TICS-M score in cases, and score differences between cases and control individuals. Methods The TICS-M was administered to MS cases and controls. Linear and logistic regression models were utilized. Results 11.5% of eligible study participants did not participate in cognitive testing. MS cases, females and individuals with lower educational status were more likely to refuse (p<0.001). Cases who did complete testing did not differ in terms of perceived cognitive deficit compared to cases that did participate. More severe disease, smoking, and being male were associated with a lower TICS-M score among cases (p<0.001). The TICS-M score was significantly lower in cases compared to controls (p=0.007). Conclusions Our results demonstrate convincingly that a remotely administered cognitive assessment is quite feasible for conducting large epidemiologic studies in MS, and lay the much-needed foundation for future work that will utilize MS-validated cognitive measures. PMID:28255581

  20. Laboratory compliance with the American Society of Clinical Oncology/college of American Pathologists guidelines for human epidermal growth factor receptor 2 testing: a College of American Pathologists survey of 757 laboratories.

    PubMed

    Nakhleh, Raouf E; Grimm, Erin E; Idowu, Michael O; Souers, Rhona J; Fitzgibbons, Patrick L

    2010-05-01

    To ensure quality human epidermal growth factor receptor 2 (HER2) testing in breast cancer, the American Society of Clinical Oncology/College of American Pathologists guidelines were introduced with expected compliance by 2008. To assess the effect these guidelines have had on pathology laboratories and their ability to address key components. In late 2008, a survey was distributed with the HER2 immunohistochemistry (IHC) proficiency testing program. It included questions regarding pathology practice characteristics and assay validation using fluorescence in situ hybridization or another IHC laboratory assay and assessed pathologist HER2 scoring competency. Of the 907 surveys sent, 757 (83.5%) were returned. The median laboratory accessioned 15 000 cases and performed 190 HER2 tests annually. Quantitative computer image analysis was used by 33% of laboratories. In-house fluorescence in situ hybridization was performed in 23% of laboratories, and 60% of laboratories addressed the 6- to 48-hour tissue fixation requirement by embedding tissue on the weekend. HER2 testing was performed on the initial biopsy in 40%, on the resection specimen in 6%, and on either in 56% of laboratories. Testing was validated with only fluorescence in situ hybridization in 47% of laboratories, whereas 10% of laboratories used another IHC assay only; 13% used both assays, and 12% and 15% of laboratories had not validated their assays or chose "not applicable" on the survey question, respectively. The 90% concordance rate with fluorescence in situ hybridization results was achieved by 88% of laboratories for IHC-negative findings and by 81% of laboratories for IHC-positive cases. The 90% concordance rate for laboratories using another IHC assay was achieved by 80% for negative findings and 75% for positive cases. About 91% of laboratories had a pathologist competency assessment program. This survey demonstrates the extent and characteristics of HER2 testing. Although some American Society of Clinical Oncology/College of American Pathologists guidelines have been implemented, gaps remain in validation of HER2 IHC testing.

  1. Face and construct validation of a next generation virtual reality (Gen2-VR) surgical simulator.

    PubMed

    Sankaranarayanan, Ganesh; Li, Baichun; Manser, Kelly; Jones, Stephanie B; Jones, Daniel B; Schwaitzberg, Steven; Cao, Caroline G L; De, Suvranu

    2016-03-01

    Surgical performance is affected by distractors and interruptions to surgical workflow that exist in the operating room. However, traditional surgical simulators are used to train surgeons in a skills laboratory that does not recreate these conditions. To overcome this limitation, we have developed a novel, immersive virtual reality (Gen2-VR) system to train surgeons in these environments. This study was to establish face and construct validity of our system. The study was a within-subjects design, with subjects repeating a virtual peg transfer task under three different conditions: Case I: traditional VR; Case II: Gen2-VR with no distractions and Case III: Gen2-VR with distractions and interruptions. In Case III, to simulate the effects of distractions and interruptions, music was played intermittently, the camera lens was fogged for 10 s and tools malfunctioned for 15 s at random points in time during the simulation. At the completion of the study subjects filled in a 5-point Likert scale feedback questionnaire. A total of sixteen subjects participated in this study. Friedman test showed significant difference in scores between the three conditions (p < 0.0001). Post hoc analysis using Wilcoxon signed-rank tests with Bonferroni correction further showed that all the three conditions were significantly different from each other (Case I, Case II, p < 0.0001), (Case I, Case III, p < 0.0001) and (Case II, Case III, p = 0.009). Subjects rated that fog (mean 4.18) and tool malfunction (median 4.56) significantly hindered their performance. The results showed that Gen2-VR simulator has both face and construct validity and that it can accurately and realistically present distractions and interruptions in a simulated OR, in spite of limitations of the current HMD hardware technology.
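    The Bonferroni correction used in the post hoc analysis above simply multiplies each p-value by the number of comparisons, capping the result at 1. A minimal sketch; the p-values below are placeholders for the three pairwise Wilcoxon comparisons, not the study's exact values:

```python
def bonferroni(p_values):
    """Bonferroni-adjusted p-values: p_adj = min(1, m * p) for m comparisons."""
    m = len(p_values)
    return [min(1.0, m * p) for p in p_values]

# Three pairwise comparisons (Case I vs II, I vs III, II vs III), as in
# the study design above; placeholder raw p-values for illustration.
adjusted = bonferroni([0.0001, 0.0001, 0.003])
```

    A comparison is declared significant if its adjusted p-value stays below the chosen alpha, which controls the family-wise error rate across all three tests.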

  2. Face and Construct Validation of a Next Generation Virtual Reality (Gen2-VR©) Surgical Simulator

    PubMed Central

    Sankaranarayanan, Ganesh; Li, Baichun; Manser, Kelly; Jones, Stephanie B.; Jones, Daniel B.; Schwaitzberg, Steven; Cao, Caroline G. L.; De, Suvranu

    2015-01-01

    Introduction Surgical performance is affected by distractors and interruptions to surgical workflow that exist in the operating room. However, traditional surgical simulators are used to train surgeons in a skills lab that does not recreate these conditions. To overcome this limitation, we have developed a novel, immersive virtual reality (Gen2-VR©) system to train surgeons in these environments. This study was to establish face and construct validity of our system. Methods and Procedures The study was a within-subjects design, with subjects repeating a virtual peg transfer task under three different conditions: CASE I: traditional VR; CASE II: Gen2-VR© with no distractions and CASE III: Gen2-VR© with distractions and interruptions. In Case III, to simulate the effects of distractions and interruptions, music was played intermittently, the camera lens was fogged for 10 seconds and tools malfunctioned for 15 seconds at random points in time during the simulation. At the completion of the study subjects filled in a 5-point Likert scale feedback questionnaire. A total of sixteen subjects participated in this study. Results Friedman test showed significant difference in scores between the three conditions (p < 0.0001). Post hoc analysis using Wilcoxon Signed Rank tests with Bonferroni correction further showed that all the three conditions were significantly different from each other (Case I, Case II, p < 0.001), (Case I, Case III, p < 0.001) and (Case II, Case III, p = 0.009). Subjects rated that fog (mean = 4.18) and tool malfunction (median = 4.56) significantly hindered their performance. Conclusion The results showed that Gen2-VR© simulator has both face and construct validity and it can accurately and realistically present distractions and interruptions in a simulated OR, in spite of limitations of the current HMD hardware technology. PMID:26092010

  3. An Instrument to Predict Job Performance of Home Health Aides--Testing the Reliability and Validity.

    ERIC Educational Resources Information Center

    Sturges, Jack; Quina, Patricia

    The development of four paper-and-pencil tests, useful in assessing the effectiveness of inservice training provided to either nurses aides or home health aides, was described. These tests were designed for utilization in employment selection and case assignment. Two tests of 37 multiple-choice items and two tests of 10 matching items were…

  4. Description and validation of a new automated surveillance system for Clostridium difficile in Denmark.

    PubMed

    Chaine, M; Gubbels, S; Voldstedlund, M; Kristensen, B; Nielsen, J; Andersen, L P; Ellermann-Eriksen, S; Engberg, J; Holm, A; Olesen, B; Schønheyder, H C; Østergaard, C; Ethelberg, S; Mølbak, K

    2017-09-01

    The surveillance of Clostridium difficile (CD) in Denmark consists of laboratory based data from Departments of Clinical Microbiology (DCMs) sent to the National Registry of Enteric Pathogens (NREP). We validated a new surveillance system for CD based on the Danish Microbiology Database (MiBa). MiBa automatically collects microbiological test results from all Danish DCMs. We built an algorithm to identify positive test results for CD recorded in MiBa. A CD case was defined as a person with a positive culture for CD or PCR detection of toxin A and/or B and/or binary toxin. We compared CD cases identified through the MiBa-based surveillance with those reported to NREP and locally in five DCMs representing different Danish regions. During 2010-2014, NREP reported 13 896 CD cases, and the MiBa-based surveillance 21 252 CD cases. There was a 99.9% concordance between the local datasets and the MiBa-based surveillance. Surveillance based on MiBa was superior to the current surveillance system, and the findings show that the number of CD cases in Denmark hitherto has been under-reported. There were only minor differences between local data and the MiBa-based surveillance, showing the completeness and validity of CD data in MiBa. This nationwide electronic system can greatly strengthen surveillance and research in various applications.
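    The case definition above translates directly into a record-level predicate over laboratory results. A minimal sketch; the field names are assumptions about a MiBa-like result record, not the actual database schema:

```python
def is_cd_case(result: dict) -> bool:
    """A CD case: positive culture for CD, or PCR detection of toxin A
    and/or toxin B and/or binary toxin (field names are illustrative)."""
    if result.get("culture") == "positive":
        return True
    return any(result.get(key) for key in ("pcr_toxin_a", "pcr_toxin_b", "pcr_binary_toxin"))
```

    Applying such a predicate uniformly to all automatically collected results is what lets an electronic system like MiBa out-count a reporting-based registry: no case is lost to a missed manual report.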

  5. Validation and Simulation of Ares I Scale Model Acoustic Test - 3 - Modeling and Evaluating the Effect of Rainbird Water Deluge Inclusion

    NASA Technical Reports Server (NTRS)

    Strutzenberg, Louise L.; Putman, Gabriel C.

    2011-01-01

    The Ares I Scale Model Acoustics Test (ASMAT) is a series of live-fire tests of scaled rocket motors meant to simulate the conditions of the Ares I launch configuration. These tests have provided a well documented set of high fidelity measurements useful for validation including data taken over a range of test conditions and containing phenomena like Ignition Over-Pressure and water suppression of acoustics. Building on dry simulations of the ASMAT tests with the vehicle at 5 ft. elevation (100 ft. real vehicle elevation), wet simulations of the ASMAT test setup have been performed using the Loci/CHEM computational fluid dynamics software to explore the effect of rainbird water suppression inclusion on the launch platform deck. Two-phase water simulation has been performed using an energy and mass coupled lagrangian particle system module where liquid phase emissions are segregated into clouds of virtual particles and gas phase mass transfer is accomplished through simple Weber number controlled breakup and boiling models. Comparisons have been performed to the dry 5 ft. elevation cases, using configurations with and without launch mounts. These cases have been used to explore the interaction between rainbird spray patterns and launch mount geometry and evaluate the acoustic sound pressure level knockdown achieved through above-deck rainbird deluge inclusion. This comparison has been anchored with validation from live-fire test data which showed a reduction in rainbird effectiveness with the presence of a launch mount.

  6. Testing and Modeling of a 3-MW Wind Turbine Using Fully Coupled Simulation Codes (Poster)

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    LaCava, W.; Guo, Y.; Van Dam, J.

    This poster describes the NREL/Alstom Wind testing and model verification of the Alstom 3-MW wind turbine located at NREL's National Wind Technology Center. NREL, in collaboration with ALSTOM Wind, is studying a 3-MW wind turbine installed at the National Wind Technology Center (NWTC). The project analyzes the turbine design using a state-of-the-art simulation code validated with detailed test data. This poster describes the testing and the model validation effort, and provides conclusions about the performance of the unique drive train configuration used in this wind turbine. The 3-MW machine has been operating at the NWTC since March 2011, and drive train measurements will be collected through the spring of 2012. The NWTC testing site has particularly turbulent wind patterns that allow for the measurement of large transient loads and the resulting turbine response. This poster describes the 3-MW turbine test project, the instrumentation installed, and the load cases captured. The design of a reliable wind turbine drive train increasingly relies on the use of advanced simulation to predict structural responses in a varying wind field. This poster presents a fully coupled, aero-elastic and dynamic model of the wind turbine. It also shows the methodology used to validate the model, including the use of measured tower modes, model-to-model comparisons of the power curve, and mainshaft bending predictions for various load cases. The drivetrain is designed to only transmit torque to the gearbox, eliminating non-torque moments that are known to cause gear misalignment. Preliminary results show that the drivetrain is able to divert bending loads in extreme loading cases, and that a significantly smaller bending moment is induced on the mainshaft compared to a three-point mounting design.

  7. Transitioning from Software Requirements Models to Design Models

    NASA Technical Reports Server (NTRS)

    Lowry, Michael (Technical Monitor); Whittle, Jon

    2003-01-01

    Summary: 1. Proof-of-concept of state machine synthesis from scenarios - CTAS case study. 2. The CTAS team wants to use the synthesis algorithm to validate trajectory generation. 3. Extending the synthesis algorithm towards requirements validation: (a) scenario relationships, (b) a methodology for generalizing/refining scenarios, and (c) interaction patterns to control synthesis. 4. Initial ideas tested on conflict detection scenarios.

  8. Validation of an auditory startle response system using chemicals or parametric modulation as positive controls.

    PubMed

    Marable, Brian R; Maurissen, Jacques P J

    2004-01-01

    Neurotoxicity regulatory guidelines mandate that automated test systems be validated using chemicals. However, in some cases, chemicals may not necessarily be needed to prove test system validity. To examine this issue, two independent experiments were conducted to validate an automated auditory startle response (ASR) system. In Experiment 1, we used adult (PND 63) and weanling (PND 22) Sprague-Dawley rats (10/sex/dose) to determine the effect of either d-amphetamine (4.0 or 8.0 mg/kg) or clonidine (0.4 or 0.8 mg/kg) on the ASR peak amplitude (ASR PA). The startle response of each rat to a short burst of white noise (120 dB SPL) was recorded over 50 consecutive trials. The ASR PA was significantly decreased (by clonidine) and increased (by d-amphetamine) compared to controls in PND 63 rats. In PND 22 rats, the response to clonidine was similar to adults, but d-amphetamine effects were not significant. Neither drug affected the rate of the decrease in ASR PA over time (habituation). In Experiment 2, PND 31 Sprague-Dawley rats (8/sex) were presented with 150 trials consisting of either white noise bursts of variable intensity (70-120 dB SPL in 10 dB increments, presented in random order) or null (0 dB SPL) trials. Statistically significant sex- and intensity-dependent differences were detected in the ASR PA. These results suggest that in some cases, parametric modulation may be an alternative to using chemicals for test system validation.

  9. Numerical simulations in the development of propellant management devices

    NASA Astrophysics Data System (ADS)

    Gaulke, Diana; Winkelmann, Yvonne; Dreyer, Michael

    Propellant management devices (PMDs) are used for positioning the propellant at the propellant port. It is important to provide propellant without gas bubbles. Gas bubbles can induce cavitation and may lead to system failures in the worst case. Therefore, the reliable operation of such devices must be guaranteed. Testing these complex systems is a very intricate process. Furthermore, in most cases only tests with downscaled geometries are possible. Numerical simulations are used here as an aid to optimize the tests and to predict certain results. Based on these simulations, parameters can be determined in advance and parts of the equipment can be adjusted in order to minimize the number of experiments. In turn, the simulations are validated against the test results. Furthermore, if the accuracy of the numerical prediction is verified, then numerical simulations can be used for validating the scaling of the experiments. This presentation demonstrates some selected numerical simulations for the development of PMDs at ZARM.

  10. Recovery Act. Development and Validation of an Advanced Stimulation Prediction Model for Enhanced Geothermal System

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutierrez, Marte

    The research project aims to develop and validate an advanced computer model that can be used in the planning and design of stimulation techniques to create engineered reservoirs for Enhanced Geothermal Systems. The specific objectives of the proposal are to: 1) Develop a true three-dimensional hydro-thermal fracturing simulator that is particularly suited for EGS reservoir creation. 2) Perform laboratory scale model tests of hydraulic fracturing and proppant flow/transport using a polyaxial loading device, and use the laboratory results to test and validate the 3D simulator. 3) Perform discrete element/particulate modeling of proppant transport in hydraulic fractures, and use the results to improve understanding of proppant flow and transport. 4) Test and validate the 3D hydro-thermal fracturing simulator against case histories of EGS energy production. 5) Develop a plan to commercialize the 3D fracturing and proppant flow/transport simulator. The project is expected to yield several specific results and benefits. Major technical products from the proposal include: 1) A true-3D hydro-thermal fracturing computer code that is particularly suited to EGS, 2) Documented results of scale model tests on hydro-thermal fracturing and fracture propping in an analogue crystalline rock, 3) Documented procedures and results of discrete element/particulate modeling of flow and transport of proppants for EGS applications, and 4) Database of monitoring data, with a focus on Acoustic Emissions (AE) from lab scale modeling and field case histories of EGS reservoir creation.

  11. Recovery Act. Development and Validation of an Advanced Stimulation Prediction Model for Enhanced Geothermal Systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gutierrez, Marte

    2013-12-31

    This research project aims to develop and validate an advanced computer model that can be used in the planning and design of stimulation techniques to create engineered reservoirs for Enhanced Geothermal Systems. The specific objectives of the proposal are to: develop a true three-dimensional hydro-thermal fracturing simulator that is particularly suited for EGS reservoir creation; perform laboratory scale model tests of hydraulic fracturing and proppant flow/transport using a polyaxial loading device, and use the laboratory results to test and validate the 3D simulator; perform discrete element/particulate modeling of proppant transport in hydraulic fractures, and use the results to improve understanding of proppant flow and transport; test and validate the 3D hydro-thermal fracturing simulator against case histories of EGS energy production; and develop a plan to commercialize the 3D fracturing and proppant flow/transport simulator. The project is expected to yield several specific results and benefits. Major technical products from the proposal include: a true-3D hydro-thermal fracturing computer code that is particularly suited to EGS; documented results of scale model tests on hydro-thermal fracturing and fracture propping in an analogue crystalline rock; documented procedures and results of discrete element/particulate modeling of flow and transport of proppants for EGS applications; and a database of monitoring data, with a focus on Acoustic Emissions (AE) from lab scale modeling and field case histories of EGS reservoir creation.

  12. Student Accounts of the Ontario Secondary School Literacy Test: A Case for Validation

    ERIC Educational Resources Information Center

    Cheng, Liying; Fox, Janna; Zheng, Ying

    2007-01-01

    The Ontario Secondary School Literacy Test (OSSLT) is a cross-curricular literacy test issued to all secondary school students in the province of Ontario. The test consists of a reading and a writing component, both of which must be successfully completed for secondary school graduation in Ontario. This study elicited 16 first language and second…

  13. DNA Commission of the International Society for Forensic Genetics: Recommendations on the validation of software programs performing biostatistical calculations for forensic genetics applications.

    PubMed

    Coble, M D; Buckleton, J; Butler, J M; Egeland, T; Fimmers, R; Gill, P; Gusmão, L; Guttman, B; Krawczak, M; Morling, N; Parson, W; Pinto, N; Schneider, P M; Sherry, S T; Willuweit, S; Prinz, M

    2016-11-01

    The use of biostatistical software programs to assist in data interpretation and calculate likelihood ratios is essential to forensic geneticists and part of the daily casework flow for both kinship and DNA identification laboratories. Previous recommendations issued by the DNA Commission of the International Society for Forensic Genetics (ISFG) covered the application of biostatistical evaluations for STR typing results in identification and kinship cases, and this is now being expanded to provide best practices regarding validation and verification of the software required for these calculations. With larger multiplexes, more complex mixtures, and increasing requests for extended family testing, laboratories are relying more than ever on specific software solutions, and sufficient validation, training and extensive documentation are of utmost importance. Here, we present recommendations for the minimum requirements to validate biostatistical software to be used in forensic genetics. We distinguish between developmental validation and the responsibilities of the software developer or provider, and the internal validation studies to be performed by the end user. Recommendations for the software provider address, for example, the documentation of the underlying models used by the software, validation data expectations, version control, implementation and training support, as well as continuity and user notifications. For the internal validations the recommendations include: creating a validation plan, requirements for the range of samples to be tested, Standard Operating Procedure development, and internal laboratory training and education. To ensure that all laboratories have access to a wide range of samples for validation and training purposes, the ISFG DNA commission encourages collaborative studies and public repositories of STR typing results. Published by Elsevier Ireland Ltd.
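
    As a toy illustration of the kind of calculation such software automates (not the ISFG-recommended procedure itself), the following sketch computes a single-locus likelihood ratio for a matching STR genotype under Hardy-Weinberg assumptions; the allele names and frequencies are hypothetical.

```python
# Toy single-locus likelihood ratio (LR) for a matching STR genotype.
# LR = P(match | same source) / P(match | different source)
#    = 1 / genotype frequency, assuming Hardy-Weinberg equilibrium (HWE).
# Allele labels and frequencies below are invented illustration values.

def genotype_frequency(freqs, allele_a, allele_b):
    """Expected population frequency of a genotype under HWE."""
    pa, pb = freqs[allele_a], freqs[allele_b]
    return pa * pa if allele_a == allele_b else 2 * pa * pb

def likelihood_ratio(freqs, genotype):
    """LR in favour of a common source, given a single-locus match."""
    return 1.0 / genotype_frequency(freqs, *genotype)

freqs = {"12": 0.10, "14": 0.20}  # hypothetical allele frequencies

print(likelihood_ratio(freqs, ("12", "14")))  # heterozygote: 1 / (2 * 0.1 * 0.2)
print(likelihood_ratio(freqs, ("12", "12")))  # homozygote:   1 / 0.1**2
```

    Real casework multiplies per-locus LRs across many loci and must handle mixtures, relatedness and subpopulation corrections, which is exactly why the recommendations above call for rigorous software validation.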

  14. Advanced Subsonic Technology (AST) Area of Interest (AOI) 6: Develop and Validate Aeroelastic Codes for Turbomachinery

    NASA Technical Reports Server (NTRS)

    Gardner, Kevin D.; Liu, Jong-Shang; Murthy, Durbha V.; Kruse, Marlin J.; James, Darrell

    1999-01-01

    AlliedSignal Engines, in cooperation with NASA GRC (National Aeronautics and Space Administration Glenn Research Center), completed an evaluation of recently-developed aeroelastic computer codes using test cases from the AlliedSignal Engines fan blisk and turbine databases. Test data included strain gage, performance, and steady-state pressure information obtained for conditions where synchronous or flutter vibratory conditions were found to occur. Aeroelastic codes evaluated included quasi 3-D UNSFLO (MIT Developed/AE Modified, Quasi 3-D Aeroelastic Computer Code), 2-D FREPS (NASA-Developed Forced Response Prediction System Aeroelastic Computer Code), and 3-D TURBO-AE (NASA/Mississippi State University Developed 3-D Aeroelastic Computer Code). Unsteady pressure predictions for the turbine test case were used to evaluate the forced response prediction capabilities of each of the three aeroelastic codes. Additionally, one of the fan flutter cases was evaluated using TURBO-AE. The UNSFLO and FREPS evaluation predictions showed good agreement with the experimental test data trends, but quantitative improvements are needed. UNSFLO over-predicted turbine blade response reductions, while FREPS under-predicted them. The inviscid TURBO-AE turbine analysis predicted no discernible blade response reduction, indicating the necessity of including viscous effects for this test case. For the TURBO-AE fan blisk test case, significant effort was expended getting the viscous version of the code to give converged steady flow solutions for the transonic flow conditions. Once converged, the steady solutions provided an excellent match with test data and the calibrated DAWES (AlliedSignal 3-D Viscous Steady Flow CFD Solver). However, efforts expended establishing quality steady-state solutions prevented exercising the unsteady portion of the TURBO-AE code during the present program. 
    AlliedSignal recommends that unsteady pressure measurement data be obtained for both test cases examined for use in aeroelastic code validation.

  15. Proposed epidemiological case definition for serious skin infection in children.

    PubMed

    O'Sullivan, Cathryn E; Baker, Michael G

    2010-04-01

    Researching the rising incidence of serious skin infections in children is limited by the lack of a consistent and valid case definition. We aimed to develop and evaluate a good-quality case definition for use in future research and surveillance of these infections. We tested the validity of the existing case definition, and then of 11 proposed alternative definitions, by assessing their screening performance when applied to a population of paediatric skin infection cases identified by a chart review of 4 years of admissions to a New Zealand hospital. Previous studies have largely used definitions based on the International Classification of Diseases skin infection subchapter. This definition is highly specific (100%) but poorly sensitive (61%); it fails to capture skin infections at atypical anatomical sites, those secondary to primary skin disease and trauma, and those recorded as additional diagnoses. Including these groups produced a new case definition with 98.9% sensitivity and 98.8% specificity. Previous analyses of serious skin infection in children have underestimated the true burden of disease. Using this proposed broader case definition should allow future researchers to produce more valid and comparable estimates of the true burden of these important and increasing infections.

  16. Symptom validity testing in memory clinics: Hippocampal-memory associations and relevance for diagnosing mild cognitive impairment.

    PubMed

    Rienstra, Anne; Groot, Paul F C; Spaan, Pauline E J; Majoie, Charles B L M; Nederveen, Aart J; Walstra, Gerard J M; de Jonghe, Jos F M; van Gool, Willem A; Olabarriaga, Silvia D; Korkhov, Vladimir V; Schmand, Ben

    2013-01-01

    Patients with mild cognitive impairment (MCI) do not always convert to dementia. In such cases, abnormal neuropsychological test results may not validly reflect cognitive symptoms due to brain disease, and the usual brain-behavior relationships may be absent. This study examined symptom validity in a memory clinic sample and its effect on the associations between hippocampal volume and memory performance. Eleven of 170 consecutive patients (6.5%; 13% of patients younger than 65 years) referred to memory clinics showed noncredible performance on symptom validity tests (SVTs, viz. Word Memory Test and Test of Memory Malingering). They were compared to a demographically matched group (n = 57) selected from the remaining patients. Hippocampal volume, measured by an automated volumetric method (Freesurfer), was correlated with scores on six verbal memory tests. The median correlation was r = .49 in the matched group. However, the relation was absent (median r = -.11) in patients who failed SVTs. Memory clinic samples may include patients who show noncredible performance, which invalidates their MCI diagnosis. This underscores the importance of applying SVTs in evaluating patients with cognitive complaints that may signify a predementia stage, especially when these patients are relatively young.

  17. Fatty acid ethyl esters (FAEEs) as markers for alcohol in meconium: method validation and implementation of a screening program for prenatal drug exposure.

    PubMed

    Hastedt, Martin; Krumbiegel, Franziska; Gapert, René; Tsokos, Michael; Hartwig, Sven

    2013-09-01

    Alcohol consumption during pregnancy is a widespread problem and can cause severe fetal damage. As the diagnosis of fetal alcohol syndrome is difficult, the implementation of a reliable marker for alcohol consumption during pregnancy into meconium drug screening programs would be invaluable. A previously published gas chromatography mass spectrometry method for the detection of fatty acid ethyl esters (FAEEs) as alcohol markers in meconium was optimized and newly validated for a sample size of 50 mg. This method was applied to 122 cases from a drug-using population. The meconium samples were also tested for common drugs of abuse. In 73% of the cases, one or more drugs were found. Twenty percent of the samples tested positive for FAEEs at levels indicating significant alcohol exposure. Consequently, alcohol was found to be the third most frequently abused substance within the study group. This re-validated method provides an increase in testing sensitivity, is reliable and easily applicable as part of a drug screening program. It can be used as a non-invasive tool to detect high alcohol consumption in the last trimester of pregnancy. The introduction of FAEEs testing in meconium screening was found to be of particular use in a drug-using population.

  18. Validity and Reliability of Baseline Testing in a Standardized Environment.

    PubMed

    Higgins, Kathryn L; Caze, Todd; Maerlender, Arthur

    2017-08-11

    The Immediate Postconcussion Assessment and Cognitive Testing (ImPACT) is a computerized neuropsychological test battery commonly used to determine cognitive recovery from concussion based on comparing post-injury scores to baseline scores. This model is based on the premise that ImPACT baseline test scores are a valid and reliable measure of optimal cognitive function at baseline. Growing evidence suggests that this premise may not be accurate, and that a large contributor to invalid and unreliable baseline test scores may be the protocol and environment in which baseline tests are administered. This study examined the effects of a standardized environment and administration protocol on the reliability and performance validity of athletes' baseline test scores on ImPACT by comparing scores obtained in two different group-testing settings. Three hundred sixty-one Division 1 cohort-matched collegiate athletes' baseline data were assessed using a variety of indicators of potential performance invalidity; internal reliability was also examined. Thirty-one to thirty-nine percent of the baseline cases had at least one indicator of low performance validity, but there were no significant differences in validity indicators based on the environment in which the testing was conducted. Internal consistency reliability scores were in the acceptable to good range, with no significant differences between administration conditions. These results suggest that athletes may be reliably performing at levels lower than their best effort would produce. © The Author 2017. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  19. Towards Validation of an Adaptive Flight Control Simulation Using Statistical Emulation

    NASA Technical Reports Server (NTRS)

    He, Yuning; Lee, Herbert K. H.; Davies, Misty D.

    2012-01-01

    Traditional validation of flight control systems is based primarily upon empirical testing. Empirical testing is sufficient for simple systems in which (a) the behavior is approximately linear and (b) humans are in-the-loop and responsible for off-nominal flight regimes. A different possible concept of operation is to use adaptive flight control systems with online learning neural networks (OLNNs) in combination with a human pilot for off-nominal flight behavior (such as when a plane has been damaged). Validating these systems is difficult because the controller is changing during the flight in a nonlinear way, and because the pilot and the control system have the potential to co-adapt in adverse ways; traditional empirical methods are unlikely to provide any guarantees in this case. Additionally, the time it takes to find unsafe regions within the flight envelope using empirical testing means that the time between adaptive controller design iterations is large. This paper describes a new concept for validating adaptive control systems using methods based on Bayesian statistics. This validation framework allows the analyst to build nonlinear models with modal behavior, and to have an uncertainty estimate for the difference between the behaviors of the model and system under test.
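
    Statistical emulators of the kind described above are typically Gaussian processes fitted to a handful of expensive simulator runs. The following minimal pure-Python sketch (two noise-free training runs, squared-exponential kernel; the data values and length scale are illustrative assumptions, not from the paper) shows the core prediction step.

```python
import math

# Minimal Gaussian-process emulator sketch: fit to two noise-free
# simulator runs, then predict the mean response at new inputs.

def kernel(x1, x2, length=1.0):
    """Squared-exponential covariance between two scalar inputs."""
    return math.exp(-((x1 - x2) ** 2) / (2.0 * length ** 2))

def fit_two_point_gp(xs, ys):
    """Solve K @ alpha = y for the 2x2 covariance matrix via Cramer's rule."""
    k01 = kernel(xs[0], xs[1])
    det = 1.0 - k01 * k01          # K = [[1, k01], [k01, 1]], noise-free
    alpha0 = (ys[0] - k01 * ys[1]) / det
    alpha1 = (ys[1] - k01 * ys[0]) / det
    return [alpha0, alpha1]

def predict_mean(x_star, xs, alphas):
    """Posterior mean: kernel-weighted sum over the training inputs."""
    return sum(a * kernel(x_star, x) for a, x in zip(alphas, xs))

xs, ys = [0.0, 1.0], [0.2, 0.9]    # two hypothetical simulator runs
alphas = fit_two_point_gp(xs, ys)

# A noise-free GP mean interpolates its training data exactly:
print(predict_mean(0.0, xs, alphas))
print(predict_mean(0.5, xs, alphas))  # smooth estimate between the runs
```

    A real emulator adds a nugget term, fits the kernel hyperparameters, and, crucially for the validation framework above, also reports the posterior variance at each prediction point.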

  20. CFD Validation for Hypersonic Flight: Real Gas Flows

    DTIC Science & Technology

    2006-01-01

    Calculations Two research groups contributed results to this test case. The group of Drs. Marco Marini and Salvatore Borelli from the Italian Aerospace...and Borelli ran calculations at four test conditions, and we simulated Run 46, which is the high-enthalpy air case. The test section conditions were...N2 at ρ∞ = 3.31 g/cm3, T∞ = 556 K, u∞ = 4450 m/s. Run 46: Air at ρ∞ = 3.28 g/cm3, T∞ = 672 K, u∞ = 4480 m/s. Marini and Borelli used grids of 144 × 40

  1. Managing Rater Effects through the Use of FACETS Analysis: The Case of a University Placement Test

    ERIC Educational Resources Information Center

    Wu, Siew Mei; Tan, Susan

    2016-01-01

    Rating essays is a complex task where students' grades could be adversely affected by test-irrelevant factors such as rater characteristics and rating scales. Understanding these factors and controlling their effects are crucial for test validity. Rater behaviour has been extensively studied through qualitative methods such as questionnaires and…

  2. Clinical Validation of Targeted Next Generation Sequencing for Colon and Lung Cancers

    PubMed Central

    D’Haene, Nicky; Le Mercier, Marie; De Nève, Nancy; Blanchard, Oriane; Delaunoy, Mélanie; El Housni, Hakim; Dessars, Barbara; Heimann, Pierre; Remmelink, Myriam; Demetter, Pieter; Tejpar, Sabine; Salmon, Isabelle

    2015-01-01

    Objective: Recently, Next Generation Sequencing (NGS) has begun to supplant other technologies for gene mutation testing that is now required for targeted therapies. However, transfer of NGS technology to daily clinical practice requires validation. Methods: We validated the Ion Torrent AmpliSeq Colon and Lung cancer panel interrogating 1850 hotspots in 22 genes using the Ion Torrent Personal Genome Machine. First, we used commercial reference standards that carry mutations at defined allelic frequency (AF). Then, 51 colorectal adenocarcinomas (CRC) and 39 non-small-cell lung carcinomas (NSCLC) were retrospectively analyzed. Results: Sensitivity and accuracy for detecting variants at an AF >4% were 100% for commercial reference standards. Among the 90 cases, 89 (98.9%) were successfully sequenced. Among the 86 samples for which NGS and the reference test were both informative, 83 showed concordant results between NGS and the reference test, i.e., KRAS and BRAF for CRC and EGFR for NSCLC, with the 3 discordant cases each characterized by an AF <10%. Conclusions: Overall, the AmpliSeq colon/lung cancer panel was specific and sensitive for mutation analysis of gene panels and can be incorporated into daily clinical practice. PMID:26366557

  3. Concurrent Validity and Classification Accuracy of the Leiter and Leiter-R in Low Functioning Children with Autism.

    ERIC Educational Resources Information Center

    Tsatsanis, Katherine D.; Dartnall, Nancy; Cicchetti, Domenic; Sparrow, Sara S.; Klin, Ami; Volkmar, Fred R.

    2003-01-01

    The concurrent validity of the original and revised versions of the Leiter International Performance Scale was examined with 26 children (ages 4-16) with autism. Although the correlation between the two tests was high (.87), there were significant intra-individual discrepancies present in 10 cases, two of which were both large and clinically…

  4. Building validation tools for knowledge-based systems

    NASA Technical Reports Server (NTRS)

    Stachowitz, R. A.; Chang, C. L.; Stock, T. S.; Combs, J. B.

    1987-01-01

    The Expert Systems Validation Associate (EVA), a validation system under development at the Lockheed Artificial Intelligence Center for more than a year, provides a wide range of validation tools to check the correctness, consistency and completeness of a knowledge-based system. A declarative meta-language (a higher-order language) is used to create a generic version of EVA to validate applications written in arbitrary expert system shells. The architecture and functionality of EVA are presented. The functionality includes Structure Check, Logic Check, Extended Structure Check (using semantic information), Extended Logic Check, Semantic Check, Omission Check, Rule Refinement, Control Check, Test Case Generation, Error Localization, and Behavior Verification.

  5. Meta-Analysis of Integrity Tests: A Critical Examination of Validity Generalization and Moderator Variables

    DTIC Science & Technology

    1992-06-01

    ...University of Iowa...Defense Personnel...data points. Results indicate that integrity test validities are positive and in many cases substantial for predicting both job performance and

  6. Experiences Using Lightweight Formal Methods for Requirements Modeling

    NASA Technical Reports Server (NTRS)

    Easterbrook, Steve; Lutz, Robyn; Covington, Rick; Kelly, John; Ampo, Yoko; Hamilton, David

    1997-01-01

    This paper describes three case studies in the lightweight application of formal methods to requirements modeling for spacecraft fault protection systems. The case studies differ from previously reported applications of formal methods in that formal methods were applied very early in the requirements engineering process, to validate the evolving requirements. The results were fed back into the projects, to improve the informal specifications. For each case study, we describe what methods were applied, how they were applied, how much effort was involved, and what the findings were. In all three cases, formal methods enhanced the existing verification and validation processes, by testing key properties of the evolving requirements, and helping to identify weaknesses. We conclude that the benefits gained from early modeling of unstable requirements more than outweigh the effort needed to maintain multiple representations.

  7. Processing Relative Clauses by Hungarian Typically Developing Children

    ERIC Educational Resources Information Center

    Kas, Bence; Lukacs, Agnes

    2012-01-01

    Hungarian is a language with morphological case marking and relatively free word order. These typological characteristics make it a good ground for testing the crosslinguistic validity of theories on processing sentences with relative clauses. Our study focused on effects of structural factors and processing capacity. We tested 43 typically…

  8. An Adaptation of the Original Fresno Test to Measure Evidence-Based Practice Competence in Pediatric Bedside Nurses.

    PubMed

    Laibhen-Parkes, Natasha; Kimble, Laura P; Melnyk, Bernadette Mazurek; Sudia, Tanya; Codone, Susan

    2018-06-01

    Instruments used to assess evidence-based practice (EBP) competence in nurses have been subjective, unreliable, or invalid. The Fresno test was identified as the only instrument to measure all the steps of EBP with supportive reliability and validity data. However, the items and psychometric properties of the original Fresno test are only relevant to measure EBP with medical residents. Therefore, the purpose of this paper is to describe the development of the adapted Fresno test for pediatric nurses, and provide preliminary validity and reliability data for its use with Bachelor of Science in Nursing-prepared pediatric bedside nurses. General adaptations were made to the original instrument's case studies, item content, wording, and format to meet the needs of a pediatric nursing sample. The scoring rubric was also modified to complement changes made to the instrument. Content and face validity, and intrarater reliability of the adapted Fresno test were assessed during a mixed-methods pilot study conducted from October to December 2013 with 29 Bachelor of Science in Nursing-prepared pediatric nurses. Validity data provided evidence for good content and face validity. Intrarater reliability estimates were high. The adapted Fresno test presented here appears to be a valid and reliable assessment of EBP competence in Bachelor of Science in Nursing-prepared pediatric nurses. However, further testing of this instrument is warranted using a larger sample of pediatric nurses in diverse settings. This instrument can be a starting point for evaluating the impact of EBP competence on patient outcomes. © 2018 Sigma Theta Tau International.

  9. Validation of the Italian Version of the Caregiver Abuse Screen among Family Caregivers of Older People with Alzheimer's Disease.

    PubMed

    Melchiorre, Maria Gabriella; Di Rosa, Mirko; Barbabella, Francesco; Barbini, Norma; Lattanzio, Fabrizia; Chiatti, Carlos

    2017-01-01

    Introduction. Elder abuse is often a hidden phenomenon and, in many cases, screening practices are difficult to implement among older people with dementia. The Caregiver Abuse Screen (CASE) is a useful tool which is administered to family caregivers for detecting their potential abusive behavior. Objectives. To validate the Italian version of the CASE tool in the context of family caregiving of older people with Alzheimer's disease (AD) and to identify risk factors for elder abuse in Italy. Methods. The CASE test was administered to 438 caregivers, recruited in the Up-Tech study. Validity and reliability were evaluated using Spearman's correlation coefficients, principal-component analysis, and Cronbach's alphas. The association between the CASE and other variables potentially associated with elder abuse was also analyzed. Results. The factor analysis suggested the presence of a single factor, with a strong internal consistency (Cronbach's alpha = 0.86). CASE score was strongly correlated with well-known risk factors of abuse. At multivariate level, main factors associated with CASE total score were caregiver burden and AD-related behavioral disturbances. Conclusions. The Italian version of the CASE is a reliable and consistent screening tool for tackling the risk of being or becoming perpetrators of abuse by family caregivers of people with AD.
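
    The internal-consistency statistic reported above (Cronbach's alpha) can be computed directly from an item-by-respondent score matrix; a minimal sketch with invented item scores (not the Up-Tech data):

```python
from statistics import pvariance

# Cronbach's alpha from raw item scores:
#   alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
# The item score lists below are invented for illustration only.

def cronbach_alpha(items):
    """items: one list of scores per item; columns are respondents."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]
    item_var_sum = sum(pvariance(scores) for scores in items)
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Perfectly correlated items give alpha = 1.0 (maximal internal consistency):
identical = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
print(cronbach_alpha(identical))

# Less tightly related items give a lower alpha:
mixed = [[1, 2, 3, 4], [2, 2, 3, 5], [1, 3, 3, 4]]
print(cronbach_alpha(mixed))
```

    A value such as the 0.86 reported for the CASE indicates that the items largely measure a single underlying construct, consistent with the one-factor solution found in the factor analysis.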

  10. Validation of the Italian Version of the Caregiver Abuse Screen among Family Caregivers of Older People with Alzheimer's Disease

    PubMed Central

    Di Rosa, Mirko; Barbabella, Francesco; Barbini, Norma; Chiatti, Carlos

    2017-01-01

    Introduction. Elder abuse is often a hidden phenomenon and, in many cases, screening practices are difficult to implement among older people with dementia. The Caregiver Abuse Screen (CASE) is a useful tool which is administered to family caregivers for detecting their potential abusive behavior. Objectives. To validate the Italian version of the CASE tool in the context of family caregiving of older people with Alzheimer's disease (AD) and to identify risk factors for elder abuse in Italy. Methods. The CASE test was administered to 438 caregivers, recruited in the Up-Tech study. Validity and reliability were evaluated using Spearman's correlation coefficients, principal-component analysis, and Cronbach's alphas. The association between the CASE and other variables potentially associated with elder abuse was also analyzed. Results. The factor analysis suggested the presence of a single factor, with a strong internal consistency (Cronbach's alpha = 0.86). CASE score was strongly correlated with well-known risk factors of abuse. At multivariate level, main factors associated with CASE total score were caregiver burden and AD-related behavioral disturbances. Conclusions. The Italian version of the CASE is a reliable and consistent screening tool for tackling the risk of being or becoming perpetrators of abuse by family caregivers of people with AD. PMID:28265571

  11. Evaluation of surveillance case definition in the diagnosis of leptospirosis, using the Microscopic Agglutination Test: a validation study.

    PubMed

    Dassanayake, Dinesh L B; Wimalaratna, Harith; Agampodi, Suneth B; Liyanapathirana, Veranja C; Piyarathna, Thibbotumunuwe A C L; Goonapienuwala, Bimba L

    2009-04-22

    Leptospirosis is endemic in both urban and rural areas of Sri Lanka, and there have been many outbreaks in the recent past. This study aimed to validate the leptospirosis surveillance case definition using the Microscopic Agglutination Test (MAT). The study population consisted of patients with undiagnosed acute febrile illness who were admitted to the medical wards of the Teaching Hospital Kandy from 1st July 2007 to 31st July 2008. The subjects were screened to diagnose leptospirosis according to the leptospirosis case definition. MAT was performed on blood samples taken from each patient on the 7th day of fever. The case definition was evaluated with regard to sensitivity, specificity and predictive values, using a MAT titre ≥1:800 for confirming leptospirosis. A total of 123 patients were initially recruited, of whom 73 had clinical features compatible with the surveillance case definition. Of the 73, only 57 had a positive MAT result (true positives), leaving 16 as false positives. Of the 50 who did not have clinical features compatible with the case definition, 45 had a negative MAT as well (true negatives); therefore, 5 were false negatives. The total number of MAT positives was 62 out of 123. According to these results, the test sensitivity was 91.94%, specificity 73.77%, and the positive and negative predictive values were 78.08% and 90%, respectively. The diagnostic accuracy of the test was 82.93%. This study confirms that the surveillance case definition has a very high sensitivity and negative predictive value with an average specificity in diagnosing leptospirosis, based on a MAT titre of ≥1:800.
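
    The reported metrics follow directly from the abstract's two-by-two table (57 true positives, 16 false positives, 45 true negatives, 5 false negatives); a quick sketch of the arithmetic:

```python
# Screening-performance metrics from the counts reported in the abstract.
tp, fp, tn, fn = 57, 16, 45, 5

sensitivity = tp / (tp + fn)                # 57/62
specificity = tn / (tn + fp)                # 45/61
ppv = tp / (tp + fp)                        # positive predictive value, 57/73
npv = tn / (tn + fn)                        # negative predictive value, 45/50
accuracy = (tp + tn) / (tp + fp + tn + fn)  # 102/123

print(f"sensitivity {sensitivity:.2%}, specificity {specificity:.2%}")
print(f"PPV {ppv:.2%}, NPV {npv:.2%}, accuracy {accuracy:.2%}")
```

    Running this reproduces the abstract's 91.94% sensitivity, 73.77% specificity, 78.08% PPV, 90% NPV, and 82.93% accuracy.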

  12. CFD Validation Experiment of a Mach 2.5 Axisymmetric Shock-Wave/Boundary-Layer Interaction

    NASA Technical Reports Server (NTRS)

    Davis, David O.

    2015-01-01

    Experimental investigations of specific flow phenomena, e.g., Shock-Wave/Boundary-Layer Interactions (SWBLI), provide great insight into the flow behavior but often lack the necessary details to be useful as CFD validation experiments. Reasons include: (1) undefined boundary conditions; (2) inconsistent results; (3) undocumented 3D effects (centerline-only measurements); and (4) lack of uncertainty analysis. While there are a number of good subsonic experimental investigations that are sufficiently documented to be considered test cases for CFD and turbulence model validation, the number of supersonic and hypersonic cases is much smaller. This was highlighted by Settles and Dodson's [1] comprehensive review of available supersonic and hypersonic experimental studies. In all, several hundred studies were considered for their database. Of these, over a hundred were subjected to rigorous acceptance criteria. Based on their criteria, only 19 (12 supersonic, 7 hypersonic) were considered of sufficient quality to be used for validation purposes. Aeschliman and Oberkampf [2] recognized the need to develop a specific methodology for experimental studies intended specifically for validation purposes.

  13. Design and Testing of Braided Composite Fan Case Materials and Components

    NASA Technical Reports Server (NTRS)

    Roberts, Gary D.; Pereira, J. Michael; Braley, Michael S.; Arnold, William A.; Dorer, James D.; Watson, William R.

    2009-01-01

    Triaxial braid composite materials are beginning to be used in fan cases for commercial gas turbine engines. The primary benefit for the use of composite materials is reduced weight and the associated reduction in fuel consumption. However, there are also cost benefits in some applications. This paper presents a description of the braided composite materials and discusses aspects of the braiding process that can be utilized for efficient fabrication of composite cases. The paper also presents an approach that was developed for evaluating the braided composite materials and composite fan cases in a ballistic impact laboratory. Impact of composite panels with a soft projectile is used for materials evaluation. Impact of composite fan cases with fan blades or blade-like projectiles is used to evaluate containment capability. A post-impact structural load test is used to evaluate the capability of the impacted fan case to survive dynamic loads during engine spool down. Validation of these new test methods is demonstrated by comparison with results of engine blade-out tests.

  14. Empirical test of the performance of an acoustic-phonetic approach to forensic voice comparison under conditions similar to those of a real case.

    PubMed

    Enzinger, Ewald; Morrison, Geoffrey Stewart

    2017-08-01

    In a 2012 case in New South Wales, Australia, the identity of a speaker on several audio recordings was in question. Forensic voice comparison testimony was presented based on an auditory-acoustic-phonetic-spectrographic analysis. No empirical demonstration of the validity and reliability of the analytical methodology was presented. Unlike the admissibility standards in some other jurisdictions (e.g., US Federal Rule of Evidence 702 and the Daubert criteria, or England & Wales Criminal Practice Directions 19A), Australia's Unified Evidence Acts do not require demonstration of the validity and reliability of analytical methods and their implementation before testimony based upon them is presented in court. The present paper reports on empirical tests of the performance of an acoustic-phonetic-statistical forensic voice comparison system which exploited the same features as were the focus of the auditory-acoustic-phonetic-spectrographic analysis in the case, i.e., second-formant (F2) trajectories in /o/ tokens and mean fundamental frequency (f0). The tests were conducted under conditions similar to those in the case. The performance of the acoustic-phonetic-statistical system was very poor compared to that of an automatic system. Copyright © 2017 Elsevier B.V. All rights reserved.

  15. Automated smartphone audiometry: Validation of a word recognition test app.

    PubMed

    Dewyer, Nicholas A; Jiradejvong, Patpong; Henderson Sabes, Jennifer; Limb, Charles J

    2018-03-01

    Develop and validate an automated smartphone word recognition test. Cross-sectional case-control diagnostic test comparison. An automated word recognition test was developed as an app for a smartphone with earphones. English-speaking adults with recent audiograms and various levels of hearing loss were recruited from an audiology clinic and were administered the smartphone word recognition test. Word recognition scores determined by the smartphone app and the gold standard speech audiometry test performed by an audiologist were compared. Test scores for 37 ears were analyzed. Word recognition scores determined by the smartphone app and audiologist testing were in agreement, with 86% of the data points within a clinically acceptable margin of error and a linear correlation value between test scores of 0.89. The WordRec automated smartphone app accurately determines word recognition scores. 3b. Laryngoscope, 128:707-712, 2018. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.

  16. Herbalife hepatotoxicity: Evaluation of cases with positive reexposure tests.

    PubMed

    Teschke, Rolf; Frenzel, Christian; Schulze, Johannes; Schwarzenboeck, Alexander; Eickhoff, Axel

    2013-07-27

    To analyze the validity of applied test criteria and causality assessment methods in assumed Herbalife hepatotoxicity with positive reexposure tests. We searched the Medline database for suspected cases of Herbalife hepatotoxicity and retrieved 53 cases, including eight cases with a positive unintentional reexposure and a high causality level for Herbalife. First, analysis of these eight cases focused on the data quality of the positive reexposure cases, requiring a baseline value of alanine aminotransferase (ALT) < 5N before reexposure, where N is the upper limit of normal, and a doubling of the ALT value at reexposure compared to the baseline ALT value prior to reexposure. Second, the reported methods to assess causality in the eight cases were evaluated, and then the liver-specific Council for International Organizations of Medical Sciences (CIOMS) scale, validated for hepatotoxicity cases, was used for quantitative causality reevaluation. This scale consists of various specific elements with scores provided through the respective case data, and the sum of the scores yields a causality grading for each individual case of initially suspected hepatotoxicity. Details of the positive reexposure test conditions and their individual results were sparse in virtually all cases, since reexposures were unintentional and allowed only retrospective rather than prospective assessments. In 1/8 cases, the criteria for a positive reexposure were fulfilled, whereas in the remaining cases the reexposure test was classified as negative (n = 1) or the data were considered uninterpretable due to missing information needed to comply adequately with the criteria (n = 6). In virtually all assessed cases, liver-unspecific causality assessment methods were applied rather than a liver-specific method such as the CIOMS scale. Using this scale, causality gradings for Herbalife in these eight cases were probable (n = 1), unlikely (n = 4), and excluded (n = 3). Confounding variables included low data quality, alternative diagnoses, poor exclusion of important other causes, and comedication with drugs and herbs in 6/8 cases. More specifically, problems were evident in some cases regarding temporal association, daily doses, exact start and end dates of product use, actual data of laboratory parameters such as ALT, and exact dechallenge characteristics. Shortcomings included incomplete exclusion of hepatitis A-C, cytomegalovirus, and Epstein-Barr virus infection, with only globally presented or lacking parameters. Hepatitis E virus infection was considered in one single patient and found positive; infections by herpes simplex virus and varicella zoster virus were excluded in none. Only one case fulfilled positive reexposure test criteria in initially assumed Herbalife hepatotoxicity, with lower CIOMS-based causality gradings for the other cases than hitherto proposed.

  17. A Performance Management Framework for Civil Engineering

    DTIC Science & Technology

    1990-09-01

    cultural change. A non-equivalent control group design was chosen to augment the case analysis. Figure 3.18 shows the form of the quasi-experiment. The...The non-equivalent control group design controls the following obstacles to internal validity: history, maturation, testing, and instrumentation. The...and Stanley, 1963:48,50) Table 7. Validity of Quasi-Experiment The non-equivalent control group experimental design controls the following obstacles to

  18. Challenges in Rotorcraft Acoustic Flight Prediction and Validation

    NASA Technical Reports Server (NTRS)

    Boyd, D. Douglas, Jr.

    2003-01-01

    Challenges associated with rotorcraft acoustic flight prediction and validation are examined. First, an outline of a state-of-the-art rotorcraft aeroacoustic prediction methodology is presented. Components including rotorcraft aeromechanics, high resolution reconstruction, and rotorcraft acoustic prediction are discussed. Next, to illustrate challenges and issues involved, a case study is presented in which an analysis of flight data from a specific XV-15 tiltrotor acoustic flight test is discussed in detail. Issues related to validation of methodologies using flight test data are discussed. Primary flight parameters such as velocity, altitude, and attitude are discussed and compared for repeated flight conditions. Other measured steady state flight conditions are examined for consistency and steadiness. A representative example prediction is presented and suggestions are made for future research.

  19. Turbine-99 unsteady simulations - Validation

    NASA Astrophysics Data System (ADS)

    Cervantes, M. J.; Andersson, U.; Lövgren, H. M.

    2010-08-01

    The Turbine-99 test case, a Kaplan draft tube model, aimed to determine the state of the art in draft tube simulation. Three workshops were organized on the matter in 1999, 2001, and 2005, where the geometry and experimental data were provided as boundary conditions to the participants. Since the last workshop, computational power and flow modelling have developed, and the available data have been completed with unsteady pressure measurements and phase-resolved velocity measurements in the cone. This new set of data, together with the corresponding phase-resolved velocity boundary conditions, offers new possibilities to validate unsteady numerical simulations of a Kaplan draft tube. The present work presents simulations of the Turbine-99 test case with time-dependent, angular-resolved inlet velocity boundary conditions. Different grids and time steps are investigated. The results are compared to experimental time-dependent pressure and velocity measurements.

  20. Validating emotional attention regulation as a component of emotional intelligence: A Stroop approach to individual differences in tuning in to and out of nonverbal cues.

    PubMed

    Elfenbein, Hillary Anger; Jang, Daisung; Sharma, Sudeep; Sanchez-Burks, Jeffrey

    2017-03-01

    Emotional intelligence (EI) has captivated researchers and the public alike, but it has been challenging to establish its components as objective abilities. Self-report scales lack divergent validity from personality traits, and few ability tests have objectively correct answers. We adapt the Stroop task to introduce a new facet of EI called emotional attention regulation (EAR), which involves focusing emotion-related attention for the sake of information processing rather than for the sake of regulating one's own internal state. EAR includes 2 distinct components. First, tuning in to nonverbal cues involves identifying nonverbal cues while ignoring alternate content, that is, emotion recognition under conditions of distraction by competing stimuli. Second, tuning out of nonverbal cues involves ignoring nonverbal cues while identifying alternate content, that is, the ability to interrupt emotion recognition when needed to focus attention elsewhere. An auditory test of valence included positive and negative words spoken in positive and negative vocal tones. A visual test of approach-avoidance included green- and red-colored facial expressions depicting happiness and anger. The error rates for incongruent trials met the key criteria for establishing the validity of an EI test, in that the measure demonstrated test-retest reliability, convergent validity with other EI measures, divergent validity from factors such as general processing speed and, for the most part, personality, and predictive validity, in this case for well-being. By demonstrating that facets of EI can be validly theorized and empirically assessed, the results also speak to the validity of EI more generally. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  1. Approaches to Validation of Models for Low Gravity Fluid Behavior

    NASA Technical Reports Server (NTRS)

    Chato, David J.; Marchetta, Jeffery; Hochstein, John I.; Kassemi, Mohammad

    2005-01-01

    This paper details the authors' experiences with the validation of computer models to predict low-gravity fluid behavior. It reviews the literature of low-gravity fluid behavior as a starting point for developing a baseline set of test cases. It examines the authors' attempts to validate their models against these cases and the issues they encountered. The main issues are that: most of the data are described by empirical correlations rather than fundamental relations; detailed measurements of the flow field have not been made; free-surface shapes are observed, but through thick plastic cylinders, and are therefore subject to a great deal of optical distortion; and heat transfer process time constants are on the order of minutes to days, while the zero-gravity time available has been only seconds.

  2. Real-time Raman spectroscopy for automatic in vivo skin cancer detection: an independent validation.

    PubMed

    Zhao, Jianhua; Lui, Harvey; Kalia, Sunil; Zeng, Haishan

    2015-11-01

    In a recent study, we demonstrated that real-time Raman spectroscopy could be used for skin cancer diagnosis. As a translational study, the objective of this study is to validate the previous findings through a completely independent clinical test. In total, 645 confirmed cases were included in the analysis, including a cohort of 518 cases from a previous study and an independent cohort of 127 new cases. Multivariate statistical data analyses, including principal component with general discriminant analysis (PC-GDA) and partial least squares (PLS), were used separately for lesion classification and generated similar results. When the previous cohort (n = 518) was used for training and the new cohort (n = 127) for testing, the area under the receiver operating characteristic curve (ROC AUC) was found to be 0.889 (95% CI 0.834-0.944; PLS); when the two cohorts were combined, the ROC AUC was 0.894 (95% CI 0.870-0.918; PLS) with the narrowest confidence intervals. Both analyses were comparable to the previous findings, where the ROC AUC was 0.896 (95% CI 0.846-0.946; PLS). The independent study validates that real-time Raman spectroscopy can be used for automatic in vivo skin cancer diagnosis with good accuracy.
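    The ROC AUC reported in this record can be interpreted via the rank-sum (Mann-Whitney) identity: it equals the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative case. A sketch with synthetic scores, for illustration only (not the study's data):

```python
# ROC AUC via the Mann-Whitney identity: the fraction of (positive, negative)
# pairs in which the positive case scores higher (ties count half).
def roc_auc(pos_scores, neg_scores):
    wins = sum((p > n) + 0.5 * (p == n)
               for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Synthetic classifier outputs: higher means "more likely cancer".
pos = [0.9, 0.8, 0.75, 0.6]   # hypothetical scores for malignant lesions
neg = [0.7, 0.4, 0.3, 0.2]    # hypothetical scores for benign lesions
print(roc_auc(pos, neg))      # 0.9375
```

    An AUC near 0.89, as reported, means a randomly chosen malignant lesion outscores a randomly chosen benign one about 89% of the time.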

  3. Strengthening the SDP Relaxation of AC Power Flows with Convex Envelopes, Bound Tightening, and Valid Inequalities

    DOE PAGES

    Coffrin, Carleton James; Hijazi, Hassan L; Van Hentenryck, Pascal R

    2016-12-01

    This work revisits the Semidefinite Programming (SDP) relaxation of the AC power flow equations in light of recent results illustrating the benefits of bounds propagation, valid inequalities, and the Convex Quadratic (QC) relaxation. By integrating all of these results into the SDP model, a new hybrid relaxation is proposed, which combines the benefits of all of these recent works. This strengthened SDP formulation is evaluated on 71 AC Optimal Power Flow test cases from the NESTA archive and is shown to have an optimality gap of less than 1% on 63 cases. This new hybrid relaxation closes 50% of the open cases considered, leaving only 8 for future investigation.

  4. Real time test bed development for power system operation, control and cyber security

    NASA Astrophysics Data System (ADS)

    Reddi, Ram Mohan

    The operation and control of the power system in an efficient way is important in order to keep the system secure, reliable, and economical. With advancements in the smart grid, several new algorithms have been developed for improved operation and control. These algorithms need to be extensively tested and validated in real time before being applied to the real electric power grid. This work focuses on the development of a real-time test bed for testing and validating power system control algorithms, hardware devices, and cyber security vulnerabilities. The test bed utilizes several hardware components, including relays, phasor measurement units, a phasor data concentrator, programmable logic controllers, and several software tools. The current work also integrates a historian for power system monitoring and data archiving. Finally, two different power system test cases are simulated to demonstrate the applications of the developed test bed. The developed test bed can also be used for power system education.

  5. Pseudoisochromatic test plate colour representation dependence on printing technology

    NASA Astrophysics Data System (ADS)

    Luse, K.; Fomins, S.; Ozolinsh, M.

    2012-08-01

    The aim of the study is to determine the best printing technology for the creation of colour vision deficiency tests. Valid tests for protanopia and deuteranopia were created from perceived colour matching experiments performed on printed colour samples by colour deficient individuals. A calibrated Epson Stylus Pro 7800 printer was used for ink prints and a Noritsu HD 3701 digital printer for photographic prints. Multispectral imagery (by a tunable liquid crystal filter system, CRI Nuance Vis 07) data analysis shows that in the case of ink prints, the measured pixel colour coordinate dispersion (in the CIE xy colour diagram) of similar colour arrays is smaller than in the case of photographic printing. The print quality in terms of colour coordinate dispersion for the printing methods used is much higher than in the case of commercially available colour vision deficiency tests.

  6. Reliability and Model Fit

    ERIC Educational Resources Information Center

    Stanley, Leanne M.; Edwards, Michael C.

    2016-01-01

    The purpose of this article is to highlight the distinction between the reliability of test scores and the fit of psychometric measurement models, reminding readers why it is important to consider both when evaluating whether test scores are valid for a proposed interpretation and/or use. It is often the case that an investigator judges both the…

  7. Examining the Return on Investment of Test and Evaluation

    DTIC Science & Technology

    2015-03-26

    Problem Discovery Cases Observed in DOT&E Oversight Programs ............... 4 Figure 2. DoD T&E Organizational Structure...11 Figure 3. Product Maturity Levels Commercial Firms Seek to Validate ........................ 23 Figure 4 ...beginning in its fiscal year (FY) 2011 report, the Director, Operational Test and Evaluation (DOT&E), started reporting significant issues observed

  8. The Effect of Stakes on Accountability Test Scores and Pass Rates

    ERIC Educational Resources Information Center

    Steedle, Jeffrey T.; Grochowalski, Joseph

    2017-01-01

    Students may not fully demonstrate their knowledge and skills on accountability tests if there are no stakes attached to individual performance. In that case, assessment results may not accurately reflect student achievement, so the validity of score interpretations and uses suffers. For this study, matched samples of students taking state…

  9. Crypto-Giardia antigen rapid test versus conventional modified Ziehl-Neelsen acid fast staining method for diagnosis of cryptosporidiosis.

    PubMed

    Zaglool, Dina Abdulla Muhammad; Mohamed, Amr; Khodari, Yousif Abdul Wahid; Farooq, Mian Usman

    2013-03-01

    To evaluate the validity of the Crypto-Giardia antigen rapid test (CA-RT) in comparison with the conventional modified Ziehl-Neelsen acid fast (MZN-AF) staining method for the diagnosis of cryptosporidiosis. Fifteen preserved stool samples from previously confirmed infections were used as positive controls and 40 stool samples from healthy people were used as negative controls. A total of 85 stool samples were collected from patients with suspected cryptosporidiosis over 6 months, from January till June 2011. The study was conducted in the department of parasitology, central laboratory, Alnoor Specialist Hospital, Makkah, Saudi Arabia. All samples were subjected to CA-RT and the conventional MZN-AF staining method. Validation parameters including sensitivity (SN), specificity (SP), accuracy index (AI), positive predictive value (PPV), and negative predictive value (NPV) were evaluated for both tests. Out of 15 positive controls, CA-RT detected 13 (86.7%) while MZN-AF detected 11 (73.3%) positive cases. CA-RT detected no positive case among the 40 normal controls, but MZN-AF detected 2 (5%) as positive. Based on the results, the SN, SP, AI, PPV, and NPV were higher for CA-RT than for the MZN-AF staining method, i.e., 86.7% vs. 73.3%, 100% vs. 95%, 96.4% vs. 89.1%, 100% vs. 84.6%, and 95.2% vs. 90.5%, respectively. Out of a total of 85 suspected specimens, CA-RT detected 7 (8.2%) and MZN-AF detected 6 (7.1%) cases as positive. The CA-RT immunoassay is more valid and reliable than the MZN-AF staining method. Copyright © 2013 Hainan Medical College. Published by Elsevier B.V. All rights reserved.
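    The head-to-head percentages above follow from the control-sample counts (15 positive and 40 negative controls); a sketch that reproduces both columns:

```python
# Validation parameters for the two methods from the control counts reported
# above: CA-RT found 13/15 positives with no false positives; MZN-AF found
# 11/15 positives with 2 false positives among 40 negative controls.
def metrics(tp, fn, tn, fp):
    return {
        "SN": tp / (tp + fn),
        "SP": tn / (tn + fp),
        "AI": (tp + tn) / (tp + fn + tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }

ca_rt = metrics(tp=13, fn=2, tn=40, fp=0)
mzn_af = metrics(tp=11, fn=4, tn=38, fp=2)

for name, m in (("CA-RT", ca_rt), ("MZN-AF", mzn_af)):
    print(name, " ".join(f"{k}={v:.1%}" for k, v in m.items()))
# CA-RT  SN=86.7% SP=100.0% AI=96.4% PPV=100.0% NPV=95.2%
# MZN-AF SN=73.3% SP=95.0%  AI=89.1% PPV=84.6%  NPV=90.5%
```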

  10. [An instrument for assessing clinical aptitude in cervicovaginitis in the family medicine practice].

    PubMed

    Arrieta-Pérez, Raúl Tomás; Lona-Calixto, Beatriz

    2011-01-01

    Cervicovaginitis is one of the twelve most frequent causes of consultation in primary care medicine; thus, the family physician must be able to identify and treat it. The objective was to validate an instrument constructed to measure clinical aptitude in cervicovaginitis. A cross-sectional, descriptive, prolective study was carried out. An instrument with five clinical cases was constructed, with seven indicators and response options of true, false, and I do not know. Content validity was established by three family physicians and a gynecologist with experience in education. Reliability was determined by means of the Kuder-Richardson formula 20, using the results obtained in a pilot test of 50 family medicine residents. The instrument comprised five clinical cases with 140 items distributed among seven indicators, 20 items per indicator, with a total of 70 true answers and 70 false answers; seven categories for the degree of clinical aptitude were established. The reliability of the instrument was 0.81. The instrument is valid and reliable for identifying the clinical aptitude of the family physician in cervicovaginitis.
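    The Kuder-Richardson formula 20 used for the reliability estimate is the standard internal-consistency coefficient for dichotomously scored items. A minimal sketch with synthetic responses (the study's item data are not published in this record):

```python
# KR-20 = (k/(k-1)) * (1 - sum(p_j * q_j) / var(total scores)),
# where k is the number of items, p_j the proportion answering item j
# correctly, q_j = 1 - p_j, and var is the variance of total scores.
def kr20(responses):
    """responses: list of per-person lists of 0/1 item scores."""
    n = len(responses)
    k = len(responses[0])
    p = [sum(person[j] for person in responses) / n for j in range(k)]
    pq = sum(pj * (1 - pj) for pj in p)
    totals = [sum(person) for person in responses]
    mean = sum(totals) / n
    var = sum((t - mean) ** 2 for t in totals) / n  # population variance
    return (k / (k - 1)) * (1 - pq / var)

# Synthetic 4-person, 4-item data for illustration only.
data = [[1, 1, 1, 0], [1, 1, 0, 0], [1, 0, 0, 0], [1, 1, 1, 1]]
print(round(kr20(data), 3))  # 0.667
```

    Coefficients at or above roughly 0.80, like the 0.81 reported, are conventionally taken to indicate acceptable reliability.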

  11. [Validation of three screening tests used for early detection of cervical cancer].

    PubMed

    Rodriguez-Reyes, Esperanza Rosalba; Cerda-Flores, Ricardo M; Quiñones-Pérez, Juan M; Cortés-Gutiérrez, Elva I

    2008-01-01

    To evaluate the validity (sensitivity, specificity, and accuracy) of three screening methods used in the early detection of cervical carcinoma versus the histopathology diagnosis. A selected sample of 107 women attended in the Opportune Detection of Cervicouterine Cancer Program at Hospital de Zona 46, Instituto Mexicano del Seguro Social, in Durango during 2003 was included. The Papanicolaou test, acetic acid test, molecular detection of human papillomavirus, and histopathology diagnosis were performed in all the patients at the time of the gynecological exam. The detection and typing of human papillomavirus were performed by polymerase chain reaction (PCR) and restriction fragment length polymorphism (RFLP) analysis. Histopathology diagnosis was considered the gold standard. The evaluation of validity was carried out by the Bayesian method for diagnostic tests. The positive cases for the acetic acid test, Papanicolaou, and PCR were 47, 22, and 19; the accuracy values were 0.70, 0.80, and 0.99, respectively. Since the molecular method showed the greatest validity in the early detection of cervical carcinoma, we consider its implementation in the Opportune Detection of Cervicouterine Cancer Program in Mexico to be of vital importance. However, in order to validate this conclusion, cross-sectional studies in different regions of the country must be carried out.

  12. Optimization and Validation of ELISA for Pre-Clinical Trials of Influenza Vaccine.

    PubMed

    Mitic, K; Muhandes, L; Minic, R; Petrusic, V; Zivkovic, I

    2016-01-01

    Testing of every new vaccine involves investigation of its immunogenicity, which is based on monitoring its ability to induce specific antibodies in animals. The fastest and most sensitive method used for this purpose is the enzyme-linked immunosorbent assay (ELISA). However, commercial ELISA kits with whole influenza virus antigens are not available on the market, and it is therefore essential to establish an adequate assay for testing influenza virus-specific antibodies. We developed an ELISA with whole influenza virus strains for the 2011/2012 season as antigens and validated it by checking its specificity, accuracy, linearity, range, precision, and sensitivity. The results show that we developed a high-quality ELISA that can be used to test the immunogenicity of newly produced seasonal or pandemic vaccines in mice. The pre-existence of a validated ELISA enables shortening the time from vaccine production to its use in patients, which is particularly important in the case of a pandemic.

  13. Validation of the state version of the Self-Statement during Public Speaking Scale.

    PubMed

    Osório, Flávia L; Crippa, José Alexandre S; Loureiro, Sonia Regina

    2013-03-01

    To adapt the trait version of the Self Statements during Public Speaking (SSPS) scale to a state version (SSPS-S) and to assess its discriminative validity for use in the Simulated Public Speaking Test (SPST). Subjects with and without social anxiety disorder (n = 45) were assessed while performing the SPST, a clinical-experimental model of anxiety with seven different phases. Alterations in negative self-assessment occurred with significant changes throughout the different phases of the procedure (p = .05). Non-cases presented significantly higher mean values of the SSPS-S in all phases of the procedure than cases (p < .01). Cases assessed themselves in a less positive and more negative manner during the SPST than did non-cases. SSPS-S is adequate for this assessment, especially its negative subscale, and shows good psychometric qualities.

  14. A Testing Framework for Critical Space SW

    NASA Astrophysics Data System (ADS)

    Fernandez, Ignacio; Di Cerbo, Antonio; Dehnhardt, Erik; Tipaldi, Massimo; Brünjes, Bernhard

    2015-09-01

    This paper describes a testing framework for critical space SW named the Technical Specification Validation Framework (TSVF). It provides a powerful and flexible means of testing and can be used throughout the SW test activities (test case specification & implementation, test execution, and test artifact analysis). In particular, tests can be run in an automated and/or step-by-step mode. The TSVF framework is currently used for the validation of the Satellite Control Software (SCSW), which runs on the Meteosat Third Generation (MTG) satellite on-board computer. The main purpose of the SCSW is to control the spacecraft along with its various subsystems (AOCS, Payload, Electrical Power, Telemetry Tracking & Command, etc.) in a way that guarantees a high degree of flexibility and autonomy. The TSVF framework serves the challenging needs of the SCSW project, where a plan-driven approach has been combined with an agile process in order to produce preliminary SW versions (with a reduced scope of implemented functionality) to fulfill the stakeholders' needs ([1]). The paper is organised as follows. Section 2 gives an overview of the TSVF architecture and its interfaces to the test bench, along with the technology used for its implementation. Section 3 describes the key elements of the XML-based language for test case implementation. Section 4 highlights the benefits compared to conventional test environments requiring manual test script development, and section 5 concludes the paper.

  15. Model-Based Verification and Validation of Spacecraft Avionics

    NASA Technical Reports Server (NTRS)

    Khan, M. Omair; Sievers, Michael; Standley, Shaun

    2012-01-01

    Verification and Validation (V&V) at JPL is traditionally performed on flight or flight-like hardware running flight software. For some time, the complexity of avionics has increased exponentially while the time allocated for system integration and associated V&V testing has remained fixed. There is an increasing need to perform comprehensive system level V&V using modeling and simulation, and to use scarce hardware testing time to validate models; the norm for thermal and structural V&V for some time. Our approach extends model-based V&V to electronics and software through functional and structural models implemented in SysML. We develop component models of electronics and software that are validated by comparison with test results from actual equipment. The models are then simulated enabling a more complete set of test cases than possible on flight hardware. SysML simulations provide access and control of internal nodes that may not be available in physical systems. This is particularly helpful in testing fault protection behaviors when injecting faults is either not possible or potentially damaging to the hardware. We can also model both hardware and software behaviors in SysML, which allows us to simulate hardware and software interactions. With an integrated model and simulation capability we can evaluate the hardware and software interactions and identify problems sooner. The primary missing piece is validating SysML model correctness against hardware; this experiment demonstrated such an approach is possible.

  16. Validation of a case definition for leptospirosis diagnosis in patients with acute severe febrile disease admitted in reference hospitals at the State of Pernambuco, Brazil.

    PubMed

    Albuquerque Filho, Alfredo Pereira Leite de; Araújo, Jéssica Guido de; Souza, Inacelli Queiroz de; Martins, Luciana Cardoso; Oliveira, Marta Iglis de; Silva, Maria Jesuíta Bezerra da; Montarroyos, Ulisses Ramos; Miranda Filho, Demócrito de Barros

    2011-01-01

    Leptospirosis is often mistaken for other acute febrile illnesses because of its nonspecific presentation. Bacteriologic, serologic, and molecular methods have several limitations for early diagnosis: technical complexity, low availability, low sensitivity in early disease, or high cost. This study aimed to validate a case definition, based on simple clinical and laboratory tests, that is intended for bedside diagnosis of leptospirosis among hospitalized patients. Adult patients, admitted to two reference hospitals in Recife, Brazil, with a febrile illness of less than 21 days and with a clinical suspicion of leptospirosis, were included to test a case definition comprising ten clinical and laboratory criteria. Leptospirosis was confirmed or excluded by a composite reference standard (microscopic agglutination test, ELISA, and blood culture). Test properties were determined for each cutoff number of the criteria from the case definition. Ninety-seven patients were included; 75 had confirmed leptospirosis and 22 did not. The mean number of criteria from the case definition that were fulfilled was 7.8±1.2 for confirmed leptospirosis and 5.9±1.5 for non-leptospirosis patients (p<0.0001). The best combination of sensitivity (85.3%) and specificity (68.2%) was found with a cutoff of 7 or more criteria, reaching positive and negative predictive values of 90.1% and 57.7%, respectively; accuracy was 81.4%. The case definition, for a cutoff of at least 7 criteria, reached average sensitivity and specificity, but with a high positive predictive value. Its simplicity and low cost make it useful for rapid bedside leptospirosis diagnosis in Brazilian hospitalized patients with acute severe febrile disease.
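    The reported predictive values at the cutoff of 7 criteria can be recovered from the counts implied by the stated sensitivity and specificity (roughly 64 of 75 confirmed cases and 15 of 22 non-cases correctly classified); a sketch assuming those rounded counts:

```python
# Predictive values at the cutoff of >= 7 criteria, from the counts implied
# by the reported sensitivity (about 64/75) and specificity (about 15/22).
tp, fn = 64, 11   # confirmed leptospirosis: 75 patients total
tn, fp = 15, 7    # non-leptospirosis: 22 patients total

ppv = tp / (tp + fp)
npv = tn / (tn + fn)
accuracy = (tp + tn) / (tp + fn + tn + fp)

print(f"PPV {ppv:.1%}, NPV {npv:.1%}, accuracy {accuracy:.1%}")
# PPV 90.1%, NPV 57.7%, accuracy 81.4%
```

    The agreement with the quoted 90.1%, 57.7%, and 81.4% confirms the internal consistency of the record's figures.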

  17. The Efficacy of Mammography Boot Camp to Improve the Performance of Radiologists

    PubMed Central

    Lee, Eun Hye; Jung, Seung Eun; Kim, You Me; Choi, Nami

    2014-01-01

    Objective To evaluate the efficacy of a mammography boot camp (MBC) to improve radiologists' performance in interpreting mammograms in the National Cancer Screening Program (NCSP) in Korea. Materials and Methods Between January and July of 2013, 141 radiologists were invited to a 3-day educational program composed of lectures and group practice readings using 250 digital mammography cases. The radiologists' performance in interpreting mammograms was evaluated using a pre- and post-camp test set of 25 cases validated prior to the camp by experienced breast radiologists. Factors affecting the radiologists' performance, including age, type of attending institution, and type of test set cases, were analyzed. Results The average scores of the pre- and post-camp tests were 56.0 ± 12.2 and 78.3 ± 9.2, respectively (p < 0.001). The post-camp test scores were higher than the pre-camp test scores for all age groups and all types of attending institutions (p < 0.001). The rate of incorrect answers in the post-camp test decreased compared to the pre-camp test for all suspicious cases, but not for negative cases (p > 0.05). Conclusion The MBC improves radiologists' performance in interpreting mammograms irrespective of age and type of attending institution. Improved interpretation is observed for suspicious cases, but not for negative cases. PMID:25246818

  18. Development of Detonation Modeling Capabilities for Rocket Test Facilities: Hydrogen-Oxygen-Nitrogen Mixtures

    NASA Technical Reports Server (NTRS)

    Allgood, Daniel C.

    2016-01-01

    The objective of the presented work was to develop validated computational fluid dynamics (CFD) based methodologies for predicting propellant detonations and their associated blast environments. Applications of interest were scenarios relevant to rocket propulsion test and launch facilities. All model development was conducted within the framework of the Loci/CHEM CFD tool due to its reliability and robustness in predicting high-speed combusting flow-fields associated with rocket engines and plumes. During the course of the project, verification and validation studies were completed for hydrogen-fueled detonation phenomena such as shock-induced combustion, confined detonation waves, vapor cloud explosions, and deflagration-to-detonation transition (DDT) processes. The DDT validation cases included predicting flame acceleration mechanisms associated with turbulent flame-jets and flow-obstacles. Excellent agreement between test data and model predictions was observed. The proposed CFD methodology was then successfully applied to model a detonation event that occurred during liquid oxygen/gaseous hydrogen rocket diffuser testing at NASA Stennis Space Center.

  19. Validation of catchment models for predicting land-use and climate change impacts. 2. Case study for a Mediterranean catchment

    NASA Astrophysics Data System (ADS)

    Parkin, G.; O'Donnell, G.; Ewen, J.; Bathurst, J. C.; O'Connell, P. E.; Lavabre, J.

    1996-02-01

    Validation methods commonly used to test catchment models are not capable of demonstrating a model's fitness for making predictions for catchments where the catchment response is not known (including hypothetical catchments, and future conditions of existing catchments which are subject to land-use or climate change). This paper describes the first use of a new method of validation (Ewen and Parkin, 1996. J. Hydrol., 175: 583-594) designed to address these types of application; the method involves making 'blind' predictions of selected hydrological responses which are considered important for a particular application. SHETRAN (a physically based, distributed catchment modelling system) is tested on a small Mediterranean catchment. The test involves quantification of the uncertainty in four predicted features of the catchment response (continuous hydrograph, peak discharge rates, monthly runoff, and total runoff), and comparison of observations with the predicted ranges for these features. The results of this test are considered encouraging.

  20. Attitudes about Advances in Sweat Patch Testing in Drug Courts: Insights from a Case Study in Southern California

    ERIC Educational Resources Information Center

    Polzer, Katherine

    2010-01-01

    Drug courts are reinventing the drug testing framework by experimenting with new methods, including use of the sweat patch. The sweat patch is a band-aid like strip used to monitor drug court participants. The validity and reliability of the sweat patch as an effective testing method was examined, as well as the effectiveness, meaning how likely…

  1. Developmental validation of an X-Insertion/Deletion polymorphism panel and application in HAN population of China.

    PubMed

    Zhang, Suhua; Sun, Kuan; Bian, Yingnan; Zhao, Qi; Wang, Zheng; Ji, Chaoneng; Li, Chengtao

    2015-12-14

    InDels are short-length polymorphisms characterized by low mutation rates, high inter-population diversity, a short-amplicon strategy, and simplicity of laboratory analysis. This work describes the developmental validation of an X-InDel panel amplifying 18 bi-allelic markers and Amelogenin in one single PCR system. Developmental validation indicated that this novel panel was reproducible, accurate, sensitive, and robust for forensic application. Sensitivity testing showed that a full profile was obtainable with as little as 125 pg of human DNA, with intra-locus balance above 70%. Specificity was demonstrated by the lack of cross-reactivity with a variety of commonly encountered animal species and microorganisms. For stability testing in cases of PCR inhibition, full profiles were obtained with hematin (≤1000 μM) and humic acid (≤150 ng/μL). In the forensic investigation of the 18 X-InDels in the HAN population of China, no locus deviated from Hardy-Weinberg equilibrium or showed linkage disequilibrium. Since the loci are independent of each other, the CDPfemale was 0.999999726 and the CDPmale was 0.999934223. The forensic parameters suggest that this X-InDel panel is polymorphic and informative, providing valuable X-linked information for deficient relationship cases where autosomal markers are uninformative.
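
    For independent loci, a combined discrimination power (CDP) such as the one quoted above is one minus the product of the per-locus non-discrimination probabilities. A minimal sketch with hypothetical per-locus discrimination power values (the abstract reports only the combined figures, not per-locus values):

```python
def combined_discrimination_power(locus_dps):
    """CDP across independent loci: 1 minus the product of the
    per-locus non-discrimination probabilities (1 - DP_i)."""
    remaining = 1.0
    for dp in locus_dps:
        remaining *= 1.0 - dp
    return 1.0 - remaining

# Hypothetical per-locus discrimination powers (illustrative only).
cdp = combined_discrimination_power([0.55, 0.60, 0.58, 0.62])
print(f"CDP = {cdp:.6f}")  # 0.971272
```

    Adding more (or more polymorphic) loci drives the residual product toward zero, which is why an 18-locus panel can reach CDP values of 0.9999 and above.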

  2. Developmental validation of an X-Insertion/Deletion polymorphism panel and application in HAN population of China

    PubMed Central

    Zhang, Suhua; Sun, Kuan; Bian, Yingnan; Zhao, Qi; Wang, Zheng; Ji, Chaoneng; Li, Chengtao

    2015-01-01

    InDels are short-length polymorphisms characterized by low mutation rates, high inter-population diversity, a short-amplicon strategy, and simplicity of laboratory analysis. This work describes the developmental validation of an X-InDel panel amplifying 18 bi-allelic markers and Amelogenin in one single PCR system. Developmental validation indicated that this novel panel was reproducible, accurate, sensitive, and robust for forensic application. Sensitivity testing showed that a full profile was obtainable with as little as 125 pg of human DNA, with intra-locus balance above 70%. Specificity was demonstrated by the lack of cross-reactivity with a variety of commonly encountered animal species and microorganisms. For stability testing in cases of PCR inhibition, full profiles were obtained with hematin (≤1000 μM) and humic acid (≤150 ng/μL). In the forensic investigation of the 18 X-InDels in the HAN population of China, no locus deviated from Hardy–Weinberg equilibrium or showed linkage disequilibrium. Since the loci are independent of each other, the CDPfemale was 0.999999726 and the CDPmale was 0.999934223. The forensic parameters suggest that this X-InDel panel is polymorphic and informative, providing valuable X-linked information for deficient relationship cases where autosomal markers are uninformative. PMID:26655948

  3. A model of scientific attitudes assessment by observation in physics learning based scientific approach: case study of dynamic fluid topic in high school

    NASA Astrophysics Data System (ADS)

    Yusliana Ekawati, Elvin

    2017-01-01

    This study aimed to produce a model of scientific attitude assessment by observation for physics learning based on a scientific approach (a case study of the dynamic fluid topic in high school). The development of the instrument in this study adapted the Plomp model; the procedure included initial investigation, design, construction, testing, evaluation, and revision. The test was carried out in Surakarta, and the data obtained were analyzed using the Aiken formula to determine the content validity of the instrument, Cronbach’s alpha to determine the reliability of the instrument, and confirmatory factor analysis with the LISREL 8.50 program to assess construct validity. The results of this research were conceptual models, instruments, and guidelines for scientific attitude assessment by observation. The construct of the assessment instrument includes components of curiosity, objectivity, suspended judgment, open-mindedness, honesty, and perseverance. The construct validity of the instrument met the required criteria (factor loadings > 0.3). The reliability of the model was good, with an alpha value of 0.899 (> 0.7). The test showed that the theoretical model fits and is supported by the empirical data: p-value 0.315 (≥ 0.05), RMSEA 0.027 (≤ 0.08).
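
    Cronbach's alpha, used above as the reliability measure, compares the sum of the item variances with the variance of the total scores. A minimal sketch with a hypothetical 4-respondent, 3-item rating matrix (not the study's data):

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """Cronbach's alpha for a respondents x items score matrix.
    The variance ratio is unchanged whether population or sample
    variances are used, as long as the choice is consistent."""
    k = len(scores[0])
    item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - sum(item_vars) / total_var)

# Hypothetical observation ratings: 4 respondents x 3 items.
scores = [
    [2, 3, 3],
    [4, 4, 5],
    [3, 3, 4],
    [5, 4, 5],
]
alpha = cronbach_alpha(scores)  # 12/13, about 0.923 for this toy data
```

    Values above roughly 0.7 are conventionally read as acceptable internal consistency, which is the threshold the abstract cites for its 0.899.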

  4. Casing Tier 529.887-020; Sausage Packer; Skin Peeler 525.884-050; Sliced-Bacon Packer II; Packer 920.887-114 -- Technical Report on Standardization of the General Aptitude Test Battery.

    ERIC Educational Resources Information Center

    Manpower Administration (DOL), Washington, DC. U.S. Training and Employment Service.

    The United States Training and Employment Service General Aptitude Test Battery (GATB), first published in 1947, has been included in a continuing program of research to validate the tests against success in many different occupations. The GATB consists of 12 tests which measure nine aptitudes: General Learning Ability; Verbal Aptitude; Numerical…

  5. Outcomes of Moral Case Deliberation - the development of an evaluation instrument for clinical ethics support (the Euro-MCD)

    PubMed Central

    2014-01-01

    Background Clinical ethics support, in particular Moral Case Deliberation, aims to support health care providers to manage ethically difficult situations. However, there is a lack of evaluation instruments regarding outcomes of clinical ethics support in general and regarding Moral Case Deliberation (MCD) in particular. There is also a lack of clarity and consensus regarding which MCD outcomes are beneficial. In addition, MCD outcomes might be context-sensitive. Against this background, there is a need for a standardised but flexible outcome evaluation instrument. The aim of this study was to develop a multi-contextual evaluation instrument measuring health care providers’ experiences and perceived importance of outcomes of Moral Case Deliberation. Methods A multi-item instrument for assessing outcomes of Moral Case Deliberation (MCD) was constructed through an iterative process, founded on a literature review and modified through a multistep review by ethicists and health care providers. The instrument measures perceived importance of outcomes before and after MCD, as well as experienced outcomes during MCD and in daily work. A purposeful sample of 86 European participants contributed to a Delphi panel and content validity testing. The Delphi panel (n = 13), consisting of ethicists and ethics researchers, participated in three Delphi rounds. Health care providers (n = 73) participated in the content validity testing through ‘think-aloud’ interviews and a method using the Content Validity Index. Results The development process resulted in the European Moral Case Deliberation Outcomes Instrument (Euro-MCD), which consists of two sections, one to be completed before a participant’s first MCD and the other after completing multiple MCDs. The instrument contains a few open-ended questions and 26 specific items with a corresponding rating/response scale representing various MCD outcomes. The items were categorised into the following six domains: Enhanced emotional support, Enhanced collaboration, Improved moral reflexivity, Improved moral attitude, Improvement on organizational level, and Concrete results. Conclusions A tentative instrument has been developed that seems to cover the main outcomes of Moral Case Deliberation. The next step will be to test the Euro-MCD in a field study. PMID:24712735

  6. Outcomes of moral case deliberation--the development of an evaluation instrument for clinical ethics support (the Euro-MCD).

    PubMed

    Svantesson, Mia; Karlsson, Jan; Boitte, Pierre; Schildman, Jan; Dauwerse, Linda; Widdershoven, Guy; Pedersen, Reidar; Huisman, Martijn; Molewijk, Bert

    2014-04-08

    Clinical ethics support, in particular Moral Case Deliberation, aims to support health care providers to manage ethically difficult situations. However, there is a lack of evaluation instruments regarding outcomes of clinical ethics support in general and regarding Moral Case Deliberation (MCD) in particular. There is also a lack of clarity and consensus regarding which MCD outcomes are beneficial. In addition, MCD outcomes might be context-sensitive. Against this background, there is a need for a standardised but flexible outcome evaluation instrument. The aim of this study was to develop a multi-contextual evaluation instrument measuring health care providers' experiences and perceived importance of outcomes of Moral Case Deliberation. A multi-item instrument for assessing outcomes of Moral Case Deliberation (MCD) was constructed through an iterative process, founded on a literature review and modified through a multistep review by ethicists and health care providers. The instrument measures perceived importance of outcomes before and after MCD, as well as experienced outcomes during MCD and in daily work. A purposeful sample of 86 European participants contributed to a Delphi panel and content validity testing. The Delphi panel (n = 13), consisting of ethicists and ethics researchers, participated in three Delphi rounds. Health care providers (n = 73) participated in the content validity testing through 'think-aloud' interviews and a method using the Content Validity Index. The development process resulted in the European Moral Case Deliberation Outcomes Instrument (Euro-MCD), which consists of two sections, one to be completed before a participant's first MCD and the other after completing multiple MCDs. The instrument contains a few open-ended questions and 26 specific items with a corresponding rating/response scale representing various MCD outcomes. The items were categorised into the following six domains: Enhanced emotional support, Enhanced collaboration, Improved moral reflexivity, Improved moral attitude, Improvement on organizational level, and Concrete results. A tentative instrument has been developed that seems to cover the main outcomes of Moral Case Deliberation. The next step will be to test the Euro-MCD in a field study.

  7. Validity and power of association testing in family-based sampling designs: evidence for and against the common wisdom.

    PubMed

    Knight, Stacey; Camp, Nicola J

    2011-04-01

    Current common wisdom posits that association analyses using family-based designs have inflated type 1 error rates (if relationships are ignored) and that independent controls are more powerful than familial controls. We explore these suppositions. We show theoretically that family-based designs can have deflated type 1 error rates. Through simulation, we examine the validity and power of family designs for several scenarios: cases from randomly or selectively ascertained pedigrees; and familial or independent controls. Family structures considered are as follows: sibships, nuclear families, moderate-sized and extended pedigrees. Three methods were considered with the χ(2) test for trend: variance correction (VC), weighted (weights assigned to account for genetic similarity), and naïve (ignoring relatedness), as well as the Modified Quasi-likelihood Score (MQLS) test. Selectively ascertained pedigrees had similar levels of disease enrichment; random ascertainment had no such restriction. Data for 1,000 cases and 1,000 controls were created under the null and alternate models. The VC and MQLS methods were always valid. The naïve method was anti-conservative if independent controls were used and valid or conservative in designs with familial controls. The weighted association method was generally valid for independent controls, and was conservative for familial controls. With regard to power, independent controls were more powerful for small-to-moderate selectively ascertained pedigrees, but familial and independent controls were equivalent in the extended pedigrees and familial controls were consistently more powerful for all randomly ascertained pedigrees. These results suggest a more complex situation than previously assumed, which has important implications for study design and analysis. © 2011 Wiley-Liss, Inc.

  8. Feasibility of a Networked Air Traffic Infrastructure Validation Environment for Advanced NextGen Concepts

    NASA Technical Reports Server (NTRS)

    McCormack, Michael J.; Gibson, Alec K.; Dennis, Noah E.; Underwood, Matthew C.; Miller, Lana B.; Ballin, Mark G.

    2013-01-01

    Next Generation Air Transportation System (NextGen) applications reliant upon aircraft data links such as Automatic Dependent Surveillance-Broadcast (ADS-B) offer a sweeping modernization of the National Airspace System (NAS), but the aviation stakeholder community has not yet established a positive business case for equipage, and message content standards remain in flux. It is necessary to transition promising Air Traffic Management (ATM) Concepts of Operations (ConOps) from simulation environments to full-scale flight tests in order to validate user benefits and solidify message standards. However, flight tests are prohibitively expensive and message standards for Commercial-off-the-Shelf (COTS) systems cannot support many advanced ConOps. It is therefore proposed to simulate future aircraft surveillance and communications equipage and employ an existing commercial data link to exchange data during dedicated flight tests. This capability, referred to as the Networked Air Traffic Infrastructure Validation Environment (NATIVE), would emulate aircraft data links such as ADS-B using in-flight Internet and easily installed test equipment. By utilizing low-cost equipment that is easy to install and certify for testing, advanced ATM ConOps can be validated, message content standards can be solidified, and new standards can be established through full-scale flight trials without the need for expensive equipage or extensive flight test preparation. This paper presents results of a feasibility study of the NATIVE concept. To determine requirements, six NATIVE design configurations were developed for two NASA ConOps that rely on ADS-B. The performance characteristics of three existing in-flight Internet services were investigated to determine whether performance is adequate to support the concept. Next, a study of requisite hardware and software was conducted to examine whether and how the NATIVE concept might be realized. Finally, to determine a business case, economic factors were evaluated and a preliminary cost-benefit analysis was performed.

  9. Correlates of Incident Cognitive Impairment in the REasons for Geographic and Racial Differences in Stroke (REGARDS) Study

    PubMed Central

    Gillett, Sarah R.; Thacker, Evan L.; Letter, Abraham J.; McClure, Leslie A.; Wadley, Virginia G.; Unverzagt, Frederick W.; Kissela, Brett M.; Kennedy, Richard E.; Glasser, Stephen P.; Levine, Deborah A.; Cushman, Mary

    2015-01-01

    Objective To identify approximately 500 cases of incident cognitive impairment (ICI) in a large, national sample, adapting an existing cognitive test-based case definition, and to examine relationships of vascular risk factors with ICI. Method Participants were from the REGARDS study, a national sample of 30,239 African-American and white Americans. Participants included in this analysis had normal cognitive screening and no history of stroke at baseline, and at least one follow-up cognitive assessment with a three-test battery (TTB). Regression-based norms were applied to TTB scores to identify cases of ICI. Logistic regression was used to model associations with baseline vascular risk factors. Results We identified 495 participants with ICI out of 17,630 eligible participants. In multivariable modeling, income (OR 1.83, CI 1.27-2.62), stroke belt residence (OR 1.45, CI 1.18-1.78), history of transient ischemic attack (OR 1.90, CI 1.29-2.81), coronary artery disease (OR 1.32, CI 1.02-1.70), diabetes (OR 1.48, CI 1.17-1.87), obesity (OR 1.40, CI 1.05-1.86), and incident stroke (OR 2.73, CI 1.52-4.90) were associated with ICI. Conclusions We adapted a previously validated cognitive test-based case definition to identify cases of ICI. Many previously identified risk factors were associated with ICI, supporting the criterion-related validity of our definition. PMID:25978342

  10. Improving machine learning reproducibility in genetic association studies with proportional instance cross validation (PICV).

    PubMed

    Piette, Elizabeth R; Moore, Jason H

    2018-01-01

    Machine learning methods and conventions are increasingly employed for the analysis of large, complex biomedical data sets, including genome-wide association studies (GWAS). Reproducibility of machine learning analyses of GWAS can be hampered by biological and statistical factors, particularly so for the investigation of non-additive genetic interactions. Application of traditional cross validation to a GWAS data set may result in poor consistency between the training and testing data set splits due to an imbalance of the interaction genotypes relative to the data as a whole. We propose a new cross validation method, proportional instance cross validation (PICV), that preserves the original distribution of an independent variable when splitting the data set into training and testing partitions. We apply PICV to simulated GWAS data with epistatic interactions of varying minor allele frequencies and prevalences and compare performance to that of a traditional cross validation procedure in which individuals are randomly allocated to training and testing partitions. Sensitivity and positive predictive value are significantly improved across all tested scenarios for PICV compared to traditional cross validation. We also apply PICV to GWAS data from a study of primary open-angle glaucoma to investigate a previously reported interaction, which fails to replicate significantly; PICV, however, improves the consistency of testing and training results. Application of traditional machine learning procedures to biomedical data may require modifications to better suit intrinsic characteristics of the data, such as the potential for highly imbalanced genotype distributions in the case of epistasis detection. The reproducibility of genetic interaction findings can be improved by considering this variable imbalance in cross validation implementation, such as with PICV. This approach may be extended to problems in other domains in which imbalanced variable distributions are a concern.
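
    The core idea of a proportion-preserving split can be sketched as a per-label (stratified) partition. This is only a sketch of the idea behind PICV; the published procedure may differ in detail, and the genotype labels below are hypothetical:

```python
import random
from collections import defaultdict

def proportional_split(labels, test_frac=0.5, seed=0):
    """Split indices into train/test so that each label value keeps
    its original proportion in both partitions."""
    rng = random.Random(seed)
    by_label = defaultdict(list)
    for idx, lab in enumerate(labels):
        by_label[lab].append(idx)
    train, test = [], []
    for idxs in by_label.values():
        rng.shuffle(idxs)
        n_test = round(len(idxs) * test_frac)
        test.extend(idxs[:n_test])
        train.extend(idxs[n_test:])
    return sorted(train), sorted(test)

# Hypothetical, heavily imbalanced genotype labels (80/18/2).
labels = ["AA"] * 80 + ["Aa"] * 18 + ["aa"] * 2
train, test = proportional_split(labels, test_frac=0.5)
```

    With a purely random split, the rare "aa" genotype could easily land entirely in one partition; the stratified version guarantees both partitions see it in proportion.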

  11. A Direct Test of the Theory of Comparative Advantage: The Case of Japan.

    ERIC Educational Resources Information Center

    Bernhofen, Daniel M.; Brown, John C.

    2004-01-01

    We exploit Japan's sudden and complete opening up to international trade in the 1860s to test the empirical validity of one of the oldest and most fundamental propositions in economics: the theory of comparative advantage. Historical evidence supports the assertion that the characteristics of the Japanese economy at the time were compatible with…

  12. Hovering Dual-Spin Vehicle Groundwork for Bias Momentum Sizing Validation Experiment

    NASA Technical Reports Server (NTRS)

    Rothhaar, Paul M.; Moerder, Daniel D.; Lim, Kyong B.

    2008-01-01

    Angular bias momentum offers significant stability augmentation for hovering flight vehicles. The reliance of the vehicle on thrust vectoring for agility and disturbance rejection is greatly reduced with significant levels of stored angular momentum in the system. A methodical procedure for bias momentum sizing has been developed in previous studies. The current study provides groundwork for experimental validation of that method using an experimental vehicle called the Dual-Spin Test Device, a thrust-levitated platform. Using measured data, the vehicle's thrust vectoring units are modeled and a gust environment is designed and characterized. Control design is discussed. Preliminary experimental results of the vehicle constrained to three rotational degrees of freedom are compared to simulation for a case containing no bias momentum, to validate the simulation. A simulation of a bias momentum dominant case is presented.

  13. Comparative assessment of three standardized robotic surgery training methods.

    PubMed

    Hung, Andrew J; Jayaratna, Isuru S; Teruya, Kara; Desai, Mihir M; Gill, Inderbir S; Goh, Alvin C

    2013-10-01

    To evaluate three standardized robotic surgery training methods (inanimate, virtual reality, and in vivo) for their construct validity, and to explore the concept of cross-method validity, where the relative performance of each method is compared. Robotic surgical skills were prospectively assessed in 49 participating surgeons who were classified as follows: 'novice/trainee': urology residents, previous experience <30 cases (n = 38); and 'experts': faculty surgeons, previous experience ≥30 cases (n = 11). Three standardized, validated training methods were used: (i) structured inanimate tasks; (ii) virtual reality exercises on the da Vinci Skills Simulator (Intuitive Surgical, Sunnyvale, CA, USA); and (iii) a standardized robotic surgical task in a live porcine model with performance graded by the Global Evaluative Assessment of Robotic Skills (GEARS) tool. A Kruskal-Wallis test was used to evaluate performance differences between novices and experts (construct validity). Spearman's correlation coefficient (ρ) was used to measure the association of performance across inanimate, simulation and in vivo methods (cross-method validity). Novice and expert surgeons had previously performed a median (range) of 0 (0-20) and 300 (30-2000) robotic cases, respectively (P < 0.001). Construct validity: experts consistently outperformed residents with all three methods (P < 0.001). Cross-method validity: overall performance of inanimate tasks significantly correlated with virtual reality robotic performance (ρ = -0.7, P < 0.001) and in vivo robotic performance based on GEARS (ρ = -0.8, P < 0.0001). Virtual reality performance and in vivo tissue performance were also found to be strongly correlated (ρ = 0.6, P < 0.001). We propose the novel concept of cross-method validity, which may provide a method of evaluating the relative value of various forms of skills education and assessment. We externally confirmed the construct validity of each featured training tool. © 2013 BJU International.

  14. Validity of injecting drug users' self report of hepatitis A, B, and C.

    PubMed

    Schlicting, Erin G; Johnson, Mark E; Brems, Christiane; Wells, Rebecca S; Fisher, Dennis G; Reynolds, Grace

    2003-01-01

    To test the validity of drug users' self-reports of diseases associated with drug use, in this case hepatitis A, B, and C, injecting drug users (n = 653) in Anchorage, Alaska, were recruited and asked whether they had been diagnosed previously with hepatitis A, B, and/or C. These self-report data were compared to total hepatitis A antibody, hepatitis B core antibody, and hepatitis C antibody seromarkers as a means of determining the validity of the self-reported information. Criteria for inclusion were being at least 18 years old; testing positive on urinalysis for cocaine metabolites, amphetamine, or morphine; and having visible signs of injection (track marks). Measurements consisted of serological testing for hepatitis A, B, and C. Findings indicate high specificity, low sensitivity, and low kappa coefficients for all three self-report measures. Subgroup analyses revealed significant differences in sensitivity associated with previous substance abuse treatment experience for hepatitis B self-report and with gender for hepatitis C self-report. Given the low sensitivity, the validity of drug users' self-reported information on hepatitis should be considered with caution.

  15. Testing for qualitative heterogeneity: An application to composite endpoints in survival analysis.

    PubMed

    Oulhaj, Abderrahim; El Ghouch, Anouar; Holman, Rury R

    2017-01-01

    Composite endpoints are frequently used in clinical outcome trials to increase the number of observed events and thereby statistical power. A key requirement for a composite endpoint to be meaningful is the absence of so-called qualitative heterogeneity, to ensure a valid overall interpretation of any treatment effect identified. Qualitative heterogeneity occurs when individual components of a composite endpoint exhibit differences in the direction of a treatment effect. In this paper, we develop a general statistical method to test for qualitative heterogeneity, that is, to test whether a given set of parameters share the same sign. This method is based on the intersection-union principle and, provided that the sample size is large, is valid whatever model is used for parameter estimation. We propose two versions of our testing procedure, one based on random sampling from a Gaussian distribution and another based on bootstrapping. Our work covers both the case of completely observed data and the case where some observations are censored, which is an important issue in many clinical trials. We evaluated the size and power of our proposed tests by carrying out extensive Monte Carlo simulations in the case of multivariate time-to-event data. The simulations were designed under a variety of conditions on dimensionality, censoring rate, sample size and correlation structure. Our testing procedure showed very good performance in terms of statistical power and type I error. The proposed test was applied to a data set from a single-center, randomized, double-blind controlled trial in the area of Alzheimer's disease.
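
    Under the intersection-union principle described above, "all parameters positive" is concluded only if every component test rejects, so the overall p-value is the maximum of the component one-sided p-values. A minimal large-sample sketch with hypothetical effect estimates and standard errors (not the authors' Gaussian-sampling or bootstrap versions, which also handle censoring):

```python
from math import erf, sqrt

def norm_sf(z):
    """Upper-tail probability of the standard normal distribution."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

def iut_same_sign_pvalues(estimates, std_errors):
    """Intersection-union test p-values for 'all parameters positive'
    and 'all parameters negative': each is the maximum of the
    component one-sided z-test p-values."""
    z = [est / se for est, se in zip(estimates, std_errors)]
    p_all_positive = max(norm_sf(zi) for zi in z)
    p_all_negative = max(1.0 - norm_sf(zi) for zi in z)
    return p_all_positive, p_all_negative

# Hypothetical component-wise treatment effects and standard errors.
p_pos, p_neg = iut_same_sign_pvalues([0.40, 0.25, 0.30], [0.10, 0.08, 0.12])
```

    Here the weakest component (z = 2.5) determines the "all positive" p-value, so a single component whose effect points the other way would make the test fail, which is exactly the qualitative-heterogeneity safeguard.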

  16. Validation of a Case Definition for Pediatric Brain Injury Using Administrative Data.

    PubMed

    McChesney-Corbeil, Jane; Barlow, Karen; Quan, Hude; Chen, Guanmin; Wiebe, Samuel; Jette, Nathalie

    2017-03-01

    Health administrative data are a common population-based data source for traumatic brain injury (TBI) surveillance and research; however, before using these data for surveillance, it is important to develop a validated case definition. The objective of this study was to identify the optimal International Classification of Diseases, 10th edition (ICD-10), case definition to ascertain children with TBI in emergency room (ER) or hospital administrative data. We tested multiple case definitions. Children who visited the ER were identified from the Regional Emergency Department Information System at Alberta Children's Hospital. Secondary data were collected for children with trauma, musculoskeletal, or central nervous system complaints who visited the ER between October 5, 2005, and June 6, 2007. TBI status was determined based on chart review. Sensitivity, specificity, positive predictive value (PPV), and negative predictive value (NPV) were calculated for each case definition. Of 6639 patients, 1343 had a TBI. The best case definition was "1 hospital or 1 ER encounter coded with an ICD-10 code for TBI in 1 year" (sensitivity 69.8% [95% confidence interval (CI), 67.3-72.2], specificity 96.7% [95% CI, 96.2-97.2], PPV 84.2% [95% CI, 82.0-86.3], NPV 92.7% [95% CI, 92.0-93.3]). The nonspecific code S09.9 identified >80% of TBI cases in our study. The optimal ICD-10-based case definition for pediatric TBI in this study is valid and should be considered for future pediatric TBI surveillance studies. However, external validation is recommended before use in other jurisdictions, particularly because it is plausible that a larger proportion of patients in our cohort had milder injuries.
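
    The confidence intervals above are consistent with a normal-approximation (Wald) interval for a proportion. As a minimal check, a true-positive count of 937 of the 1343 TBI cases is reconstructed from the reported 69.8% sensitivity and is illustrative only:

```python
from math import sqrt

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) confidence interval for a proportion."""
    p = successes / n
    half_width = z * sqrt(p * (1 - p) / n)
    return p, p - half_width, p + half_width

# 937/1343 true positives is reconstructed from the reported 69.8%
# sensitivity among 1343 chart-confirmed TBI cases.
p, lo, hi = wald_ci(937, 1343)
print(f"sensitivity {100 * p:.1f}% (95% CI {100 * lo:.1f}-{100 * hi:.1f})")
# sensitivity 69.8% (95% CI 67.3-72.2)
```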

  17. Reliability and Validity in Hospital Case-Mix Measurement

    PubMed Central

    Pettengill, Julian; Vertrees, James

    1982-01-01

    There is widespread interest in the development of a measure of hospital output. This paper describes the problem of measuring the expected cost of the mix of inpatient cases treated in a hospital (hospital case-mix) and a general approach to its solution. The solution is based on a set of homogeneous groups of patients, defined by a patient classification system, and a set of estimated relative cost weights corresponding to the patient categories. This approach is applied to develop a summary measure of the expected relative costliness of the mix of Medicare patients treated in 5,576 participating hospitals. The Medicare case-mix index is evaluated by estimating a hospital average cost function. This provides a direct test of the hypothesis that the relationship between Medicare case-mix and Medicare cost per case is proportional. The cost function analysis also provides a means of simulating the effects of classification error on our estimate of this relationship. Our results indicate that this general approach to measuring hospital case-mix provides a valid and robust measure of the expected cost of a hospital's case-mix. PMID:10309909
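
    The approach above can be sketched in a few lines: each treated case falls into a patient category carrying an estimated relative cost weight, and the hospital's index is the average weight over its cases. The categories and weights below are invented for illustration.

```python
# Minimal case-mix index sketch: a hospital treating costlier categories
# (weights above 1.0) scores above 1.0; a mix of cheap cases scores below.

def case_mix_index(case_categories, relative_weights):
    """Mean relative cost weight over a hospital's treated cases."""
    return sum(relative_weights[c] for c in case_categories) / len(case_categories)

# Hypothetical weights for three patient categories.
weights = {"A": 0.8, "B": 1.0, "C": 1.6}
index = case_mix_index(["A", "B", "C", "C"], weights)
```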

  18. Herbalife hepatotoxicity: Evaluation of cases with positive reexposure tests

    PubMed Central

    Teschke, Rolf; Frenzel, Christian; Schulze, Johannes; Schwarzenboeck, Alexander; Eickhoff, Axel

    2013-01-01

    AIM: To analyze the validity of applied test criteria and causality assessment methods in assumed Herbalife hepatotoxicity with positive reexposure tests. METHODS: We searched the Medline database for suspected cases of Herbalife hepatotoxicity and retrieved 53 cases, including eight cases with a positive unintentional reexposure and a high causality level for Herbalife. First, analysis of these eight cases focused on the data quality of the positive reexposure cases, requiring a baseline alanine aminotransferase (ALT) value below 5N before reexposure, where N is the upper limit of normal, and a doubling of the ALT value at reexposure compared with the baseline ALT value prior to reexposure. Second, reported methods to assess causality in the eight cases were evaluated, and then the liver-specific Council for International Organizations of Medical Sciences (CIOMS) scale, validated for hepatotoxicity cases, was used for quantitative causality reevaluation. This scale consists of various specific elements with scores provided through the respective case data, and the sum of the scores yields a causality grading for each individual case of initially suspected hepatotoxicity. RESULTS: Details of the positive reexposure test conditions and their individual results were sparse in virtually all cases, since reexposures were unintentional and allowed only retrospective rather than prospective assessments. In 1/8 cases, criteria for a positive reexposure were fulfilled, whereas in the remaining cases the reexposure test was classified as negative (n = 1), or the data were considered uninterpretable due to missing information needed to comply adequately with the criteria (n = 6). In virtually all assessed cases, liver-unspecific causality assessment methods were applied rather than a liver-specific method such as the CIOMS scale.
Using this scale, causality gradings for Herbalife in these eight cases were probable (n = 1), unlikely (n = 4), and excluded (n = 3). Confounding variables included low data quality, alternative diagnoses, poor exclusion of important other causes, and comedication by drugs and herbs in 6/8 cases. More specifically, problems were evident in some cases regarding temporal association, daily doses, exact start and end dates of product use, actual data of laboratory parameters such as ALT, and exact dechallenge characteristics. Shortcomings included scattered exclusion of hepatitis A-C, cytomegalovirus and Epstein Barr virus infection with only globally presented or lacking parameters. Hepatitis E virus infection was considered in one single patient and found positive, infections by herpes simplex virus and varicella zoster virus were excluded in none. CONCLUSION: Only one case fulfilled positive reexposure test criteria in initially assumed Herbalife hepatotoxicity, with lower CIOMS based causality gradings for the other cases than hitherto proposed. PMID:23898368
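
    The positive-reexposure criteria stated in the methods above reduce to two checks, sketched below with a function name of our own choosing; the thresholds follow the abstract (baseline ALT below 5N, with N the upper limit of normal, and at least a doubling of ALT on reexposure).

```python
# Hedged sketch of the reexposure test criteria described in the abstract.
# alt_baseline / alt_reexposure are ALT values in U/L; uln is the assay's
# upper limit of normal (N).

def reexposure_test(alt_baseline, alt_reexposure, uln):
    if alt_baseline >= 5 * uln:
        # Baseline too elevated to interpret a rechallenge response.
        return "uninterpretable"
    return "positive" if alt_reexposure >= 2 * alt_baseline else "negative"
```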

  19. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Yoo, Jun Soo; Choi, Yong Joon

    The RELAP-7 code verification and validation activities are ongoing under the code assessment plan proposed in the previous document (INL-EXT-16-40015). Among the list of V&V test problems in the ‘RELAP-7 code V&V RTM (Requirements Traceability Matrix)’, the RELAP-7 7-equation model has been tested with additional demonstration problems and the results of these tests are reported in this document. In this report, we describe the testing process, the test cases that were conducted, and the results of the evaluation.

  20. Validation of a research case definition of Gulf War illness in the 1991 US military population.

    PubMed

    Iannacchione, Vincent G; Dever, Jill A; Bann, Carla M; Considine, Kathleen A; Creel, Darryl; Carson, Christopher P; Best, Heather; Haley, Robert W

    2011-01-01

    A case definition of Gulf War illness with 3 primary variants, previously developed by factor analysis of symptoms in a US Navy construction battalion and validated in clinic veterans, identified ill veterans with objective abnormalities of brain function. This study tests prestated hypotheses of its external validity. A stratified probability sample (n = 8,020), selected from a sampling frame of the 3.5 million Gulf War era US military veterans, completed a computer-assisted telephone interview survey. Application of the prior factor weights to the subjects' responses generated the case definition. The structural equation model of the case definition fit both random halves of the population sample well (root mean-square error of approximation = 0.015). The overall case definition was 3.87 times (95% confidence interval, 2.61-5.74) more prevalent in the deployed than the deployable nondeployed veterans: 3.33 (1.10-10.10) for syndrome variant 1; 5.11 (2.43-10.75) for variant 2, and 4.25 (2.33-7.74) for variant 3. Functional status on SF-12 was greatly reduced (effect sizes, 1.0-2.0) in veterans meeting the overall and variant case definitions. The factor case definition applies to the full Gulf War veteran population and has good characteristics for research. Copyright © 2011 S. Karger AG, Basel.

  1. [ETAP: A smoking scale for Primary Health Care].

    PubMed

    González Romero, Pilar María; Cuevas Fernández, Francisco Javier; Marcelino Rodríguez, Itahisa; Rodríguez Pérez, María Del Cristo; Cabrera de León, Antonio; Aguirre-Jaime, Armando

    2016-05-01

    To obtain a scale of tobacco exposure to address smoking cessation. Follow-up of a cohort. Scale validation. Primary Care Research Unit, Tenerife. A total of 6729 participants from the "CDC de Canarias" cohort. A scale was constructed under the assumption that the time of exposure to tobacco is the key factor expressing accumulated risk. Discriminant validity was tested on prevalent cases of acute myocardial infarction (AMI; n=171), and its best cut-off for preventive screening was obtained. Its predictive validity was tested with incident cases of AMI (n=46), comparing its predictive power with markers (age, sex) and classic risk factors of AMI (hypertension, diabetes, dyslipidaemia), including the pack-years index (PYI). The resulting scale is the sum of three times the number of years smoked plus the years exposed to smoking at home and at work. The frequency of AMI increased with the values of the scale, with 20 years of exposure being the most appropriate cut-off for preventive action, as it provided adequate predictive values for incident AMI. The scale surpassed the PYI in predicting AMI and competed with the known markers and risk factors. The proposed scale allows a valid measurement of exposure to smoking and provides a useful and simple approach that can help promote a willingness to change, as well as prevention. Its validity remains to be demonstrated against other problems associated with smoking. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.
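
    The scale as described is simple enough to state directly: three times the years of active smoking plus the years of passive exposure at home and at work, with 20 exposure-years as the screening cut-off. Variable names below are illustrative.

```python
# ETAP-style exposure score per the abstract: active smoking years are
# weighted threefold relative to passive exposure at home and at work.

def etap_score(years_smoked, years_home_exposure, years_work_exposure):
    return 3 * years_smoked + years_home_exposure + years_work_exposure

def flag_for_preventive_action(score, cutoff=20):
    # 20 exposure-years is the cut-off the study found most appropriate.
    return score >= cutoff
```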

  2. Mars Science Laboratory Flight Software Boot Robustness Testing Project Report

    NASA Technical Reports Server (NTRS)

    Roth, Brian

    2011-01-01

    On the surface of Mars, the Mars Science Laboratory will boot up its flight computers every morning, having charged the batteries through the night. This boot process is complicated, critical, and affected by numerous hardware states that can be difficult to test. The hardware test beds do not facilitate long-duration runs of back-to-back unattended automated tests, and although the software simulation has provided the necessary functionality and fidelity for this boot testing, it has not supported the full flexibility necessary for this task. Therefore, to perform this testing, a framework has been built around the software simulation that supports running automated tests loading a variety of starting configurations for software and hardware states. This implementation has been tested against the nominal cases to validate the methodology, and support for configuring off-nominal cases is ongoing. The implication of this testing is that the introduction of input configurations that have so far proved difficult to test may reveal boot scenarios worth higher-fidelity investigation, and in other cases increase confidence in the robustness of the flight software boot process.

  3. Nemesis Autonomous Test System

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin J.; Lee, Cin-Young; Horvath, Gregory A.; Clement, Bradley J.

    2012-01-01

    A generalized framework has been developed for systems validation that can be applied to both traditional and autonomous systems. The framework consists of an automated test case generation and execution system called Nemesis that rapidly and thoroughly identifies flaws or vulnerabilities within a system. By applying genetic optimization and goal-seeking algorithms on the test equipment side, a "war game" is conducted between a system and its complementary nemesis. The end result of the war games is a collection of scenarios that reveals any undesirable behaviors of the system under test. The software provides a reusable framework to evolve test scenarios with genetic algorithms operating on an operational model of the system under test. It can automatically generate and execute test cases that reveal flaws in behaviorally complex systems. Genetic algorithms focus the exploration of tests on the set of test cases that most effectively reveals the flaws and vulnerabilities of the system under test. The framework leverages advances in state- and model-based engineering, which are essential in defining the behavior of autonomous systems, and uses goal networks to describe test scenarios.
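
    A toy sketch of genetic test-case generation in the spirit described above: candidate test scenarios evolve toward inputs that maximize a "flaw-revealing" fitness returned by a model of the system under test. The system model and fitness function here are stand-ins, not Nemesis itself.

```python
# Minimal genetic algorithm over bit-string test scenarios: truncation
# selection, one-point crossover, and occasional bit-flip mutation drive
# the population toward inputs that score highest on the fitness function.
import random

def evolve_tests(fitness, genome_len=8, pop_size=30, generations=40, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]              # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, genome_len)
            child = a[:cut] + b[cut:]               # one-point crossover
            if rng.random() < 0.2:
                child[rng.randrange(genome_len)] ^= 1   # bit-flip mutation
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

# Stand-in "system model": the hypothetical flaw is triggered by inputs
# with many bits set, so fitness is simply the bit count.
best = evolve_tests(lambda g: sum(g))
```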

  4. Computer aided system engineering and analysis (CASE/A) modeling package for ECLS systems - An overview

    NASA Technical Reports Server (NTRS)

    Dalee, Robert C.; Bacskay, Allen S.; Knox, James C.

    1990-01-01

    An overview of the CASE/A-ECLSS series modeling package is presented. CASE/A is an analytical tool that has delivered engineering productivity gains during ECLSS design activities. A component verification program was performed to assure component modeling validity based on test data from the Phase II comparative test program completed at the Marshall Space Flight Center. An integrated plotting feature has been added to the program which allows the operator to analyze on-screen data trends or obtain hard-copy plots from within the CASE/A operating environment. New command features in the areas of schematic, output, and model management, and component data editing have been incorporated to enhance the engineer's productivity during a modeling program.

  5. Protecting Files Hosted on Virtual Machines With Out-of-Guest Access Control

    DTIC Science & Technology

    2017-12-01

    analyzes the design and methodology of the implemented mechanism, while Chapter 4 explains the test methodology, test cases, and performance testing ...SACL, we verify that the user or group accessing the file has sufficient permissions. If that is correct, the callback function returns control to...ferify. In the first section, we validate our design of ferify. Next, we explain the tests we performed to verify that ferify has the results we expected

  6. Integration of system identification and finite element modelling of nonlinear vibrating structures

    NASA Astrophysics Data System (ADS)

    Cooper, Samson B.; DiMaio, Dario; Ewins, David J.

    2018-03-01

    The Finite Element Method (FEM), Experimental Modal Analysis (EMA) and other linear analysis techniques have been established as reliable tools for the dynamic analysis of engineering structures. They are often used to provide solutions for small and large structures and a variety of other cases in structural dynamics, even those exhibiting a certain degree of nonlinearity. Unfortunately, when the nonlinear effects are substantial or the accuracy of the predicted response is of vital importance, a linear finite element model will generally prove unsatisfactory. As a result, the validated linear FE model requires further enhancement so that it can represent and predict the nonlinear behaviour exhibited by the structure. In this paper, a pragmatic approach to integrating test-based system identification and FE modelling of a nonlinear structure is presented. This integration is based on three phases: the first phase involves the derivation of an Underlying Linear Model (ULM) of the structure, the second phase includes experiment-based nonlinear identification using measured time series, and the third phase covers augmenting the linear FE model and experimental validation of the nonlinear FE model. The proposed case study is demonstrated on a twin cantilever beam assembly coupled with a flexible arch-shaped beam. In this case, polynomial-type nonlinearities are identified and validated with force-controlled stepped-sine test data at several excitation levels.

  7. Six stroma-based RNA markers diagnostic for prostate cancer in European-Americans validated at the RNA and protein levels in patients in China

    PubMed Central

    Zhu, Jianguo; Pan, Cong; Jiang, Jun; Deng, Mingsen; Gao, Hengjun; Men, Bozhao; McClelland, Michael; Mercola, Dan; Zhong, Wei-De; Jia, Zhenyu

    2015-01-01

    We previously analyzed human prostate tissue containing stroma near to tumor and from cancer-negative tissues of volunteers. Over 100 candidate gene expression differences were identified and used to develop a classifier that could detect nearby tumor with an accuracy of 97% (sensitivity = 98% and specificity = 88%) based on 364 independent test cases from primarily European American cases. These stroma-based gene signatures have the potential to identify cancer patients among those with negative biopsies. In this study, we used prostate tissues from Chinese cases to validate six of these markers (CAV1, COL4A2, HSPB1, ITGB3, MAP1A and MCAM). In validation by real-time PCR, four genes (COL4A2, HSPB1, ITGB3, and MAP1A) demonstrated significantly lower expression in tumor-adjacent stroma compared to normal stroma (p value ≤ 0.05). Next, we tested whether these expression differences could be extended to the protein level. In IHC assays, all six selected proteins showed lower expression in tumor-adjacent stroma compared to the normal stroma, of which COL4A2, HSPB1 and ITGB3 showed significant differences (p value ≤ 0.05). These results suggest that biomarkers for diagnosing prostate cancer based on tumor microenvironment may be applicable across multiple racial groups. PMID:26158290

  8. Mathematical Models of IABG Thermal-Vacuum Facilities

    NASA Astrophysics Data System (ADS)

    Doring, Daniel; Ulfers, Hendrik

    2014-06-01

    IABG in Ottobrunn, Germany, operates thermal-vacuum facilities of different sizes and complexities as a service for space-testing of satellites and components. One aspect of these tests is the qualification of the thermal control system that keeps all onboard components within their safe operating temperature band. As not all possible operation/mission states can be simulated within a sensible test time, usually a subset of important and extreme states is tested at TV facilities to validate the thermal model of the satellite, which is then used to model all other possible mission states. With advances in the precision of customer thermal models, simple assumptions about the test environment (e.g. everything black and cold, one solar constant of light from one side) are no longer sufficient, as real space simulation chambers deviate from this ideal. For example, the mechanical adapters which support the spacecraft are usually not actively cooled. To enable IABG to provide a model that is sufficiently detailed and realistic for current system tests, the Munich engineering company CASE developed ESATAN models for the two larger chambers. CASE has many years of experience in thermal analysis for space-flight systems and ESATAN. The two models represent the rather simple (and therefore very homogeneous) 3m-TVA and the extremely complex space simulation test facility with its solar simulator. The cooperation of IABG and CASE built up extensive knowledge of the facilities' thermal behaviour. This is the key to optimally supporting customers with their test campaigns in the future. The ESARAD part of the models contains all relevant information with regard to geometry (CAD data), surface properties (optical measurements) and solar irradiation for the sun simulator. The temperature of the actively cooled thermal shrouds is measured and mapped to the thermal mesh to create the temperature field in the ESATAN part as boundary conditions.
Both models comprise switches to easily establish multiple possible set-ups (e.g. exclude components like the motion system or enable / disable the solar simulator). Both models were validated by comparing calculated results (thermal balance temperatures for simple passive test articles) with measured temperatures generated in actual tests in these facilities. This paper presents information about the chambers, the modelling approach, properties of the models and their performance in the validation tests.

  9. Development and validation of a melanoma risk score based on pooled data from 16 case-control studies

    PubMed Central

    Davies, John R; Chang, Yu-mei; Bishop, D Timothy; Armstrong, Bruce K; Bataille, Veronique; Bergman, Wilma; Berwick, Marianne; Bracci, Paige M; Elwood, J Mark; Ernstoff, Marc S; Green, Adele; Gruis, Nelleke A; Holly, Elizabeth A; Ingvar, Christian; Kanetsky, Peter A; Karagas, Margaret R; Lee, Tim K; Le Marchand, Loïc; Mackie, Rona M; Olsson, Håkan; Østerlind, Anne; Rebbeck, Timothy R; Reich, Kristian; Sasieni, Peter; Siskind, Victor; Swerdlow, Anthony J; Titus, Linda; Zens, Michael S; Ziegler, Andreas; Gallagher, Richard P.; Barrett, Jennifer H; Newton-Bishop, Julia

    2015-01-01

    Background We report the development of a cutaneous melanoma risk algorithm based upon seven factors: hair colour, skin type, family history, freckling, nevus count, number of large nevi and history of sunburn, intended to form the basis of a self-assessment webtool for the general public. Methods Predicted odds of melanoma were estimated by analysing a pooled dataset from 16 case-control studies using logistic random coefficients models. Risk categories were defined based on the distribution of the predicted odds in the controls from these studies. Imputation was used to estimate missing data in the pooled datasets. The 30th, 60th and 90th centiles were used to distribute individuals into four risk groups for their age, sex and geographic location. Cross-validation was used to test the robustness of the thresholds for each group by leaving out each study one by one. Performance of the model was assessed in an independent UK case-control study dataset. Results Cross-validation confirmed the robustness of the threshold estimates. Cases and controls were well discriminated in the independent dataset (area under the curve 0.75, 95% CI 0.73-0.78). 29% of cases were in the highest risk group compared with 7% of controls, and 43% of controls were in the lowest risk group compared with 13% of cases. Conclusion We have identified a composite score representing an estimate of relative risk and successfully validated this score in an independent dataset. Impact This score may be a useful tool to inform members of the public about their melanoma risk. PMID:25713022
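
    The centile-based grouping described in the methods can be sketched as follows: the 30th, 60th and 90th centiles of the predicted odds among controls become cut points, and an individual's group is one plus the number of cut points their predicted odds exceed. The centile convention and data below are illustrative, not the study's.

```python
# Risk-group assignment from control centiles, per the scheme described
# above. Group 1 is the lowest-risk band, group 4 the highest.

def centile(sorted_values, q):
    # Nearest-rank centile on a sorted list (one of several conventions).
    idx = max(0, min(len(sorted_values) - 1, int(round(q * (len(sorted_values) - 1)))))
    return sorted_values[idx]

def risk_group(predicted_odds, control_odds):
    cuts = [centile(sorted(control_odds), q) for q in (0.30, 0.60, 0.90)]
    return 1 + sum(predicted_odds > c for c in cuts)   # groups 1..4
```

    In the study, the cut points were additionally specific to age, sex and geographic location; the sketch collapses that stratification for brevity.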

  10. Statistical Calibration and Validation of a Homogeneous Ventilated Wall-Interference Correction Method for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.

    2005-01-01

    Wind tunnel experiments will continue to be a primary source of validation data for many types of mathematical and computational models in the aerospace industry. The increased emphasis on accuracy of data acquired from these facilities requires understanding of the uncertainty of not only the measurement data but also any correction applied to the data. One of the largest and most critical corrections made to these data is due to wall interference. In an effort to understand the accuracy and suitability of these corrections, a statistical validation process for wall interference correction methods has been developed. This process is based on the use of independent cases which, after correction, are expected to produce the same result. Comparison of these independent cases with respect to the uncertainty in the correction process establishes a domain of applicability based on the capability of the method to provide reasonable corrections with respect to customer accuracy requirements. The statistical validation method was applied to the version of the Transonic Wall Interference Correction System (TWICS) recently implemented in the National Transonic Facility at NASA Langley Research Center. The TWICS code generates corrections for solid and slotted wall interference in the model pitch plane based on boundary pressure measurements. Before validation could be performed on this method, it was necessary to calibrate the ventilated wall boundary condition parameters. Discrimination comparisons are used to determine the most representative of three linear boundary condition models which have historically been used to represent longitudinally slotted test section walls. Of the three linear boundary condition models implemented for ventilated walls, the general slotted wall model was the most representative of the data. 
The TWICS code using the calibrated general slotted wall model was found to be valid to within the process uncertainty for test section Mach numbers less than or equal to 0.60. The scatter among the mean corrected results of the bodies of revolution validation cases was within one count of drag on a typical transport aircraft configuration for Mach numbers at or below 0.80 and two counts of drag for Mach numbers at or below 0.90.

  11. Validation of the Oncentra Brachy Advanced Collapsed cone Engine for a commercial (192)Ir source using heterogeneous geometries.

    PubMed

    Ma, Yunzhi; Lacroix, Fréderic; Lavallée, Marie-Claude; Beaulieu, Luc

    2015-01-01

    To validate the Advanced Collapsed cone Engine (ACE) dose calculation engine of Oncentra Brachy (OcB) treatment planning system using an (192)Ir source. Two levels of validation were performed, conformant to the model-based dose calculation algorithm commissioning guidelines of American Association of Physicists in Medicine TG-186 report. Level 1 uses all-water phantoms, and the validation is against TG-43 methodology. Level 2 uses real-patient cases, and the validation is against Monte Carlo (MC) simulations. For each case, the ACE and TG-43 calculations were performed in the OcB treatment planning system. ALGEBRA MC system was used to perform MC simulations. In Level 1, the ray effect depends on both accuracy mode and the number of dwell positions. The volume fraction with dose error ≥2% quickly reduces from 23% (13%) for a single dwell to 3% (2%) for eight dwell positions in the standard (high) accuracy mode. In Level 2, the 10% and higher isodose lines were observed overlapping between ACE (both standard and high-resolution modes) and MC. Major clinical indices (V100, V150, V200, D90, D50, and D2cc) were investigated and validated by MC. For example, among the Level 2 cases, the maximum deviation in V100 of ACE from MC is 2.75% but up to ~10% for TG-43. Similarly, the maximum deviation in D90 is 0.14 Gy between ACE and MC but up to 0.24 Gy for TG-43. ACE demonstrated good agreement with MC in most clinically relevant regions in the cases tested. Departure from MC is significant for specific situations but limited to low-dose (<10% isodose) regions. Copyright © 2015 American Brachytherapy Society. Published by Elsevier Inc. All rights reserved.

  12. Broadband Fan Noise Prediction System for Turbofan Engines. Volume 3; Validation and Test Cases

    NASA Technical Reports Server (NTRS)

    Morin, Bruce L.

    2010-01-01

    Pratt & Whitney has developed a Broadband Fan Noise Prediction System (BFaNS) for turbofan engines. This system computes the noise generated by turbulence impinging on the leading edges of the fan and fan exit guide vane (FEGV), and the noise generated by boundary-layer turbulence passing over the fan trailing edge. BFaNS has been validated on three fan rigs that were tested during the NASA Advanced Subsonic Technology Program (AST). The predicted noise spectra agreed well with measured data. The predicted effects of fan speed, vane count, and vane sweep also agreed well with measurements. The noise prediction system consists of two computer programs: Setup_BFaNS and BFaNS. Setup_BFaNS converts user-specified geometry and flow-field information into a BFaNS input file. From this input file, BFaNS computes the inlet and aft broadband sound power spectra generated by the fan and FEGV. The output file from BFaNS contains the inlet, aft and total sound power spectra from each noise source. This report is the third volume of a three-volume set documenting the Broadband Fan Noise Prediction System: Volume 1: Setup_BFaNS User's Manual and Developer's Guide; Volume 2: BFaNS User's Manual and Developer's Guide; and Volume 3: Validation and Test Cases. The present volume begins with an overview of the Broadband Fan Noise Prediction System, followed by validation studies that were done on three fan rigs. It concludes with recommended improvements and additional studies for BFaNS.

  13. Groundwater Model Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study, assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures, with the results indicating they are appropriate measures for evaluating model realizations.
    The use of validation data to constrain model input parameters is shown for the second case study using a Bayesian approach known as Markov Chain Monte Carlo. The approach shows great potential to be helpful in the validation process and in incorporating prior knowledge with new field data to derive posterior distributions for both model input and output.
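
    The Markov Chain Monte Carlo step described above can be illustrated with a minimal Metropolis sampler: validation data constrain a single model input parameter, and the resulting posterior blends the prior with the new field observations. The model, prior and data below are invented for illustration, not the study's.

```python
# Metropolis sampler sketch: Gaussian prior on one input parameter theta,
# Gaussian measurement noise on the validation observations.
import math
import random

def log_posterior(theta, data, prior_mean=0.0, prior_sd=2.0, noise_sd=1.0):
    lp = -0.5 * ((theta - prior_mean) / prior_sd) ** 2               # prior
    lp += sum(-0.5 * ((y - theta) / noise_sd) ** 2 for y in data)    # likelihood
    return lp

def metropolis(data, n_steps=5000, step=0.5, seed=0):
    rng = random.Random(seed)
    theta, samples = 0.0, []
    for _ in range(n_steps):
        proposal = theta + rng.gauss(0.0, step)
        if math.log(rng.random()) < log_posterior(proposal, data) - log_posterior(theta, data):
            theta = proposal                     # accept the proposed move
        samples.append(theta)
    return samples[n_steps // 2:]                # discard burn-in half

# Hypothetical validation data pull the posterior away from the prior mean.
draws = metropolis([1.8, 2.2, 2.0, 1.9, 2.1])
posterior_mean = sum(draws) / len(draws)
```

    With this conjugate setup the posterior mean has a closed form (about 1.9 for these numbers), which makes the sampler easy to sanity-check before applying the same machinery to an expensive groundwater model.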

  14. Evaluation of PDA Technical Report No 33. Statistical Testing Recommendations for a Rapid Microbiological Method Case Study.

    PubMed

    Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David

    2015-01-01

    New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report No. 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report No. 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report No. 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming-unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and yield the same conclusions of similarity or difference as the standard method. © PDA, Inc. 2015.

  15. Validation of an expert system intended for research in distributed artificial intelligence

    NASA Technical Reports Server (NTRS)

    Grossner, C.; Lyons, J.; Radhakrishnan, T.

    1991-01-01

    The expert system discussed in this paper is designed to function as a testbed for research on cooperating expert systems. Cooperating expert systems are members of an organization which dictates the manner in which the expert systems will interact when solving a problem. The Blackbox Expert described in this paper has been constructed using the C Language Integrated Production System (CLIPS), C++, and the X windowing environment. CLIPS is embedded in a C++ program which provides objects that are used to maintain the state of the Blackbox puzzle. These objects are accessed by CLIPS rules through user-defined function calls. The performance of the Blackbox Expert is validated by experimentation. A group of people was asked to solve a set of test cases for the Blackbox puzzle. A metric was devised which evaluates the 'correctness' of a solution proposed for a test case of Blackbox. Using this metric and the solutions proposed by the humans, each person receives a rating for their ability to solve the Blackbox puzzle. The Blackbox Expert solves the same set of test cases and is assigned a rating for its ability. The rating obtained by the Blackbox Expert is then compared with the ratings of the people, thus establishing the skill level of our expert system.

  16. A computable phenotype for asthma case identification in adult and pediatric patients: External validation in the Chicago Area Patient-Outcomes Research Network (CAPriCORN).

    PubMed

    Afshar, Majid; Press, Valerie G; Robison, Rachel G; Kho, Abel N; Bandi, Sindhura; Biswas, Ashvini; Avila, Pedro C; Kumar, Harsha Vardhan Madan; Yu, Byung; Naureckas, Edward T; Nyenhuis, Sharmilee M; Codispoti, Christopher D

    2017-10-13

    Comprehensive, rapid, and accurate identification of patients with asthma for clinical care and engagement in research efforts is needed. The original development and validation of a computable phenotype for asthma case identification occurred at a single institution in Chicago and demonstrated excellent test characteristics. However, its application in a diverse payer mix, across different health systems and multiple electronic health record vendors, and in both children and adults was not examined. The objective of this study is to externally validate the computable phenotype across diverse Chicago institutions to accurately identify pediatric and adult patients with asthma. A cohort of 900 asthma and control patients was identified from the electronic health record between January 1, 2012 and November 30, 2014. Two physicians at each site independently reviewed the patient chart to annotate cases. The inter-observer reliability between the physician reviewers had a κ-coefficient of 0.95 (95% CI 0.93-0.97). The accuracy, sensitivity, specificity, negative predictive value, and positive predictive value of the computable phenotype were all above 94% in the full cohort. The excellent positive and negative predictive values in this multi-center external validation study establish a useful tool to identify asthma cases in the electronic health record for research and care. This computable phenotype could be used in large-scale comparative-effectiveness trials.

  17. The Information a Test Provides on an Ability Parameter. Research Report. ETS RR-07-18

    ERIC Educational Resources Information Center

    Haberman, Shelby J.

    2007-01-01

    In item-response theory, if a latent-structure model has an ability variable, then elementary information theory may be employed to provide a criterion for evaluation of the information the test provides concerning ability. This criterion may be considered even in cases in which the latent-structure model is not valid, although interpretation of…

  18. MicroRNA expression in benign breast tissue and risk of subsequent invasive breast cancer.

    PubMed

    Rohan, Thomas; Ye, Kenny; Wang, Yihong; Glass, Andrew G; Ginsberg, Mindy; Loudig, Olivier

    2018-01-01

    MicroRNAs are endogenous, small non-coding RNAs that control gene expression by directing their target mRNAs for degradation and/or posttranscriptional repression. Abnormal expression of microRNAs is thought to contribute to the development and progression of cancer. A history of benign breast disease (BBD) is associated with increased risk of subsequent breast cancer. However, no large-scale study has examined the association between microRNA expression in BBD tissue and risk of subsequent invasive breast cancer (IBC). We conducted discovery and validation case-control studies nested in a cohort of 15,395 women diagnosed with BBD in a large health plan between 1971 and 2006 and followed to mid-2015. Cases were women with BBD who developed subsequent IBC; controls were matched 1:1 to cases on age, age at diagnosis of BBD, and duration of plan membership. The discovery stage (316 case-control pairs) entailed use of the Illumina MicroRNA Expression Profiling Assay (in duplicate) to identify breast cancer-associated microRNAs. MicroRNAs identified at this stage were ranked by the strength of the correlation between Illumina array and quantitative PCR results for 15 case-control pairs. The top ranked 14 microRNAs entered the validation stage (165 case-control pairs) which was conducted using quantitative PCR (in triplicate). In both stages, linear regression was used to evaluate the association between the mean expression level of each microRNA (response variable) and case-control status (independent variable); paired t-tests were also used in the validation stage. None of the 14 validation stage microRNAs was associated with breast cancer risk. The results of this study suggest that microRNA expression in benign breast tissue does not influence the risk of subsequent IBC.

  19. MicroRNA expression in benign breast tissue and risk of subsequent invasive breast cancer

    PubMed Central

    Ye, Kenny; Wang, Yihong; Ginsberg, Mindy; Loudig, Olivier

    2018-01-01

    MicroRNAs are endogenous, small non-coding RNAs that control gene expression by directing their target mRNAs for degradation and/or posttranscriptional repression. Abnormal expression of microRNAs is thought to contribute to the development and progression of cancer. A history of benign breast disease (BBD) is associated with increased risk of subsequent breast cancer. However, no large-scale study has examined the association between microRNA expression in BBD tissue and risk of subsequent invasive breast cancer (IBC). We conducted discovery and validation case-control studies nested in a cohort of 15,395 women diagnosed with BBD in a large health plan between 1971 and 2006 and followed to mid-2015. Cases were women with BBD who developed subsequent IBC; controls were matched 1:1 to cases on age, age at diagnosis of BBD, and duration of plan membership. The discovery stage (316 case-control pairs) entailed use of the Illumina MicroRNA Expression Profiling Assay (in duplicate) to identify breast cancer-associated microRNAs. MicroRNAs identified at this stage were ranked by the strength of the correlation between Illumina array and quantitative PCR results for 15 case-control pairs. The top ranked 14 microRNAs entered the validation stage (165 case-control pairs) which was conducted using quantitative PCR (in triplicate). In both stages, linear regression was used to evaluate the association between the mean expression level of each microRNA (response variable) and case-control status (independent variable); paired t-tests were also used in the validation stage. None of the 14 validation stage microRNAs was associated with breast cancer risk. The results of this study suggest that microRNA expression in benign breast tissue does not influence the risk of subsequent IBC. PMID:29432432

  20. Automated lung sound analysis for detecting pulmonary abnormalities.

    PubMed

    Datta, Shreyasi; Dutta Choudhury, Anirban; Deshpande, Parijat; Bhattacharya, Sakyajit; Pal, Arpan

    2017-07-01

    Identification of pulmonary diseases requires accurate auscultation as well as elaborate and expensive pulmonary function tests. Prior art has shown that pulmonary diseases lead to abnormal lung sounds such as wheezes and crackles. This paper introduces novel spectral and spectrogram features, which are further refined by the Maximal Information Coefficient, leading to the classification of healthy and abnormal lung sounds. A balanced lung sound dataset, consisting of publicly available data and data collected with a low-cost in-house digital stethoscope, is used. The performance of the classifier is validated over several randomly selected non-overlapping training and validation samples and tested on separate subjects for two separate test cases: (a) overlapping and (b) non-overlapping data sources in training and testing. The results reveal that the proposed method sustains an accuracy of 80% even for non-overlapping data sources in training and testing.

  1. The Human Toxome Project

    EPA Science Inventory

    The Human Toxome project, funded as an NIH Transformative Research grant 2011–2016, is focused on developing the concepts and the means for deducing, validating, and sharing molecular Pathways of Toxicity (PoT). Using the test case of estrogenic endocrine disruption, the respo...

  2. Second-Moment RANS Model Verification and Validation Using the Turbulence Modeling Resource Website (Invited)

    NASA Technical Reports Server (NTRS)

    Eisfeld, Bernhard; Rumsey, Chris; Togiti, Vamshi

    2015-01-01

    The implementation of the SSG/LRR-omega differential Reynolds stress model into the NASA flow solvers CFL3D and FUN3D and the DLR flow solver TAU is verified by studying the grid convergence of the solution of three different test cases from the Turbulence Modeling Resource Website. The model's predictive capabilities are assessed based on four basic and four extended validation cases also provided on this website, involving attached and separated boundary layer flows, effects of streamline curvature and secondary flow. Simulation results are compared against experimental data and predictions by the eddy-viscosity models of Spalart-Allmaras (SA) and Menter's Shear Stress Transport (SST).

  3. Real-world use of the risk-need-responsivity model and the level of service/case management inventory with community-supervised offenders.

    PubMed

    Dyck, Heather L; Campbell, Mary Ann; Wershler, Julie L

    2018-06-01

    The risk-need-responsivity model (RNR; Bonta & Andrews, 2017) has become a leading approach for effective offender case management, but field tests of this model are still required. The present study first assessed the predictive validity of the RNR-informed Level of Service/Case Management Inventory (LS/CMI; Andrews, Bonta, & Wormith, 2004) with a sample of Atlantic Canadian male and female community-supervised provincial offenders (N = 136). Next, the case management plans prepared from these LS/CMI results were analyzed for adherence to the principles of risk, need, and responsivity. As expected, the LS/CMI was a strong predictor of general recidivism for both males (area under the curve = .75, 95% confidence interval [.66, .85]), and especially females (area under the curve = .94, 95% confidence interval [.84, 1.00]), over an average 3.42-year follow-up period. The LS/CMI was predictive of time to recidivism, with lower risk cases taking longer to reoffend than higher risk cases. Despite the robust predictive validity of the LS/CMI, case management plans developed by probation officers generally reflected poor adherence to the RNR principles. These findings highlight the need for better training on how to transfer risk appraisal information from valid risk tools to case plans to better meet the best-practice principles of risk, need, and responsivity for criminal behavior risk reduction. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
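
    The area-under-the-curve figures reported above have a simple rank interpretation: the AUC is the probability that a randomly chosen recidivist receives a higher risk score than a randomly chosen non-recidivist. A minimal sketch of that pairwise computation, with invented scores (the LS/CMI study data are not reproduced here):

```python
def auc(pos_scores, neg_scores):
    """AUC as the probability that a randomly chosen positive (recidivist)
    outscores a randomly chosen negative (non-recidivist); ties count half."""
    wins = sum((p > n) + 0.5 * (p == n) for p in pos_scores for n in neg_scores)
    return wins / (len(pos_scores) * len(neg_scores))

# Invented LS/CMI-like risk scores, for illustration only.
reoffended = [20, 25, 30, 18]
did_not_reoffend = [10, 15, 12, 22]
print(auc(reoffended, did_not_reoffend))  # 14 of 16 pairs correctly ordered -> 0.875
```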

  4. Laminar Heating Validation of the OVERFLOW Code

    NASA Technical Reports Server (NTRS)

    Lillard, Randolph P.; Dries, Kevin M.

    2005-01-01

    OVERFLOW, a structured finite difference code, was applied to the solution of hypersonic laminar flow over several configurations assuming perfect gas chemistry. By testing OVERFLOW's capabilities over several configurations encompassing a variety of flow physics, a validated laminar heating capability was produced. The configurations tested were a flat plate at 0 degrees incidence, a sphere, a compression ramp, and the X-38 re-entry vehicle. This variety of test cases shows the ability of the code to predict boundary layer flow, stagnation heating, laminar separation with re-attachment heating, and complex flow over a three-dimensional body. In addition, grid resolution studies were done to give recommendations for the correct number of off-body points to be applied to generic problems and for the wall-spacing values needed to capture heat transfer and skin friction. Numerical results show good comparison to the test data for all the configurations.

  5. The case against one-shot testing for initial dental licensure.

    PubMed

    Chambers, David W; Dugoni, Arthur A; Paisley, Ian

    2004-03-01

    High-stakes tests are expected to meet standards for cost-effectiveness, fairness, transparency, high reliability, and high validity. It is questionable whether initial licensure examinations in dentistry meet such standards. Decades of piecemeal adjustments in the system have resulted in limited improvement. The essential flaw in the system is reliance on a one-shot sample of a small segment of the skills, understanding, and supporting values needed for today's professional practice of dentistry. The "snapshot" approach to testing produces inherently substandard levels of reliability and validity. A three-step alternative is proposed: boards should (1) define the competencies required of beginning practitioners, (2) establish the psychometric standards needed to make defensible judgments about candidates, and (3) base licensure decisions only on portfolios of evidence that test for the defined competencies at the established levels of quality.

  6. Standardisation of the Gujarati version of the Middlesex Hospital Questionnaire.

    PubMed

    Gada, M T

    1981-04-01

    The Middlesex Hospital Questionnaire (M.H.Q.) is a short clinical diagnostic self-rating scale for psychoneurotic patients constructed by Crown and Crisp (1966). The aim of the present study was to prepare a Gujarati version of the M.H.Q. and to establish its reliability and validity. The Gujarati version of the M.H.Q. was given to 204 normal subjects, consisting of university students, school teachers, factory workers, housewives, and middle-aged men from different walks of life, to test the validity. The test was also administered to 30 neurotic patients. The Gujarati version was found to be reliable. There was a highly significant difference between the normal population and the neurotic patients on the total score and on all six subtests, thus establishing the validity of the Gujarati version. It also agreed well with the clinical diagnosis in most of the cases.

  7. Antecedents and Consequences of Supplier Performance Evaluation Efficacy

    DTIC Science & Technology

    2016-06-30

    forming groups of high and low values. These tests are contingent on the reliable and valid measure of high and low rating inflation and high and...year)? Future research could deploy an SPM system as a test case on a limited set of transactions. Using a quasi-experimental design, comparisons...single source, common method bias must be of concern. Harman's one-factor test showed that when latent-indicator items were forced onto a single

  8. Extension and Validation of a Hybrid Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 2

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.; Shivarama, Ravishankar

    2004-01-01

    The hybrid particle-finite element method of Fahrenthold and Horban, developed for the simulation of hypervelocity impact problems, has been extended to include new formulations of the particle-element kinematics, additional constitutive models, and an improved numerical implementation. The extended formulation has been validated in three dimensional simulations of published impact experiments. The test cases demonstrate good agreement with experiment, good parallel speedup, and numerical convergence of the simulation results.

  9. ExEP yield modeling tool and validation test results

    NASA Astrophysics Data System (ADS)

    Morgan, Rhonda; Turmon, Michael; Delacroix, Christian; Savransky, Dmitry; Garrett, Daniel; Lowrance, Patrick; Liu, Xiang Cate; Nunez, Paul

    2017-09-01

    EXOSIMS is an open-source simulation tool for parametric modeling of the detection yield and characterization of exoplanets. EXOSIMS has been adopted by the Exoplanet Exploration Program's Standards Definition and Evaluation Team (ExSDET) as a common mechanism for comparison of exoplanet mission concept studies. To ensure trustworthiness of the tool, we developed a validation test plan that leverages the Python-language unit-test framework, utilizes integration tests for selected module interactions, and performs end-to-end cross-validation with other yield tools. This paper presents the test methods and results, with the physics-based tests such as photometry and integration-time calculation treated in detail and the functional tests treated summarily. The test case utilized a 4-m unobscured telescope with an idealized coronagraph and an exoplanet population from the IPAC radial velocity (RV) exoplanet catalog. The known RV planets were set at quadrature to allow deterministic validation of the calculation of physical parameters, such as working angle, photon counts, and integration time. The observing keepout region was tested by generating plots and movies of the targets and the keepout zone over a year. Although the keepout integration test required interpretation by a user, the test revealed problems in the L2 halo orbit and in the parameterization of keepout applied to some solar system bodies, which the development team was able to address. The validation testing of EXOSIMS was performed iteratively with the developers of EXOSIMS and resulted in a more robust, stable, and trustworthy tool that the exoplanet community can use to simulate exoplanet direct-detection missions from probe class, to WFIRST, up to large mission concepts such as HabEx and LUVOIR.

  10. NDARC - NASA Design and Analysis of Rotorcraft Validation and Demonstration

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2010-01-01

    Validation and demonstration results from the development of the conceptual design tool NDARC (NASA Design and Analysis of Rotorcraft) are presented. The principal tasks of NDARC are to design a rotorcraft to satisfy specified design conditions and missions, and then analyze the performance of the aircraft for a set of off-design missions and point operating conditions. The aircraft chosen as NDARC development test cases are the UH-60A single main-rotor and tail-rotor helicopter, the CH-47D tandem helicopter, the XH-59A coaxial lift-offset helicopter, and the XV-15 tiltrotor. These aircraft were selected because flight performance data, a weight statement, detailed geometry information, and a correlated comprehensive analysis model are available for each. Validation consists of developing the NDARC models for these aircraft by using geometry and weight information, airframe wind tunnel test data, engine decks, rotor performance tests, and comprehensive analysis results; and then comparing the NDARC results for aircraft and component performance with flight test data. Based on the calibrated models, the capability of the code to size rotorcraft is explored.

  11. MAVRIC Flutter Model Transonic Limit Cycle Oscillation Test

    NASA Technical Reports Server (NTRS)

    Edwards, John W.; Schuster, David M.; Spain, Charles V.; Keller, Donald F.; Moses, Robert W.

    2001-01-01

    The Models for Aeroelastic Validation Research Involving Computation semi-span wind-tunnel model (MAVRIC-I), a business jet wing-fuselage flutter model, was tested in NASA Langley's Transonic Dynamics Tunnel with the goal of obtaining experimental data suitable for Computational Aeroelasticity code validation at transonic separation onset conditions. This research model is notable for its inexpensive construction and instrumentation installation procedures. Unsteady pressures and wing responses were obtained for three wingtip configurations: clean, tipstore, and winglet. Traditional flutter boundaries were measured over the range of M = 0.6 to 0.9 and maps of Limit Cycle Oscillation (LCO) behavior were made in the range of M = 0.85 to 0.95. Effects of dynamic pressure and angle-of-attack were measured. Testing in both R134a heavy gas and air provided unique data on Reynolds number, transition effects, and the effect of speed of sound on LCO behavior. The data set provides excellent code validation test cases for the important class of flow conditions involving shock-induced transonic flow separation onset at low wing angles, including LCO behavior.

  13. Testing and Validating Machine Learning Classifiers by Metamorphic Testing☆

    PubMed Central

    Xie, Xiaoyuan; Ho, Joshua W. K.; Murphy, Christian; Kaiser, Gail; Xu, Baowen; Chen, Tsong Yueh

    2011-01-01

    Machine Learning algorithms provide core functionality to many application domains, such as bioinformatics and computational linguistics. However, it is difficult to detect faults in such applications because often there is no “test oracle” to verify the correctness of the computed outputs. To help address this software quality problem, in this paper we present a technique for testing the implementations of machine learning classification algorithms which support such applications. Our approach is based on the technique of “metamorphic testing”, which has been shown to be effective in alleviating the oracle problem. Also presented are a case study on a real-world machine learning application framework and a discussion of how programmers implementing machine learning algorithms can avoid the common pitfalls discovered in our study. We also conduct mutation analysis and cross-validation, which reveal that our method is highly effective in killing mutants, and that observing the expected cross-validation result alone is not sufficiently effective to detect faults in a supervised classification program. The effectiveness of metamorphic testing is further confirmed by the detection of real faults in a popular open-source classification program. PMID:21532969
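
    A metamorphic test checks a relation between inputs and outputs that must hold even when no oracle can say what the correct output is. A toy sketch in the spirit of the paper, not the authors' code: for a k-NN classifier, permuting the order of the training set must leave every prediction unchanged.

```python
import random

def knn_predict(train, labels, x, k=3):
    """Minimal k-NN on 1-D points: majority label among the k nearest neighbors."""
    nearest = sorted(range(len(train)), key=lambda i: abs(train[i] - x))[:k]
    votes = [labels[i] for i in nearest]
    return max(set(votes), key=votes.count)

train = [1.0, 1.2, 0.9, 5.0, 5.2, 4.8]
labels = ['a', 'a', 'a', 'b', 'b', 'b']
x = 1.1

# Metamorphic relation: shuffling the training set must not change the prediction.
base = knn_predict(train, labels, x)
idx = list(range(len(train)))
random.shuffle(idx)
permuted = knn_predict([train[i] for i in idx], [labels[i] for i in idx], x)
assert base == permuted  # a violation would signal a fault in the implementation
print(base)
```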

  14. Propeller aircraft interior noise model. II - Scale-model and flight-test comparisons

    NASA Technical Reports Server (NTRS)

    Willis, C. M.; Mayes, W. H.

    1987-01-01

    A program for predicting the sound levels inside propeller driven aircraft arising from sidewall transmission of airborne exterior noise is validated through comparisons of predictions with both scale-model test results and measurements obtained in flight tests on a turboprop aircraft. The program produced unbiased predictions for the case of the scale-model tests, with a standard deviation of errors of about 4 dB. For the case of the flight tests, the predictions revealed a bias of 2.62-4.28 dB (depending upon whether or not the data for the fourth harmonic were included) and the standard deviation of the errors ranged between 2.43 and 4.12 dB. The analytical model is shown to be capable of taking changes in the flight environment into account.

  15. Validity and Reliability of the Alcohol, Smoking, and Substance Involvement Screening Test (ASSIST) in University Students.

    PubMed

    Tiburcio Sainz, Marcela; Rosete-Mohedano, Ma Guadalupe; Natera Rey, Guillermina; Martínez Vélez, Nora Angélica; Carreño García, Silvia; Pérez Cisneros, Daniel

    2016-03-02

    The Alcohol, Smoking and Substance Involvement Screening Test (ASSIST), developed by the World Health Organization (WHO), has been used successfully in many countries, but there are few studies of its validity and reliability for the Mexican population. The objective of this study was to determine the psychometric properties of the self-administered ASSIST test in university students in Mexico. This was an ex post facto non-experimental study with 1,176 undergraduate students, the majority women (70.1%) aged 18-23 years (89.5%) and single (87.5%). To estimate concurrent validity, factor analysis and tests of reliability and correlation were carried out between the subscale for alcohol and AUDIT, those for tobacco and the Fagerström Test, and those for marijuana and DAST-20. Adequate reliability coefficients were obtained for ASSIST subscales for tobacco (alpha = 0.83), alcohol (alpha = 0.76), and marijuana (alpha = 0.73). Significant correlations were found only with the AUDIT (r = 0.71) and the alcohol subscale. The best balance of sensitivity and specificity of the alcohol subscale (83.8% and 80%, respectively) and the largest area under the ROC curve (81.9%) was found with a cutoff score of 8. The self-administered version of ASSIST is a valid screening instrument to identify at-risk cases due to substance use in this population.
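
    The cutoff selection described above amounts to scanning candidate cutoffs for the best balance of sensitivity and specificity; Youden's J (sensitivity + specificity − 1) is one common criterion for that scan. A hedged sketch with invented scores, for illustration only (the study's data are not reproduced here, and the best cutoff below is an artifact of the made-up numbers):

```python
def youden_scan(scores_pos, scores_neg, cutoffs):
    """Scan cutoffs (score >= cutoff screens positive) and return the
    (cutoff, sensitivity, specificity) that maximizes Youden's J."""
    best = None
    for c in cutoffs:
        sens = sum(s >= c for s in scores_pos) / len(scores_pos)
        spec = sum(s < c for s in scores_neg) / len(scores_neg)
        j = sens + spec - 1
        if best is None or j > best[0]:
            best = (j, c, sens, spec)
    return best[1:]

# Invented screening scores for at-risk and not-at-risk groups.
at_risk = [6, 8, 9, 10, 12, 15, 7, 8]
not_at_risk = [0, 1, 2, 3, 5, 7, 4, 2]
cutoff, sens, spec = youden_scan(at_risk, not_at_risk, range(0, 16))
print(cutoff, sens, spec)
```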

  16. 14 CFR 33.70 - Engine life-limited parts.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ..., hubs, shafts, high-pressure casings, and non-redundant mount components. For the purposes of this... life before hazardous engine effects can occur. These steps include validated analysis, test, or... assessments to address the potential for failure from material, manufacturing, and service induced anomalies...

  17. 14 CFR 33.70 - Engine life-limited parts.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ..., hubs, shafts, high-pressure casings, and non-redundant mount components. For the purposes of this... life before hazardous engine effects can occur. These steps include validated analysis, test, or... assessments to address the potential for failure from material, manufacturing, and service induced anomalies...

  18. Evaluating construct validity of the second version of the Copenhagen Psychosocial Questionnaire through analysis of differential item functioning and differential item effect.

    PubMed

    Bjorner, Jakob Bue; Pejtersen, Jan Hyld

    2010-02-01

    To evaluate the construct validity of the Copenhagen Psychosocial Questionnaire II (COPSOQ II) by means of tests for differential item functioning (DIF) and differential item effect (DIE). We used a Danish general population postal survey (n = 4,732 with 3,517 wage earners) with a one-year register-based follow-up for long-term sickness absence. DIF was evaluated against age, gender, education, social class, public/private sector employment, and job type using ordinal logistic regression. DIE was evaluated against job satisfaction and self-rated health (using ordinal logistic regression), against depressive symptoms, burnout, and stress (using multiple linear regression), and against long-term sick leave (using a proportional hazards model). We used a cross-validation approach to counter the risk of significant results due to multiple testing. Out of 1,052 tests, we found 599 significant instances of DIF/DIE, 69 of which showed both practical and statistical significance across two independent samples. Most DIF occurred for job type (in 20 cases), while we found little DIF for age, gender, education, social class, and sector. DIE seemed to pertain to particular items, which showed DIE in the same direction for several outcome variables. The results allowed a preliminary identification of items that have a positive impact on construct validity and items that have a negative impact on construct validity. These results can be used to develop better short-form measures and to improve the conceptual framework, items, and scales of the COPSOQ II. We conclude that tests of DIF and DIE are useful for evaluating construct validity.

  19. Development of an Axisymmetric Afterbody Test Case for Turbulent Flow Separation Validation

    NASA Technical Reports Server (NTRS)

    Disotell, Kevin J.; Rumsey, Christopher L.

    2017-01-01

    As identified in the CFD Vision 2030 Study commissioned by NASA, validation of advanced RANS models and scale-resolving methods for computing turbulent flows must be supported by improvements in high-quality experiments designed specifically for CFD implementation. A new test platform referred to as the Axisymmetric Afterbody allows for a range of flow behaviors to be studied on interchangeable afterbodies while facilitating access to higher Reynolds number facilities. A priori RANS computations are reported for a risk-reduction configuration to demonstrate critical variation among turbulence model results for a given afterbody, ranging from barely-attached to mild separated flow. The effects of body nose geometry and tunnel-wall boundary condition on the computed afterbody flow are explored to inform the design of an experimental test program.

  20. In-flight speech intelligibility evaluation of a service member with sensorineural hearing loss: case report.

    PubMed

    Casto, Kristen L; Cho, Timothy H

    2012-09-01

    This case report describes the in-flight speech intelligibility evaluation of an aircraft crewmember with pure tone audiometric thresholds that exceed the U.S. Army's flight standards. Results of in-flight speech intelligibility testing highlight the inability to predict functional auditory abilities from pure tone audiometry and underscore the importance of conducting validated functional hearing evaluations to determine aviation fitness-for-duty.

  1. Validation of Test Performance and Clinical Time Zero for an Electronic Health Record Embedded Severe Sepsis Alert

    PubMed Central

    Downing, N. Lance; Shepard, John; Chu, Weihan; Tam, Julia; Wessels, Alexander; Li, Ron; Dietrich, Brian; Rudy, Michael; Castaneda, Leon; Shieh, Lisa

    2016-01-01

    Summary Background: Increasing use of EHRs has generated interest in the potential of computerized clinical decision support to improve treatment of sepsis. Electronic sepsis alerts have had mixed results due to poor test characteristics, the inability to detect sepsis in a timely fashion, and the use of outside software limiting widespread adoption. We describe the development, evaluation, and validation of an accurate and timely severe sepsis alert with the potential to impact sepsis management. Objective: To develop, evaluate, and validate an accurate and timely severe sepsis alert embedded in a commercial EHR. Methods: The sepsis alert was developed by identifying the most common severe sepsis criteria among a cohort of patients with ICD-9 codes indicating a diagnosis of sepsis. This alert requires criteria in three categories: indicators of a systemic inflammatory response, evidence of suspected infection from physician orders, and markers of organ dysfunction. Chart review was used to evaluate test performance and the ability to detect clinical time zero, the point in time when a patient develops severe sepsis. Results: Two physicians reviewed 100 positive cases and 75 negative cases. Based on this review, sensitivity was 74.5%, specificity was 86.0%, the positive predictive value was 50.3%, and the negative predictive value was 94.7%. The most common source of end-organ dysfunction was MAP less than 70 mm Hg (59%). The alert was triggered at clinical time zero in 41% of cases and within three hours in 53.6% of cases. 96% of alerts triggered before a manual nurse screen. Conclusion: We are the first to report the time between a sepsis alert and physician chart-review clinical time zero. Incorporating physician orders in the alert criteria improves specificity while maintaining sensitivity, which is important to reduce alert fatigue. By leveraging standard EHR functionality, this alert could be implemented by other healthcare systems. PMID:27437061

  2. Validation of Test Performance and Clinical Time Zero for an Electronic Health Record Embedded Severe Sepsis Alert.

    PubMed

    Rolnick, Joshua; Downing, N Lance; Shepard, John; Chu, Weihan; Tam, Julia; Wessels, Alexander; Li, Ron; Dietrich, Brian; Rudy, Michael; Castaneda, Leon; Shieh, Lisa

    2016-01-01

    Increasing use of EHRs has generated interest in the potential of computerized clinical decision support to improve the treatment of sepsis. Electronic sepsis alerts have had mixed results due to poor test characteristics, the inability to detect sepsis in a timely fashion, and reliance on outside software, which limits widespread adoption. We describe the development, evaluation and validation of an accurate and timely severe sepsis alert with the potential to impact sepsis management. To develop, evaluate, and validate an accurate and timely severe sepsis alert embedded in a commercial EHR. The sepsis alert was developed by identifying the most common severe sepsis criteria among a cohort of patients with ICD-9 codes indicating a diagnosis of sepsis. This alert requires criteria in three categories: indicators of a systemic inflammatory response, evidence of suspected infection from physician orders, and markers of organ dysfunction. Chart review was used to evaluate test performance and the ability to detect clinical time zero, the point in time when a patient develops severe sepsis. Two physicians reviewed 100 positive cases and 75 negative cases. Based on this review, sensitivity was 74.5%, specificity was 86.0%, the positive predictive value was 50.3%, and the negative predictive value was 94.7%. The most common source of end-organ dysfunction was MAP less than 70 mmHg (59%). The alert was triggered at clinical time zero in 41% of cases and within three hours in 53.6% of cases, and it triggered before a manual nurse screen in 96% of cases. We are the first to report the time between a sepsis alert and physician chart-review clinical time zero. Incorporating physician orders in the alert criteria improves specificity while maintaining sensitivity, which is important for reducing alert fatigue. By leveraging standard EHR functionality, this alert could be implemented by other healthcare systems.
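    The test characteristics reported above follow mechanically from a 2×2 confusion matrix. As a quick illustration (the cell counts below are hypothetical, not the study's actual review counts):

    ```python
    def test_characteristics(tp, fp, fn, tn):
        """Screening-test metrics from 2x2 confusion-matrix cell counts."""
        sensitivity = tp / (tp + fn)   # true-positive rate among true cases
        specificity = tn / (tn + fp)   # true-negative rate among non-cases
        ppv = tp / (tp + fp)           # how trustworthy a positive alert is
        npv = tn / (tn + fn)           # how trustworthy a negative result is
        return sensitivity, specificity, ppv, npv

    # Hypothetical counts for illustration only:
    sens, spec, ppv, npv = test_characteristics(tp=38, fp=14, fn=13, tn=110)
    ```

    Note that PPV and NPV, unlike sensitivity and specificity, depend on the prevalence of sepsis in the reviewed sample, which is why a highly sensitive alert can still have a modest PPV.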

  3. Validation of a new algorithm for a quick and easy RT-PCR-based ALK test in a large series of lung adenocarcinomas: Comparison with FISH, immunohistochemistry and next generation sequencing assays.

    PubMed

    Marchetti, Antonio; Pace, Maria Vittoria; Di Lorito, Alessia; Canarecci, Sara; Felicioni, Lara; D'Antuono, Tommaso; Liberatore, Marcella; Filice, Giampaolo; Guetti, Luigi; Mucilli, Felice; Buttitta, Fiamma

    2016-09-01

    Anaplastic Lymphoma Kinase (ALK) gene rearrangements have been described in 3-5% of lung adenocarcinomas (ADC), and their identification is essential to select patients for treatment with ALK tyrosine kinase inhibitors. For several years, fluorescent in situ hybridization (FISH) has been considered the only validated diagnostic assay. Currently, alternative methods are commercially available as diagnostic tests. A series of 217 ADC comprising 196 consecutive resected tumors and 21 ALK FISH-positive cases from an independent series of 702 ADC were investigated. All specimens were screened by IHC (ALK-D5F3-CDx-Ventana), FISH (Vysis ALK Break-Apart-Abbott) and RT-PCR (ALK RGQ RT-PCR-Qiagen). Results were compared, and discordant cases were subjected to next-generation sequencing (NGS). Thirty-nine of 217 samples were positive by the ALK RGQ RT-PCR assay, using a threshold cycle (Ct) cut-off ≤35.9, as recommended. Of these positive samples, 14 were negative by IHC and 12 by FISH. ALK RGQ RT-PCR/FISH discordant cases were analyzed by the NGS assay, with results concordant with the FISH data. In order to obtain the maximum level of agreement between FISH and ALK RGQ RT-PCR data, we introduced a new scoring algorithm based on the ΔCt value. A ΔCt cut-off level ≤3.5 was used in a pilot series. The algorithm was then tested on a completely independent validation series. Using the new scoring algorithm and FISH as the reference standard, the sensitivity and specificity of the ALK RGQ RT-PCR(ΔCt) assay were both 100%. Our results suggest that the ALK RGQ RT-PCR test could be useful in clinical practice as a complementary assay in multi-test diagnostic algorithms or even, if our data are confirmed in independent studies, as a standalone or screening test for the selection of patients to be treated with ALK inhibitors. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  4. Development and psychometric testing of the Knowledge, Attitudes and Practices (KAP) questionnaire among student Tuberculosis (TB) Patients (STBP-KAPQ) in China.

    PubMed

    Fan, Yahui; Zhang, Shaoru; Li, Yan; Li, Yuelu; Zhang, Tianhua; Liu, Weiping; Jiang, Hualin

    2018-05-08

    TB outbreaks in schools are extremely complex and present a major challenge for public health. Understanding the knowledge, attitudes and practices among student TB patients in such settings is fundamental to decreasing future TB cases. The objective of this study was to develop a Knowledge, Attitudes and Practices Questionnaire among Student Tuberculosis Patients (STBP-KAPQ) and evaluate its psychometric properties. This study was conducted in three stages: item construction, pilot testing in 10 student TB patients, and psychometric testing, including reliability and validity. The item pool for the questionnaire was compiled from a literature review and early individual interviews. The questionnaire items were evaluated by the Delphi method using a panel of 12 experts. Reliability and validity were assessed using student TB patients (n = 416) and healthy students (n = 208). Reliability was examined with internal consistency reliability and test-retest reliability. Content validity was calculated by the content validity index (CVI); construct validity was examined using exploratory factor analysis (EFA) and confirmatory factor analysis (CFA); the Public Tuberculosis Knowledge, Attitudes and Practices Questionnaire (PTB-KAPQ) was applied to evaluate criterion validity; for discriminant validity, t-tests were performed. The final STBP-KAPQ consisted of three dimensions and 25 items. Cronbach's α coefficient and the intraclass correlation coefficient (ICC) were 0.817 and 0.765, respectively. The content validity index (CVI) was 0.962. Seven common factors were extracted by principal factor analysis and varimax rotation, with a cumulative contribution of 66.253%. The resulting CFA model of the STBP-KAPQ exhibited an appropriate model fit (χ2/df = 1.74, RMSEA = 0.082, CFI = 0.923, NNFI = 0.962). The STBP-KAPQ and PTB-KAPQ had a strong correlation in the knowledge part, with a correlation coefficient of 0.606 (p < 0.05). Discriminant validity was supported by a significant difference between student TB patients and healthy students across all domains (p < 0.05). An instrument, the "Knowledge, Attitudes and Practices Questionnaire among Student Tuberculosis Patients (STBP-KAPQ)", was developed. Psychometric testing indicated that it has adequate validity and reliability for use in KAP research with student TB patients in China. The new tool might help public health researchers evaluate the level of KAP in student TB patients, and it could also be used to examine the effects of TB health education.
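    Cronbach's α, the internal-consistency statistic reported above, can be computed directly from item-level scores. A minimal sketch (population variances; the data shapes are illustrative, not the study's):

    ```python
    def cronbach_alpha(items):
        """Cronbach's alpha from item-score columns: items[i][j] is item i's
        score for respondent j (same respondents, same order, in every column)."""
        k = len(items)
        def pvar(xs):                       # population variance
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / len(xs)
        item_var_sum = sum(pvar(col) for col in items)
        totals = [sum(col[j] for col in items) for j in range(len(items[0]))]
        return k / (k - 1) * (1 - item_var_sum / pvar(totals))
    ```

    Intuitively, α approaches 1 when the item scores move together (total-score variance dominates the summed item variances), which is what a value such as the reported 0.817 indicates.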

  5. Sociopathic Knowledge Bases: Correct Knowledge Can Be Harmful Even Given Unlimited Computation

    DTIC Science & Technology

    1989-08-01

    positive, as false positives generated by a medical program can often be caught by a physician upon further testing. False negatives, however, may be...improvement over the knowledge base tested is obtained. Although our work is largely theoretical research, one example of experiments is...knowledge base, improves the performance by about 10%. of tests. First, we divide the cases into a training set and a validation set with 70% vs. 30% each
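    The 70/30 partition mentioned in the snippet is the standard hold-out procedure; a minimal sketch (function and seed names are illustrative):

    ```python
    import random

    def split_cases(cases, train_frac=0.7, seed=0):
        """Shuffle and split cases into a training set and a validation set
        (70% vs. 30% by default), as the snippet describes."""
        rng = random.Random(seed)           # fixed seed for reproducibility
        shuffled = cases[:]
        rng.shuffle(shuffled)
        cut = int(len(shuffled) * train_frac)
        return shuffled[:cut], shuffled[cut:]

    train, valid = split_cases(list(range(100)))
    ```

    The training set drives knowledge-base refinement and the held-out validation set estimates the performance gain (the roughly 10% improvement the snippet cites) on unseen cases.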

  6. Validation: Codes to compare simulation data to various observations

    NASA Astrophysics Data System (ADS)

    Cohn, J. D.

    2017-02-01

    Validation provides codes to compare simulated data with several observations: simulated stellar mass and star formation rate; the simulated stellar mass function against the observed stellar mass function from PRIMUS or SDSS-GALEX in several redshift bins spanning 0.01-1.0; and the simulated B-band luminosity function against the observed B-band luminosity function. It also creates plots for various attributes, including stellar mass functions and stellar mass to halo mass. These codes can model predictions (in some cases alongside observational data) to test other mock catalogs.

  7. Generator Dynamic Model Validation and Parameter Calibration Using Phasor Measurements at the Point of Connection

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Huang, Zhenyu; Du, Pengwei; Kosterev, Dmitry

    2013-05-01

    Disturbance data recorded by phasor measurement units (PMUs) offer opportunities to improve the integrity of dynamic models. However, manually tuning parameters through play-back of events demands significant effort and engineering experience. In this paper, a calibration method using the extended Kalman filter (EKF) technique is proposed. The formulation of the EKF with parameter calibration is discussed. Case studies are presented to demonstrate its validity. The proposed calibration method is cost-effective and complementary to traditional equipment testing for improving dynamic model quality.
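    The core idea of EKF-based parameter calibration is to augment the state vector with the unknown parameter so the filter refines both together from measurements. As a rough illustration (a toy scalar system x[k+1] = a·x[k] with measurements z = x + v, entirely hypothetical and not the paper's generator formulation):

    ```python
    import random

    def calibrate_parameter(zs, a_init=0.5, q=1e-6, r=0.01):
        """Joint EKF sketch: augment the state with the unknown parameter a of
        x[k+1] = a * x[k] and refine it from noisy measurements z[k] = x[k] + v.
        A toy stand-in for PMU-based model calibration, not the paper's method."""
        x = [zs[0], a_init]                    # state estimate [x, a]
        P = [[1.0, 0.0], [0.0, 1.0]]           # estimate covariance
        for z in zs[1:]:
            a = x[1]
            F = [[a, x[0]], [0.0, 1.0]]        # Jacobian of f(x, a) = (a*x, a)
            x = [a * x[0], a]                  # predict state
            FP = [[sum(F[i][k] * P[k][j] for k in range(2)) for j in range(2)]
                  for i in range(2)]
            P = [[sum(FP[i][k] * F[j][k] for k in range(2)) + (q if i == j else 0.0)
                  for j in range(2)] for i in range(2)]
            s = P[0][0] + r                    # innovation variance (H = [1, 0])
            K = [P[0][0] / s, P[1][0] / s]     # Kalman gain
            innov = z - x[0]
            x = [x[0] + K[0] * innov, x[1] + K[1] * innov]
            P = [[P[i][j] - K[i] * P[0][j] for j in range(2)] for i in range(2)]
        return x[1]                            # calibrated parameter estimate

    # Simulated decaying response with true a = 0.9 and light measurement noise:
    rng = random.Random(1)
    x_true, zs = 10.0, []
    for _ in range(40):
        zs.append(x_true + rng.gauss(0, 0.05))
        x_true *= 0.9
    a_hat = calibrate_parameter(zs)
    ```

    The cross-covariance between x and a is what lets measurement innovations correct the parameter, which is the same mechanism the paper exploits with recorded disturbance data instead of a simulated trace.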

  8. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    PubMed

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  9. Postmortem diagnosis and toxicological validation of illicit substance use

    PubMed Central

    Lehrmann, E; Afanador, ZR; Deep-Soboslay, A; Gallegos, G; Darwin, WD; Lowe, RH; Barnes, AJ; Huestis, MA; Cadet, JL; Herman, MM; Hyde, TM; Kleinman, JE; Freed, WJ

    2008-01-01

    The present study examines the diagnostic challenges of identifying ante-mortem illicit substance use in human postmortem cases. Substance use, assessed by clinical case history reviews, structured next-of-kin interviews, by general toxicology of blood, urine, and/or brain, and by scalp hair testing, identified 33 cocaine, 29 cannabis, 10 phencyclidine and 9 opioid cases. Case history identified 42% cocaine, 76% cannabis, 10% phencyclidine, and 33% opioid cases. Next-of-kin interviews identified almost twice as many cocaine and cannabis cases as Medical Examiner (ME) case histories, and were crucial in establishing a detailed lifetime substance use history. Toxicology identified 91% cocaine, 68% cannabis, 80% phencyclidine, and 100% opioid cases, with hair testing increasing detection for all drug classes. A cocaine or cannabis use history was corroborated by general toxicology with 50% and 32% sensitivity, respectively, and with 82% and 64% sensitivity by hair testing. Hair testing corroborated a positive general toxicology for cocaine and cannabis with 91% and 100% sensitivity, respectively. Case history corroborated hair toxicology with 38% sensitivity for cocaine and 79% sensitivity for cannabis, suggesting that both case history and general toxicology underestimated cocaine use. Identifying ante-mortem substance use in human postmortem cases is a key consideration in case diagnosis and in characterizing disorder-specific changes in neurobiology. The sensitivity and specificity of substance use assessments increased when ME case history was supplemented with structured next-of-kin interviews to establish a detailed lifetime substance use history, while comprehensive toxicology, and hair testing in particular, increased detection of recent illicit substance use. PMID:18201295

  10. Supersonic Retropropulsion CFD Validation with Ames Unitary Plan Wind Tunnel Test Data

    NASA Technical Reports Server (NTRS)

    Schauerhamer, Daniel G.; Zarchi, Kerry A.; Kleb, William L.; Edquist, Karl T.

    2013-01-01

    A validation study of Computational Fluid Dynamics (CFD) for Supersonic Retropropulsion (SRP) was conducted using three Navier-Stokes flow solvers (DPLR, FUN3D, and OVERFLOW). The study compared results from the CFD codes to each other and also to wind tunnel test data obtained in the NASA Ames Research Center 9- by 7-Foot Unitary Plan Wind Tunnel. Comparisons include surface pressure coefficient as well as unsteady plume effects, and cover a range of Mach numbers, levels of thrust, and angles of orientation. The comparisons show promising capability of CFD to simulate SRP, and the best agreement with the tunnel data exists for the steadier cases of the 1-nozzle and high-thrust 3-nozzle configurations.

  11. Validation of a school-based amblyopia screening protocol in a kindergarten population.

    PubMed

    Casas-Llera, Pilar; Ortega, Paula; Rubio, Inmaculada; Santos, Verónica; Prieto, María J; Alio, Jorge L

    2016-08-04

    To validate a school-based amblyopia screening program model by comparing its outcomes to those of a state-of-the-art conventional ophthalmic clinic examination in a kindergarten population of children between the ages of 4 and 5 years. An amblyopia screening protocol, which consisted of visual acuity measurement using Lea charts, an ocular alignment test, ocular motility assessment, and stereoacuity with the TNO random-dot test, was performed at school in a pediatric 4- to 5-year-old population by qualified healthcare professionals. The outcomes were validated in a selected group by a conventional ophthalmologic examination performed in a fully equipped ophthalmologic center. The ophthalmologic evaluation was used to confirm whether or not children were correctly classified by the screening protocol. The sensitivity and specificity of the test model to detect amblyopia were established. A total of 18,587 4- to 5-year-old children underwent the amblyopia screening program during the 2010-2011 school year. A population of 100 children was selected for the ophthalmologic validation screening. A sensitivity of 89.3%, specificity of 93.1%, positive predictive value of 83.3%, negative predictive value of 95.7%, positive likelihood ratio of 12.86, and negative likelihood ratio of 0.12 were obtained for the amblyopia screening validation model. The amblyopia screening protocol model tested in this investigation shows high sensitivity and specificity in detecting high-risk cases of amblyopia compared to the standard ophthalmologic examination. This screening program may be highly relevant for amblyopia screening at schools.
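    The likelihood ratios quoted above follow directly from sensitivity and specificity; small differences from the published figures come from rounding the inputs:

    ```python
    def likelihood_ratios(sensitivity, specificity):
        """Positive and negative likelihood ratios of a screening test."""
        lr_pos = sensitivity / (1 - specificity)   # factor by which a positive result raises disease odds
        lr_neg = (1 - sensitivity) / specificity   # factor by which a negative result lowers disease odds
        return lr_pos, lr_neg

    # Using the rounded sensitivity (89.3%) and specificity (93.1%) reported above:
    lr_pos, lr_neg = likelihood_ratios(0.893, 0.931)
    ```

    Unlike predictive values, likelihood ratios do not depend on amblyopia prevalence, which makes them the more portable summary when a school-based screen is applied to populations with different base rates.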

  12. Investigation of the effects of braking system configurations on thermal input to commuter car wheels

    DOT National Transportation Integrated Search

    1996-03-01

    A heat transfer model, previously developed to estimate wheel rim temperatures during tread braking of MU power cars and validated by comparison with operational test results, is extended and applied to cases involving several different blended brak...

  13. Test Cases for Modeling and Validation of Structures with Piezoelectric Actuators

    NASA Technical Reports Server (NTRS)

    Reaves, Mercedes C.; Horta, Lucas G.

    2001-01-01

    A set of benchmark test articles was developed to validate techniques for modeling structures containing piezoelectric actuators using commercially available finite element analysis packages. The paper presents the development, modeling, and testing of two structures: an aluminum plate with surface-mounted patch actuators and a composite box beam with surface-mounted actuators. Three approaches for modeling structures containing piezoelectric actuators using the commercially available packages MSC/NASTRAN and ANSYS are presented. The approaches, applications, and limitations are discussed. Data for both test articles are compared in terms of frequency response functions from deflection and strain data to input voltage to the actuator. Frequency response function results using the three different analysis approaches provided comparable test/analysis results. It is shown that global versus local behavior of the analytical model and test article must be considered when comparing different approaches. Also, improper bonding of actuators greatly reduces the electrical-to-mechanical effectiveness of the actuators, producing anti-resonance errors.

  14. Experimental validation of clock synchronization algorithms

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Graham, R. Lynn

    1992-01-01

    The objective of this work is to validate mathematically derived clock synchronization theories and their associated algorithms through experiment. Two theories are considered, the Interactive Convergence Clock Synchronization Algorithm and the Midpoint Algorithm. Special clock circuitry was designed and built so that several operating conditions and failure modes (including malicious failures) could be tested. Both theories are shown to predict conservative upper bounds (i.e., measured values of clock skew were always less than the theory prediction). Insight gained during experimentation led to alternative derivations of the theories. These new theories accurately predict the behavior of the clock system. It is found that a 100 percent penalty is paid to tolerate worst-case failures. It is also shown that under optimal conditions (with minimum error and no failures) the clock skew can be as much as three clock ticks. Clock skew grows to six clock ticks when failures are present. Finally, it is concluded that one cannot rely solely on test procedures or theoretical analysis to predict worst-case conditions.
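    The Midpoint Algorithm named above has a compact fault-tolerant textbook form, which can be sketched as follows. This is the generic version, assuming up to f faulty clocks among the readings, not the paper's exact derivation:

    ```python
    def midpoint_target(skews, f):
        """Fault-tolerant midpoint: sort the observed skews to the other clocks,
        discard the f largest and f smallest readings (which may come from
        faulty, even malicious, clocks), and return the midpoint of the
        surviving extremes as the correction target."""
        s = sorted(skews)
        trimmed = s[f:len(s) - f]
        return (trimmed[0] + trimmed[-1]) / 2
    ```

    Discarding the f extreme readings is what bounds the influence of malicious failures: a faulty clock can at worst push the surviving extremes, not dominate them, which is consistent with the experiment's finding that tolerating worst-case failures costs extra skew.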

  15. An Improved Algorithm of Congruent Matching Cells (CMC) Method for Firearm Evidence Identifications

    PubMed Central

    Tong, Mingsi; Song, John; Chu, Wei

    2015-01-01

    The Congruent Matching Cells (CMC) method was invented at the National Institute of Standards and Technology (NIST) for firearm evidence identifications. The CMC method divides the measured image of a surface area, such as a breech face impression from a fired cartridge case, into small correlation cells and uses four identification parameters to identify correlated cell pairs originating from the same firearm. The CMC method was validated by identification tests using both 3D topography images and optical images captured from breech face impressions of 40 cartridge cases fired from a pistol with 10 consecutively manufactured slides. In this paper, we discuss the processing of the cell correlations and propose an improved algorithm of the CMC method which takes advantage of the cell correlations at a common initial phase angle and combines the forward and backward correlations to improve the identification capability. The improved algorithm is tested by 780 pairwise correlations using the same optical images and 3D topography images as the initial validation. PMID:26958441
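    The cell-division idea at the heart of CMC can be caricatured in a few lines. This toy sketch correlates corresponding cells of two aligned images only; the real method also searches for each cell's registration and checks that the registration parameters (x, y, θ) of matching cells are congruent, which is omitted here:

    ```python
    def pearson(a, b):
        """Pearson correlation of two equal-length score lists."""
        n = len(a)
        ma, mb = sum(a) / n, sum(b) / n
        cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
        va = sum((x - ma) ** 2 for x in a)
        vb = sum((y - mb) ** 2 for y in b)
        return cov / (va * vb) ** 0.5

    def congruent_cells(img_a, img_b, cell=4, threshold=0.8):
        """Toy CMC sketch: split two equal-size 2D surface images into square
        cells, correlate corresponding cells, and count the cells whose
        correlation clears the threshold."""
        rows, cols = len(img_a), len(img_a[0])
        count = 0
        for r in range(0, rows - cell + 1, cell):
            for c in range(0, cols - cell + 1, cell):
                pa = [img_a[i][j] for i in range(r, r + cell) for j in range(c, c + cell)]
                pb = [img_b[i][j] for i in range(r, r + cell) for j in range(c, c + cell)]
                count += pearson(pa, pb) >= threshold
        return count
    ```

    Working cell by cell rather than on the whole image is what makes the method robust to the partially damaged or poorly marked regions common on fired cartridge cases: a few bad cells lower the congruent-cell count instead of corrupting a single global correlation score.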

  16. An Improved Algorithm of Congruent Matching Cells (CMC) Method for Firearm Evidence Identifications.

    PubMed

    Tong, Mingsi; Song, John; Chu, Wei

    2015-01-01

    The Congruent Matching Cells (CMC) method was invented at the National Institute of Standards and Technology (NIST) for firearm evidence identifications. The CMC method divides the measured image of a surface area, such as a breech face impression from a fired cartridge case, into small correlation cells and uses four identification parameters to identify correlated cell pairs originating from the same firearm. The CMC method was validated by identification tests using both 3D topography images and optical images captured from breech face impressions of 40 cartridge cases fired from a pistol with 10 consecutively manufactured slides. In this paper, we discuss the processing of the cell correlations and propose an improved algorithm of the CMC method which takes advantage of the cell correlations at a common initial phase angle and combines the forward and backward correlations to improve the identification capability. The improved algorithm is tested by 780 pairwise correlations using the same optical images and 3D topography images as the initial validation.

  17. Experience with Aero- and Fluid-Dynamic Testing for Engineering and CFD Validation

    NASA Technical Reports Server (NTRS)

    Ross, James C.

    2016-01-01

    Ever since computations have been used to simulate aerodynamics, the need to ensure that the computations adequately represent real life has followed. Many experiments have been performed specifically for validation, and as computational methods have improved, so have the validation experiments. Validation is also a moving target because computational methods improve, requiring validation for the new aspects of flow physics that the computations aim to capture. Concurrently, new measurement techniques are being developed that can help capture more detailed flow features; pressure-sensitive paint (PSP) and particle image velocimetry (PIV) come to mind. This paper will present various wind-tunnel tests the author has been involved with and how they were used for validation of various kinds of CFD. A particular focus is the application of advanced measurement techniques to flow fields (and geometries) that had proven to be difficult to predict computationally. Many of these difficult flow problems arose from engineering and development problems that needed to be solved for a particular vehicle or research program. In some cases the experiments required to solve the engineering problems were refined to provide valuable CFD validation data in addition to the primary engineering data. All of these experiments have provided physical insight and validation data for a wide range of aerodynamic and acoustic phenomena for vehicles ranging from tractor-trailers to crewed spacecraft.

  18. Empirical Derivation and Validation of a Clinical Case Definition for Neuropsychological Impairment in Children and Adolescents.

    PubMed

    Beauchamp, Miriam H; Brooks, Brian L; Barrowman, Nick; Aglipay, Mary; Keightley, Michelle; Anderson, Peter; Yeates, Keith O; Osmond, Martin H; Zemek, Roger

    2015-09-01

    Neuropsychological assessment aims to identify individual performance profiles in multiple domains of cognitive functioning; however, substantial variation exists in how deficits are defined and what cutoffs are used, and there is no universally accepted definition of neuropsychological impairment. The aim of this study was to derive and validate a clinical case definition rule to identify neuropsychological impairment in children and adolescents. An existing normative pediatric sample was used to calculate base rates of abnormal functioning on eight measures covering six domains of neuropsychological functioning. The dataset was analyzed by varying the range of cutoff levels [1, 1.5, and 2 standard deviations (SDs) below the mean] and the number of indicators of impairment. The derived rule was evaluated by bootstrap, internal, and external clinical validation (orthopedic and traumatic brain injury). Our neuropsychological impairment (NPI) rule was defined as "two or more test scores that fall 1.5 SDs below the mean." The rule identifies 5.1% of the total sample as impaired in the assessment battery and consistently targets between 3 and 7% of the population as impaired even when age, domains, and number of tests are varied. The NPI rate increases in groups known to exhibit cognitive deficits. The NPI rule provides a psychometrically derived method for interpreting performance across multiple tests and may be used in children aged 6-18 years. The rule may be useful to clinicians and scientists who wish to establish whether specific individuals or clinical populations perform within expected norms or show impaired function across a battery of neuropsychological tests.
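    The derived rule reduces to a one-line check over a profile of standardized test scores, treating "falls 1.5 SDs below the mean" as z ≤ -1.5 (the inclusive reading is an assumption; the abstract does not specify boundary handling):

    ```python
    def npi_impaired(z_scores, cutoff=-1.5, min_count=2):
        """NPI rule from the abstract: classify as impaired if at least two
        test scores fall 1.5 SDs or more below the normative mean."""
        return sum(z <= cutoff for z in z_scores) >= min_count
    ```

    Requiring two low scores rather than one is what keeps the base rate of "impairment" in a healthy normative sample near the reported 3-7%, since isolated low scores are common in any multi-test battery.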

  19. Use of clinical movement screening tests to predict injury in sport

    PubMed Central

    Chimera, Nicole J; Warren, Meghan

    2016-01-01

    Clinical movement screening tests are gaining popularity as a means to determine injury risk and to implement training programs to prevent sport injury. While these screens are used readily in the clinical field, it is only recently that some of them have started to gain attention from a research perspective. This limits applicability and raises questions about the validity, and in some cases the reliability, of the clinical movement tests as they relate to injury prediction, intervention, and prevention. This editorial will review the following clinical movement screening tests: Functional Movement Screen™, Star Excursion Balance Test, Y Balance Test, Drop Jump Screening Test, Landing Error Scoring System, and the Tuck Jump Analysis with regard to test administration, reliability, validity, factors that affect test performance, intervention programs, and usefulness for injury prediction. It is important to review the aforementioned factors for each of these clinical screening tests, as this may help clinicians interpret the current body of literature. While each of these screening tests was developed by clinicians on the basis of what appears to be clinical practice, this paper brings to light that there is a need for collaboration between clinicians and researchers to ensure the validity of clinically meaningful tests so that they are used appropriately in future clinical practice. Further, this editorial may help to identify where the research is lacking and, thus, drive future research questions regarding the applicability and appropriateness of clinical movement screening tools. PMID:27114928

  20. Development and Validation of a Predictive Model to Identify Individuals Likely to Have Undiagnosed Chronic Obstructive Pulmonary Disease Using an Administrative Claims Database.

    PubMed

    Moretz, Chad; Zhou, Yunping; Dhamane, Amol D; Burslem, Kate; Saverno, Kim; Jain, Gagan; Devercelli, Giovanna; Kaila, Shuchita; Ellis, Jeffrey J; Hernandez, Gemzel; Renda, Andrew

    2015-12-01

    Despite the importance of early detection, delayed diagnosis of chronic obstructive pulmonary disease (COPD) is relatively common. Approximately 12 million people in the United States have undiagnosed COPD. Diagnosis of COPD is essential for the timely implementation of interventions, such as smoking cessation programs, drug therapies, and pulmonary rehabilitation, which are aimed at improving outcomes and slowing disease progression. To develop and validate a predictive model to identify patients likely to have undiagnosed COPD using administrative claims data. A predictive model was developed and validated utilizing a retrospective cohort of patients with and without a COPD diagnosis (cases and controls), aged 40-89, with a minimum of 24 months of continuous health plan enrollment (Medicare Advantage Prescription Drug [MAPD] and commercial plans), identified between January 1, 2009, and December 31, 2012, using Humana's claims database. Stratified random sampling based on plan type (commercial or MAPD) and index year was performed to ensure that cases and controls had a similar distribution of these variables. Cases and controls were compared to identify demographic, clinical, and health care resource utilization (HCRU) characteristics associated with a COPD diagnosis. Stepwise logistic regression (SLR), neural networks, and decision trees were used to develop a series of models. The models were trained, validated, and tested on randomly partitioned subsets of the sample (Training, Validation, and Test data subsets). Measures used to evaluate and compare the models included the area under the curve (AUC) of the receiver operating characteristic (ROC) curve; sensitivity; specificity; positive predictive value (PPV); and negative predictive value (NPV). The optimal model was selected based on the AUC on the Test data subset. A total of 50,880 cases and 50,880 controls were included, with MAPD patients comprising 92% of the study population. 
Compared with controls, cases had a statistically significantly higher comorbidity burden and HCRU (including hospitalizations, emergency room visits, and medical procedures). The optimal predictive model was generated using SLR, which included 34 variables that were statistically significantly associated with a COPD diagnosis. After adjusting for covariates, anticholinergic bronchodilators (OR = 3.336) and tobacco cessation counseling (OR = 2.871) were found to have a large influence on the model. The final predictive model had an AUC of 0.754, sensitivity of 60%, specificity of 78%, PPV of 73%, and an NPV of 66%. This claims-based predictive model provides an acceptable level of accuracy in identifying patients likely to have undiagnosed COPD in a large national health plan. Identification of patients with undiagnosed COPD may enable timely management and lead to improved health outcomes and reduced COPD-related health care expenditures.
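    The AUC reported above has a useful probabilistic reading: it is the chance that a randomly chosen case receives a higher model score than a randomly chosen control. A minimal rank-based computation (the scores are illustrative, not study data):

    ```python
    def auc(case_scores, control_scores):
        """AUC via the Mann-Whitney U statistic: the probability that a random
        case outscores a random control, counting ties as half."""
        wins = 0.0
        for c in case_scores:
            for n in control_scores:
                wins += 1.0 if c > n else 0.5 if c == n else 0.0
        return wins / (len(case_scores) * len(control_scores))

    # Illustrative scores only:
    example_auc = auc([0.9, 0.7, 0.6], [0.8, 0.4, 0.3])
    ```

    Under this reading, the final model's AUC of 0.754 means the model ranks a randomly drawn undiagnosed-COPD case above a randomly drawn control about three times out of four.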

  1. Crucial aspects of high performance thin layer chromatography quantitative validation. The case of determination of rosmarinic acid in different matrices.

    PubMed

    Coran, Silvia A; Mulas, Stefano; Mulinacci, Nadia

    2012-01-13

    A new HPTLC method was envisaged to determine rosmarinic acid (RA) in different matrices, with the aim of testing the influence of optimizing the main HPTLC operative parameters in view of a more stringent validation process. HPTLC LiChrospher silica gel 60 F254s plates, 20 cm × 10 cm, with toluene:ethyl formate:formic acid (6:4:1, v/v) as the mobile phase were used. Densitometric determinations were performed in reflectance mode at 330 nm. The method was validated, giving rise to a dependable, high-throughput procedure well suited to routine applications. RA was quantified in the range of 132-660 ng, with RSDs of repeatability and intermediate precision not exceeding 2.0% and accuracy within the acceptance limits. The method was tested on several commercial preparations containing RA in different amounts. Copyright © 2011 Elsevier B.V. All rights reserved.

  2. A biopsychosocial vignette for case conceptualization in dementia (VIG-Dem): development and pilot study.

    PubMed

    Spector, Aimee; Hebditch, Molly; Stoner, Charlotte R; Gibbor, Luke

    2016-09-01

    The ability to identify biological, social, and psychological issues for people with dementia is an important skill for healthcare professionals. Therefore, valid and reliable measures are needed to assess this ability. This study involves the development of a vignette-style measure to capture the extent to which health professionals use "Biopsychosocial" thinking in dementia care (VIG-Dem), based on the framework of the model developed by Spector and Orrell (2010). The development process consisted of Phase 1: Developing and refining the vignettes; Phase 2: Field testing (N = 9), and Phase 3: A pilot study to assess reliability and validity (N = 131). The VIG-Dem, consisting of two vignettes with open-ended questions and a standardized scoring scheme, was developed. Evidence of good inter-rater reliability, convergent validity, and test-retest reliability was established. The VIG-Dem has good psychometric properties and may provide a useful tool in dementia care research and practice.

  3. Flight Test 4 Preliminary Results: NASA Ames SSI

    NASA Technical Reports Server (NTRS)

    Isaacson, Doug; Gong, Chester; Reardon, Scott; Santiago, Confesor

    2016-01-01

    Realization of the expected proliferation of Unmanned Aircraft System (UAS) operations in the National Airspace System (NAS) depends on the development and validation of performance standards for UAS Detect and Avoid (DAA) systems. The RTCA Special Committee 228 is charged with leading the development of draft Minimum Operational Performance Standards (MOPS) for UAS DAA systems. NASA, as a participating member of RTCA SC-228, is committed to supporting the development and validation of draft requirements as well as the safety substantiation and end-to-end assessment of DAA system performance. The Unmanned Aircraft System (UAS) Integration into the National Airspace System (NAS) Project conducted a flight test program, referred to as Flight Test 4, at Armstrong Flight Research Center from April to June 2016. Some of the test flights were dedicated to the NASA Ames-developed Detect and Avoid (DAA) system referred to as JADEM (Java Architecture for DAA Extensibility and Modeling). The encounter scenarios, which involved NASA's Ikhana UAS and a manned intruder aircraft, were designed to collect data on DAA system performance under real-world conditions and uncertainties with four different surveillance sensor systems. Flight Test 4 had four objectives: (1) validate DAA requirements in stressing cases that drive MOPS requirements, including a high-speed cooperative intruder, a low-speed non-cooperative intruder, a high vertical closure rate encounter, and a Mode C-only intruder (i.e., without ADS-B); (2) validate the TCAS/DAA alerting and guidance interoperability concept in the presence of realistic sensor, tracking, and navigational errors and in multiple-intruder encounters against both cooperative and non-cooperative intruders; (3) validate Well Clear Recovery guidance in the presence of realistic sensor, tracking, and navigational errors; and (4) validate DAA alerting and guidance requirements in the presence of realistic sensor, tracking, and navigational errors. 
The results will be presented at RTCA Special Committee 228 in support of final verification and validation of the DAA MOPS.

  4. Acoustic-Structure Interaction in Rocket Engines: Validation Testing

    NASA Technical Reports Server (NTRS)

    Davis, R. Benjamin; Joji, Scott S.; Parks, Russel A.; Brown, Andrew M.

    2009-01-01

    While analyzing a rocket engine component, it is often necessary to account for any effects that adjacent fluids (e.g., liquid fuels or oxidizers) might have on the structural dynamics of the component. To better characterize the fully coupled fluid-structure system responses, an analytical approach that models the system as a coupled expansion of rigid wall acoustic modes and in vacuo structural modes has been proposed. The present work seeks to experimentally validate this approach. To experimentally observe well-coupled system modes, the test article and fluid cavities are designed such that the uncoupled structural frequencies are comparable to the uncoupled acoustic frequencies. The test measures the natural frequencies, mode shapes, and forced response of cylindrical test articles in contact with fluid-filled cylindrical and/or annular cavities. The test article is excited with a stinger and the fluid-loaded response is acquired using a laser Doppler vibrometer. The experimentally determined fluid-loaded natural frequencies are compared directly to the results of the analytical model. Due to the geometric configuration of the test article, the analytical model is found to be valid for natural modes with circumferential wave numbers greater than four. In the case of these modes, the natural frequencies predicted by the analytical model demonstrate excellent agreement with the experimentally determined natural frequencies.

  5. The script concordance test in radiation oncology: validation study of a new tool to assess clinical reasoning

    PubMed Central

    Lambert, Carole; Gagnon, Robert; Nguyen, David; Charlin, Bernard

    2009-01-01

    Background The Script Concordance Test (SCT) is a reliable and valid tool to evaluate clinical reasoning in complex situations where experts' opinions may be divided. Scores reflect the degree of concordance between the performance of examinees and that of a reference panel of experienced physicians. The purpose of this study is to demonstrate the SCT's usefulness in radiation oncology. Methods A 90-item radiation oncology SCT was administered to 155 participants. Three levels of experience were tested: medical students (n = 70), radiation oncology residents (n = 38) and radiation oncologists (n = 47). Statistical tests were performed to assess reliability and to document validity. Results After item optimization, the test comprised 30 cases and 70 questions. Cronbach alpha was 0.90. Mean scores were 51.62 (± 8.19) for students, 71.20 (± 9.45) for residents and 76.67 (± 6.14) for radiation oncologists. The difference among the three groups was statistically significant when compared by the Kruskal-Wallis test (p < 0.001). Conclusion The SCT is reliable and useful to discriminate among participants according to their level of experience in radiation oncology. It appears to be a useful tool to document the progression of reasoning during residency training. PMID:19203358
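The group comparison above is a standard Kruskal-Wallis test over three independent samples. A minimal sketch using `scipy.stats.kruskal` follows; the scores are synthetic draws around the reported group means and standard deviations, purely an illustrative assumption, not the study data.

```python
import numpy as np
from scipy.stats import kruskal

rng = np.random.default_rng(0)
# Synthetic scores shaped like the reported groups (means, SDs, and sizes)
students = rng.normal(51.62, 8.19, 70)
residents = rng.normal(71.20, 9.45, 38)
oncologists = rng.normal(76.67, 6.14, 47)

# Kruskal-Wallis H-test: nonparametric comparison of the three groups
h_stat, p_value = kruskal(students, residents, oncologists)
print(f"H = {h_stat:.1f}, p = {p_value:.3g}")
```

For groups this well separated, the p-value is far below the 0.001 threshold reported in the study.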

  6. Verification and Validation in a Rapid Software Development Process

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Easterbrook, Steve M.

    1997-01-01

    The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in the use of new methods in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing on the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.

  7. A design of experiments approach to validation sampling for logistic regression modeling with error-prone medical records.

    PubMed

    Ouyang, Liwen; Apley, Daniel W; Mehrotra, Sanjay

    2016-04-01

    Electronic medical record (EMR) databases offer significant potential for developing clinical hypotheses and identifying disease risk associations by fitting statistical models that capture the relationship between a binary response variable and a set of predictor variables that represent clinical, phenotypical, and demographic data for the patient. However, EMR response data may be error prone for a variety of reasons. Performing a manual chart review to validate data accuracy is time consuming, which limits the number of chart reviews in a large database. The authors' objective is to develop a new design-of-experiments-based systematic chart validation and review (DSCVR) approach that is more powerful than the random validation sampling used in existing approaches. The DSCVR approach judiciously and efficiently selects the cases to validate (i.e., validate whether the response values are correct for those cases) for maximum information content, based only on their predictor variable values. The final predictive model will be fit using only the validation sample, ignoring the remainder of the unvalidated and unreliable error-prone data. A Fisher information based D-optimality criterion is used, and an algorithm for optimizing it is developed. The authors' method is tested in a simulation comparison that is based on a sudden cardiac arrest case study with 23 041 patients' records. This DSCVR approach, using the Fisher information based D-optimality criterion, results in a fitted model with much better predictive performance, as measured by the receiver operating characteristic curve and the accuracy in predicting whether a patient will experience the event, than a model fitted using a random validation sample. The simulation comparisons demonstrate that this DSCVR approach can produce predictive models that are significantly better than those produced from random validation sampling, especially when the event rate is low. © The Author 2015. 
Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
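The DSCVR idea above selects which records to validate by maximizing a Fisher-information D-optimality criterion. A minimal greedy sketch follows, under assumptions not taken from the paper: synthetic predictors, an assumed pilot coefficient vector to form the logistic weights p(1−p), and a simple one-at-a-time selection rule rather than the authors' algorithm.

```python
import numpy as np

rng = np.random.default_rng(1)
# Synthetic EMR predictors: intercept plus 3 covariates for 500 records
X = np.hstack([np.ones((500, 1)), rng.normal(size=(500, 3))])
beta_pilot = np.array([-2.0, 0.8, -0.5, 0.3])   # assumed pilot model fit
p = 1.0 / (1.0 + np.exp(-X @ beta_pilot))
w = p * (1.0 - p)                                # logistic Fisher weights

def greedy_d_optimal(X, w, n_select):
    """Greedily pick the case that most increases det(X' W X)."""
    selected, remaining = [], list(range(len(X)))
    info = 1e-6 * np.eye(X.shape[1])  # small ridge so early determinants are nonzero
    for _ in range(n_select):
        gains = [np.linalg.det(info + w[i] * np.outer(X[i], X[i]))
                 for i in remaining]
        best = remaining[int(np.argmax(gains))]
        info += w[best] * np.outer(X[best], X[best])
        selected.append(best)
        remaining.remove(best)
    return selected, info

cases, info = greedy_d_optimal(X, w, n_select=50)
print(len(cases), np.linalg.det(info) > 0)
```

Only the selected cases would then undergo chart review, and the final model would be fit on that validated sample alone, as the abstract describes.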

  8. Artificial neural networks for modeling ammonia emissions released from sewage sludge composting

    NASA Astrophysics Data System (ADS)

    Boniecki, P.; Dach, J.; Pilarski, K.; Piekarska-Boniecka, H.

    2012-09-01

    The project was designed to develop, test and validate an original neural model describing ammonia emissions generated in composting sewage sludge. The composting mix was to include the addition of such selected structural ingredients as cereal straw, sawdust and tree bark. All created neural models contain 7 input variables (chemical and physical parameters of composting) and 1 output (ammonia emission). The data file was subdivided into three subfiles: the learning file (ZU) containing 330 cases, the validation file (ZW) containing 110 cases and the test file (ZT) containing 110 cases. The standard deviation ratios (for all 4 created networks) ranged from 0.193 to 0.218. For all of the selected models, the correlation coefficient reached high values of 0.972-0.981. The results show that the predictive neural model describing ammonia emissions from composted sewage sludge is well suited for assessing such emissions. The sensitivity analysis of the model for the input variables of the process in question has shown that the key parameters describing ammonia emissions released in composting sewage sludge are pH and the carbon to nitrogen ratio (C:N).

  9. Practical Applications of a Building Method to Construct Aerodynamic Database of Guided Missile Using Wind Tunnel Test Data

    NASA Astrophysics Data System (ADS)

    Kim, Duk-hyun; Lee, Hyoung-Jin

    2018-04-01

    A study of an efficient aerodynamic database modeling method was conducted. The creation of a database using the periodicity and symmetry characteristics of missile aerodynamic coefficients was investigated to minimize the number of wind tunnel test cases. In addition, studies were carried out of how to generate the aerodynamic database when the periodicity changes due to installation of a protuberance, and of how to conduct a zero calibration. Depending on missile configuration, the required number of test cases changes, and there exist tests that can be omitted. A database of aerodynamic coefficients with respect to the deflection angle of a control surface can be constructed using phase shift. The validity of the modeling method was demonstrated by confirming that aerodynamic coefficients calculated using the method were in agreement with wind tunnel test results.

  10. Finite Element Vibration Modeling and Experimental Validation for an Aircraft Engine Casing

    NASA Astrophysics Data System (ADS)

    Rabbitt, Christopher

    This thesis presents a procedure for the development and validation of a theoretical vibration model, applies this procedure to a pair of aircraft engine casings, and compares select parameters from experimental testing of those casings to those from a theoretical model using the Modal Assurance Criterion (MAC) and linear regression coefficients. A novel method of determining the optimal MAC between axisymmetric results is developed and employed. It is concluded that the dynamic finite element models developed as part of this research are fully capable of modelling the modal parameters within the frequency range of interest. Confidence intervals calculated in this research for correlation coefficients provide important information regarding the reliability of predictions, and it is recommended that these intervals be calculated for all comparable coefficients. The procedure outlined for aligning mode shapes around an axis of symmetry proved useful, and the results are promising for the development of further optimization techniques.

  11. Coupled incompressible Smoothed Particle Hydrodynamics model for continuum-based modelling of sediment transport

    NASA Astrophysics Data System (ADS)

    Pahar, Gourabananda; Dhar, Anirban

    2017-04-01

    A coupled solenoidal Incompressible Smoothed Particle Hydrodynamics (ISPH) model is presented for simulation of sediment displacement in an erodible bed. The coupled framework consists of two separate incompressible modules: (a) a granular module, (b) a fluid module. The granular module considers a friction-based rheology model to calculate deviatoric stress components from pressure. The module is validated for the Bagnold flow profile and two standardized test cases of sediment avalanching. The fluid module resolves fluid flow inside and outside the porous domain. An interaction force pair containing fluid pressure, a viscous term and a drag force acts as a bridge between the two flow modules. The coupled model is validated against three dam-break flow cases with different initial conditions of the movable bed. The simulated results are in good agreement with experimental data. A demonstrative case considering the effect of granular column failure under full/partial submergence highlights the capability of the coupled model for application in generalized scenarios.

  12. Sex offender polygraph examination: an evidence-based case management tool for social workers.

    PubMed

    Levenson, Jill S

    2009-10-01

    This article will review the use of polygraphy in the assessment and treatment of sexual perpetrators. Such information can be utilized by social workers who are involved in the treatment and case management of child sexual abuse cases. First, the controversial literature regarding the validity and reliability of polygraph examination in general will be reviewed. Next, an emerging body of evidence supporting the utility of polygraph testing with sex offenders will be discussed. Finally, ways that social workers can incorporate this knowledge into their case management and clinical roles will be offered.

  13. A Comparative Investigation into Understandings and Uses of the "TOEFL iBT"® Test, the International English Language Testing Service (Academic) Test, and the Pearson Test of English for Graduate Admissions in the United States and Australia: A Case Study of Two University Contexts. "TOEFL iBT"® Research Report. TOEFL iBT-24. ETS Research Report. RR-14-44

    ERIC Educational Resources Information Center

    Ginther, April; Elder, Catherine

    2014-01-01

    In line with expanded conceptualizations of validity that encompass the interpretations and uses of test scores in particular policy contexts, this report presents results of a comparative analysis of institutional understandings and uses of 3 international English proficiency tests widely used for tertiary selection--the "TOEFL iBT"®…

  14. Validating archetypes for the Multiple Sclerosis Functional Composite.

    PubMed

    Braun, Michael; Brandt, Alexander Ulrich; Schulz, Stefan; Boeker, Martin

    2014-08-03

    Numerous information models for electronic health records, such as openEHR archetypes are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects are not regarded sufficiently yet. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. A standard archetype development approach was applied on a case set of three clinical tests for multiple sclerosis assessment: After an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal to validate archetypes pragmatically. The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. 
The validation process is a practical way to better harmonise models that diverge due to necessary flexibility left open by the underlying formal reference model definitions. This case study provides evidence that both community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic but feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model.

  15. Validating archetypes for the Multiple Sclerosis Functional Composite

    PubMed Central

    2014-01-01

    Background Numerous information models for electronic health records, such as openEHR archetypes are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects are not regarded sufficiently yet. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. Methods A standard archetype development approach was applied on a case set of three clinical tests for multiple sclerosis assessment: After an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Results Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal to validate archetypes pragmatically. Conclusions The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. The validation process is a practical way to better harmonise models that diverge due to necessary flexibility left open by the underlying formal reference model definitions. 
This case study provides evidence that both community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic but feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model. PMID:25087081

  16. Development and validation of techniques for improving software dependability

    NASA Technical Reports Server (NTRS)

    Knight, John C.

    1992-01-01

    A collection of document abstracts are presented on the topic of improving software dependability through NASA grant NAG-1-1123. Specific topics include: modeling of error detection; software inspection; test cases; Magnetic Stereotaxis System safety specifications and fault trees; and injection of synthetic faults into software.

  17. Operational Art Requirements in the Korean War

    DTIC Science & Technology

    2012-05-17

    Historical case studies can provide concrete examples to test the validity of theory and doctrine. A critical analysis of the examples provided by...war into a global conflagration. However, as early as 13 July 1950, MacArthur developed his plan to do more than reestablish the territorial

  18. [DNA prints instead of plantar prints in neonatal identification].

    PubMed

    Rodríguez-Alarcón Gómez, J; Martínez de Pancorbo Gómez, M; Santillana Ferrer, L; Castro Espido, A; Melchor Maros, J C; Linares Uribe, M A; Fernández-Llebrez del Rey, L; Aranguren Dúo, G

    1996-06-22

    To check the possible usefulness of studying DNA in dried blood spots collected on filter paper blotters for newborn identification. The study set out to establish: 1. The validity of the method for analysis; 2. The validity of stored samples (such as those kept in clinical records); 3. Guarantee of non-intrusion into the genetic code; 4. Acceptable price and execution time. Forty (40) anonymous 13-year-old samples from 20 subjects (2 per subject) were studied. DNA was extracted using Chelex resin and short tandem repeats (STRs) of microsatellite DNA were studied using the polymerase chain reaction (PCR) method. Three non-coding DNA loci (CSF1PO, TPOX and THO1) were analyzed by multiplex amplification. It was possible to type 39 samples, making it possible to match the 20 cases (one by exclusion). The complete procedure yielded results within 24 hours in all cases. The estimated final cost was found to be a fifth of that of conventional maternity/paternity tests. The study carried out made matching possible in all 20 cases (directly in 19 cases). It was not necessary to study DNA coding regions. The validity of the method for analyzing samples stored for 13 years without any special care was also demonstrated. The technique was fast, producing results within 24 hours, and at reasonable cost.

  19. Experience using the "Shetty test" for initial foot and ankle fracture screening in the Emergency Department.

    PubMed

    Ojeda-Jiménez, J; Méndez-Ojeda, M M; Martín-Vélez, P; Tejero-García, S; Pais-Brito, J L; Herrera-Pérez, M

    2018-03-20

    The indiscriminate practice of radiography for foot and ankle injuries is not justified, and numerous studies have corroborated the usefulness of clinical screening tests such as the Ottawa Ankle Rules. The aim of our study is to clinically validate the so-called Shetty test in our area. A cross-sectional observational study was conducted by applying the Shetty test to patients seen in the Emergency Department. We enrolled 100 patients with an average age of 39.25 (16-86). The Shetty test was positive on 14 occasions; subsequent radiography revealed a fracture in 10 of these cases, so 4 were false positives. The test was negative in the remaining 86 patients and radiography confirmed the absence of fracture (giving a sensitivity of 100%, specificity of 95.56%, positive predictive value of 71.40%, and negative predictive value of 100%). The Shetty test is a valid clinical screening tool to decide whether simple radiography is indicated for foot and ankle injuries. It is a simple, quick and reproducible test. Copyright © 2018 SECOT. Published by Elsevier España, S.L.U. All rights reserved.
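The reported metrics can be recomputed directly from the counts in the abstract (14 positive tests of which 10 were true fractures, 86 negative tests all confirmed fracture-free). A short sketch of that arithmetic:

```python
# Counts taken from the abstract above
tp, fp, tn, fn = 10, 4, 86, 0

sensitivity = tp / (tp + fn)   # 10/10  = 1.0    -> 100%
specificity = tn / (tn + fp)   # 86/90 ~= 0.9556 -> 95.56%
ppv = tp / (tp + fp)           # 10/14 ~= 0.714  -> 71.4%
npv = tn / (tn + fn)           # 86/86  = 1.0    -> 100%
print(sensitivity, round(specificity, 4), round(ppv, 3), npv)
# 1.0 0.9556 0.714 1.0
```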

  20. Specifications for a coupled neutronics thermal-hydraulics SFR test case

    NASA Astrophysics Data System (ADS)

    Tassone, A.; Smirnov, A. D.; Tikhomirov, G. V.

    2017-01-01

    Coupled neutronics/thermal-hydraulics calculations for the design of nuclear reactors are a growing trend in the scientific community. This approach makes it possible to properly represent the mutual feedbacks between the neutronic distribution and the thermal-hydraulics properties of the materials composing the reactor, details which are often lost when separate analyses are performed. In this work, a test case for a generation IV sodium-cooled fast reactor (SFR), based on the ASTRID concept developed by CEA, is proposed. Two sub-assemblies (SA) characterized by different fuel enrichment and layout are considered. Specifications for the test case are provided, including geometrical data, material compositions, thermo-physical properties and coupling scheme details. Serpent and ANSYS-CFX are used as references in the description of suitable inputs for performing the benchmark, but the use of other code combinations for the purpose of validating the results is encouraged. The expected outcomes of the test case are the axial distributions of the volumetric power generation term (q‴), density and temperature for the fuel, the cladding and the coolant.

  1. Easiness of Use and Validity Testing of VS-SENSE Device for Detection of Abnormal Vaginal Flora and Bacterial Vaginosis

    PubMed Central

    Donders, Gilbert G. G.; Marconi, Camila; Bellen, Gert

    2010-01-01

    Assessing vaginal pH is fundamental during a gynaecological visit for the detection of abnormal vaginal flora (AVF), but use of pH strips may be time-consuming and difficult to interpret. The aim of this study was to evaluate the VS-SENSE test (Common Sense Ltd, Caesarea, Israel) as a tool for the diagnosis of AVF and its correlation with abnormal pH and bacterial vaginosis (BV). The study population consisted of 45 women with vaginal pH ≥ 4.5 and 45 women with normal pH. Vaginal samples were evaluated by the VS-SENSE test, microscopy and microbiologic cultures. Compared with pH strip results, the VS-SENSE test had a specificity of 97.8% and a sensitivity of 91%. All severe cases of BV and aerobic vaginitis (AV) were detected by the test. Only one case with normal pH had an unclear result. In conclusion, the VS-SENSE test is easy to perform, and it correlates with increased pH, AVF, and the severe cases of BV and AV. PMID:20953405

  2. Easiness of use and validity testing of VS-SENSE device for detection of abnormal vaginal flora and bacterial vaginosis.

    PubMed

    Donders, Gilbert G G; Marconi, Camila; Bellen, Gert

    2010-01-01

    Assessing vaginal pH is fundamental during a gynaecological visit for the detection of abnormal vaginal flora (AVF), but use of pH strips may be time-consuming and difficult to interpret. The aim of this study was to evaluate the VS-SENSE test (Common Sense Ltd, Caesarea, Israel) as a tool for the diagnosis of AVF and its correlation with abnormal pH and bacterial vaginosis (BV). The study population consisted of 45 women with vaginal pH ≥ 4.5 and 45 women with normal pH. Vaginal samples were evaluated by the VS-SENSE test, microscopy and microbiologic cultures. Compared with pH strip results, the VS-SENSE test had a specificity of 97.8% and a sensitivity of 91%. All severe cases of BV and aerobic vaginitis (AV) were detected by the test. Only one case with normal pH had an unclear result. In conclusion, the VS-SENSE test is easy to perform, and it correlates with increased pH, AVF, and the severe cases of BV and AV.

  3. The Korean version of the Sniffin' stick (KVSS) test and its validity in comparison with the cross-cultural smell identification test (CC-SIT).

    PubMed

    Cho, Jae Hoon; Jeong, Yong Soo; Lee, Yeo Jin; Hong, Seok-Chan; Yoon, Joo-Heon; Kim, Jin Kook

    2009-06-01

    The Korean Version of the Sniffin' Stick (KVSS) is the first olfactory test for Koreans. Although we adopted the Sniffin' Stick, we modified it to make it more suitable for Koreans. KVSS I is a screening test, and KVSS II a more comprehensive test. The aims of this study were to apply the KVSS test and assess its clinical validity and reliability in comparison to the CC-SIT. One hundred and seventy-four healthy volunteers and 206 patients with subjectively decreased olfaction participated. Each participant was tested with both the CC-SIT and KVSS tests, and the correlation between the two tests was then analyzed. The correlation between the CC-SIT and KVSS I was 0.720 (p<0.01) and 0.714 between the CC-SIT and KVSS II total scores (p<0.01). When the degree of olfaction based on KVSS I was used, the mean CC-SIT score was 8.6 ± 1.8 for normosmia, 7.3 ± 2.2 for hyposmia, and 4.2 ± 2.3 for anosmia. When the KVSS II total was applied, the mean CC-SIT score was 8.4 ± 1.8 for normosmia, 7.3 ± 2.0 for hyposmia, and 3.7 ± 2.0 for anosmia. The means of the three groups differed significantly in both cases (p<0.01). Thus, the KVSS test demonstrates validity and reliability for Koreans in comparison with the CC-SIT.

  4. Multireader multicase reader studies with binary agreement data: simulation, analysis, validation, and sizing.

    PubMed

    Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D

    2014-10-01

    We treat multireader multicase (MRMC) reader studies for which a reader's diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1 = P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1−P2 when P1−P2 = 0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1 = P2). To illustrate the utility of our simulation model, we adapt the Obuchowski-Rockette-Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data.

  5. Multireader multicase reader studies with binary agreement data: simulation, analysis, validation, and sizing

    PubMed Central

    Chen, Weijie; Wunderlich, Adam; Petrick, Nicholas; Gallas, Brandon D.

    2014-01-01

    Abstract. We treat multireader multicase (MRMC) reader studies for which a reader’s diagnostic assessment is converted to binary agreement (1: agree with the truth state, 0: disagree with the truth state). We present a mathematical model for simulating binary MRMC data with a desired correlation structure across readers, cases, and two modalities, assuming the expected probability of agreement is equal for the two modalities (P1=P2). This model can be used to validate the coverage probabilities of 95% confidence intervals (of P1, P2, or P1−P2 when P1−P2=0), validate the type I error of a superiority hypothesis test, and size a noninferiority hypothesis test (which assumes P1=P2). To illustrate the utility of our simulation model, we adapt the Obuchowski–Rockette–Hillis (ORH) method for the analysis of MRMC binary agreement data. Moreover, we use our simulation model to validate the ORH method for binary data and to illustrate sizing in a noninferiority setting. Our software package is publicly available on the Google code project hosting site for use in simulation, analysis, validation, and sizing of MRMC reader studies with binary agreement data. PMID:26158051

  6. Development and validation of an instrument to measure nurse educator perceived confidence in clinical teaching.

    PubMed

    Nguyen, Van N B; Forbes, Helen; Mohebbi, Mohammadreza; Duke, Maxine

    2017-12-01

    Teaching nursing in clinical environments is considered complex and multi-faceted. Little is known about the role of the clinical nurse educator, specifically the challenges related to the transition from clinician, or in some cases from newly graduated nurse, to clinical nurse educator, as occurs in developing countries. Confidence in the clinical educator role has been associated with successful transition and the development of role competence. There is currently no valid and reliable instrument to measure clinical nurse educator confidence. This study was conducted to develop and psychometrically test an instrument to measure perceived confidence among clinical nurse educators. A multi-phase, multi-setting survey design was used. A total of 468 surveys were distributed, and 363 were returned. Data were analyzed using exploratory and confirmatory factor analyses. The instrument was successfully tested and modified in phase 1, and factorial validity was subsequently confirmed in phase 2. There was strong evidence of the internal consistency reliability, content validity, and convergent validity of the Clinical Nurse Educator Skill Acquisition Assessment instrument. The resulting instrument is applicable in similar contexts due to its rigorous development and validation process. © 2017 The Authors. Nursing & Health Sciences published by John Wiley & Sons Australia, Ltd.

  7. Comparison of two control groups for estimation of oral cholera vaccine effectiveness using a case-control study design.

    PubMed

    Franke, Molly F; Jerome, J Gregory; Matias, Wilfredo R; Ternier, Ralph; Hilaire, Isabelle J; Harris, Jason B; Ivers, Louise C

    2017-10-13

    Case-control studies to quantify oral cholera vaccine (OCV) effectiveness (VE) often rely on neighbors without diarrhea as community controls. Test-negative controls can be easily recruited and may minimize bias due to differential health-seeking behavior and recall. We compared VE estimates derived from community and test-negative controls and conducted bias-indicator analyses to assess potential bias with community controls. From October 2012 through November 2016, patients with acute watery diarrhea were recruited from cholera treatment centers in rural Haiti. Cholera cases had a positive stool culture. Non-cholera diarrhea cases (test-negative controls and non-cholera diarrhea cases for bias-indicator analyses) had a negative culture and rapid test. Up to four community controls were matched to diarrhea cases by age group, time, and neighborhood. Primary analyses included 181 cholera cases, 157 non-cholera diarrhea cases, 716 VE community controls, and 625 bias-indicator community controls. VE for self-reported vaccination with two doses was consistent across the two control groups, with statistically significant VE estimates ranging from 72 to 74%. Sensitivity analyses revealed similar, though somewhat attenuated, estimates for self-reported two-dose VE. Bias-indicator estimates were consistently less than one, with VE estimates ranging from 19 to 43%, some of which were statistically significant. OCV estimates from case-control analyses using community and test-negative controls were similar. While bias-indicator analyses suggested possible over-estimation of VE estimates using community controls, test-negative analyses suggested this bias, if present, was minimal. Test-negative controls can be a valid, low-cost, and time-efficient alternative to community controls for OCV effectiveness estimation and may be especially relevant in emergency situations. Copyright © 2017. Published by Elsevier Ltd.
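In case-control designs like this one, vaccine effectiveness is conventionally estimated as one minus the odds ratio of vaccination. A minimal unmatched sketch of that estimator (the study itself used matched analyses; the counts below are hypothetical illustrations, not study data):

```python
def vaccine_effectiveness(vacc_cases, unvacc_cases,
                          vacc_controls, unvacc_controls):
    """VE (%) = (1 - odds ratio) * 100, the usual unmatched
    case-control estimator (the study used matched analyses)."""
    odds_ratio = (vacc_cases * unvacc_controls) / (unvacc_cases * vacc_controls)
    return (1 - odds_ratio) * 100

# Hypothetical counts for illustration only (not study data):
# 20 of 100 cases vaccinated vs 50 of 100 controls vaccinated.
print(vaccine_effectiveness(20, 80, 50, 50))  # OR = 0.25 -> VE = 75.0
```

The same arithmetic underlies both the community-control and test-negative-control estimates being compared; only the source of the control group changes.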

  8. BrainCheck - a very brief tool to detect incipient cognitive decline: optimized case-finding combining patient- and informant-based data.

    PubMed

    Ehrensperger, Michael M; Taylor, Kirsten I; Berres, Manfred; Foldi, Nancy S; Dellenbach, Myriam; Bopp, Irene; Gold, Gabriel; von Gunten, Armin; Inglin, Daniel; Müri, René; Rüegger, Brigitte; Kressig, Reto W; Monsch, Andreas U

    2014-01-01

    Optimal identification of subtle cognitive impairment in the primary care setting requires a very brief tool combining (a) patients' subjective impairments, (b) cognitive testing, and (c) information from informants. The present study developed a new, very quick and easily administered case-finding tool combining these assessments ('BrainCheck') and tested the feasibility and validity of this instrument in two independent studies. We developed a case-finding tool comprised of patient-directed (a) questions about memory and depression and (b) clock drawing, and (c) the informant-directed 7-item version of the Informant Questionnaire on Cognitive Decline in the Elderly (IQCODE). Feasibility study: 52 general practitioners rated the feasibility and acceptance of the patient-directed tool. Validation study: An independent group of 288 Memory Clinic patients (mean ± SD age = 76.6 ± 7.9, education = 12.0 ± 2.6; 53.8% female) with diagnoses of mild cognitive impairment (n = 80), probable Alzheimer's disease (n = 185), or major depression (n = 23) and 126 demographically matched, cognitively healthy volunteer participants (age = 75.2 ± 8.8, education = 12.5 ± 2.7; 40% female) partook. All patient and healthy control participants were administered the patient-directed tool, and informants of 113 patient and 70 healthy control participants completed the very short IQCODE. Feasibility study: General practitioners rated the patient-directed tool as highly feasible and acceptable. Validation study: A Classification and Regression Tree analysis generated an algorithm to categorize patient-directed data which resulted in a correct classification rate (CCR) of 81.2% (sensitivity = 83.0%, specificity = 79.4%). Critically, the CCR of the combined patient- and informant-directed instruments (BrainCheck) reached nearly 90% (that is 89.4%; sensitivity = 97.4%, specificity = 81.6%). 
A new and very brief instrument for general practitioners, 'BrainCheck', combined three sources of information deemed critical for effective case-finding (that is, patients' subjective impairments, cognitive testing, and informant information) and resulted in a nearly 90% CCR. It thus provides an efficient and valid tool to aid general practitioners in deciding whether patients with suspected cognitive impairment should be evaluated further or managed with 'watchful waiting'.
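The correct classification rate reported above is, in general, sensitivity and specificity weighted by the sizes of the impaired and healthy groups. A generic sketch of that relationship (illustrative group sizes, not a reconstruction of the study's exact computation):

```python
def correct_classification_rate(sens, spec, n_patients, n_controls):
    """CCR = correctly classified / total: sensitivity applied to the
    patient group and specificity to the control group, weighted by
    the two group sizes."""
    correct = sens * n_patients + spec * n_controls
    return correct / (n_patients + n_controls)

# Illustrative only: with equal group sizes, the BrainCheck-like
# sensitivity (97.4%) and specificity (81.6%) give a CCR midway
# between the two values.
print(round(correct_classification_rate(0.974, 0.816, 100, 100), 3))
```

Because CCR depends on the group-size mix, the same sensitivity and specificity can yield different CCRs in samples with different proportions of impaired participants.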

  9. Validation of a vector version of the 6S radiative transfer code for atmospheric correction of satellite data. Part II. Homogeneous Lambertian and anisotropic surfaces.

    PubMed

    Kotchenova, Svetlana Y; Vermote, Eric F

    2007-07-10

    This is the second part of the validation effort of the recently developed vector version of the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer code (6SV1), primarily used for the calculation of look-up tables in the Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric correction algorithm. The 6SV1 code was tested against a Monte Carlo code and Coulson's tabulated values for molecular and aerosol atmospheres bounded by different Lambertian and anisotropic surfaces. The code was also tested in scalar mode against the scalar code SHARM to resolve the previous 6S accuracy issues in the case of an anisotropic surface. All test cases were characterized by good agreement between the 6SV1 and the other codes: The overall relative error did not exceed 0.8%. The study also showed that ignoring the effects of radiation polarization in the atmosphere led to large errors in the simulated top-of-atmosphere reflectances: The maximum observed error was approximately 7.2% for both Lambertian and anisotropic surfaces.

  10. Validation of a vector version of the 6S radiative transfer code for atmospheric correction of satellite data. Part II. Homogeneous Lambertian and anisotropic surfaces

    NASA Astrophysics Data System (ADS)

    Kotchenova, Svetlana Y.; Vermote, Eric F.

    2007-07-01

    This is the second part of the validation effort of the recently developed vector version of the 6S (Second Simulation of a Satellite Signal in the Solar Spectrum) radiative transfer code (6SV1), primarily used for the calculation of look-up tables in the Moderate Resolution Imaging Spectroradiometer (MODIS) atmospheric correction algorithm. The 6SV1 code was tested against a Monte Carlo code and Coulson's tabulated values for molecular and aerosol atmospheres bounded by different Lambertian and anisotropic surfaces. The code was also tested in scalar mode against the scalar code SHARM to resolve the previous 6S accuracy issues in the case of an anisotropic surface. All test cases were characterized by good agreement between the 6SV1 and the other codes: The overall relative error did not exceed 0.8%. The study also showed that ignoring the effects of radiation polarization in the atmosphere led to large errors in the simulated top-of-atmosphere reflectances: The maximum observed error was approximately 7.2% for both Lambertian and anisotropic surfaces.

  11. Propagation of an ultrashort, intense laser pulse in a relativistic plasma

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ritchie, B.; Decker, C.D.

    1997-12-31

    A Maxwell-relativistic fluid model is developed for the propagation of an ultrashort, intense laser pulse through an underdense plasma. The separability of plasma and optical frequencies (ω_p and ω, respectively) for small ω_p/ω is not assumed; thus the validity of multiple-scales theory (MST) can be tested. The theory is valid when ω_p/ω is of order unity, or for cases in which ω_p/ω ≪ 1 but strongly relativistic motion generates higher-order plasma harmonics that overlap the region of the first-order laser harmonic, such that MST would not be expected to be valid even though its principal validity criterion ω_p/ω ≪ 1 holds.

  12. Weatherford Inclined Wellbore Construction

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Schulte, R.

    The Rocky Mountain Oilfield Testing Center (RMOTC) has recently completed construction of an inclined wellbore with seven-inch, twenty-three-pound casing at a total depth of 1296 feet. The inclined wellbore is near vertical to 180 feet, with a build angle of approximately 4.5 degrees per hundred feet thereafter. The inclined wellbore was utilized for further proprietary testing after construction and validation. The wellbore is available to other companies requiring a cased-hole environment with known deviation out to fifty (50) degrees from vertical. The wellbore may also be used by RMOTC for further deepening into the fractured shales of the Steele and Niobrara formations.

  13. Case study to illustrate an approach for detecting contamination and impurities in pesticide formulations.

    PubMed

    Karasali, Helen; Kasiotis, Konstantinos M; Machera, Kyriaki; Ambrus, Arpad

    2014-11-26

    Counterfeit pesticides threaten public health, food trade, and the environment. The present work draws attention to the importance of regular monitoring of impurities in formulated pesticide products. General screening revealed the presence of carbaryl as a contaminant in a copper oxychloride formulated product. As a case study, this paper presents a liquid chromatography-diode array-mass spectrometry method, together with its validation, developed for general screening of pesticide products and quantitative determination of carbaryl. The proposed testing strategy is considered suitable as a general approach for testing for organic contaminants and impurities in solid pesticide formulations.

  14. Performance Indicators: Information in Search of a Valid and Reliable Use.

    ERIC Educational Resources Information Center

    Carrigan, Sarah D.; Hackett, E. Raymond

    1998-01-01

    Examined the usefulness of performance indicators in campus decision making at 20 institutions with Carnegie Baccalaureate II classification using hypothesis testing and case-study approaches. Performance measures most commonly cited as measures of financial viability were of limited use for specific policy development, but were most useful within…

  15. Chronic pesticide poisoning from persistent low-dose exposures in Ecuadorean floriculture workers: toward validating a low-cost test battery.

    PubMed

    Breilh, Jaime; Pagliccia, Nino; Yassi, Annalee

    2012-01-01

    Chronic pesticide poisoning is difficult to detect. We sought to develop a low-cost test battery for settings such as Ecuador's floriculture industry. First we had to develop a case definition; as with all occupational diseases, a case had to have both a sufficient effective dose and associated health effects. For the former, using canonical discriminant analysis, we found that adding measures of protection and overall environmental stressors to occupational category and duration of exposure was useful. For the latter, factor analysis suggested three distinct manifestations of pesticide poisoning. We then determined the sensitivity and specificity of various combinations of symptoms and simple neurotoxicity tests from the Pentox questionnaire, and found that doing so increased sensitivity and specificity compared to use of acetylcholinesterase alone, the current screening standard. While sensitivity and specificity varied with different case definitions, our results support the development of a low-cost test battery for screening in such settings.

  16. Practical mental health assessment in primary care. Validity and utility of the Quick PsychoDiagnostics Panel.

    PubMed

    Shedler, J; Beck, A; Bensen, S

    2000-07-01

    Many case-finding instruments are available to help primary care physicians (PCPs) diagnose depression, but they are not widely used. Physicians often consider these instruments too time consuming or feel they do not provide sufficient diagnostic information. Our study examined the validity and utility of the Quick PsychoDiagnostics (QPD) Panel, an automated mental health test designed to meet the special needs of PCPs. The test screens for 9 common psychiatric disorders and requires no physician time to administer or score. We evaluated criterion validity relative to the Structured Clinical Interview for DSM-IV (SCID), and evaluated convergent validity by correlating QPD Panel scores with established mental health measures. Sensitivity to change was examined by readministering the test to patients pretreatment and posttreatment. Utility was evaluated through physician and patient satisfaction surveys. For major depression, sensitivity and specificity were 81% and 96%, respectively. For other disorders, sensitivities ranged from 69% to 98%, and specificities ranged from 90% to 97%. The depression severity score correlated highly with the Beck, Hamilton, Zung, and CES-D depression scales, and the anxiety score correlated highly with the Spielberger State-Trait Anxiety Inventory and the anxiety subscale of the Symptom Checklist 90 (Ps <.001). The test was sensitive to change. All PCPs agreed or strongly agreed that the QPD Panel "is convenient and easy to use," "can be used immediately by any physician," and "helps provide better patient care." Patients also rated the test favorably. The QPD Panel is a valid mental health assessment tool that can diagnose a range of common psychiatric disorders and is practical for routine use in primary care.

  17. Dementia does not preclude very reliable responding on the MMPI-2 RF: a case report.

    PubMed

    Carone, Dominic A; Ben-Porath, Yossef S

    2014-01-01

    The Minnesota Multiphasic Personality Inventory (MMPI) family of personality tests has long been used by psychologists, in part because it provides extensive information on the validity of patient responses. Although much of the research on MMPI validity indicators has focused on over-reporting or under-reporting symptoms, the consistency (i.e., reliability, a requirement for validity) of responding is also critical to examine. Clinicians tend to avoid using the MMPI-2 or the MMPI-2-RF (Restructured Form) in patients with dementia based on the belief that severe cognitive impairment would make reliable responding impossible given the large number of items (567 and 338, respectively). In contrast with this belief we present the case of a 65-year-old woman with severe memory impairments and executive dysfunction due to a non-specific dementia syndrome who was able to provide remarkably consistent responding on the MMPI-2-RF. Implications and future directions are discussed.

  18. [Inappropriate test methods in allergy].

    PubMed

    Kleine-Tebbe, J; Herold, D A

    2010-11-01

    Inappropriate test methods are increasingly used to diagnose allergy. They fall into two categories: I. Tests with an obscure theoretical basis, missing validity, and lacking reproducibility, such as bioresonance, electroacupuncture, applied kinesiology, and the ALCAT test. These methods lack both the technical and clinical validation needed to justify their use. II. Tests that yield real data but misleading interpretations: detection of IgG or IgG4 antibodies, or lymphocyte proliferation tests with foods, cannot separate healthy from diseased subjects, whether in cases of food intolerance, allergy, or other diagnoses. The absence of diagnostic specificity produces many false-positive findings in healthy subjects. As a result, unjustified diets may limit quality of life and lead to malnutrition. Lymphocyte proliferation in response to foods can be elevated in patients with allergies, but the broad variation of these values does not permit individual diagnosis of hypersensitivity. Successful internet marketing, infiltration of academic programs, and superficial reporting by the media promote the popularity of unqualified diagnostic tests, including in allergy. Therefore, critical observation, prompt analysis, and clear commentary on unqualified methods by the scientific medical societies are more important than ever.

  19. Development and validation of a whole-exome sequencing test for simultaneous detection of point mutations, indels and copy-number alterations for precision cancer care

    PubMed Central

    Rennert, Hanna; Eng, Kenneth; Zhang, Tuo; Tan, Adrian; Xiang, Jenny; Romanel, Alessandro; Kim, Robert; Tam, Wayne; Liu, Yen-Chun; Bhinder, Bhavneet; Cyrta, Joanna; Beltran, Himisha; Robinson, Brian; Mosquera, Juan Miguel; Fernandes, Helen; Demichelis, Francesca; Sboner, Andrea; Kluk, Michael; Rubin, Mark A; Elemento, Olivier

    2016-01-01

    We describe Exome Cancer Test v1.0 (EXaCT-1), the first New York State-Department of Health-approved whole-exome sequencing (WES)-based test for precision cancer care. EXaCT-1 uses HaloPlex (Agilent) target enrichment followed by next-generation sequencing (Illumina) of tumour and matched constitutional control DNA. We present a detailed clinical development and validation pipeline suitable for simultaneous detection of somatic point/indel mutations and copy-number alterations (CNAs). A computational framework for data analysis, reporting and sign-out is also presented. For the validation, we tested EXaCT-1 on 57 tumours covering five distinct clinically relevant mutations. Results demonstrated elevated and uniform coverage compatible with clinical testing as well as complete concordance in variant quality metrics between formalin-fixed paraffin embedded and fresh-frozen tumours. Extensive sensitivity studies identified limits of detection threshold for point/indel mutations and CNAs. Prospective analysis of 337 cancer cases revealed mutations in clinically relevant genes in 82% of tumours, demonstrating that EXaCT-1 is an accurate and sensitive method for identifying actionable mutations, with reasonable costs and time, greatly expanding its utility for advanced cancer care. PMID:28781886

  20. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Johnson, Stephen B.; Patterson, Jonathan; Teare, David

    2015-01-01

    The engineering development of the National Aeronautics and Space Administration's (NASA) new Space Launch System (SLS) requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The nominal and off-nominal characteristics of SLS's elements and subsystems must be understood and matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex systems engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model-based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model-based algorithms and their development lifecycle from inception through FSW certification are an important focus of SLS's development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. To test and validate these M&FM algorithms a dedicated test-bed was developed for full Vehicle Management End-to-End Testing (VMET). 
For addressing fault management (FM) early in the development lifecycle for the SLS program, NASA formed the M&FM team as part of the Integrated Systems Health Management and Automation Branch under the Spacecraft Vehicle Systems Department at the Marshall Space Flight Center (MSFC). To support the development of the FM algorithms, the VMET developed by the M&FM team provides the ability to integrate the algorithms, perform test cases, and integrate vendor-supplied physics-based launch vehicle (LV) subsystem models. Additionally, the team has developed processes for implementing and validating the M&FM algorithms for concept validation and risk reduction. The flexibility of the VMET capabilities enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS, GNC, and others. One of the principal functions of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software test and validation processes. In any software development process there is inherent risk in the interpretation and implementation of concepts from requirements and test cases into flight software compounded with potential human errors throughout the development and regression testing lifecycle. Risk reduction is addressed by the M&FM group but in particular by the Analysis Team working with other organizations such as S&MA, Structures and Environments, GNC, Orion, Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission (LOM) and Loss of Crew (LOC) probabilities. 
In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detections and responses to be tested in VMET to ensure reliable failure detection, and to confirm responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation, free of inherent hindrances such as meeting FSW processor scheduling constraints on their target platform (the ARINC 653-partitioned operating system), resource limitations, and other factors related to integration with subsystems not directly involved with M&FM, such as telemetry packing and processing. The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as those used by FSW. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure their effectiveness and performance in the exterior FSW development and test processes. This paper is outlined in a systematic fashion analogous to a lifecycle process flow for engineering development of algorithms into software and testing. Section I describes the NASA SLS M&FM context, presenting the current infrastructure, leading principles, methods, and participants. Section II defines the testing philosophy of the M&FM algorithms as related to VMET, followed by Section III, which presents the modeling methods of the algorithms to be tested and validated in VMET. Its details are then further presented in Section IV, followed by Section V, presenting integration, test status, and state analysis. 
Finally, section VI addresses the summary and forward directions followed by the appendices presenting relevant information on terminology and documentation.

  1. A two-dimensional solution of the FW-H equation for rectilinear motion of sources

    NASA Astrophysics Data System (ADS)

    Bozorgi, Alireza; Siozos-Rousoulis, Leonidas; Nourbakhsh, Seyyed Ahmad; Ghorbaniasl, Ghader

    2017-02-01

    In this paper, a subsonic solution of the two-dimensional Ffowcs Williams and Hawkings (FW-H) equation is presented for calculation of noise generated by sources moving with constant velocity in a medium at rest or in a moving medium. The solution is represented in the frequency domain and is valid for observers located far from the noise sources. To verify the validity of the derived formula, three test cases are considered, namely a monopole, a dipole, and a quadrupole source in a medium at rest or in motion. The calculated results agree well with the analytical solutions, validating the applicability of the formula to rectilinear subsonic motion problems.
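For context, one standard form of the three-dimensional FW-H equation, whose two-dimensional frequency-domain counterpart the paper derives, can be written for an impenetrable surface defined by f = 0 (a sketch from the standard aeroacoustics literature, not reproduced from the paper):

```latex
\left(\frac{1}{c^{2}}\frac{\partial^{2}}{\partial t^{2}}-\nabla^{2}\right)\left[p'\,H(f)\right]
  = \frac{\partial}{\partial t}\left[\rho_{0}\,v_{n}\,\delta(f)\right]
  - \frac{\partial}{\partial x_{i}}\left[L_{i}\,\delta(f)\right]
  + \frac{\partial^{2}}{\partial x_{i}\,\partial x_{j}}\left[T_{ij}\,H(f)\right]
```

Here H is the Heaviside function, δ the Dirac delta, v_n the surface normal velocity (monopole/thickness term), L_i the loading vector (dipole term), and T_ij the Lighthill stress tensor (quadrupole term), which correspond to the three test sources the paper uses for validation.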

  2. Financial decision-making abilities and financial exploitation in older African Americans: Preliminary validity evidence for the Lichtenberg Financial Decision Rating Scale (LFDRS).

    PubMed

    Lichtenberg, Peter A; Ficker, Lisa J; Rahman-Filipiak, Annalise

    2016-01-01

    This study examines preliminary evidence for the Lichtenberg Financial Decision Rating Scale (LFDRS), a new person-centered approach to assessing capacity to make financial decisions, and its relationship to self-reported cases of financial exploitation in 69 older African Americans. More than one third of individuals reporting financial exploitation also had questionable decisional abilities. Overall, decisional ability score and current decision total were significantly associated with cognitive screening test and financial ability scores, demonstrating good criterion validity. Study findings suggest that impaired decisional abilities may render older adults more vulnerable to financial exploitation, and that the LFDRS is a valid tool.

  3. Use of integral experiments in support to the validation of JEFF-3.2 nuclear data evaluation

    NASA Astrophysics Data System (ADS)

    Leclaire, Nicolas; Cochet, Bertrand; Jinaphanh, Alexis; Haeck, Wim

    2017-09-01

    For many years now, IRSN has developed its own continuous-energy Monte Carlo capability, which allows testing of various nuclear data libraries. To that end, a validation database of 1136 experiments was built from cases used for the validation of the APOLLO2-MORET 5 multigroup route of the CRISTAL V2.0 package. In this paper, the keff values obtained for more than 200 benchmarks using the JEFF-3.1.1 and JEFF-3.2 libraries are compared to benchmark keff values, and the main discrepancies are analyzed with respect to the neutron spectrum. Special attention is paid to benchmarks for which the results changed substantially between the two JEFF-3 versions.

  4. Study on validity of a rapid diagnostic test kit versus light microscopy for malaria diagnosis in Ahmedabad city, India.

    PubMed

    Vyas, S; Puwar, B; Patel, V; Bhatt, G; Kulkarni, S; Fancy, M

    2014-05-01

    Light microscopy of blood smears for diagnosis of malaria in the field has several limitations, notably delays in diagnosis. This study in Ahmedabad in Gujarat State, India, evaluated the diagnostic performance of a rapid diagnostic test for malaria (SD Bioline Malaria Ag P.f/Pan) versus blood smear examination as the gold standard. All fever cases presenting at 13 urban health centres were subjected to rapid diagnostic testing and thick and thin blood smears. A total of 677 cases with fever were examined; 135 (20.0%) tested positive by rapid diagnostic test and 86 (12.7%) by blood smear. The sensitivity of the rapid diagnostic test for malaria was 98.8%, specificity was 91.5%, positive predictive value 63.0% and negative predictive value 99.8%. For detection of Plasmodium falciparum the sensitivity of rapid diagnostic test was 100% and specificity was 97.3%. The results show the acceptability of the rapid test as an alternative to light microscopy in the field setting.
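The reported accuracy measures are mutually consistent with a single 2x2 table; the cell counts below (TP = 85, FP = 50, FN = 1, TN = 541) are inferred here from the published totals (677 febrile patients, 135 RDT-positive, 86 smear-positive) and reproduce every reported figure:

```python
# 2x2 cell counts inferred from the abstract's totals
# (RDT result vs blood smear as the gold standard):
TP, FP, FN, TN = 85, 50, 1, 541

sensitivity = TP / (TP + FN)   # 85/86
specificity = TN / (TN + FP)   # 541/591
ppv = TP / (TP + FP)           # 85/135
npv = TN / (TN + FN)           # 541/542

print(round(100 * sensitivity, 1))  # 98.8
print(round(100 * specificity, 1))  # 91.5
print(round(100 * ppv, 1))          # 63.0
print(round(100 * npv, 1))          # 99.8
```

The low PPV (63.0%) despite excellent sensitivity reflects the 20% positivity rate of the RDT against a 12.7% smear-positive prevalence: roughly a third of RDT positives were false positives in this setting.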

  5. The 10/66 Dementia Research Group's fully operationalised DSM-IV dementia computerized diagnostic algorithm, compared with the 10/66 dementia algorithm and a clinician diagnosis: a population validation study

    PubMed Central

    Prince, Martin J; de Rodriguez, Juan Llibre; Noriega, L; Lopez, A; Acosta, Daisy; Albanese, Emiliano; Arizaga, Raul; Copeland, John RM; Dewey, Michael; Ferri, Cleusa P; Guerra, Mariella; Huang, Yueqin; Jacob, KS; Krishnamoorthy, ES; McKeigue, Paul; Sousa, Renata; Stewart, Robert J; Salas, Aquiles; Sosa, Ana Luisa; Uwakwa, Richard

    2008-01-01

    Background The criterion for dementia implicit in DSM-IV is widely used in research but not fully operationalised. The 10/66 Dementia Research Group sought to do this using assessments from their one phase dementia diagnostic research interview, and to validate the resulting algorithm in a population-based study in Cuba. Methods The criterion was operationalised as a computerised algorithm, applying clinical principles, based upon the 10/66 cognitive tests, clinical interview and informant reports; the Community Screening Instrument for Dementia, the CERAD 10 word list learning and animal naming tests, the Geriatric Mental State, and the History and Aetiology Schedule – Dementia Diagnosis and Subtype. This was validated in Cuba against a local clinician DSM-IV diagnosis and the 10/66 dementia diagnosis (originally calibrated probabilistically against clinician DSM-IV diagnoses in the 10/66 pilot study). Results The DSM-IV sub-criteria were plausibly distributed among clinically diagnosed dementia cases and controls. The clinician diagnoses agreed better with 10/66 dementia diagnosis than with the more conservative computerized DSM-IV algorithm. The DSM-IV algorithm was particularly likely to miss less severe dementia cases. Those with a 10/66 dementia diagnosis who did not meet the DSM-IV criterion were less cognitively and functionally impaired compared with the DSM-IV confirmed cases, but still grossly impaired compared with those free of dementia. Conclusion The DSM-IV criterion, strictly applied, defines a narrow category of unambiguous dementia characterized by marked impairment. It may be specific but incompletely sensitive to clinically relevant cases. The 10/66 dementia diagnosis defines a broader category that may be more sensitive, identifying genuine cases beyond those defined by our DSM-IV algorithm, with relevance to the estimation of the population burden of this disorder. PMID:18577205

  6. Diverticular Disease of the Colon: News From Imaging.

    PubMed

    Flor, Nicola; Soldi, Simone; Zanchetta, Edoardo; Sbaraini, Sara; Pesapane, Filippo

    2016-10-01

    Different scenarios involve computed tomography imaging and diverticula, including asymptomatic (diverticulosis) and symptomatic patients (acute diverticulitis, follow-up of acute diverticulitis, chronic diverticulitis). While the role of computed tomography in acute diverticulitis is validated and widely supported by evidence, this is not the case for patients in follow-up for acute diverticulitis or with symptoms related to diverticula but without acute inflammation. In these settings, computed tomography colonography is gaining consensus as the preferred radiologic test.

  7. Validation of the Persian version of the Daily Spiritual Experiences Scale (DSES) in Pregnant Women: A Proper Tool to Assess Spirituality Related to Mental Health.

    PubMed

    Saffari, Mohsen; Amini, Hossein; Sheykh-Oliya, Zarindokht; Pakpour, Amir H; Koenig, Harold G

    2017-12-01

    Assessing spirituality in healthy pregnant women may lead to supportive interventions that will improve their care. A psychometrically valid measure such as the Daily Spiritual Experiences Scale (DSES) may be helpful in this regard. The current study sought to adapt a Persian version of DSES for use in pregnancy. A total of 377 pregnant women were recruited from three general hospitals located in Tehran, Iran. Administered scales were the DSES, Duke University Religion Index, Santa Clara Strength of Religious Faith scale, and Depression Anxiety Stress Scale, as well as demographic measures. Reliability of the DSES was tested using Cronbach's alpha for internal consistency and the intraclass correlation coefficient (ICC) for test-retest stability. Scale validity was assessed by criterion-related tests, known-groups comparison, and exploratory factor analysis. Participants' mean age was 27.7 (4.1) years, and most were nulliparous (70%). The correlation coefficient between individual items on the scale and the total score was greater than 0.30 in most cases. Cronbach's alpha for the scale was 0.90. The ICC for 2-week test-retest reliability was high (0.86). Relationships between similar and dissimilar scales indicated acceptable convergent and divergent validity. The factor structure of the scale indicated a single factor that explained 59% of the variance. The DSES was found to be a reliable and valid measure of spirituality in pregnant Iranian women. This scale may be used to examine the relationship between spirituality and health outcomes, research that may lead to supportive interventions in this population.
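
Cronbach's alpha, the internal-consistency statistic reported here, can be sketched as follows; the item scores are invented illustration data, not the DSES study data:

```python
# Cronbach's alpha for a k-item scale:
#   alpha = k/(k-1) * (1 - sum(item variances) / variance(total scores))
def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of scores per scale item, respondents aligned by index."""
    k = len(items)
    n = len(items[0])
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(variance(it) for it in items) / variance(totals))

# Three perfectly consistent items (each respondent scores every item
# identically) give the ceiling value alpha = 1.0.
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```

A value of 0.90, as reported for the DSES, indicates that item scores covary strongly relative to the spread of total scores.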

  8. Screening for confabulations with the confabulation screen.

    PubMed

    Dalla Barba, Gianfranco; Brazzarola, Marta; Marangoni, Sara; La Corte, Valentina

    2018-04-24

    The objective of this work is to devise and validate a sensitive and specific test for confabulatory impairment. We conceived a screening test for confabulation, the Confabulation Screen (CS), a brief test using 10 questions of episodic memory (EM), where confabulators most frequently confabulate. It was postulated that the CS would predict confabulations not only in EM, but also in the other subordinate structures of personal temporality, namely the present and the future. Thirty confabulating amnesic patients of various aetiologies and 97 normal controls entered the study. Participants were administered the CS and the Confabulation Battery (CB; Dalla Barba, G., & Decaix, C. (2009). "Do you remember what you did on March 13 1985?" A case study of confabulatory hypermnesia. Cortex, 45(5), 566-574). Confabulations in the CS positively and significantly correlated with confabulations in personal temporality domains of the CB, namely EM, orientation in time and place and episodic plans. Conversely, as expected, they did not correlate with confabulations in impersonal temporality domains of the CB. Consistent with results of previous studies, the most frequently observed type of confabulation in the CS was Habits Confabulation. The CS had high construct validity and good discriminative validity in terms of sensitivity and specificity. Cut-off scores for clinical and research purposes are proposed. The CS provides efficient and valid screening for confabulatory impairment.

  9. Spanish validation of the Exercise Therapy Burden Questionnaire (ETBQ) for the assessment of barriers associated to doing physical therapy for the treatment of chronic illness.

    PubMed

    Navarro-Albarracín, César; Poiraudeau, Serge; Chico-Matallana, Noelia; Vergara-Martín, Jesús; Martin, William; Castro-Sánchez, Adelaida María; Matarán-Peñarrocha, Guillermo A

    2018-06-08

    To validate the Spanish version of the Exercise Therapy Burden Questionnaire (ETBQ) for the assessment of barriers associated with doing physical therapy for the treatment of chronic ailments. A sample of 177 patients, 55.93% men and 44.07% women, with an average age of 51.03±14.91 was recruited. The reliability of the questionnaire was tested with Cronbach's alpha coefficient, and the validity of the instrument was assessed through the divergent validation process and factor analysis. The factor structure differed from that of the original questionnaire, which comprised a single dimension; in this case, three dimensions were determined: (1) general limitations for doing physical exercise, (2) physical limitations for doing physical exercise, and (3) limitations caused by the patients' predisposition to their exercises. Test-retest reliability was measured through the intraclass correlation coefficient (ICC) and the Bland-Altman plot. Cronbach's alpha was 0.8715 for the total ETBQ. The test-retest ICC was 0.745 and the Bland-Altman plot showed no systematic trend. We have thus obtained a translated and validated Spanish version of the ETBQ questionnaire. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.
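
The Bland-Altman check used for test-retest agreement can be sketched numerically: compute the mean difference (bias) and 95% limits of agreement between the two administrations, and look for drift. The scores below are invented illustration data, not the ETBQ study data:

```python
# Bland-Altman agreement summary: bias (mean of the pairwise differences) and
# the 95% limits of agreement, bias +/- 1.96 * SD of the differences.
import statistics

def bland_altman_limits(test1, test2):
    diffs = [a - b for a, b in zip(test1, test2)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

t1 = [10, 12, 15, 9, 14, 11]   # first administration (invented scores)
t2 = [11, 12, 14, 10, 13, 11]  # retest two weeks later (invented scores)
bias, (lo, hi) = bland_altman_limits(t1, t2)
```

A bias near zero with narrow, symmetric limits is what "no systematic trend" means in the abstract; in practice one also plots the differences against the pairwise means to check for level-dependent error.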

  10. Forensic differentiation between peripheral and menstrual blood in cases of alleged sexual assault-validating an immunochromatographic multiplex assay for simultaneous detection of human hemoglobin and D-dimer.

    PubMed

    Holtkötter, Hannah; Dias Filho, Claudemir Rodrigues; Schwender, Kristina; Stadler, Christian; Vennemann, Marielle; Pacheco, Ana Claudia; Roca, Gabriela

    2018-05-01

    Sexual assault is a serious offense and identification of body fluids originating from sexual activity has been a crucial aspect of forensic investigations for a long time. While reliable tests for the detection of semen and saliva have been successfully implemented into forensic laboratories, the detection of other body fluids, such as vaginal or menstrual fluid, is more challenging. Especially, the discrimination between peripheral and menstrual blood can be highly relevant for police investigations because it provides potential evidence regarding the issue of consent. We report the forensic validation of an immunochromatographic test that allows for such discrimination in forensic stains, the SERATEC PMB test, and its performance on real casework samples. The PMB test is a duplex test combining human hemoglobin and D-dimer detection and was developed for the identification of blood and menstrual fluid, both at the crime scene and in the laboratory. The results of this study showed that the duplex D-dimer/hemoglobin assay reliably detects the presence of human hemoglobin and identifies samples containing menstrual fluid by detecting the presence of D-dimers. The method distinguished between menstrual and peripheral blood in a swab from a historical artifact and in real casework samples of alleged sexual assaults. Results show that the development of the new duplex test is a substantial progress towards analyzing and interpreting evidence from sexual assault cases.

  11. Urine specimen validity test for drug abuse testing in workplace and court settings.

    PubMed

    Lin, Shin-Yu; Lee, Hei-Hwa; Lee, Jong-Feng; Chen, Bai-Hsiun

    2018-01-01

    In recent decades, urine drug testing in the workplace has become common in many countries in the world. There have been several studies concerning the use of the urine specimen validity test (SVT) for drug abuse testing administered in the workplace. However, very little data exists concerning the urine SVT on drug abuse tests from court specimens, including dilute, substituted, adulterated, and invalid tests. We investigated 21,696 submitted urine drug test samples for SVT from workplace and court settings in southern Taiwan over 5 years. All immunoassay screen-positive urine specimen drug tests were confirmed by gas chromatography/mass spectrometry. We found that the mean 5-year prevalence of tampering (dilute, substituted, or invalid tests) in urine specimens from the workplace and court settings were 1.09% and 3.81%, respectively. The mean 5-year percentage of dilute, substituted, and invalid urine specimens from the workplace were 89.2%, 6.8%, and 4.1%, respectively. The mean 5-year percentage of dilute, substituted, and invalid urine specimens from the court were 94.8%, 1.4%, and 3.8%, respectively. No adulterated cases were found among the workplace or court samples. The most common drug identified from the workplace specimens was amphetamine, followed by opiates. The most common drug identified from the court specimens was ketamine, followed by amphetamine. We suggest that all urine specimens taken for drug testing from both the workplace and court settings need to be tested for validity. Copyright © 2017. Published by Elsevier B.V.

  12. Utilizing population controls in rare-variant case-parent association tests.

    PubMed

    Jiang, Yu; Satten, Glen A; Han, Yujun; Epstein, Michael P; Heinzen, Erin L; Goldstein, David B; Allen, Andrew S

    2014-06-05

    There is great interest in detecting associations between human traits and rare genetic variation. To address the low power implicit in single-locus tests of rare genetic variants, many rare-variant association approaches attempt to accumulate information across a gene, often by taking linear combinations of single-locus contributions to a statistic. Using the right linear combination is key: an optimal test will up-weight true causal variants, down-weight neutral variants, and correctly assign the direction of effect for causal variants. Here, we propose a procedure that exploits data from population controls to estimate the linear combination to be used in a case-parent trio rare-variant association test. Specifically, we estimate the linear combination by comparing population control allele frequencies with allele frequencies in the parents of affected offspring. These estimates are then used to construct a rare-variant transmission disequilibrium test (rvTDT) in the case-parent data. Because the rvTDT is conditional on the parents' data, using parental data in estimating the linear combination does not affect the validity or asymptotic distribution of the rvTDT. By using simulation, we show that our new population-control-based rvTDT can dramatically improve power over rvTDTs that do not use population control information across a wide variety of genetic architectures. It also remains valid under population stratification. We apply the approach to a cohort of epileptic encephalopathy (EE) trios and find that dominant (or additive) inherited rare variants are unlikely to play a substantial role within EE genes previously identified through de novo mutation studies. Copyright © 2014 The American Society of Human Genetics. Published by Elsevier Inc. All rights reserved.
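
The weighting idea can be sketched as follows. This is a deliberately simplified illustration of the concept, not the authors' estimator: variants enriched in parents of affected offspring relative to population controls receive positive weight, depleted variants negative weight, and the weighted transmissions are then summed across the gene:

```python
# Simplified sketch of estimating per-variant weights from population controls
# (log frequency-ratio weights; the published rvTDT estimator is more involved).
import math

def variant_weights(parent_freqs, control_freqs, eps=1e-6):
    """One weight per variant; eps guards against zero allele frequencies."""
    return [math.log((p + eps) / (c + eps))
            for p, c in zip(parent_freqs, control_freqs)]

def weighted_burden(transmission_counts, weights):
    """Linear combination of per-variant transmission contributions."""
    return sum(w * t for w, t in zip(weights, transmission_counts))

# Variant 1 is tenfold enriched in parents of cases; variant 2 is neutral.
w = variant_weights(parent_freqs=[0.010, 0.001], control_freqs=[0.001, 0.001])
burden = weighted_burden([3, 1], w)  # hypothetical transmission counts
```

Because the weights are fixed before the transmission step, conditioning on the parents' genotypes keeps the downstream TDT valid, which is the key property the abstract emphasizes.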

  13. Defining surgical criteria for empty nose syndrome: Validation of the office-based cotton test and clinical interpretability of the validated Empty Nose Syndrome 6-Item Questionnaire.

    PubMed

    Thamboo, Andrew; Velasquez, Nathalia; Habib, Al-Rahim R; Zarabanda, David; Paknezhad, Hassan; Nayak, Jayakar V

    2017-08-01

    The validated Empty Nose Syndrome 6-Item Questionnaire (ENS6Q) identifies empty nose syndrome (ENS) patients. The unvalidated cotton test assesses improvement in ENS-related symptoms. By first validating the cotton test using the ENS6Q, we define the minimal clinically important difference (MCID) score for the ENS6Q. Individual case-control study. Fifteen patients diagnosed with ENS and 18 controls with non-ENS sinonasal conditions underwent office cotton placement. Both groups completed ENS6Q testing in three conditions (precotton, cotton in situ, and postcotton) to measure the reproducibility of ENS6Q scoring. Participants also completed a five-item transition scale ranging from "much better" to "much worse" to rate subjective changes in nasal breathing with and without cotton placement. Mean changes for each transition point, and the ENS6Q MCID, were then calculated. In the precotton condition, significant differences (P < .001) in all ENS6Q questions between ENS and controls were noted. With cotton in situ, nearly all prior ENS6Q differences normalized between ENS and control patients. For ENS patients, the changes in the mean differences between the precotton and cotton in situ conditions compared to postcotton versus cotton in situ conditions were insignificant among individuals. Including all 33 participants, the mean change in the ENS6Q between the parameters "a little better" and "about the same" was 4.25 (standard deviation [SD] = 5.79) and -2.00 (SD = 3.70), giving an MCID of 6.25. Cotton testing is a validated office test to assess for ENS patients. Cotton testing also helped to determine the MCID of the ENS6Q, which is a 7-point change from the baseline ENS6Q score. 3b. Laryngoscope, 127:1746-1752, 2017. © 2017 The American Laryngological, Rhinological and Otological Society, Inc.
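
The MCID arithmetic reported above is simply the gap between the mean score change for "a little better" and for "about the same"; made explicit:

```python
# Anchor-based MCID: difference between the mean instrument-score change in
# the minimally improved group and in the unchanged group.
def mcid(mean_change_little_better, mean_change_about_same):
    return mean_change_little_better - mean_change_about_same

value = mcid(4.25, -2.00)
print(value)  # 6.25, reported clinically as a 7-point change from baseline
```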

  14. An arbitrary grid CFD algorithm for configuration aerodynamics analysis. Volume 1: Theory and validations

    NASA Technical Reports Server (NTRS)

    Baker, A. J.; Iannelli, G. S.; Manhardt, Paul D.; Orzechowski, J. A.

    1993-01-01

    This report documents the user input and output data requirements for the FEMNAS finite element Navier-Stokes code for real-gas simulations of external aerodynamics flowfields. This code was developed for the configuration aerodynamics branch of NASA ARC, under SBIR Phase 2 contract NAS2-124568 by Computational Mechanics Corporation (COMCO). This report is in two volumes. Volume 1 contains the theory for the derived finite element algorithm and describes the test cases used to validate the computer program described in the Volume 2 user guide.

  15. A Selection of Experimental Test Cases for the Validation of CFD Codes. Volume 2. (Recueil de cas d’essai Experimentaux Pour la Validation des Codes de L’Aerodynamique Numerique. Volume 2)

    DTIC Science & Technology

    1994-08-01

    … The technique was modified to calculate the drag using the non-intrusive LDV and sidewall pressure measurements rather …

  16. Propagation and Directional Scattering of Ocean Waves in the Marginal Ice Zone and Neighboring Seas

    DTIC Science & Technology

    2015-09-30

    expected to be the average of the kernel for 10 s and 12 s. This means that we should be able to calculate empirical formulas for the scattering kernel…floe packing. Thus, establish a way to incorporate what has been done by Squire and co-workers into the wave model paradigm (in which the phase of the…cases observed by Kohout et al. (2014) in Antarctica. vii. Validation: We are planning validation tests for wave-ice scattering / attenuation model by

  17. Two-view information fusion for improvement of computer-aided detection (CAD) of breast masses on mammograms

    NASA Astrophysics Data System (ADS)

    Wei, Jun; Sahiner, Berkman; Hadjiiski, Lubomir M.; Chan, Heang-Ping; Helvie, Mark A.; Roubidoux, Marilyn A.; Zhou, Chuan; Ge, Jun; Zhang, Yiheng

    2006-03-01

    We are developing a two-view information fusion method to improve the performance of our CAD system for mass detection. Mass candidates on each mammogram were first detected with our single-view CAD system. Potential object pairs on the two-view mammograms were then identified by using the distance between the object and the nipple. Morphological features, Hessian feature, correlation coefficients between the two paired objects and texture features were used as input to train a similarity classifier that estimated a similarity score for each pair. Finally, a linear discriminant analysis (LDA) classifier was used to fuse the score from the single-view CAD system and the similarity score. A data set of 475 patients containing 972 mammograms with 475 biopsy-proven masses was used to train and test the CAD system. All cases contained the CC view and the MLO or LM view. We randomly divided the data set into two independent sets of 243 cases and 232 cases. The training and testing were performed using the 2-fold cross validation method. The detection performance of the CAD system was assessed by free response receiver operating characteristic (FROC) analysis. The average test FROC curve was obtained by averaging the FP rates at the same sensitivity along the two corresponding test FROC curves from the 2-fold cross validation. At the case-based sensitivities of 90%, 85% and 80% on the test set, the single-view CAD system achieved an FP rate of 2.0, 1.5, and 1.2 FPs/image, respectively. With the two-view fusion system, the FP rates were reduced to 1.7, 1.3, and 1.0 FPs/image, respectively, at the corresponding sensitivities. The improvement was found to be statistically significant (p<0.05) by the AFROC method. Our results indicate that the two-view fusion scheme can improve the performance of mass detection on mammograms.
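
The fusion step, combining the single-view CAD score with the two-view similarity score via LDA, can be sketched as a two-feature Fisher discriminant. This is a toy illustration with invented scores and a diagonal scatter approximation, not the authors' trained classifier:

```python
# Two-feature Fisher LDA sketch: learn a linear combination of
# (single-view CAD score, two-view similarity score) that separates true
# masses from false positives. Training pairs below are invented.
def fisher_lda_weights(pos, neg):
    """pos/neg: lists of (cad_score, similarity_score) tuples."""
    def mean(xs):
        return sum(xs) / len(xs)
    def var(xs, m):
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    m_pos = [mean([p[i] for p in pos]) for i in (0, 1)]
    m_neg = [mean([n[i] for n in neg]) for i in (0, 1)]
    # Pooled within-class scatter, diagonal approximation for brevity.
    s = [var([p[i] for p in pos], m_pos[i]) + var([n[i] for n in neg], m_neg[i])
         for i in (0, 1)]
    return tuple((m_pos[i] - m_neg[i]) / s[i] for i in (0, 1))

def fused_score(x, w):
    return w[0] * x[0] + w[1] * x[1]

w = fisher_lda_weights(
    pos=[(0.9, 0.8), (0.8, 0.7)],  # true masses (invented scores)
    neg=[(0.2, 0.1), (0.3, 0.2)],  # false positives (invented scores)
)
```

Thresholding the fused score is what trades sensitivity against FPs/image on the FROC curve.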

  18. [The Basel Screening Instrument for Psychosis (BSIP): development, structure, reliability and validity].

    PubMed

    Riecher-Rössler, A; Aston, J; Ventura, J; Merlo, M; Borgwardt, S; Gschwandtner, U; Stieglitz, R-D

    2008-04-01

    Early detection of psychosis is of growing clinical importance. So far there is, however, no screening instrument for detecting individuals with beginning psychosis in the atypical early stages of the disease with sufficient validity. We have therefore developed the Basel Screening Instrument for Psychosis (BSIP) and tested its feasibility, interrater-reliability and validity. The aim of this paper is to describe the development and structure of the instrument, as well as to report the results of the studies on reliability and validity. The instrument was developed based on a comprehensive search of literature on the most important risk factors and early signs of schizophrenic psychoses. The interrater-reliability study was conducted on 24 psychiatric cases. Validity was tested based on 206 individuals referred to our early detection clinic from 3/1/2000 until 2/28/2003. We identified seven categories of relevance for early detection of psychosis and used them to construct a semistructured interview. Interrater-reliability for high risk individuals was high (kappa = 0.87). Predictive validity was comparable to other, more comprehensive instruments: 16 (32 %) of 50 individuals classified as being at risk for psychosis by the BSIP have in fact developed frank psychosis within a follow-up period of two to five years. The BSIP is the first screening instrument for the early detection of psychosis which has been validated based on transition to psychosis. The BSIP is easy to use by experienced psychiatrists and has a very good interrater-reliability and predictive validity.
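
The interrater-reliability statistic reported (kappa) corrects raw agreement for agreement expected by chance; a minimal sketch with invented ratings, not the BSIP study data:

```python
# Cohen's kappa: (observed agreement - chance agreement) / (1 - chance agreement).
def cohens_kappa(rater_a, rater_b):
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    expected = sum((rater_a.count(l) / n) * (rater_b.count(l) / n)
                   for l in labels)
    return (observed - expected) / (1 - expected)

# Invented "at risk" / "not at risk" ratings from two raters.
k_perfect = cohens_kappa(["risk", "risk", "none"], ["risk", "risk", "none"])
k_partial = cohens_kappa(["risk", "risk", "risk", "none", "none"],
                         ["risk", "risk", "none", "none", "none"])
```

A kappa of 0.87, as reported for the BSIP, sits well into the range conventionally read as almost-perfect agreement.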

  19. High Accuracy Liquid Propellant Slosh Predictions Using an Integrated CFD and Controls Analysis Interface

    NASA Technical Reports Server (NTRS)

    Marsell, Brandon; Griffin, David; Schallhorn, Dr. Paul; Roth, Jacob

    2012-01-01

    Coupling computational fluid dynamics (CFD) with a controls analysis tool elegantly allows for high accuracy predictions of the interaction between sloshing liquid propellants and the control system of a launch vehicle. Instead of relying on mechanical analogs which are not valid during all stages of flight, this method allows for a direct link between the vehicle dynamic environments calculated by the solver in the controls analysis tool to the fluid flow equations solved by the CFD code. This paper describes such a coupling methodology, presents the results of a series of test cases, and compares them against equivalent results from extensively validated tools. The coupling methodology, described herein, has proven to be highly accurate in a variety of different cases.
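
The coupling described is, structurally, a per-timestep data exchange between the two solvers; a schematic sketch with trivial stand-in dynamics (neither real CFD nor real GN&C code, and not the paper's interface):

```python
# Schematic co-simulation loop: each step, the controls side produces a vehicle
# acceleration given the latest slosh force, and the CFD side returns the
# resulting slosh force. Both "solvers" here are one-line stand-ins.
def run_coupled(steps, dt, controls_step, cfd_step):
    accel, slosh_force = 0.0, 0.0
    history = []
    for _ in range(steps):
        accel = controls_step(slosh_force, dt)   # controls solver update
        slosh_force = cfd_step(accel, dt)        # CFD solver update
        history.append((accel, slosh_force))
    return history

hist = run_coupled(
    steps=100, dt=0.01,
    controls_step=lambda f, dt: 1.0 - 0.5 * f,  # stand-in feedback law
    cfd_step=lambda a, dt: 0.8 * a,             # stand-in quasi-steady slosh
)
```

With these stand-ins the exchange converges to the fixed point a = 1/(1 + 0.4); in the real coupled system, stability of exactly this kind of feedback loop between slosh forces and the controller is what the method is built to predict.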

  20. SHERMAN, a shape-based thermophysical model. I. Model description and validation

    NASA Astrophysics Data System (ADS)

    Magri, Christopher; Howell, Ellen S.; Vervack, Ronald J.; Nolan, Michael C.; Fernández, Yanga R.; Marshall, Sean E.; Crowell, Jenna L.

    2018-03-01

    SHERMAN, a new thermophysical modeling package designed for analyzing near-infrared spectra of asteroids and other solid bodies, is presented. The model's features, the methods it uses to solve for surface and subsurface temperatures, and the synthetic data it outputs are described. A set of validation tests demonstrates that SHERMAN produces accurate output in a variety of special cases for which correct results can be derived from theory. These cases include a family of solutions to the heat equation for which thermal inertia can have any value and thermophysical properties can vary with depth and with temperature. An appendix describes a new approximation method for estimating surface temperatures within spherical-section craters, more suitable for modeling infrared beaming at short wavelengths than the standard method.
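
The core computation such a model must get right, and the kind of case the validation tests compare against theory, is subsurface heat conduction; a minimal explicit finite-difference sketch for the 1-D heat equation with constant properties and fixed boundary temperatures (purely illustrative, not SHERMAN's solver):

```python
# One explicit Euler step of dT/dt = alpha * d2T/dx2 on interior nodes;
# boundary temperatures are held fixed. Stable for alpha*dt/dx^2 <= 0.5.
def heat_step(T, alpha, dx, dt):
    new = T[:]
    r = alpha * dt / dx ** 2
    for i in range(1, len(T) - 1):
        new[i] = T[i] + r * (T[i - 1] - 2 * T[i] + T[i + 1])
    return new

# Relax toward the known steady state: a linear profile between the boundaries
# (one of the closed-form cases a thermophysical code can be validated against).
T = [300.0] + [150.0] * 8 + [100.0]   # surface 300 K, depth 100 K (illustrative)
for _ in range(5000):
    T = heat_step(T, alpha=1e-6, dx=0.01, dt=10.0)
```

SHERMAN's validation cases extend this idea to depth- and temperature-dependent properties, where analytic solutions are still available for comparison.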

  1. Bad Behavior: Improving Reproducibility in Behavior Testing.

    PubMed

    Andrews, Anne M; Cheng, Xinyi; Altieri, Stefanie C; Yang, Hongyan

    2018-01-24

    Systems neuroscience research is increasingly possible through the use of integrated molecular and circuit-level analyses. These studies depend on the use of animal models and, in many cases, molecular and circuit-level analyses associated with genetic, pharmacologic, epigenetic, and other types of environmental manipulations. We illustrate typical pitfalls resulting from poor validation of behavior tests. We describe experimental designs and enumerate controls needed to improve reproducibility in investigating and reporting of behavioral phenotypes.

  2. Pediatric Contact Dermatitis Registry Inaugural Case Data.

    PubMed

    Goldenberg, Alina; Mousdicas, Nico; Silverberg, Nanette; Powell, Douglas; Pelletier, Janice L; Silverberg, Jonathan I; Zippin, Jonathan; Fonacier, Luz; Tosti, Antonella; Lawley, Leslie; Wu Chang, Mary; Scheman, Andrew; Kleiner, Gary; Williams, Judith; Watsky, Kalman; Dunnick, Cory A; Frederickson, Rachel; Matiz, Catalina; Chaney, Keri; Estes, Tracy S; Botto, Nina; Draper, Michelle; Kircik, Leon; Lugo-Somolinos, Aida; Machler, Brian; Jacob, Sharon E

    2016-01-01

    Little is known about the epidemiology of allergic contact dermatitis (ACD) in US children. More widespread diagnostic confirmation through epicutaneous patch testing is needed. The aim was to quantify patch test results from providers evaluating US children. The study is a retrospective analysis of deidentified patch test results of children aged 18 years or younger, entered by participating providers in the Pediatric Contact Dermatitis Registry, during the first year of data collection (2015-2016). One thousand one hundred forty-two cases from 34 US states, entered by 84 providers, were analyzed. Sixty-five percent of cases had one or more positive patch test (PPT), with 48% of cases having one or more relevant positive patch test (RPPT). The most common PPT allergens were nickel (22%), fragrance mix I (11%), cobalt (9.1%), balsam of Peru (8.4%), neomycin (7.2%), propylene glycol (6.8%), cocamidopropyl betaine (6.4%), bacitracin (6.2%), formaldehyde (5.7%), and gold (5.7%). This US database provides multidisciplinary information on pediatric ACD, rates of PPT, and RPPT reactions, validating the high rates of pediatric ACD previously reported in the literature. The registry database is the largest comprehensive collection of US-only pediatric patch test cases on which future research can be built. Continued collaboration between patients, health care providers, manufacturers, and policy makers is needed to decrease the most common allergens in pediatric consumer products.

  3. Testing the validity and acceptability of the diagnostic criteria for Hoarding Disorder: a DSM-5 survey.

    PubMed

    Mataix-Cols, D; Fernández de la Cruz, L; Nakao, T; Pertusa, A

    2011-12-01

    The DSM-5 Obsessive-Compulsive Spectrum Sub-Workgroup is recommending the creation of a new diagnostic category named Hoarding Disorder (HD). The validity and acceptability of the proposed diagnostic criteria have yet to be formally tested. Obsessive-compulsive disorder/hoarding experts and random members of the American Psychiatric Association (APA) were shown eight brief clinical vignettes (four cases meeting criteria for HD, three with hoarding behaviour secondary to other mental disorders, and one with subclinical hoarding behaviour) and asked to decide the most appropriate diagnosis in each case. Participants were also asked about the perceived acceptability of the criteria and whether they supported the inclusion of HD in the main manual. Altogether, 211 experts and 48 APA members completed the survey (30% and 10% response rates, respectively). The sensitivity and specificity of the HD diagnosis and the individual criteria were high (80-90%) across various types of professionals, irrespective of their experience with hoarding cases. About 90% of participants in both samples thought the criteria would be very/somewhat acceptable for professionals and sufferers. Most experts (70%) supported the inclusion of HD in the main manual, whereas only 50% of the APA members did. The proposed criteria for HD have high sensitivity and specificity. The criteria are also deemed acceptable for professionals and sufferers alike. Training of professionals and the development and validation of semi-structured diagnostic instruments should improve diagnostic accuracy even further. A field trial is now needed to confirm these encouraging findings with real patients in real clinical settings.

  4. A study of patient safety management in the framework of clinical governance according to the nurses working in the ICU of the hospitals in the East of Tehran.

    PubMed

    Sahebalzamani, Mohammad; Mohammady, Mohsen

    2014-05-01

    The improvement of patient safety conditions in the framework of clinical service governance is one of the most important concerns worldwide. The importance of this issue and its effects on the health of patients encouraged the researcher to conduct this study to evaluate patient safety management in the framework of clinical governance according to the nurses working in the intensive care units (ICUs) of the hospitals of the east of Tehran, Iran in 2012. This descriptive study, which was based on the census method, was conducted on 250 nurses sampled from the hospitals located in the east of Tehran. For the collection of data, a researcher-made questionnaire in five categories, including culture, leadership, training, environment, and technology, as well as safety items, was used. To test the validity of the questionnaire, a content validity test was conducted, and the reliability of the questionnaire was assessed by the retest method, in which the value of alpha was equal to 91%. The results showed that safety culture was at a high level in 55% of cases, safety leadership was at a high level in 40% of cases and at a low level in 2.04% of cases, safety training was at a high level in 64.8% of cases and at a low level in 4% of cases, safety of environment and technology was at a high level in 56.8% of cases and at a low level in 1.6% of cases, and safety items of the patients in their reports were at a high level in approximately 44% of cases and at a low level in 6.5% of cases. The results of Student's t-test (P < 0.001) showed that the average score of all safety categories of the patients was significantly higher than the average points. Diligence of the management and personnel of the hospital is necessary for the improvement of safety management. For this purpose, the management of hospitals can show interest in safety, develop an events reporting system, enhance teamwork, and implement clinical governance plans.

  5. Reverse lactate threshold: a novel single-session approach to reliable high-resolution estimation of the anaerobic threshold.

    PubMed

    Dotan, Raffy

    2012-06-01

    The multisession maximal lactate steady-state (MLSS) test is the gold standard for anaerobic threshold (AnT) estimation. However, it is highly impractical, requires high fitness level, and suffers additional shortcomings. Existing single-session AnT-estimating tests are of compromised validity, reliability, and resolution. The presented reverse lactate threshold test (RLT) is a single-session, AnT-estimating test, aimed at avoiding the pitfalls of existing tests. It is based on the novel concept of identifying blood lactate's maximal appearance-disappearance equilibrium by approaching the AnT from higher, rather than from lower exercise intensities. Rowing, cycling, and running case data (4 recreational and competitive athletes, male and female, aged 17-39 y) are presented. Subjects performed the RLT test and, on a separate session, a single 30-min MLSS-type verification test at the RLT-determined intensity. The RLT and its MLSS verification exhibited exceptional agreement at 0.5% discrepancy or better. The RLT's training sensitivity was demonstrated by a case of a 2.5-mo training regimen, following which the RLT's 15-W improvement was fully MLSS-verified. The RLT's test-retest reliability was examined in 10 trained and untrained subjects. Test 2 differed from test 1 by only 0.3% with an intraclass correlation of 0.997. The data suggest RLT to accurately and reliably estimate AnT (as represented by MLSS verification) with high resolution and in distinctly different sports and to be sensitive to training adaptations. Compared with MLSS, the single-session RLT is highly practical and its lower fitness requirements make it applicable to athletes and untrained individuals alike. Further research is needed to establish RLT's validity and accuracy in larger samples.

  6. Specificity of the histopathological triad for the diagnosis of toxoplasmic lymphadenitis: polymerase chain reaction study.

    PubMed

    Lin, M H; Kuo, T T

    2001-08-01

    Toxoplasmosis is a common cause of lymphadenopathy, but toxoplasmic cysts are only extremely rarely found in the histological sections used for establishing the diagnosis. The histopathological triad of florid reactive follicular hyperplasia, clusters of epithelioid histiocytes, and focal sinusoidal distention by monocytoid B cells has been considered diagnostic of toxoplasmic lymphadenitis, but the validity of the triad rests only indirectly on serological correlation. Demonstrating Toxoplasma gondii DNA in lymph nodes displaying the histopathological triad would establish the validity of the triad as the criterion for the histopathological diagnosis of toxoplasmic lymphadenitis. We used frozen tissues of 12 lymph nodes with the histopathological triad and tissues of 27 lymph nodes from patients with various other conditions (including 13 cases of follicular lymphoid hyperplasia, FLH; three cases of dermatopathic lymphadenopathy, DPL; two cases of plasmacytosis; two cases of Castleman's disease; two cases of metastatic adenocarcinoma; and five cases of lymphoma) to detect T. gondii DNA by polymerase chain reaction. Ten of the 12 lymph nodes with the triad and six of the 27 lymph nodes without the triad were positive for T. gondii DNA. Thus, the sensitivity of the triad was 62.5% (10/16) and its specificity was 91.3% (21/23). The positive predictive value was 83.3% (10/12) and the negative predictive value was 77.8% (21/27). The six DNA-positive cases without the triad comprised four cases of FLH, one case of DPL, and one case of plasmacytosis. None of the neoplastic diseases was positive. The false-positive and false-negative cases could be due to sampling problems or past T. gondii infection. The results confirm that the histopathological triad is highly specific for the diagnosis of toxoplasmic lymphadenitis and can be used confidently.
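    The validity measures above follow directly from the 2×2 table implied by the abstract (10 of 12 triad-positive and 6 of 27 triad-negative nodes were PCR-positive). A minimal sketch of that arithmetic:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, PPV, and NPV from 2x2 table counts."""
    return {
        "sensitivity": tp / (tp + fn),   # true positives / all diseased
        "specificity": tn / (tn + fp),   # true negatives / all non-diseased
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }

# Counts from the abstract: triad = index test, PCR = reference standard.
m = diagnostic_metrics(tp=10, fp=2, fn=6, tn=21)
```

    This reproduces the reported 62.5% sensitivity, 91.3% specificity, and 83.3% PPV.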

  7. Evaluating physician performance at individualizing care: a pilot study tracking contextual errors in medical decision making.

    PubMed

    Weiner, Saul J; Schwartz, Alan; Yudkowsky, Rachel; Schiff, Gordon D; Weaver, Frances M; Goldberg, Julie; Weiss, Kevin B

    2007-01-01

    Clinical decision making requires 2 distinct cognitive skills: the ability to classify patients' conditions into diagnostic and management categories that permit the application of research evidence, and the ability to individualize, or more specifically to contextualize, care for patients whose circumstances and needs require variation from the standard approach to care. The purpose of this study was to develop and test a methodology for measuring physicians' performance at contextualizing care and to compare it with their performance at planning biomedically appropriate care. First, the authors drafted 3 cases, each with 4 variations, 3 of which embed biomedical and/or contextual information that is essential to planning care. Once the cases were validated as instruments for assessing physician performance, 54 internal medicine residents were presented with opportunities to make these preidentified biomedical or contextual errors, and data were collected on information elicitation and error making. The case validation process was successful: in the final iteration, the physicians who received the contextual variant of a case proposed an alternate plan of care from those who received the baseline variant 100% of the time. The subsequent piloting of these validated cases unmasked previously unmeasured differences in physician performance at contextualizing care. The findings, which reflect the performance characteristics of the study population, are presented. This pilot study demonstrates a methodology for measuring physician performance at contextualizing care and illustrates the contribution of such information to an overall assessment of physician practice.

  8. Testing Pairwise Association between Spatially Autocorrelated Variables: A New Approach Using Surrogate Lattice Data

    PubMed Central

    Deblauwe, Vincent; Kennel, Pol; Couteron, Pierre

    2012-01-01

    Background: Independence between observations is a standard prerequisite of traditional statistical tests of association. This condition is, however, violated when autocorrelation is present within the data. In the case of variables that are regularly sampled in space (i.e. lattice data or images), such as those provided by remote-sensing or geographical databases, this problem is particularly acute. Because analytic derivation of the null probability distribution of the test statistic (e.g. Pearson's r) is not always possible when autocorrelation is present, we propose instead the use of a Monte Carlo simulation with surrogate data. Methodology/Principal Findings: The null hypothesis that two observed mapped variables are the result of independent pattern-generating processes is tested here by generating sets of random image data while preserving the autocorrelation function of the original images. Surrogates are generated by matching the dual-tree complex wavelet spectra (and hence the autocorrelation functions) of white noise images with the spectra of the original images. The generated images can then be used to build the probability distribution function of any statistic of association under the null hypothesis. We demonstrate the validity of a statistical test of association based on these surrogates with both actual and synthetic data and compare it with a corrected parametric test and three existing methods that generate surrogates (randomization, random rotations and shifts, and iterative amplitude-adjusted Fourier transform). Type I error control was excellent, even with strong and long-range autocorrelation, which is not the case for the alternative methods. Conclusions/Significance: The wavelet-based surrogates are particularly appropriate in cases where autocorrelation appears at all scales or is direction-dependent (anisotropy). We explore the potential of the method for association tests involving a lattice of binary data and discuss its potential for validation of species distribution models. An implementation of the method in Java for the generation of wavelet-based surrogates is available online as supporting material. PMID:23144961
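    The paper's surrogates match dual-tree complex wavelet spectra. As a simplified, hedged illustration of the same Monte Carlo logic, the sketch below matches Fourier amplitude spectra instead (closer in spirit to the Fourier-transform method the authors compare against); by the Wiener-Khinchin theorem this likewise preserves each image's autocorrelation function while randomizing its phases:

```python
import numpy as np

def spectrum_matched_surrogate(img, rng):
    # Give white noise the amplitude spectrum of `img`: the surrogate
    # keeps img's autocorrelation function but has random phases.
    amplitude = np.abs(np.fft.fft2(img))
    noise_phase = np.angle(np.fft.fft2(rng.standard_normal(img.shape)))
    return np.fft.ifft2(amplitude * np.exp(1j * noise_phase)).real

def surrogate_association_test(x, y, n_surrogates=199, seed=0):
    # Build a Monte Carlo null distribution of Pearson's r under
    # independent pattern-generating processes with matched autocorrelation.
    rng = np.random.default_rng(seed)
    r_obs = np.corrcoef(x.ravel(), y.ravel())[0, 1]
    null = [
        np.corrcoef(
            spectrum_matched_surrogate(x, rng).ravel(),
            spectrum_matched_surrogate(y, rng).ravel(),
        )[0, 1]
        for _ in range(n_surrogates)
    ]
    # Two-sided Monte Carlo p-value, (B + 1) / (N + 1) convention
    p = (1 + sum(abs(r) >= abs(r_obs) for r in null)) / (n_surrogates + 1)
    return r_obs, p
```

    This is a sketch of the general surrogate-testing framework, not the authors' wavelet implementation.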

  9. Structural Dynamics Modeling of HIRENASD in Support of the Aeroelastic Prediction Workshop

    NASA Technical Reports Server (NTRS)

    Wieseman, Carol; Chwalowski, Pawel; Heeg, Jennifer; Boucke, Alexander; Castro, Jack

    2013-01-01

    An Aeroelastic Prediction Workshop (AePW) was held in April 2012 using three aeroelasticity case study wind tunnel tests to assess the capabilities of various codes in making aeroelasticity predictions. One of these case studies was the HIRENASD model, which was tested in the European Transonic Wind Tunnel (ETW). This paper summarizes the development of a standardized, enhanced analytical HIRENASD structural model for use in the AePW effort. The modifications to the HIRENASD finite element model were validated by comparing modal frequencies, evaluating modal assurance criteria, comparing the leading edge, trailing edge, and twist of the wing with experiment, and performing steady and unsteady CFD analyses for one of the test conditions on the same grid with identical processing of results.

  10. Integrated Application of Active Controls (IAAC) technology to an advanced subsonic transport project: Test ACT system validation

    NASA Technical Reports Server (NTRS)

    1985-01-01

    The primary objective of the Test Active Control Technology (ACT) System laboratory tests was to verify and validate the system concept, hardware, and software. The initial lab tests were open loop hardware tests of the Test ACT System as designed and built. During the course of the testing, minor problems were uncovered and corrected. Major software tests were run. The initial software testing was also open loop. These tests examined pitch control laws, wing load alleviation, signal selection/fault detection (SSFD), and output management. The Test ACT System was modified to interface with the direct drive valve (DDV) modules. The initial testing identified problem areas with DDV nonlinearities, valve friction induced limit cycling, DDV control loop instability, and channel command mismatch. The other DDV issue investigated was the ability to detect and isolate failures. Some simple schemes for failure detection were tested but were not completely satisfactory. The Test ACT System architecture continues to appear promising for ACT/FBW applications in systems that must be immune to worst case generic digital faults, and be able to tolerate two sequential nongeneric faults with no reduction in performance. The challenge in such an implementation would be to keep the analog element sufficiently simple to achieve the necessary reliability.

  11. Field application of serodiagnostics to identify elephants with tuberculosis prior to case confirmation by culture

    USDA-ARS?s Scientific Manuscript database

    Three serologic methods for antibody detection in elephant tuberculosis (TB), multiantigen print immunoassay (MAPIA), ElephantTB STAT-PAK kit, and DPP VetTB test, were validated prospectively using serial serum samples from 14 captive elephants in 5 countries which were diagnosed with TB by positive...

  12. Assessment in Immersive Virtual Environments: Cases for Learning, of Learning, and as Learning

    ERIC Educational Resources Information Center

    Code, Jillianne; Zap, Nick

    2017-01-01

    The key to education reform lies in exploring alternative forms of assessment. Alternative performance assessments provide a more valid measure than multiple-choice tests of students' conceptual understanding and higher-level skills such as problem solving and inquiry. Advances in game-based and virtual environment technologies are creating new…

  13. High School Prayers at Graduation: Will the Supreme Court Pronounce the Benediction?

    ERIC Educational Resources Information Center

    Mawdsley, Ralph D.; Russo, Charles J.

    1991-01-01

    The Supreme Court has decided to address the facts in "Lee v. Weisman" involving the validity of graduation prayer. Reviews the opinions of the current justices regarding the role of the tripartite establishment clause "Lemon" test and concludes with a projection of the court's resolution of the "Lee" case. (73…

  14. The Censorship Game and How to Play It. Bulletin 50.

    ERIC Educational Resources Information Center

    Cox, C. Benjamin

    The pamphlet is designed to help teachers, administrators, and parents understand the meanings, implications, and methods of censorship in public schools. Censorship is defined as "any limitation placed on a curricular choice for the wrong reasons." A test of the validity of a censorship case lies in the justifications offered for the…

  15. Validation of administrative case ascertainment algorithms for chronic childhood arthritis in Manitoba, Canada.

    PubMed

    Shiff, Natalie Jane; Oen, Kiem; Rabbani, Rasheda; Lix, Lisa M

    2017-09-01

    We validated case ascertainment algorithms for juvenile idiopathic arthritis (JIA) in the provincial health administrative databases of Manitoba, Canada. A population-based pediatric rheumatology clinical database from April 1st 1980 to March 31st 2012 was used to test case definitions in individuals diagnosed at ≤15 years of age. The case definitions varied the number of diagnosis codes (1, 2, or 3), time frame (1, 2 or 3 years), time between diagnoses (ever, >1 day, or ≥8 weeks), and physician specialty. Positive predictive value (PPV), sensitivity, and specificity with 95% confidence intervals (CIs) are reported. A case definition of 1 hospitalization or ≥2 diagnoses in 2 years by any provider ≥8 weeks apart using diagnosis codes for rheumatoid arthritis and ankylosing spondylitis produced a sensitivity of 89.2% (95% CI 86.8, 91.6), specificity of 86.3% (95% CI 83.0, 89.6), and PPV of 90.6% (95% CI 88.3, 92.9) when seronegative enthesopathy and arthropathy (SEA) was excluded as JIA; and sensitivity of 88.2% (95% CI 85.7, 90.7), specificity of 90.4% (95% CI 87.5, 93.3), and PPV of 93.9% (95% CI 92.0, 95.8) when SEA was included as JIA. This study validates case ascertainment algorithms for JIA in Canadian administrative health data using diagnosis codes for both rheumatoid arthritis (RA) and ankylosing spondylitis, to better reflect current JIA classification than codes for RA alone. Researchers will be able to use these results to define cohorts for population-based studies.
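    Each estimate above is reported with a 95% confidence interval. The abstract does not state which interval method the authors used; a simple normal-approximation (Wald) interval, with illustrative counts, would look like this:

```python
import math

def wald_ci(successes, n, z=1.96):
    """Normal-approximation (Wald) 95% CI for a proportion,
    clamped to [0, 1]."""
    p = successes / n
    half_width = z * math.sqrt(p * (1 - p) / n)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)
```

    For example, `wald_ci(90, 100)` gives a point estimate of 0.90 with bounds of roughly 0.841 and 0.959; the counts here are hypothetical, since the abstract does not give the denominators behind its intervals.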

  16. Using a bayesian latent class model to evaluate the utility of investigating persons with negative polymerase chain reaction results for pertussis.

    PubMed

    Tarr, Gillian A M; Eickhoff, Jens C; Koepke, Ruth; Hopfensperger, Daniel J; Davis, Jeffrey P; Conway, James H

    2013-07-15

    Pertussis remains difficult to control. Imperfect sensitivity of diagnostic tests and lack of specific guidance regarding interpretation of negative test results among patients with compatible symptoms may contribute to its spread. In this study, we examined whether additional pertussis cases could be identified if persons with negative pertussis test results were routinely investigated. We conducted interviews among 250 subjects aged ≤18 years with pertussis polymerase chain reaction (PCR) results reported from 2 reference laboratories in Wisconsin during July-September 2010 to determine whether their illnesses met the Centers for Disease Control and Prevention's clinical case definition (CCD) for pertussis. PCR validity measures were calculated using the CCD as the standard for pertussis disease. Two Bayesian latent class models were used to adjust the validity measures for pertussis detectable by 1) culture alone and 2) culture and/or more sensitive measures such as serology. Among 190 PCR-negative subjects, 54 (28%) had illnesses meeting the CCD. In adjusted analyses, PCR sensitivity and the negative predictive value were 1) 94% and 99% and 2) 43% and 87% in the 2 types of models, respectively. The models suggested that public health follow-up of reported pertussis patients with PCR-negative results leads to the detection of more true pertussis cases than follow-up of PCR-positive persons alone. The results also suggest a need for a more specific pertussis CCD.
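    The contrast between the two models' predictive values reflects how PPV and NPV depend on prevalence as well as on sensitivity and specificity. The sketch below is plain Bayes' rule, not the paper's Bayesian latent class model, and the numbers in the usage note are illustrative:

```python
def predictive_values(sensitivity, specificity, prevalence):
    """PPV and NPV from Bayes' rule, given disease prevalence."""
    tp = sensitivity * prevalence              # true positive mass
    fp = (1 - specificity) * (1 - prevalence)  # false positive mass
    fn = (1 - sensitivity) * prevalence        # false negative mass
    tn = specificity * (1 - prevalence)        # true negative mass
    return tp / (tp + fp), tn / (tn + fn)
```

    For example, sensitivity 0.9, specificity 0.8, and prevalence 0.10 give a PPV of about 0.33 but an NPV of about 0.99, showing why a test can "rule out" far better than it "rules in" when disease is uncommon.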

  17. The Dutch Linguistic Intraoperative Protocol: a valid linguistic approach to awake brain surgery.

    PubMed

    De Witte, E; Satoer, D; Robert, E; Colle, H; Verheyen, S; Visch-Brink, E; Mariën, P

    2015-01-01

    Intraoperative direct electrical stimulation (DES) is increasingly used in patients operated on for tumours in eloquent areas. Although a positive impact of DES on postoperative linguistic outcome is generally advocated, information about the neurolinguistic methods applied in awake surgery is scarce. We developed the first standardised Dutch linguistic test battery (measuring phonology, semantics, and syntax) to reliably identify the critical language zones in detail. A normative study was carried out in a control group of 250 native Dutch-speaking healthy adults. In addition, the clinical application of the Dutch Linguistic Intraoperative Protocol (DuLIP) was demonstrated by means of anatomo-functional models and five case studies. A set of DuLIP tests was selected for each patient depending on the tumour location and degree of linguistic impairment. DuLIP is a valid test battery for pre-, intra-, and postoperative language testing and facilitates intraoperative mapping of eloquent language regions that are variably located. Copyright © 2014 Elsevier Inc. All rights reserved.

  18. Using artificial intelligence for automating testing of a resident space object collision avoidance system on an orbital spacecraft

    NASA Astrophysics Data System (ADS)

    Straub, Jeremy

    2014-06-01

    Resident space objects (RSOs) pose a significant threat to orbital assets. Due to high relative velocities, even a small RSO can cause significant damage to an object that it strikes. Worse, in many cases a collision may create numerous additional RSOs if the impacted object shatters apart. These new RSOs will have heterogeneous mass, size, and orbital characteristics. Collision avoidance systems (CASs) are used to maneuver spacecraft out of the path of RSOs to prevent these impacts. An RSO CAS must be validated to ensure that it is able to perform effectively given a virtually unlimited number of strike scenarios. This paper presents work on the creation of a testing environment and an AI testing routine that can be utilized to perform verification and validation activities for cyber-physical systems. It reviews prior work on automated and autonomous testing. Comparative performance (relative to that of a human tester) is discussed.

  19. Thermal Vacuum Test Correlation of A Zero Propellant Load Case Thermal Capacitance Propellant Gauging Analytics Model

    NASA Technical Reports Server (NTRS)

    McKim, Stephen A.

    2016-01-01

    This thesis describes the development and test data validation of the thermal model that is the foundation of a thermal capacitance spacecraft propellant load estimator. Specific details of creating the thermal model for the diaphragm propellant tank used on NASA's Magnetospheric Multiscale spacecraft using ANSYS and the correlation process implemented to validate the model are presented. The thermal model was correlated to within plus or minus 3 degrees Centigrade of the thermal vacuum test data, and was found to be relatively insensitive to uncertainties in applied heat flux and mass knowledge of the tank. More work is needed, however, to refine the thermal model to further improve temperature predictions in the upper hemisphere of the propellant tank. Temperature predictions in this portion were found to be 2-2.5 degrees Centigrade lower than the test data. A road map to apply the model to predict propellant loads on the actual MMS spacecraft toward its end of life in 2017-2018 is also presented.

  20. Validation of powder X-ray diffraction following EN ISO/IEC 17025.

    PubMed

    Eckardt, Regina; Krupicka, Erik; Hofmeister, Wolfgang

    2012-05-01

    Powder X-ray diffraction (PXRD) is widely used in forensic science laboratories, mainly for qualitative phase identification. Little is found in the literature on the validation of PXRD in the field of forensic sciences. According to EN ISO/IEC 17025, the method has to be tested for several parameters. Trueness, specificity, and selectivity of PXRD were tested using certified reference materials or a combination thereof. All three tested parameters demonstrated the secure performance of the method. Sample preparation errors were simulated to evaluate the robustness of the method; these errors were either easily detected by the operator or nonsignificant for phase identification. For the detection limit, a statistical evaluation of the signal-to-noise ratio showed that a peak criterion of three sigma is inadequate, and recommendations for a more realistic peak criterion are given. Finally, the results of an international proficiency test confirmed the secure performance of PXRD. © 2012 American Academy of Forensic Sciences.

  1. External Validation of a Case-Mix Adjustment Model for the Standardized Reporting of 30-Day Stroke Mortality Rates in China.

    PubMed

    Yu, Ping; Pan, Yuesong; Wang, Yongjun; Wang, Xianwei; Liu, Liping; Ji, Ruijun; Meng, Xia; Jing, Jing; Tong, Xu; Guo, Li; Wang, Yilong

    2016-01-01

    A case-mix adjustment model has been developed and externally validated, demonstrating promise. However, the model has not been thoroughly tested among populations in China. In our study, we evaluated the performance of the model in Chinese patients with acute stroke. Case-mix adjustment model A includes items on age, presence of atrial fibrillation on admission, National Institutes of Health Stroke Severity Scale (NIHSS) score on admission, and stroke type. Model B is similar to model A but includes only the consciousness component of the NIHSS score. Both models A and B were evaluated for predicting 30-day mortality rates in 13,948 patients with acute stroke from the China National Stroke Registry. The discrimination of the models was quantified by the c-statistic, and calibration was assessed using Pearson's correlation coefficient. The c-statistic of model A in our external validation cohort was 0.80 (95% confidence interval, 0.79-0.82), and the c-statistic of model B was 0.82 (95% confidence interval, 0.81-0.84). Excellent calibration was observed for both models (Pearson's correlation coefficient 0.892 for model A, p < 0.001; 0.927 for model B, p = 0.008). The case-mix adjustment model could be used to effectively predict 30-day mortality rates in Chinese patients with acute stroke.
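    The c-statistic quoted above is the probability that a randomly chosen patient who died has a higher predicted risk than a randomly chosen survivor. A minimal pairwise-concordance sketch of that computation (not the paper's implementation):

```python
import numpy as np

def c_statistic(outcomes, risks):
    """Concordance (c) statistic: the fraction of (event, non-event)
    pairs in which the event has the higher predicted risk;
    ties count one half."""
    outcomes = np.asarray(outcomes)
    risks = np.asarray(risks)
    event = risks[outcomes == 1]
    non_event = risks[outcomes == 0]
    # All pairwise risk differences between events and non-events
    diffs = event[:, None] - non_event[None, :]
    return (np.sum(diffs > 0) + 0.5 * np.sum(diffs == 0)) / diffs.size
```

    For example, `c_statistic([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])` returns 0.75: three of the four event/non-event pairs are correctly ordered.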

  2. PDB_REDO: constructive validation, more than just looking for errors.

    PubMed

    Joosten, Robbie P; Joosten, Krista; Murshudov, Garib N; Perrakis, Anastassis

    2012-04-01

    Developments of the PDB_REDO procedure that combine re-refinement and rebuilding within a unique decision-making framework to improve structures in the PDB are presented. PDB_REDO uses a variety of existing and custom-built software modules to choose an optimal refinement protocol (e.g. anisotropic, isotropic or overall B-factor refinement, TLS model) and to optimize the geometry versus data-refinement weights. Next, it proceeds to rebuild side chains and peptide planes before a final optimization round. PDB_REDO works fully automatically without the need for intervention by a crystallographic expert. The pipeline was tested on 12 000 PDB entries and the great majority of the test cases improved both in terms of crystallographic criteria such as R(free) and in terms of widely accepted geometric validation criteria. It is concluded that PDB_REDO is useful to update the otherwise 'static' structures in the PDB to modern crystallographic standards. The publicly available PDB_REDO database provides better model statistics and contributes to better refinement and validation targets.


  4. Mars Ascent Vehicle Test Requirements and Terrestrial Validation

    NASA Technical Reports Server (NTRS)

    Dankanich, John W.; Cathey, Henry M.; Smith, David A.

    2011-01-01

    The Mars robotic sample return mission has been a potential flagship mission for NASA's Science Mission Directorate for decades. The Mars Exploration Program and the planetary science decadal survey have highlighted both the science return of the Mars Sample Return (MSR) mission and the need for risk reduction through technology development. One of the critical elements of the MSR mission is the Mars Ascent Vehicle (MAV), which must launch the sample from the surface of Mars and place it into low Mars orbit. The MAV has significant challenges to overcome due to the Martian environments and the constraints of the Entry, Descent, and Landing system. Launch vehicles typically have a relatively low success probability on early flights, so a thorough system-level validation is warranted. The MAV flight environments are challenging and in some cases impossible to replicate terrestrially. The expected MAV environments have been evaluated, and a first look at potential system test options has been explored. The terrestrial flight requirements and potential validation options are presented herein.

  5. Validation of OpenFoam for heavy gas dispersion applications.

    PubMed

    Mack, A; Spruijt, M P N

    2013-11-15

    In the present paper, heavy gas dispersion calculations were performed with OpenFoam. For a wind tunnel test case, the numerical results were validated against experiments. For a full-scale numerical experiment, a code-to-code comparison was performed with numerical results obtained from Fluent. The validation was performed in a gravity-driven environment (slope), in which the heavy gas induced the turbulence. For the code-to-code comparison, a hypothetical heavy gas release into a strongly turbulent atmospheric boundary layer including terrain effects was selected. The investigations were performed for SF6 and CO2 as heavy gases using the standard k-ɛ turbulence model. A strong interaction of the heavy gas with the turbulence is present, which results in strong damping of the turbulence and therefore reduced heavy gas mixing. This buoyancy-based interaction in particular was studied in order to ensure that the turbulence-buoyancy coupling, and not the global behaviour of the turbulence modelling, is the main driver for the reduced mixing. For both test cases, the OpenFoam and Fluent solutions were mainly in good agreement with each other. Besides steady-state solutions, time accuracy was investigated. In the low-turbulence environment (wind tunnel test), the laminar solutions of both codes were in good agreement with each other and with the experimental data, and the turbulent solutions of OpenFoam were in much better agreement with the experimental results than the Fluent solutions. In the strongly turbulent environment, both codes showed excellent comparability. Copyright © 2013 Elsevier B.V. All rights reserved.

  6. Evaluation of a surveillance case definition for anogenital warts, Kaiser Permanente northwest.

    PubMed

    Naleway, Allison L; Weinmann, Sheila; Crane, Brad; Gee, Julianne; Markowitz, Lauri E; Dunne, Eileen F

    2014-08-01

    Most studies of anogenital wart (AGW) epidemiology have used large clinical or administrative databases and unconfirmed case definitions based on combinations of diagnosis and procedure codes. We developed and validated an AGW case definition using a combination of diagnosis codes and other information available in the electronic medical record (provider type, laboratory testing). We calculated the positive predictive value (PPV) of this case definition compared with manual medical record review in a random sample of 250 cases. Using this case definition, we calculated the annual age- and sex-stratified prevalence of AGW among individuals 11 through 30 years of age from 2000 through 2005. We identified 2730 individuals who met the case definition. The PPV of the case definition was 82%, and the average annual prevalence was 4.16 per 1000. Prevalence of AGW was higher in females compared with males in every age group, with the exception of the 27- to 30-year-olds. Among females, prevalence peaked in the 19- to 22-year-olds, and among males, the peak was observed in 23- to 26-year-olds. The case definition developed in this study is the first to be validated with medical record review and has a good PPV for the detection of AGW. The prevalence rates observed in this study were higher than other published rates, but the age- and sex-specific patterns observed were consistent with previous reports.

  7. Impact of Expanding ELISA Screening in DUID Investigations to Include Carisoprodol/Meprobamate and Zolpidem.

    PubMed

    Lu, Aileen; Scott, Karen S; Chan-Hosokawa, Aya; Logan, Barry K

    2017-03-01

    In 2013, the National Safety Council's Alcohol, Drugs and Impairment Division added zolpidem and carisoprodol (and its metabolite meprobamate) to the list of Tier 1 drugs that should be tested for in all suspected drug-impaired driving and motor vehicle fatality investigations. We describe the validation of enzyme-linked immunosorbent assays (ELISA) for both drugs in whole blood, and the use of the ELISAs to assess their positivity in a sample of 322 suspected impaired driving cases that was retrospectively screened using the validated assays. The occurrence of carisoprodol/meprobamate was found to be 1.2%, and of zolpidem, 1.6%. In addition, we analyzed a large dataset (n = 1,672) of Driving Under the Influence of Drugs (DUID) test results from a laboratory performing high-volume DUID testing to assess the frequency of detection of both drugs after implementing the expanded NSC scope. Carisoprodol or meprobamate was positive in 5.9% (n = 99) of these samples, while zolpidem was positive in 5.3% (n = 89), in drivers who in many cases had been found to be negative for other drugs. Carisoprodol and zolpidem are both potent CNS depressants and are appropriate additions to the recommended NSC scope of testing. © The Author 2016. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.

  8. Screening for Binge Eating Disorders Using the Patient Health Questionnaire in a Community Sample

    PubMed Central

    Striegel-Moore, Ruth H.; Perrin, Nancy; DeBar, Lynn; Wilson, G. Terence; Rosselli, Francine; Kraemer, Helena C.

    2009-01-01

    Objective: To examine the operating characteristics of the Patient Health Questionnaire eating disorder module (PHQ-ED) for identifying bulimia nervosa/binge eating disorder (BN/BED) or recurrent binge eating (RBE) in a community sample, and to compare true positive (TP) versus false positive (FP) cases on clinical validators. Method: 259 screen-positive individuals and a random sample of 89 screen-negative cases completed a diagnostic interview. Sensitivity, specificity, and positive predictive value (PPV) were calculated. TP and FP cases were compared using t-tests and chi-square tests. Results: The PHQ-ED had high sensitivity (100%) and specificity (92%) for detecting BN/BED or RBE, but PPV was low (15% or 19%). TP and FP cases did not differ significantly on frequency of subjective bulimic episodes, objective overeating, or restraint, on BMI, or on self-rated health. Conclusions: The PHQ-ED is recommended for use in large populations only in conjunction with follow-up questions to rule out cases without objective bulimic episodes. PMID:19424976

  9. Validation of equations for pleural effusion volume estimation by ultrasonography.

    PubMed

    Hassan, Maged; Rizk, Rana; Essam, Hatem; Abouelnour, Ahmed

    2017-12-01

    To validate the accuracy of previously published equations that estimate pleural effusion volume using ultrasonography. Only equations using simple measurements were tested. Three measurements were taken at the posterior axillary line for each case with effusion: lateral height of effusion (H), distance between collapsed lung and chest wall (C), and distance between lung and diaphragm (D). Cases whose effusion was aspirated to dryness were included, and the drained volume was recorded. The intra-class correlation coefficient (ICC) was used to determine the predictive accuracy of five equations against the actual volume of aspirated effusion. 46 cases with effusion were included. The most accurate equation in predicting effusion volume was (H + D) × 70 (ICC 0.83). The simplest and yet accurate equation was H × 100 (ICC 0.79). Pleural effusion height measured by ultrasonography gives a reasonable estimate of effusion volume. Incorporating the distance between lung base and diaphragm into the estimation improves the ICC from 0.79 with the simpler equation to 0.83.
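    The two best-performing equations can be coded directly. The abstract does not state the measurement units, so the sketch below assumes H and D in centimetres yielding an estimate in millilitres:

```python
def effusion_volume_hd(h_cm, d_cm):
    # Most accurate equation tested: (H + D) x 70 (ICC 0.83)
    return (h_cm + d_cm) * 70

def effusion_volume_h(h_cm):
    # Simplest accurate equation tested: H x 100 (ICC 0.79)
    return h_cm * 100
```

    For example, an effusion height of 5 and a lung-diaphragm distance of 3 give (5 + 3) × 70 = 560, versus 5 × 100 = 500 from the simpler equation.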

  10. Construction and validation of a low-cost surgical trainer based on iPhone technology for training laparoscopic skills.

    PubMed

    Pérez Escamirosa, Fernando; Ordorica Flores, Ricardo; Minor Martínez, Arturo

    2015-04-01

    In this article, we describe the construction and validation of a laparoscopic trainer using an iPhone 5 and a plastic document holder case. The abdominal cavity was simulated with a clear plastic document holder case. On one side of the case, two holes for entry of laparoscopic instruments were drilled. We added a window in which to place the camera of the iPhone, which serves as the camera of the trainer. Twenty residents carried out four tasks using the iPhone Trainer and a physical laparoscopic trainer. The times for all tasks were analyzed with a simple paired t-test. The construction of the trainer took 1 hour, with a cost of

  11. History and development of the Schmidt-Hunter meta-analysis methods.

    PubMed

    Schmidt, Frank L

    2015-09-01

    In this article, I provide answers to the questions posed by Will Shadish about the history and development of the Schmidt-Hunter methods of meta-analysis. In the 1970s, I headed a research program on personnel selection at the US Office of Personnel Management (OPM). After our research showed that validity studies have low statistical power, OPM felt a need for a better way to demonstrate test validity, especially in light of court cases challenging selection methods. In response, we created our method of meta-analysis (initially called validity generalization). Results showed that most of the variability of validity estimates from study to study was because of sampling error and other research artifacts such as variations in range restriction and measurement error. Corrections for these artifacts in our research and in replications by others showed that the predictive validity of most tests was high and generalizable. This conclusion challenged long-standing beliefs and so provoked resistance, which over time was overcome. The 1982 book that we published extending these methods to research areas beyond personnel selection was positively received and was followed by expanded books in 1990, 2004, and 2014. Today, these methods are being applied in a wide variety of areas. Copyright © 2015 John Wiley & Sons, Ltd.

  12. Evaluating the statistical performance of less applied algorithms in classification of worldview-3 imagery data in an urbanized landscape

    NASA Astrophysics Data System (ADS)

    Ranaie, Mehrdad; Soffianian, Alireza; Pourmanafi, Saeid; Mirghaffari, Noorollah; Tarkesh, Mostafa

    2018-03-01

    In the recent decade, analysis of remotely sensed imagery has become one of the most common and widely used procedures in environmental studies, in which supervised image classification techniques play a central role. Hence, using a high-resolution Worldview-3 image over a mixed urbanized landscape in Iran, three less commonly applied image classification methods, including bagged CART, a stochastic gradient boosting model, and a neural network with feature extraction, were tested and compared with two prevalent methods: random forest and a support vector machine with a linear kernel. To do so, each method was run ten times, and three validation techniques were used to estimate the accuracy statistics: cross-validation, independent validation, and validation against the full training data. Moreover, the statistical significance of the differences between the classification methods was assessed using ANOVA and Tukey tests. In general, the results showed that random forest, by a marginal difference over bagged CART and the stochastic gradient boosting model, is the best-performing method, although based on independent validation there was no significant difference between the performances of the classification methods. It should finally be noted that the neural network with feature extraction and the linear support vector machine had better processing speed than the others.
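    The accuracy comparison described above (each method run ten times, then ANOVA across methods) can be sketched with a hand-rolled one-way ANOVA F statistic. The classifier names and accuracy values below are illustrative, not the study's data:

```python
def anova_f(groups):
    """One-way ANOVA F statistic for a list of sample groups."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    means = [sum(g) / len(g) for g in groups]
    grand = sum(sum(g) for g in groups) / n
    # between-group and within-group sums of squares
    ss_between = sum(len(g) * (m - grand) ** 2 for g, m in zip(groups, means))
    ss_within = sum(sum((x - m) ** 2 for x in g) for g, m in zip(groups, means))
    return (ss_between / (k - 1)) / (ss_within / (n - k))

# Illustrative accuracies from ten runs of three hypothetical classifiers:
rf  = [0.91, 0.92, 0.90, 0.91, 0.93, 0.92, 0.91, 0.90, 0.92, 0.91]
svm = [0.88, 0.87, 0.89, 0.88, 0.88, 0.87, 0.89, 0.88, 0.87, 0.88]
nn  = [0.85, 0.86, 0.84, 0.85, 0.86, 0.85, 0.84, 0.85, 0.86, 0.85]
print(f"F = {anova_f([rf, svm, nn]):.1f}")  # large F -> group means differ
```

A large F relative to the F(k-1, n-k) critical value would then be followed by a Tukey post-hoc test to see which pairs of methods differ.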

  13. Measuring Decision-Making During Thyroidectomy: Validity Evidence for a Web-Based Assessment Tool.

    PubMed

    Madani, Amin; Gornitsky, Jordan; Watanabe, Yusuke; Benay, Cassandre; Altieri, Maria S; Pucher, Philip H; Tabah, Roger; Mitmaker, Elliot J

    2018-02-01

    Errors in judgment during thyroidectomy can lead to recurrent laryngeal nerve injury and other complications. Despite the strong link between patient outcomes and intraoperative decision-making, methods to evaluate these complex skills are lacking. The purpose of this study was to develop objective metrics to evaluate advanced cognitive skills during thyroidectomy and to obtain validity evidence for them. An interactive online learning platform was developed (www.thinklikeasurgeon.com). Trainees and surgeons from four institutions completed a 33-item assessment, developed based on a cognitive task analysis and expert Delphi consensus. Sixteen items required subjects to make annotations on still frames of thyroidectomy videos, and accuracy scores were calculated based on an algorithm derived from experts' responses ("visual concordance test," VCT). Seven items were short answer (SA), requiring users to type their answers, and scores were automatically calculated based on their similarity to a pre-populated repertoire of correct responses. Test-retest reliability, internal consistency, and correlation of scores with self-reported experience and training level (novice, intermediate, expert) were calculated. Twenty-eight subjects (10 endocrine surgeons and otolaryngologists, 18 trainees) participated. There was high test-retest reliability (intraclass correlation coefficient = 0.96; n = 10) and internal consistency (Cronbach's α = 0.93). The assessment demonstrated significant differences between novices, intermediates, and experts in total score (p < 0.01), VCT score (p < 0.01) and SA score (p < 0.01). There was high correlation between total case number and total score (ρ = 0.95, p < 0.01), between total case number and VCT score (ρ = 0.93, p < 0.01), and between total case number and SA score (ρ = 0.83, p < 0.01). 
This study describes the development of novel metrics and provides validity evidence for an interactive Web-based platform to objectively assess decision-making during thyroidectomy.
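    Internal-consistency figures like the Cronbach's α = 0.93 reported above come from the item-variance formula α = k/(k-1) · (1 - Σ var(item) / var(total)). A minimal pure-Python sketch; the item scores below are illustrative, not the study's data:

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score lists (one list per item,
    same respondents in the same order)."""
    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
    return k / (k - 1) * (1 - sum(var(i) for i in items) / var(totals))

# Three illustrative items answered by five respondents:
items = [[4, 3, 5, 2, 4],
         [5, 3, 4, 2, 4],
         [4, 2, 5, 3, 5]]
print(f"alpha = {cronbach_alpha(items):.2f}")
```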

  14. Validation of a New Elastoplastic Constitutive Model Dedicated to the Cyclic Behaviour of Brittle Rock Materials

    NASA Astrophysics Data System (ADS)

    Cerfontaine, B.; Charlier, R.; Collin, F.; Taiebat, M.

    2017-10-01

    Old mines or caverns may be used as reservoirs for fuel/gas storage or in the context of large-scale energy storage. In the first case, oil or gas is stored on an annual basis. In the second case, pressure due to water or compressed air varies on a daily basis or even faster. In both cases a cyclic loading on the cavern's/mine's walls must be considered for the design. The complexity of rockwork geometries or coupling with water flow requires finite element modelling, and hence a suitable constitutive law for modelling the rock behaviour. This paper presents and validates the formulation of a new constitutive law able to represent the inherently cyclic behaviour of rocks at low confinement. The main features of the behaviour evidenced by experiments in the literature are a progressive degradation and straining of the material with the number of cycles. A constitutive law based on a bounding surface concept is developed. It represents the brittle failure of the material as well as its progressive degradation. Kinematic hardening of the yield surface allows the modelling of cycles. Isotropic softening on the cohesion variable leads to the progressive degradation of the rock strength. A limit surface is introduced that has a lower opening than the bounding surface. This surface describes the peak strength of the material and allows the modelling of a brittle behaviour. In addition, a fatigue limit is introduced such that no cohesion degradation occurs if the stress state lies inside this surface. The model is validated against three different rock materials and types of experiments. Parameters of the constitutive law are calibrated against uniaxial tests on Lorano marble, a triaxial test on a sandstone, and a damage-controlled test on Lac du Bonnet granite. The model is shown to correctly reproduce experimental results, especially the evolution of strain with the number of cycles.

  15. Assessing local instrument reliability and validity: a field-based example from northern Uganda.

    PubMed

    Betancourt, Theresa S; Bass, Judith; Borisova, Ivelina; Neugebauer, Richard; Speelman, Liesbeth; Onyango, Grace; Bolton, Paul

    2009-08-01

    This paper presents an approach for evaluating the reliability and validity of mental health measures in non-Western field settings. We describe this approach using the example of our development of the Acholi psychosocial assessment instrument (APAI), which is designed to assess depression-like (two tam, par and kumu), anxiety-like (ma lwor) and conduct problems (kwo maraco) among war-affected adolescents in northern Uganda. To examine the criterion validity of this measure in the absence of a traditional gold standard, we derived local syndrome terms from qualitative data and used self-reports of these syndromes by indigenous people as a reference point for determining caseness. Reliability was examined using standard test-retest and inter-rater methods. Each of the subscale scores for the depression-like syndromes exhibited strong internal reliability, ranging from alpha = 0.84-0.87. Internal reliability was good for the anxiety (0.70), conduct problems (0.83), and prosocial attitudes and behaviors (0.70) subscales. Combined inter-rater reliability and test-retest reliability were good for most subscales except the conduct problem and prosocial scales. The pattern of significant mean differences in the corresponding APAI problem scale scores between self-reported cases vs. noncases on local syndrome terms was confirmed in the data for all three depression-like syndromes, but not for the anxiety-like syndrome ma lwor or the conduct problem kwo maraco.

  16. New method for detection of gastric cancer by hyperspectral imaging: a pilot study

    NASA Astrophysics Data System (ADS)

    Kiyotoki, Shu; Nishikawa, Jun; Okamoto, Takeshi; Hamabe, Kouichi; Saito, Mari; Goto, Atsushi; Fujita, Yusuke; Hamamoto, Yoshihiko; Takeuchi, Yusuke; Satori, Shin; Sakaida, Isao

    2013-02-01

    We developed a new, easy, and objective method to detect gastric cancer using hyperspectral imaging (HSI) technology combining spectroscopy and imaging. A total of 16 gastroduodenal tumors removed by endoscopic resection or surgery from 14 patients at Yamaguchi University Hospital, Japan, were recorded using a hyperspectral camera (HSC) equipped with HSI technology. Corrected spectral reflectance was obtained from 10 samples of normal mucosa and 10 samples of tumors for each case. The 16 cases were divided into eight training cases (160 training samples) and eight test cases (160 test samples). We established a diagnostic algorithm with training samples and evaluated it with test samples. The diagnostic capability of the algorithm for each tumor was validated, and enhancement of tumors by image processing using the HSC was evaluated. The diagnostic algorithm used the 726-nm wavelength, with a cutoff point established from training samples. The sensitivity, specificity, and accuracy rates of the algorithm's diagnostic capability in the test samples were 78.8% (63/80), 92.5% (74/80), and 85.6% (137/160), respectively. Tumors in HSC images of 13 (81.3%) cases were well enhanced by image processing. Differences in spectral reflectance between tumors and normal mucosa suggested that tumors can be clearly distinguished from background mucosa with HSI technology.
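    A single-wavelength cutoff classifier of the kind described above can be sketched as follows. Only the use of a reflectance cutoff at 726 nm comes from the record; the reflectance values, the midpoint rule for choosing the cutoff, and the function names are illustrative assumptions:

```python
def choose_cutoff(tumor_reflectance, normal_reflectance):
    """Midpoint cutoff between the two class means (assumed rule)."""
    mean = lambda xs: sum(xs) / len(xs)
    return (mean(tumor_reflectance) + mean(normal_reflectance)) / 2

def classify(reflectance, cutoff, tumor_below=True):
    """Label a sample from its reflectance at the chosen wavelength."""
    is_tumor = reflectance < cutoff if tumor_below else reflectance >= cutoff
    return "tumor" if is_tumor else "normal"

# Illustrative training reflectances at 726 nm:
tumor_train  = [0.30, 0.28, 0.33, 0.31]
normal_train = [0.45, 0.47, 0.44, 0.46]
cut = choose_cutoff(tumor_train, normal_train)
print(cut, classify(0.32, cut), classify(0.46, cut))
```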

  17. The Selective Arterial Calcium Injection Test is a Valid Diagnostic Method for Invisible Gastrinoma with Duodenal Ulcer Stenosis: A Case Report.

    PubMed

    Okada, Kenjiro; Sudo, Takeshi; Miyamoto, Katsunari; Yokoyama, Yujiro; Sakashita, Yoshihiro; Hashimoto, Yasushi; Kobayashi, Hironori; Otsuka, Hiroyuki; Sakoda, Takuya; Shimamoto, Fumio

    2016-03-01

    The localization and diagnosis of microgastrinomas in a patient with multiple endocrine neoplasia type 1 is difficult preoperatively. The selective arterial calcium injection (SACI) test is a valid diagnostic method for the preoperative diagnosis of these invisible microgastrinomas. We report a rare case of multiple invisible duodenal microgastrinomas with severe duodenal stenosis diagnosed preoperatively by using the SACI test. A 50-year-old man was admitted to our hospital with recurrent duodenal ulcers. His serum gastrin level was elevated to 730 pg/ml. Gastrointestinal endoscopy could not pass through to visualize the inferior part of the duodenum, because recurrent duodenal ulcers had resulted in severe duodenal stenosis. The duodenal stenosis also prevented additional endoscopic examinations such as endoscopic ultrasonography. Computed tomography did not show any tumors in the duodenum or pancreas. The SACI test provided evidence for a gastrinoma in the vascular territory of the inferior pancreaticoduodenal artery. We diagnosed a gastrinoma in the periampullary lesion, so we performed subtotal stomach-preserving pancreaticoduodenectomy with regional lymphadenectomy. Histopathological findings showed multiple duodenal gastrinomas with lymph node metastasis and nonfunctioning pancreatic neuroendocrine tumors. Twenty months after surgery, the patient is alive with no evidence of recurrence and a normal gastrin level. In conclusion, the SACI test can enhance the accuracy of preoperative localization and diagnosis of invisible microgastrinomas, especially in the setting of severe duodenal stenosis.

  18. In-Flight Validation of Mid and Thermal Infrared Remotely Sensed Data Using the Lake Tahoe and Salton Sea Automated Validation Sites

    NASA Technical Reports Server (NTRS)

    Hook, Simon J.

    2008-01-01

    The presentation includes an introduction, Lake Tahoe site layout and measurements, Salton Sea site layout and measurements, field instrument calibration and cross-calibrations, data reduction methodology and error budgets, and example results for MODIS. Summary and conclusions are: 1) The Lake Tahoe CA/NV automated validation site was established in 1999 to assess the radiometric accuracy of satellite and airborne mid and thermal infrared data and products. Water surface temperatures range from 4-25 C. 2) The Salton Sea CA automated validation site was established in 2008 to broaden the range of available water surface temperatures and atmospheric water vapor test cases. Water surface temperatures range from 15-35 C. 3) The sites provide all information necessary for validation every 2 mins (bulk temperature, skin temperature, air temperature, wind speed, wind direction, net radiation, relative humidity). 4) The sites have been used to validate mid and thermal infrared data and products from: ASTER, AATSR, ATSR2, MODIS-Terra, MODIS-Aqua, Landsat 5, Landsat 7, MTI, TES, MASTER, MAS. 5) Approximately 10 years of data are available to help validate AVHRR.

  19. Validation of NASA Thermal Ice Protection Computer Codes. Part 3; The Validation of Antice

    NASA Technical Reports Server (NTRS)

    Al-Khalil, Kamel M.; Horvath, Charles; Miller, Dean R.; Wright, William B.

    2001-01-01

    An experimental program was generated by the Icing Technology Branch at NASA Glenn Research Center to validate two ice protection simulation codes: (1) LEWICE/Thermal for transient electrothermal de-icing and anti-icing simulations, and (2) ANTICE for steady-state hot-gas and electrothermal anti-icing simulations. An electrothermal ice protection system was designed and constructed integral to a 36-inch-chord NACA0012 airfoil. The model was fully instrumented with thermocouples, RTDs, and heat flux gages. Tests were conducted at several icing environmental conditions during a two-week period at the NASA Glenn Icing Research Tunnel. Experimental results of running-wet and evaporative cases were compared to the ANTICE computer code predictions and are presented in this paper.

  20. Assessing clinical competency in the health sciences

    NASA Astrophysics Data System (ADS)

    Panzarella, Karen Joanne

    To test the success of integrated curricula in schools of health sciences, meaningful measurements of student performance are required to assess clinical competency. This research project analyzed a new performance assessment tool, the Integrated Standardized Patient Examination (ISPE), for assessing clinical competency: specifically, to assess Doctor of Physical Therapy (DPT) students' clinical competence as the ability to integrate basic science knowledge with clinical communication skills. Thirty-four DPT students performed two ISPE cases, one of a patient who sustained a stroke and the other of a patient with a herniated lumbar disc. Cases were portrayed by standardized patients (SPs) in a simulated clinical setting. Each case was scored by an expert evaluator in the exam room and then by one investigator and the students themselves via videotape. The SPs scored each student on an overall encounter rubric. Written feedback was obtained from all participants in the study. Acceptable reliability was demonstrated via inter-rater agreement as well as inter-rater correlations on items that used a dichotomous scale, whereas the items requiring the use of the 4-point rubric were somewhat less reliable. For the entire scale, both cases had a significant correlation between the Expert-Investigator pair of raters: for the CVA case r = .547, p < .05 and for the HD case r = .700, p < .01. The SPs scored students higher than the other raters. Students' self-assessments were most closely aligned with the investigator's. Case effects were apparent. Content validity was gathered in the process of developing the cases and patient scenarios used in this study. Construct validity was obtained from the survey results analyzed from the experts and students. Future studies should examine the effect of rater training upon reliability. 
Criterion or predictive validity could be further studied by comparing students' performances on the ISPE with other independent estimates of students' competence. The unique integration questions of the ISPE were judged to have good content validity from experts and students, suggestive that integration, a most crucial element of clinical competence, while done in the mind of the student, can be practiced, learned and assessed.

  1. Validation of a Three-Dimensional Ablation and Thermal Response Simulation Code

    NASA Technical Reports Server (NTRS)

    Chen, Yih-Kanq; Milos, Frank S.; Gokcen, Tahir

    2010-01-01

    The 3dFIAT code simulates pyrolysis, ablation, and shape change of thermal protection materials and systems in three dimensions. The governing equations, which include energy conservation, a three-component decomposition model, and a surface energy balance, are solved with a moving grid system to simulate the shape change due to surface recession. This work is the first part of a code validation study for new capabilities that were added to 3dFIAT. These expanded capabilities include a multi-block moving grid system and an orthotropic thermal conductivity model. This paper focuses on conditions with minimal shape change, in which fluid/solid coupling is not necessary. Two groups of test cases of 3dFIAT analyses of Phenolic Impregnated Carbon Ablator in an arc-jet are presented. In the first group, axisymmetric iso-q shaped models are studied to check the accuracy of the three-dimensional multi-block grid system. In the second group, similar models with various through-the-thickness conductivity directions are examined. In this group, the material thermal response is three-dimensional because of the carbon fiber orientation. Predictions from 3dFIAT are presented and compared with arc-jet test data. The 3dFIAT predictions agree very well with thermocouple data for both groups of test cases.

  2. Validation of CFD/Heat Transfer Software for Turbine Blade Analysis

    NASA Technical Reports Server (NTRS)

    Kiefer, Walter D.

    2004-01-01

    I am an intern in the Turbine Branch of the Turbomachinery and Propulsion Systems Division. The division is primarily concerned with experimental and computational methods of calculating heat transfer effects of turbine blades during operation in jet engines and land-based power systems. These include modeling flow in internal cooling passages and film cooling, as well as calculating heat flux and peak temperatures to ensure safe and efficient operation. The branch is research-oriented, emphasizing the development of tools that may be used by gas turbine designers in industry. The branch has been developing a computational fluid dynamics (CFD) and heat transfer code called GlennHT to achieve the computational end of this analysis. The code was originally written in FORTRAN 77 and run on Silicon Graphics machines. However, the code has been rewritten and compiled in FORTRAN 90 to take advantage of more modern computer memory systems. In addition, the branch has made a switch in system architectures from SGIs to Linux PCs. The newly modified code therefore needs to be tested and validated. This is the primary goal of my internship. To validate the GlennHT code, it must be run using benchmark fluid mechanics and heat transfer test cases for which there are either analytical solutions or widely accepted experimental data. From the solutions generated by the code, comparisons can be made to the correct solutions to establish the accuracy of the code. Designing and creating these test cases involves many steps and programs. Before a test case can be run, pre-processing steps must be accomplished. These include generating a grid to describe the geometry, using a software package called GridPro. Also, various files required by the GlennHT code must be created, including a boundary condition file, a file for multi-processor computing, and a file to describe problem and algorithm parameters. 
A good deal of this internship will be to become familiar with these programs and the structure of the GlennHT code. Additional information is included in the original extended abstract.

  3. Combination of DNA-based and conventional methods to detect human leukocyte antigen polymorphism and its use for paternity testing.

    PubMed

    Keresztury, László; Rajczy, Katalin; Lászik, András; Gyódi, Eva; Pénzes, Mária; Falus, András; Petrányi, Győző G

    2002-03-01

    In cases of disputed paternity, the scientific goal is to promote either the exclusion of a falsely accused man or the affiliation of the alleged father. Until now, in addition to anthropologic characteristics, the determination of genetic markers, including human leukocyte antigen gene variants, erythrocyte antigens and serum proteins, was used for that reason. Recombinant DNA techniques provided a new set of highly variable genetic markers based on DNA nucleotide sequence polymorphism. From the practical standpoint, the application of these techniques to paternity testing provides greater versatility than do conventional genetic marker systems. The use of methods to detect the polymorphism of human leukocyte antigen loci significantly increases the chance of validation of ambiguous results in paternity testing. The outcome of 2384 paternity cases investigated by serologic and/or DNA-based human leukocyte antigen typing was statistically analyzed. Different cases solved by DNA typing are presented, involving cases with one or two accused men, exclusions and nonexclusions, and tests of the paternity of a deceased man. The results provide evidence for the advantage of the combined application of various techniques in forensic diagnostics and emphasize the outstanding possibilities of DNA-based assays. Representative examples demonstrate the strength of combined techniques in paternity testing.

  4. A broad scope knowledge based model for optimization of VMAT in esophageal cancer: validation and assessment of plan quality among different treatment centers.

    PubMed

    Fogliata, Antonella; Nicolini, Giorgia; Clivio, Alessandro; Vanetti, Eugenio; Laksar, Sarbani; Tozzi, Angelo; Scorsetti, Marta; Cozzi, Luca

    2015-10-31

    To evaluate the performance of a broad-scope model-based optimisation process for volumetric modulated arc therapy applied to esophageal cancer. A set of 70 previously treated patients in two different institutions were selected to train a model for the prediction of dose-volume constraints. The model was built with a broad-scope purpose, aiming to be effective for different dose prescriptions and tumour localisations. It was validated on three groups of patients, from the same institution and from another clinic not providing patients for the training phase. Comparison of the automated plans was done against reference cases given by the clinically accepted plans. Quantitative improvements (statistically significant for the majority of the analysed dose-volume parameters) were observed between the benchmark and the test plans. Of 624 dose-volume objectives assessed for plan evaluation, in 21 cases (3.3 %) the reference plans failed to respect the constraints while the model-based plans succeeded. Only in 3 cases (<0.5 %) did the reference plans pass the criteria while the model-based plans failed. In 5.3 % of the cases both groups of plans failed, and in the remaining cases both passed the tests. Plans were optimised using a broad-scope knowledge-based model to determine the dose-volume constraints. The results showed dosimetric improvements when compared to the benchmark data. In particular, the plans optimised for patients from the third centre, which did not participate in the training, were of superior quality. The data suggest that the new engine is reliable and could encourage its application in clinical practice.

  5. Detecting a Defective Casing Seal at the Top of a Bedrock Aquifer.

    PubMed

    Richard, Sandra K; Chesnaux, Romain; Rouleau, Alain

    2016-03-01

    An improperly sealed casing can produce a direct hydraulic connection between two or more originally isolated aquifers with important consequences regarding groundwater quantity and quality. A recent study by Richard et al. (2014) investigated a monitoring well installed in a fractured rock aquifer with a defective casing seal at the soil-bedrock interface. A hydraulic short circuit was detected that produced some leakage between the rock and the overlying deposits. A falling-head permeability test performed in this well showed that the usual method of data interpretation is not valid in this particular case due to the presence of a piezometric error. This error is the direct result of the preferential flow originating from the hydraulic short circuit and the subsequent re-equilibration of the piezometric levels of both aquifers in the vicinity of the inlet and the outlet of the defective seal. Numerical simulations of groundwater circulation around the well support the observed impact of the hydraulic short circuit on the results of the falling-head permeability test. These observations demonstrate that a properly designed falling-head permeability test may be useful in the detection of defective casing seals. © 2015, National Ground Water Association.

  6. Development of a Quantitative Decision Metric for Selecting the Most Suitable Discretization Method for SN Transport Problems

    NASA Astrophysics Data System (ADS)

    Schunert, Sebastian

    In this work we develop a quantitative decision metric for spatial discretization methods of the SN equations. The quantitative decision metric utilizes performance data from selected test problems to compute a fitness score that is used for the selection of the most suitable discretization method for a particular SN transport application. The fitness score is aggregated as a weighted geometric mean of single performance indicators representing various performance aspects relevant to the user. Thus, the fitness function can be adjusted to the particular needs of the code practitioner by adding/removing single performance indicators or changing their importance via the supplied weights. Within this work a special, broad class of methods is considered, referred to as nodal methods. This class naturally comprises the DGFEM methods of all function space families. Within this work it is also shown that the Higher Order Diamond Difference (HODD) method is a nodal method. Building on earlier findings that the Arbitrarily High Order Method of the Nodal type (AHOTN) is also a nodal method, a generalized finite-element framework is created to yield as special cases various methods that were developed independently using profoundly different formalisms. A selection of test problems, each related to a certain performance aspect, is considered: a Method of Manufactured Solutions (MMS) test suite for assessing accuracy and execution time, Lathrop's test problem for assessing resilience against the occurrence of negative fluxes, and a simple, homogeneous cube test problem to verify whether a method possesses the thick diffusive limit. The contending methods are implemented as efficiently as possible under a common SN transport code framework to level the playing field for a fair comparison of their computational load. 
Numerical results are presented for all three test problems, and a qualitative rating of each method's performance is provided separately for each aspect: accuracy/efficiency, resilience against negative fluxes, and possession of the thick diffusion limit. The choice of the most efficient method depends on the utilized error norm: in Lp error norms, higher order methods such as the AHOTN method of order three perform best, while for computing integral quantities the linear nodal (LN) method is most efficient. The most resilient method against occurrence of negative fluxes is the simple corner balance (SCB) method. A validation of the quantitative decision metric is performed based on the NEA box-in-box suite of test problems. The validation exercise comprises two stages: first, predicting the contending methods' performance via the decision metric, and second, computing the actual scores based on data obtained from the NEA benchmark problem. The comparison of predicted and actual scores via a penalty function (ratio of the predicted best performer's score to the actual best score) completes the validation exercise. It is found that the decision metric is capable of very accurate predictions (penalty < 10%) in more than 83% of the considered cases and features penalties up to 20% for the remaining cases. An exception to this rule is the third test case, NEA-III, intentionally set up to incorporate a poor match between the benchmark and the "data" problems. However, even under these worst-case conditions the decision metric's suggestions are never detrimental. Suggestions for improving the decision metric's accuracy are to increase the pool of employed data, to refine the mapping of a given configuration to a case in the database, and to better characterize the desired target quantities.

  7. Preservation of Fine-Needle Aspiration Specimens for Future Use in RNA-Based Molecular Testing

    PubMed Central

    Ladd, Amy C.; O'Sullivan-Mejia, Emerald; Lea, Tasha; Perry, Jessica; Dumur, Catherine I.; Dragoescu, Ema; Garrett, Carleton T.; Powers, Celeste N.

    2015-01-01

    Background The application of ancillary molecular testing is becoming more important for the diagnosis and classification of disease. The use of fine-needle aspiration (FNA) biopsy as the means of sampling tumors in conjunction with molecular testing could be a powerful combination. FNA is minimally invasive, cost effective, and usually demonstrates accuracy comparable to diagnoses based on excisional biopsies. Quality control (QC) and test validation requirements for development of molecular tests impose a need for access to pre-existing clinical samples. Tissue banking of excisional biopsy specimens is frequently performed at large research institutions, but few have developed protocols for preservation of cytologic specimens. This study aimed to evaluate cryopreservation of FNA specimens as a method of maintaining cellular morphology and ribonucleic acid (RNA) integrity in banked tissues. Methods FNA specimens were obtained from fresh tumor resections, processed by using a cryopreservation protocol, and stored for up to 27 weeks. Upon retrieval, samples were made into slides for morphological evaluation, and RNA was extracted and assessed for integrity by using the Agilent Bioanalyzer (Agilent Technologies, Santa Clara, Calif). Results Cryopreserved specimens showed good cell morphology and, in many cases, yielded intact RNA. Cases showing moderate or severe RNA degradation could generally be associated with prolonged specimen handling or sampling of necrotic areas. Conclusions FNA specimens can be stored in a manner that maintains cellular morphology and RNA integrity necessary for studies of gene expression. In addition to addressing quality control (QC) and test validation needs, cytology banks will be an invaluable resource for future molecular morphologic and diagnostic research studies. PMID:21287691

  8. Testability of evolutionary game dynamics based on experimental economics data

    NASA Astrophysics Data System (ADS)

    Wang, Yijia; Chen, Xiaojie; Wang, Zhijian

    To better understand the dynamic processes of a real game system we need an appropriate dynamics model, so evaluating the validity of a model is not a trivial task. Here, we demonstrate an approach that takes the macroscopic dynamical patterns of angular momentum and speed as the measurement variables to evaluate the validity of various dynamics models. Using data from real-time Rock-Paper-Scissors (RPS) game experiments, we obtain the empirical dynamic patterns, and then derive the corresponding theoretical dynamic patterns from a series of typical dynamics models. By testing the goodness-of-fit between the empirical and theoretical patterns, the validity of the models can be evaluated. One result of our case study is that, among all the nonparametric models tested, the best-known Replicator dynamics model performs almost the worst, while the Projection dynamics model performs best. Besides providing new empirical macroscopic patterns of social dynamics, we demonstrate that the approach can be an effective and rigorous tool for testing game dynamics models. Fundamental Research Funds for the Central Universities (SSEYI2014Z) and the National Natural Science Foundation of China (Grants No. 61503062).
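    As an editorial illustration of the kind of model being tested, the following is a minimal sketch of the Replicator dynamics on the standard RPS payoff matrix; the matrix, step size, and starting point are assumptions for illustration, and the study's own models, data, and fitting procedure are not reproduced here.

```python
import numpy as np

# Standard Rock-Paper-Scissors payoff matrix (win = 1, lose = -1, tie = 0).
A = np.array([[0.0, -1.0, 1.0],
              [1.0, 0.0, -1.0],
              [-1.0, 1.0, 0.0]])

def replicator_step(x, A, dt=0.01):
    """One Euler step of the replicator dynamics dx_i/dt = x_i (f_i - f_avg)."""
    f = A @ x          # fitness of each strategy against the population
    f_avg = x @ f      # population-average fitness
    return x + dt * x * (f - f_avg)

# Iterate from an interior starting point; the state stays on the simplex
# because the increments sum to zero.
x = np.array([0.5, 0.3, 0.2])
for _ in range(1000):
    x = replicator_step(x, A)
```

    A speed or angular-momentum pattern, as in the study, would then be computed from such simulated trajectories and compared to the experimental one.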

  9. A Multi-Scale Method for Dynamics Simulation in Continuum Solvent Models I: Finite-Difference Algorithm for Navier-Stokes Equation.

    PubMed

    Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray

    2014-11-25

    A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design.
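    The paper's actual solver is not reproduced here; as a hedged illustration of the finite-difference approach it describes, the following sketch advances only the viscous (diffusion) term of the Navier-Stokes equations with an explicit scheme on a 2D grid, with zero Dirichlet boundaries and a grid, viscosity, and time step chosen for illustration.

```python
import numpy as np

def diffuse(u, nu, dx, dt):
    """One explicit finite-difference step of du/dt = nu * laplacian(u),
    the viscous term of the Navier-Stokes equations, on a 2D grid with
    zero (Dirichlet) boundary values."""
    lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
           np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u) / dx**2
    un = u + dt * nu * lap
    un[0, :] = un[-1, :] = un[:, 0] = un[:, -1] = 0.0  # enforce boundaries
    return un

# A point disturbance spreading out; dt is below the explicit stability
# limit dx**2 / (4 * nu), so the scheme stays stable and sign-preserving.
u = np.zeros((32, 32))
u[16, 16] = 1.0
for _ in range(100):
    u = diffuse(u, nu=0.1, dx=1.0, dt=0.5)
```

    A full solver would add the advection and pressure terms and, as the abstract notes, validation against test cases with known behavior.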

  10. Validation of an algorithm to identify children with biopsy-proven celiac disease from within health administrative data: An assessment of health services utilization patterns in Ontario, Canada

    PubMed Central

    Chan, Jason; Mack, David R.; Manuel, Douglas G.; Mojaverian, Nassim; de Nanassy, Joseph

    2017-01-01

    Importance Celiac disease (CD) is a common pediatric illness, and awareness of gluten-related disorders including CD is growing. Health administrative data represent a unique opportunity to conduct population-based surveillance of this chronic condition and to assess the impact of caring for children with CD on the health system. Objective The objective of the study was to validate an algorithm based on health administrative data diagnostic codes to accurately identify children with biopsy-proven CD. We also evaluated trends over time in the use of health services related to CD by children in Ontario, Canada. Study design and setting We conducted a retrospective cohort study and validation study of population-based health administrative data in Ontario, Canada. All cases of biopsy-proven CD diagnosed 2005–2011 in Ottawa were identified through chart review at a large pediatric health care center and linked to the Ontario health administrative data to serve as the positive reference standard. All other children living within Ottawa served as the negative reference standard. Case-identifying algorithms based on outpatient physician visits with an associated ICD-9 code for CD plus an endoscopy billing code were constructed and tested. Sensitivity, specificity, PPV and NPV were calculated for each algorithm (with 95% CI). Poisson regression, adjusting for sex and age at diagnosis, was used to explore the trend in outpatient visits associated with a CD diagnostic code from 1995 to 2011. Results The best algorithm to identify CD consisted of an endoscopy billing claim followed by 1 or more adult or pediatric gastroenterologist encounters after the endoscopic procedure. The sensitivity, specificity, PPV, and NPV for the algorithm were 70.4% (95% CI 61.1–78.4%), >99.9% (95% CI >99.9% to >99.9%), 53.3% (95% CI 45.1–61.4%) and >99.9% (95% CI >99.9% to >99.9%), respectively. It identified 1289 suspected CD cases from Ontario-wide administrative data. 
There was a 9% annual increase in the use of this combination of CD-associated diagnostic codes in physician billing data (RR 1.09, 95% CI 1.07–1.10, P<0.001). Conclusions With its current structure and variables, Ontario health administrative data is not suitable for identifying incident pediatric CD cases. The tested algorithms suffer from poor sensitivity and/or poor PPV, which increases the risk of case misclassification and could lead to biased estimation of the CD incidence rate. This study reinforces the importance of validating the codes used to identify cohorts or outcomes when conducting research using health administrative data. PMID:28662204
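    The four validity measures reported above are direct functions of confusion-matrix counts; a minimal sketch (the counts below are hypothetical, chosen only to illustrate the arithmetic, and are not the study's data):

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """Sensitivity, specificity, positive and negative predictive value
    from confusion-matrix counts."""
    return {
        "sensitivity": tp / (tp + fn),  # true cases the algorithm flags
        "specificity": tn / (tn + fp),  # non-cases it correctly excludes
        "ppv": tp / (tp + fp),          # flagged records that are true cases
        "npv": tn / (tn + fn),          # unflagged records truly negative
    }

# Hypothetical counts for illustration only:
m = diagnostic_metrics(tp=95, fp=83, fn=40, tn=99782)
```

    With a rare condition, a large true-negative pool drives specificity and NPV toward 100% even when PPV is modest, which is the pattern the abstract reports.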

  11. Improving strategies for syphilis control in China: selective testing of sexually transmitted disease patients--too little, too late?

    PubMed

    Yin, Y-P; Wong, S P Y; Liu, M-S; Wei, W-H; Yu, Y-H; Gao, X; Chen, Q; Fu, Z-Z; Cheng, F; Chen, X-S; Cohen, M S

    2008-12-01

    Syphilis testing guidelines in China are usually based on symptomatic criteria, overlooking risk assessment and, ultimately, opportunities for disease detection and control. We used data from 10,695 sexually transmitted disease (STD) clinic patients in Guangxi, China, to assess the efficacy of a potential screening tool, based on behavioural and health risk factors, in identifying STD patients who would not be triaged for syphilis testing under current guidelines but should nevertheless receive such testing. Validity testing of the screening tool was performed and receiver-operating characteristic curves were plotted to determine an optimal total risk score cut-off for testing. About 40.9% of patients with a positive toluidine red unheated serum test and Treponema pallidum particle agglutination test did not show hallmark signs of syphilis. The screening tool was more sensitive in detecting infection in non-triaged male than in non-triaged female patients (highest sensitivity = 90% vs. 55%), and the cut-off score warranting testing was lower in non-triaged female patients than in non-triaged male patients (cut-off = 1 vs. 2). Most of the missed cases were among female STD patients. Even with selective testing based on behavioural and health indicators that improve case detection, cases were still missed. Our study supports universal testing for syphilis in the STD population.
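    One standard way to choose such a risk-score cut-off from a receiver-operating characteristic analysis is to maximize Youden's J (sensitivity + specificity - 1). A minimal sketch with made-up scores and labels; the study's actual tool, scoring, and data are not reproduced here.

```python
def youden_cutoff(scores, labels):
    """Choose the risk-score cut-off that maximizes Youden's J.
    labels: 1 = infected, 0 = not; a subject is tested when score >= cutoff."""
    pos = sum(labels)
    neg = len(labels) - pos
    best_cut, best_j = None, -1.0
    for c in sorted(set(scores)):
        tp = sum(1 for s, y in zip(scores, labels) if s >= c and y == 1)
        fp = sum(1 for s, y in zip(scores, labels) if s >= c and y == 0)
        j = tp / pos - fp / neg  # sensitivity - (1 - specificity)
        if j > best_j:
            best_cut, best_j = c, j
    return best_cut, best_j

# Toy data: total risk scores 0-3, label 1 = infected.
cut, j = youden_cutoff([0, 1, 1, 2, 2, 3, 3], [0, 0, 1, 0, 1, 1, 1])
```

    Sex-specific cut-offs, as in the abstract, would simply run this selection separately on the male and female subsets.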

  12. Incremental Validity of Useful Field of View Subtests for the Prediction of Instrumental Activities of Daily Living

    PubMed Central

    Aust, Frederik; Edwards, Jerri D.

    2015-01-01

    Introduction The Useful Field of View Test (UFOV®) is a cognitive measure that predicts older adults’ ability to perform a range of everyday activities. However, little is known about the individual contribution of each subtest to these predictions, and the underlying constructs of UFOV performance remain a topic of debate. Method We investigated the incremental validity of UFOV subtests for the prediction of Instrumental Activities of Daily Living (IADL) performance in two independent datasets, the SKILL (n = 828) and ACTIVE (n = 2426) studies. We then explored the cognitive and visual abilities assessed by UFOV using a range of neuropsychological and vision tests administered in the SKILL study. Results In the four-subtest variant of UFOV, only subtests 2 and 3 consistently made independent contributions to the prediction of IADL performance across three different behavioral measures. In all cases, the incremental validity of UFOV subtests 1 and 4 was negligible. Furthermore, we found that UFOV was related to processing speed, general non-speeded cognition, and visual function; the omission of subtests 1 and 4 from the test score did not affect these associations. Conclusions UFOV subtests 1 and 4 appear to be of limited use in predicting IADL and possibly other everyday activities. Future experimental research should investigate whether shortening the UFOV by omitting these subtests is a reliable and valid assessment approach. PMID:26782018

  13. Is the Simple Shoulder Test a valid outcome instrument for shoulder arthroplasty?

    PubMed

    Hsu, Jason E; Russ, Stacy M; Somerson, Jeremy S; Tang, Anna; Warme, Winston J; Matsen, Frederick A

    2017-10-01

    The Simple Shoulder Test (SST) is a brief, inexpensive, and widely used patient-reported outcome tool, but it has not been rigorously evaluated for patients having shoulder arthroplasty. The goal of this study was to rigorously evaluate the validity of the SST for outcome assessment in shoulder arthroplasty, using a systematic review of the literature and an analysis of its properties in a series of 408 surgical cases. SST scores, 36-Item Short Form Health Survey scores, and satisfaction scores were collected preoperatively and 2 years postoperatively. Responsiveness was assessed by comparing preoperative and 2-year postoperative scores. Criterion validity was determined by correlating the SST with the 36-Item Short Form Health Survey. Construct validity was tested through 5 clinical hypotheses regarding satisfaction, comorbidities, insurance status, previous failed surgery, and narcotic use. Scores after arthroplasty improved from 3.9 ± 2.8 to 10.2 ± 2.3 (P < .001). The change in SST correlated strongly with patient satisfaction (P < .001). The SST had large Cohen's d effect sizes and standardized response means. Construct validity was supported by significant differences between satisfied and unsatisfied patients; between those with more severe and less severe comorbidities; between those with workers' compensation or Medicaid insurance and those with other types of insurance; between those with and without previous failed shoulder surgery; and between those taking and not taking narcotic pain medication before surgery (P < .005). These data, combined with a systematic review of the literature, demonstrate that the SST is a valid and responsive patient-reported outcome measure for assessing the outcomes of shoulder arthroplasty. Copyright © 2017 Journal of Shoulder and Elbow Surgery Board of Trustees. Published by Elsevier Inc. All rights reserved.
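    The effect sizes named above are simple functions of the pre- and postoperative scores. A hedged sketch follows; the SST values are hypothetical, and the pooled-SD form of Cohen's d is one common convention, not necessarily the exact computation the study used.

```python
import statistics as st

def paired_effect_sizes(pre, post):
    """Cohen's d (mean change over the pooled SD of the two time points)
    and the standardized response mean (mean change over the SD of change)."""
    change = [b - a for a, b in zip(pre, post)]
    pooled_sd = ((st.stdev(pre) ** 2 + st.stdev(post) ** 2) / 2) ** 0.5
    return st.mean(change) / pooled_sd, st.mean(change) / st.stdev(change)

# Hypothetical SST scores (0-12 scale) before and 2 years after arthroplasty:
d, srm = paired_effect_sizes([4, 2, 6, 3, 5], [10, 9, 12, 8, 11])
```

    Values above about 0.8 are conventionally read as "large", which is the claim the abstract makes for the SST.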

  14. An Inferentialist Perspective on the Coordination of Actions and Reasons Involved in Making a Statistical Inference

    ERIC Educational Resources Information Center

    Bakker, Arthur; Ben-Zvi, Dani; Makar, Katie

    2017-01-01

    To understand how statistical and other types of reasoning are coordinated with actions to reduce uncertainty, we conducted a case study in vocational education that involved statistical hypothesis testing. We analyzed an intern's research project in a hospital laboratory in which reducing uncertainties was crucial to make a valid statistical…

  15. What Predicts Injury from Physical Punishment? A Test of the Typologies of Violence Hypothesis

    ERIC Educational Resources Information Center

    Gonzalez, Miriam; Durrant, Joan E.; Chabot, Martin; Trocme, Nico; Brown, Jason

    2008-01-01

    Objective: This study examined the power of child, perpetrator, and socio-economic characteristics to predict injury in cases of reported child physical abuse. The study was designed to assess the validity of the assumption that physically injurious incidents of child physical abuse are qualitatively different from those that do not result in…

  16. Assessing Fit of Item Response Models Using the Information Matrix Test

    ERIC Educational Resources Information Center

    Ranger, Jochen; Kuhn, Jorg-Tobias

    2012-01-01

    The information matrix can equivalently be determined via the expectation of the Hessian matrix or the expectation of the outer product of the score vector. The identity of these two matrices, however, is only valid in case of a correctly specified model. Therefore, differences between the two versions of the observed information matrix indicate…

  17. A data driven partial ambiguity resolution: Two step success rate criterion, and its simulation demonstration

    NASA Astrophysics Data System (ADS)

    Hou, Yanqing; Verhagen, Sandra; Wu, Jie

    2016-12-01

    Ambiguity Resolution (AR) is a key technique in GNSS precise positioning. In the case of weak models (i.e., low-precision data), however, the success rate of AR may be low, which may introduce large errors into the baseline solution when the fixing is wrong. Partial Ambiguity Resolution (PAR) has therefore been proposed so that the baseline precision can be improved by fixing only a subset of ambiguities with a high success rate. This contribution proposes a new PAR strategy that selects the subset such that the expected precision gain is maximized among a set of pre-selected subsets, while at the same time the failure rate is controlled. These pre-selected subsets are expected to have the highest success rate among those of the same size. The strategy is called the Two-step Success Rate Criterion (TSRC) because it first tries to fix a relatively large subset, using the fixed failure rate ratio test (FFRT) to decide on acceptance or rejection. In case of rejection, a smaller subset is fixed and validated by the ratio test so as to fulfill the overall failure rate criterion. It is shown how the method can be used in practice without introducing a large additional computational effort and, more importantly, how it can improve (or at least not deteriorate) the availability in terms of baseline precision compared to the classical Success Rate Criterion (SRC) PAR strategy, based on a simulation validation. In the simulation validation, significant improvements are obtained for single-GNSS on short baselines with dual-frequency observations. For dual-constellation GNSS, the improvement for single-frequency observations on short baselines is very significant, on average 68%. For medium to long baselines with dual-constellation GNSS, the average improvement is around 20-30%.

  18. Validation of α-Synuclein as a CSF Biomarker for Sporadic Creutzfeldt-Jakob Disease.

    PubMed

    Llorens, Franc; Kruse, Niels; Karch, André; Schmitz, Matthias; Zafar, Saima; Gotzmann, Nadine; Sun, Ting; Köchy, Silja; Knipper, Tobias; Cramm, Maria; Golanska, Ewa; Sikorska, Beata; Liberski, Pawel P; Sánchez-Valle, Raquel; Fischer, Andre; Mollenhauer, Brit; Zerr, Inga

    2018-03-01

    The analysis of cerebrospinal fluid (CSF) biomarkers is gaining importance in the differential diagnosis of prion diseases. However, no single diagnostic tool, or combination of tools, can unequivocally confirm a prion disease diagnosis. Electrochemiluminescence (ECL)-based immunoassays have been shown to achieve high diagnostic accuracy in a variety of sample types owing to their high sensitivity and dynamic range. Quantification of CSF α-synuclein (a-syn) by an in-house ECL-based ELISA assay has recently been reported as an excellent approach for the diagnosis of sporadic Creutzfeldt-Jakob disease (sCJD), the most prevalent form of human prion disease. In the present study, we validated a commercially available ECL-based a-syn ELISA platform as a diagnostic test for the correct classification of sCJD cases. CSF a-syn was analysed in 203 sCJD cases with a definite diagnosis and in 445 non-CJD cases. We investigated the reproducibility and stability of CSF a-syn and made recommendations for its analysis in the sCJD diagnostic workup. A sensitivity of 98% and a specificity of 97% were achieved using an optimal cut-off of 820 pg/mL a-syn. Moreover, we were able to show a negative correlation between a-syn levels and disease duration, suggesting that CSF a-syn may be a good prognostic marker for sCJD patients. The present study validates the use of a-syn as a CSF biomarker of sCJD and establishes the clinical and pre-analytical parameters for its use in differential diagnosis in clinical routine. Additionally, the current test has some advantages over other diagnostic approaches: it is fast and economic, requires a minimal amount of CSF, and a-syn levels are stable throughout disease progression.

  19. Simultaneous LC-MS/MS determination of JWH-210, RCS-4, ∆(9)-tetrahydrocannabinol, and their main metabolites in pig and human serum, whole blood, and urine for comparing pharmacokinetic data.

    PubMed

    Schaefer, Nadine; Kettner, Mattias; Laschke, Matthias W; Schlote, Julia; Peters, Benjamin; Bregel, Dietmar; Menger, Michael D; Maurer, Hans H; Ewald, Andreas H; Schmidt, Peter H

    2015-05-01

    A series of new synthetic cannabinoids (SC) has been consumed without any toxicological testing. Pharmacokinetic data therefore have to be collected from forensic toxicological casework and/or animal studies. To develop a corresponding model for assessing such data, samples from controlled pig studies with two selected SC (JWH-210, RCS-4) and, as a reference, ∆(9)-tetrahydrocannabinol (THC) were to be analyzed, as well as samples from human cases. Therefore, a method for the determination of JWH-210, RCS-4, THC, and their main metabolites in pig and human serum, whole blood, and urine samples is presented. Specimens were analyzed by liquid chromatography-tandem mass spectrometry with multiple-reaction monitoring and three transitions per compound. Full validation was carried out for the pig specimens, and cross-validation for the human specimens with respect to precision and bias. For the pig studies, the limits of detection were between 0.05 and 0.50 ng/mL in serum and whole blood and between 0.05 and 1.0 ng/mL in urine; the lower limits of quantification were between 0.25 and 1.0 ng/mL in serum and between 0.50 and 2.0 ng/mL in whole blood and urine; and the intra- and interday precision values were lower than 15%, with bias values within ±15%. Applicability was tested with samples from a pharmacokinetic pilot study in pigs following intravenous administration of a mixture containing a 200 μg/kg body mass dose each of JWH-210, RCS-4, and THC. The cross-validation data for human serum, whole blood, and urine showed that this approach should also be suitable for human specimens, e.g., from clinical or forensic cases.

  20. Validation of sentinel lymph node biopsy in breast cancer women N1-N2 with complete axillary response after neoadjuvant chemotherapy. Multicentre study in Tarragona.

    PubMed

    Carrera, D; de la Flor, M; Galera, J; Amillano, K; Gomez, M; Izquierdo, V; Aguilar, E; López, S; Martínez, M; Martínez, S; Serra, J M; Pérez, M; Martin, L

    2016-01-01

    The aim of our study was to evaluate sentinel lymph node biopsy as a diagnostic test for assessing the presence of residual metastatic axillary lymph nodes after neoadjuvant chemotherapy, replacing the need for lymphadenectomy in patients with a negative selective lymph node biopsy. A multicentre diagnostic validation study was conducted in the province of Tarragona on women with T1-T3, N1-N2 breast cancer who presented with a complete axillary response after neoadjuvant chemotherapy. Study procedures consisted of performing a selective lymph node biopsy followed by lymphadenectomy. A total of 53 women were included in the study. The surgical detection rate was 90.5% (no sentinel node was found in 5 patients). Histopathological analysis of the lymphadenectomy showed complete disease regression of axillary nodes in 35.4% (17/48) of the patients, and residual axillary node involvement in 64.6% (31/48) of them. Of the lymphadenectomy-positive patients, 28 had a positive selective lymph node biopsy (true positive), while 3 had a negative selective lymph node biopsy (false negative). Of the 28 true selective lymph node biopsy positives, the sentinel node was the only positive node in 10 cases. All lymphadenectomy-negative cases were selective lymph node biopsy negative. These data yield a sensitivity of 93.5%, a false negative rate of 9.7%, and a global test efficiency of 93.7%. Selective lymph node biopsy after chemotherapy in patients with a complete axillary response provides valid and reliable information regarding axillary status after neoadjuvant treatment, and might prevent lymphadenectomy in cases with a negative selective lymph node biopsy. Copyright © 2016 Elsevier España, S.L.U. and SEMNIM. All rights reserved.

  1. Proof test of the computer program BUCKY for plasticity problems

    NASA Technical Reports Server (NTRS)

    Smith, James P.

    1994-01-01

    A theoretical equation describing the elastic-plastic deformation of a cantilever beam subject to a constant pressure is developed. The theoretical result is compared numerically to the computer program BUCKY for the case of an elastic-perfectly plastic specimen. It is shown that the theoretical and numerical results compare favorably in the plastic range. Comparisons are made to another research code to further validate the BUCKY results. This paper serves as a quality test for the computer program BUCKY developed at NASA Johnson Space Center.

  2. Predictive Model for Particle Residence Time Distributions in Riser Reactors. Part 1: Model Development and Validation

    DOE PAGES

    Foust, Thomas D.; Ziegler, Jack L.; Pannala, Sreekanth; ...

    2017-02-28

    Here, in this computational study, we model the mixing of biomass pyrolysis vapor with solid catalyst in circulating riser reactors, with a focus on the determination of solid catalyst residence time distributions (RTDs). A comprehensive set of 2D and 3D simulations was conducted for a pilot-scale riser using the Eulerian-Eulerian two-fluid modeling framework, with and without sub-grid-scale models for the gas-solids interaction. A validation test case was also simulated and compared to experiments, showing agreement in the pressure gradient and in the RTD mean and spread. For the simulation cases, it was found that for accurate RTD prediction the Johnson and Jackson partial-slip solids boundary condition was required for all models, and that a sub-grid model is useful so that ultra-high-resolution grids, which are very computationally intensive, are not required. Finally, we discovered a 2/3 scaling relation for the RTD mean and spread when comparing resolved 2D simulations to validated unresolved 3D sub-grid-scale model simulations.
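    The RTD mean and spread compared in such studies are the first two moments of the exit-age distribution E(t). A minimal sketch, assuming simple trapezoidal quadrature on a sampled curve (not the paper's code), checked here against an exponential RTD whose mean and variance are known analytically:

```python
import numpy as np

def _trapz(y, t):
    """Trapezoidal quadrature of samples y over the time grid t."""
    return float(np.sum((y[1:] + y[:-1]) * np.diff(t)) / 2.0)

def rtd_moments(t, E):
    """Mean residence time and spread (variance) of a sampled RTD E(t)."""
    E = E / _trapz(E, t)                      # normalize so E integrates to 1
    mean = _trapz(t * E, t)
    var = _trapz((t - mean) ** 2 * E, t)
    return mean, var

# Exponential RTD with time constant tau = 2: mean = tau, variance = tau**2.
t = np.linspace(0.0, 60.0, 6001)
mean, var = rtd_moments(t, np.exp(-t / 2.0))
```

    In a simulation, E(t) would come from the histogram of tracked-particle exit times rather than an analytic curve, but the moment computation is the same.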

  3. Unilateral neglect: further validation of the baking tray task.

    PubMed

    Appelros, Peter; Karlsson, Gunnel M; Thorwalls, Annika; Tham, Kerstin; Nydevik, Ingegerd

    2004-11-01

    The Baking Tray Task is a comprehensible, simple-to-perform test for assessing unilateral neglect. The aim of this study was to further validate its use with stroke patients. The Baking Tray Task was compared with 2 versions of the Behaviour Inattention Test and a test for personal neglect. A total of 270 patients were given a 3-item version of the Behaviour Inattention Test, and 40 patients were given an 8-item version, in addition to the Baking Tray Task and the personal neglect test. The Baking Tray Task was more sensitive than the 3-item Behaviour Inattention Test, but the 8-item Behaviour Inattention Test was more sensitive than the Baking Tray Task. The best combination of any 3 tests was the Baking Tray Task, Reading an article, and Figure copying, the last 2 being part of the 8-item Behaviour Inattention Test. Multi-item tests detect more cases of neglect than single tests do. However, it is tiresome for the patient to undergo a larger test battery than necessary, and it is also time-consuming for the staff. Behavioural tests seem more appropriate when assessing neglect. The Baking Tray Task seems to be one of the most sensitive single tests, and its sensitivity can be further enhanced when it is used in combination with other tests.

  4. Coronary heart disease index based on longitudinal electrocardiography

    NASA Technical Reports Server (NTRS)

    Townsend, J. C.; Cronin, J. P.

    1977-01-01

    A coronary heart disease index was developed from longitudinal ECG (LCG) tracings to serve as a cardiac health measure in studies of working and, essentially, asymptomatic populations, such as pilots and executives. For a given subject, the index consisted of a composite score based on the presence of LCG aberrations and weighted values previously assigned to them. The index was validated by correlating it with the known presence or absence of CHD as determined by a complete physical examination, including treadmill, resting ECG, and risk factor information. The validating sample consisted of 111 subjects drawn by a stratified-random procedure from 5000 available case histories. The CHD index was found to be significantly more valid as a sole indicator of CHD than the LCG without the use of the index. The index consistently produced higher validity coefficients in identifying CHD than did treadmill testing, resting ECG, or risk factor analysis.

  5. Behavioral Monitoring of Sexual Offenders Against Children in Virtual Risk Situations: A Feasibility Study.

    PubMed

    Fromberger, Peter; Meyer, Sabrina; Jordan, Kirsten; Müller, Jürgen L

    2018-01-01

    The decision about unsupervised privileges for sexual offenders against children (SOC) is one of the most difficult decisions practitioners in forensic high-security hospitals face. Given the possible consequences of the decision for society, valid and reliable risk management of SOCs is essential. Some risk management approaches provide frameworks for the construction of relevant future risk situations. For ethical reasons, it is not possible to evaluate the validity of constructed risk situations in reality. The aim of the study was to test whether behavioral monitoring of SOCs in highly immersive virtual risk situations provides additional information for risk management. Six SOCs and seven non-offender controls (NOC) walked through three virtual risk situations that confronted the participant with a virtual child character. The participant had to choose between predefined answers representing approach or avoidance behavior. The frequencies of the chosen answers were analyzed with regard to the participants' knowledge about coping skills and the coping skills focused on during therapy. SOCs' and NOCs' behavior differed in only one risk scenario. Furthermore, in 89% of all cases SOCs showed behavior that did not correspond to their own beliefs about adequate behavior in comparable risk situations. In 62% of all cases, SOCs' behavior did not correspond to the coping skills they stated therapists had focused on during therapy. In 50% of all cases, SOCs behaved in correspondence with the coping skills therapists stated they had focused on during therapy. Therapists predicted the behavior of SOCs in virtual risk situations incorrectly in 25% of all cases. Thus, virtual risk scenarios give practitioners the possibility to monitor the behavior of SOCs and to test their decisions on unsupervised privileges without endangering the community. This may provide additional information on therapy progress. 
Further studies are necessary to evaluate the predictive and ecological validity of behavioral monitoring in virtual risk situations for real-life situations.

  6. Behavioral Monitoring of Sexual Offenders Against Children in Virtual Risk Situations: A Feasibility Study

    PubMed Central

    Fromberger, Peter; Meyer, Sabrina; Jordan, Kirsten; Müller, Jürgen L.

    2018-01-01

    The decision about unsupervised privileges for sexual offenders against children (SOC) is one of the most difficult decisions practitioners in forensic high-security hospitals face. Given the possible consequences of the decision for society, valid and reliable risk management of SOCs is essential. Some risk management approaches provide frameworks for the construction of relevant future risk situations. For ethical reasons, it is not possible to evaluate the validity of constructed risk situations in reality. The aim of the study was to test whether behavioral monitoring of SOCs in highly immersive virtual risk situations provides additional information for risk management. Six SOCs and seven non-offender controls (NOC) walked through three virtual risk situations that confronted the participant with a virtual child character. The participant had to choose between predefined answers representing approach or avoidance behavior. The frequencies of the chosen answers were analyzed with regard to the participants' knowledge about coping skills and the coping skills focused on during therapy. SOCs' and NOCs' behavior differed in only one risk scenario. Furthermore, in 89% of all cases SOCs showed behavior that did not correspond to their own beliefs about adequate behavior in comparable risk situations. In 62% of all cases, SOCs' behavior did not correspond to the coping skills they stated therapists had focused on during therapy. In 50% of all cases, SOCs behaved in correspondence with the coping skills therapists stated they had focused on during therapy. Therapists predicted the behavior of SOCs in virtual risk situations incorrectly in 25% of all cases. Thus, virtual risk scenarios give practitioners the possibility to monitor the behavior of SOCs and to test their decisions on unsupervised privileges without endangering the community. This may provide additional information on therapy progress. 
Further studies are necessary to evaluate the predictive and ecological validity of behavioral monitoring in virtual risk situations for real-life situations. PMID:29559934

  7. Development and Validation of the Minnesota Borderline Personality Disorder Scale (MBPD)

    PubMed Central

    Bornovalova, Marina A.; Hicks, Brian M.; Patrick, Christopher J.; Iacono, William G.; McGue, Matt

    2011-01-01

    While large epidemiological datasets can inform research on the etiology and development of borderline personality disorder (BPD), they rarely include BPD measures. In some cases, however, proxy measures can be constructed using instruments already in these datasets. In this study we developed and validated a self-report measure of BPD from the Multidimensional Personality Questionnaire (MPQ). Items for the new instrument—the Minnesota BPD scale (MBPD)—were identified and refined using three large samples: undergraduates, community adolescent twins, and urban substance users. We determined the construct validity of the MBPD by examining its association with (1) diagnosed BPD, (2) questionnaire reported BPD symptoms, and (3) clinical variables associated with BPD: suicidality, trauma, disinhibition, internalizing distress, and substance use. We also tested the MBPD in two prison inmate samples. Across samples, the MBPD correlated with BPD indices and external criteria, and showed incremental validity above measures of negative affect, thus supporting its construct validity as a measure of BPD. PMID:21467094

  8. MATTS- A Step Towards Model Based Testing

    NASA Astrophysics Data System (ADS)

    Herpel, H.-J.; Willich, G.; Li, J.; Xie, J.; Johansen, B.; Kvinnesland, K.; Krueger, S.; Barrios, P.

    2016-08-01

    In this paper we describe a model-based approach to testing on-board software and compare it with the traditional validation strategy currently applied to satellite software. The major problems that software engineering will face over at least the next two decades are increasing application complexity, driven by the need for autonomy, and serious application robustness. In other words, how do we actually get to declare success when trying to build applications one or two orders of magnitude more complex than today's applications? To solve the problems addressed above, the software engineering process has to be improved in at least two respects: 1) software design and 2) software testing. The software design process has to evolve towards model-based approaches with extensive use of code generators. Today, testing is an essential, but time- and resource-consuming, activity in the software development process. Generating a short but effective test suite usually requires a lot of manual work and expert knowledge. In a model-based process, among other subtasks, test construction and test execution can be partially automated. The basic idea behind the presented study was to start from a formal model (e.g. state machines), generate abstract test cases, and then convert them to concrete executable test cases (input and expected output pairs). The generated concrete test cases were applied to on-board software. Results were collected and evaluated with respect to applicability, cost-efficiency, effectiveness at fault finding, and scalability.
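    The flow described above (formal model, abstract test cases, then concrete input/expected-output pairs) can be sketched with a toy Mealy machine. The states, inputs, and outputs below are invented for illustration and are unrelated to the actual MATTS tooling or on-board software.

```python
from collections import deque

# A toy Mealy machine: (state, input) -> (next_state, output).
TRANSITIONS = {
    ("OFF", "power_on"): ("STANDBY", "ack"),
    ("STANDBY", "start"): ("RUNNING", "started"),
    ("RUNNING", "stop"): ("STANDBY", "stopped"),
    ("STANDBY", "power_off"): ("OFF", "ack"),
}

def generate_tests(start="OFF"):
    """Breadth-first walk of the model: each reachable transition yields one
    abstract test case, an (input sequence, expected output sequence) pair."""
    tests, seen, queue = [], {start}, deque([(start, [], [])])
    while queue:
        state, ins, outs = queue.popleft()
        for (src, inp), (nxt, out) in TRANSITIONS.items():
            if src == state:
                case = (ins + [inp], outs + [out])
                tests.append(case)
                if nxt not in seen:       # extend paths to unexplored states
                    seen.add(nxt)
                    queue.append((nxt, *case))
    return tests

cases = generate_tests()
```

    Each generated pair would then be made concrete (actual telecommands and expected telemetry) and executed against the software under test, comparing observed outputs to the expected sequence.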

  9. CoRoTlog

    NASA Astrophysics Data System (ADS)

    Plasson, Ph.

    2006-11-01

    LESIA, in close cooperation with CNES, DLR and IWF, is responsible for the tests and validation of the CoRoT instrument digital process unit, which is made up of the BEX and DPU assembly. The main part of the work consisted of validating the DPU software and testing the BEX/DPU coupling. This work took more than two years due to the central role of the software tested and its technical complexity. The first task in the validation process was to carry out the acceptance tests of the DPU software. These tests consisted of checking each of the 325 requirements identified in the URD (User Requirements Document) and were run in a configuration using the DPU coupled to a BEX simulator. During the acceptance tests, all the transversal functionalities of the DPU software, such as TC/TM management, state machine management, BEX driving, system monitoring and the maintenance functionalities, were checked in depth. The functionalities associated with the seismology and exoplanetology processing, such as the loading of window and mask descriptors or the configuration of the service execution parameters, were also exhaustively tested. After the DPU software had been validated against the user requirements using a BEX simulator, the next step consisted of coupling the DPU and the BEX in order to check that the combined unit worked correctly and met the performance requirements. These tests were conducted in two phases: the first was devoted to the functional aspects and interface tests, the second to the performance aspects. The performance tests were based on the use of the DPU software scientific services and on full images representative of a realistic sky as inputs.
These tests were also based on a reference set of windows and parameters, provided by the scientific team, that was representative, in terms of load and complexity, of the one that could be used during the observation mode of the CoRoT instrument. They were run in a configuration using either a BCC simulator or a real BCC coupled to a video simulator to feed the BEX/DPU unit. The validation of the scientific algorithms was conducted in parallel with the BEX/DPU coupling tests. The objective of this phase was to check that the algorithms implemented in the scientific services of the DPU software conformed to those specified in the URD and that the numerical precision obtained corresponded to that expected. Forty test cases were defined, covering the fine and rough angular error measurement processing, the rejection of bright pixels, the subtraction of the offset and the sky background, the photometry algorithms, the SAA handling and reference image management. For each test case, the LESIA scientific team produced by simulation, using the instrument model, the dynamic data files and parameter sets used to feed the DPU on the one hand and a model of the onboard software on the other. These data files correspond to FITS images (black windows, star windows, offset windows) containing varying levels of disturbance, making it possible to test the DPU software in dynamic mode over durations of up to 48 hours. To perform the test and validation activities of the CoRoT instrument digital process unit, a set of software testing tools was developed by LESIA (Software Ground Support Equipment, hereafter "SGSE"). Thanks to their versatility and modularity, these software testing tools were used during all the activities of integration, tests and validation of the instrument and its subsystems CoRoTCase and CoRoTCam. The CoRoT SGSE were specified, designed and developed by LESIA.
The objective was to have a software system allowing the users (validation team of the onboard software, instrument integration team, etc.) to remotely control and monitor the whole instrument or a single subsystem, such as the DPU coupled to a BEX simulator or the BEX/DPU unit coupled to a BCC simulator. The idea was to be able to interact in real time with the system under test by driving the various EGSE, but also to run test procedures implemented as scripts organized into libraries, to record the telemetry and housekeeping data in a database, and to carry out post-mortem analyses.

  10. Evaluation of the ICT Tuberculosis test for the routine diagnosis of tuberculosis

    PubMed Central

    Ongut, Gozde; Ogunc, Dilara; Gunseren, Filiz; Ogus, Candan; Donmez, Levent; Colak, Dilek; Gultekin, Meral

    2006-01-01

    Background Rapid and accurate diagnosis of tuberculosis (TB) is crucial to facilitate early treatment of infectious cases and thus to reduce its spread. To improve the diagnosis of TB, more rapid diagnostic techniques, such as antibody detection methods including enzyme-linked immunosorbent assay (ELISA)-based serological tests and immunochromatographic methods, have been developed. This study was designed to evaluate the validity of an immunochromatographic assay, the ICT Tuberculosis test, for the serologic diagnosis of TB in Antalya, Turkey. Methods Sera from 72 patients with active pulmonary TB (53 smear-positive and 19 smear-negative cases), eight patients with extrapulmonary TB (6 smear-positive and 2 smear-negative cases), and 54 controls from different outpatient clinics with demographic characteristics similar to the patients' were tested with the ICT Tuberculosis test. Results The sensitivity, specificity, and negative predictive value of the ICT Tuberculosis test for pulmonary TB were 33.3%, 100%, and 52.9%, respectively. Smear-positive pulmonary TB patients showed a higher antibody positivity rate than smear-negative patients, but the difference was not statistically significant. Of the eight patients with extrapulmonary TB, antibody was detected in four. Conclusion Our results suggest that the ICT Tuberculosis test can be used to aid TB diagnosis in smear-positive patients until culture results are available. PMID:16504161
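    The reported pulmonary-TB figures can be reproduced from the 2x2 table the abstract implies (24 true positives and 48 false negatives among the 72 patients; 54 true negatives and no false positives among the controls). A minimal sketch, with those counts inferred rather than stated in the abstract:

```python
# Standard diagnostic-accuracy metrics from 2x2 counts.
def diagnostic_metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)   # true-positive rate
    specificity = tn / (tn + fp)   # true-negative rate
    npv = tn / (tn + fn)           # negative predictive value
    return sensitivity, specificity, npv

# Counts inferred from the abstract's 33.3% / 100% / 52.9% figures.
sens, spec, npv = diagnostic_metrics(tp=24, fn=48, tn=54, fp=0)
print(f"sensitivity={sens:.1%}  specificity={spec:.1%}  NPV={npv:.1%}")
# → sensitivity=33.3%  specificity=100.0%  NPV=52.9%
```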

  11. Assessing Competence in Collaborative Case Conceptualization: Development and Preliminary Psychometric Properties of the Collaborative Case Conceptualization Rating Scale (CCC-RS).

    PubMed

    Kuyken, Willem; Beshai, Shadi; Dudley, Robert; Abel, Anna; Görg, Nora; Gower, Philip; McManus, Freda; Padesky, Christine A

    2016-03-01

    Case conceptualization is assumed to be an important element in cognitive-behavioural therapy (CBT) because it describes and explains clients' presentations in ways that inform intervention. However, we do not have a good measure of competence in CBT case conceptualization that can be used to guide training and elucidate mechanisms. The current study addresses this gap by describing the development and preliminary psychometric properties of the Collaborative Case Conceptualization - Rating Scale (CCC-RS; Padesky et al., 2011). The CCC-RS was developed in accordance with the model posited by Kuyken et al. (2009). Data for this study (N = 40) were derived from a larger trial (Wiles et al., 2013) with adults suffering from resistant depression. Internal consistency and inter-rater reliability were calculated. Further, and as a partial test of the scale's validity, Pearson's correlation coefficients were obtained for scores on the CCC-RS and key scales from the Cognitive Therapy Scale - Revised (CTS-R; Blackburn et al., 2001). The CCC-RS showed excellent internal consistency (α = .94), split-half (.82) and inter-rater reliabilities (ICC =.84). Total scores on the CCC-RS were significantly correlated with scores on the CTS-R (r = .54, p < .01). Moreover, the Collaboration subscale of the CCC-RS was significantly correlated (r = .44) with its counterpart of the CTS-R in a theoretically predictable manner. These preliminary results indicate that the CCC-RS is a reliable measure with adequate face, content and convergent validity. Further research is needed to replicate and extend the current findings to other facets of validity.
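    As a reference point for the internal-consistency figure reported above (α = .94), Cronbach's alpha can be computed from per-item scores as follows; the item-score matrix here is made up for illustration and is not CCC-RS data.

```python
def variance(xs):
    """Unbiased sample variance."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

def cronbach_alpha(items):
    """items: one list of scores per scale item (columns = ratees).
    alpha = k/(k-1) * (1 - sum of item variances / variance of totals)."""
    k = len(items)
    totals = [sum(col) for col in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(x) for x in items) / variance(totals))

# Two toy items scored over four therapy recordings.
alpha = cronbach_alpha([[1, 2, 3, 4], [2, 4, 6, 8]])
```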

  12. QUEST/Ada (Query Utility Environment for Software Testing) of Ada: The development of a program analysis environment for Ada

    NASA Technical Reports Server (NTRS)

    Brown, David B.

    1988-01-01

    A history of the Query Utility Environment for Software Testing (QUEST)/Ada is presented. A fairly comprehensive literature review which is targeted toward issues of Ada testing is given. The definition of the system structure and the high level interfaces are then presented. The design of the three major components is described. The QUEST/Ada IORL System Specifications to this point in time are included in the Appendix. A paper is also included in the appendix which gives statistical evidence of the validity of the test case generation approach which is being integrated into QUEST/Ada.

  13. USDA regulatory guidelines and practices for veterinary Leptospira vaccine potency testing.

    PubMed

    Srinivas, G B; Walker, A; Rippke, B

    2013-09-01

    Batch-release potency testing of leptospiral vaccines licensed by the United States Department of Agriculture (USDA) historically was conducted through animal vaccination-challenge models. The hamster vaccination-challenge assay was Codified in 1974 for bacterins containing Leptospira pomona, Leptospira icterohaemorrhagiae, and Leptospira canicola, and in 1975 for bacterins containing Leptospira grippotyphosa. In brief, 10 hamsters are vaccinated with a specified dilution of bacterin. After a holding period, the vaccinated hamsters, as well as nonvaccinated controls, are challenged with virulent Leptospira and observed for mortality. Eighty percent of vaccinated hamsters must survive in the face of a valid challenge. The high cost of the Codified tests, in terms of monetary expense and animal welfare, prompted the Center for Veterinary Biologics (CVB) to develop ELISA alternatives for them. Potency tests for other serogroups, such as Leptospira hardjo-bovis, that do not have Codified requirements for potency testing continue to be examined on a case-by-case basis. Published by Elsevier Ltd.

  14. Detection of fetal cell-free DNA in maternal plasma for Down syndrome, Edward syndrome and Patau syndrome of high risk fetus

    PubMed Central

    Ke, Wei-Lin; Zhao, Wei-Hua; Wang, Xin-Yu

    2015-01-01

    Objective: The study aimed to validate the efficacy of detection of fetal cell-free DNA in maternal plasma for trisomy 21, 18 and 13 in a clinical setting. Methods: A total of 2340 women at high risk for Down syndrome based on maternal age, prenatal history, or a positive serum or sonographic screening test were offered a prenatal noninvasive aneuploidy test. According to the test result, the pregnant women at high risk were offered amniocentesis karyotype analysis, and those at low risk were followed up to ascertain newborn outcomes. Results: The prenatal noninvasive aneuploidy test was positive for trisomy 21 in 17 cases, for trisomy 18 in 6 cases and for trisomy 13 in 1 case, all of which were confirmed by karyotype analysis. Newborns of low-risk pregnant women were followed up and none was found to have a trisomy. Conclusions: The prenatal noninvasive aneuploidy test is highly accurate for the detection of trisomy 21, 18 and 13 and can be considered a practical alternative to traditional invasive diagnostic procedures. PMID:26309618

  16. Test Problem: Tilted Rayleigh-Taylor for 2-D Mixing Studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Andrews, Malcolm J.; Livescu, Daniel; Youngs, David L.

    2012-08-14

    The 'tilted-rig' test problem originates from a series of experiments (Smeeton & Youngs, 1987; Youngs, 1989) performed at AWE in the late 1980s, which followed from the 'rocket-rig' experiments (Burrows et al., 1984; Read & Youngs, 1983) and exploratory experiments performed at Imperial College (Andrews, 1986; Andrews and Spalding, 1990). A schematic of the experiment is shown in Figure 1; it comprises a tank filled with light fluid above heavy, then 'tilted' to one side, thus presenting an angled interface to the acceleration history due to the rockets. Details of the configuration given in the next chapter include the fluids, dimensions, and other details necessary to simulate the experiment. Figure 2 shows results from two experiments: Case 110 (the source for this test problem), which has an Atwood number of 0.5, and Case 115 (a secondary source described in Appendix B), with an Atwood number of 0.9. Inspection of the photograph in Figure 2 (the main experimental diagnostic) for Case 110 reveals two main areas of mix development: 1) a large-scale overturning motion that produces a rising plume (spike) on the left and a falling plume (bubble) on the right, which are almost symmetric; and 2) a Rayleigh-Taylor driven central mixing region that has a large-scale rotation associated with the rising and falling plumes, and also experiences lateral strain due to stretching of the interface by the plumes and shear across the interface due to upper fluid moving downward and to the right and lower fluid moving upward and to the left. Case 115 is similar but differs in its much larger Atwood number of 0.9, which drives a strong asymmetry between the heavy spike penetration on the left and the light bubble penetration on the right.
Case 110 is chosen as the source for the present test problem because the fluids have low surface tension (unlike Case 115) due to the addition of a surfactant, the asymmetry is small (no need for fine grids to resolve the spike), and there is extensive photographic data of reasonable quality. The photographs in Figure 2 also reveal a boundary layer at the left and right walls; this boundary layer has not been included in the test problem, as preliminary calculations suggested it had a negligible effect on plume penetration and RT mixing. The significance of this test problem is that, unlike planar RT experiments such as the Rocket-Rig (Youngs, 1984), the Linear Electric Motor - LEM (Dimonte, 1990), or the Water Tunnel (Andrews, 1992), the Tilted-Rig is a unique two-dimensional RT mixing experiment that has experimental data and now (in this TP) Direct Numerical Simulation data from Livescu and Wei. The availability of DNS data for the tilted-rig has made this TP viable, as it provides detailed results for comparison purposes. The purpose of the test problem is to provide 3D simulation results, validated by comparison with experiment, which can be used for the development and validation of 2D RANS models. When such models are applied to 2D flows, various physics issues are raised, such as double counting, combined buoyancy and shear, and 2-D strain, which have not yet been adequately addressed. The current objective of the test problem is to compare key results, needed for RANS model validation, obtained from high-Reynolds-number DNS, high-resolution ILES, or LES with explicit sub-grid-scale models. The experiment is incompressible and so is directly suitable for algorithms designed for incompressible flows (e.g. pressure-correction algorithms with multi-grid); however, we have extended the TP so that compressible algorithms, run at low Mach number, may also be used if careful consideration is given to initial pressure fields.
Thus, this TP serves as a useful tool for incompressible and compressible simulation codes, and for mathematical models. In the remainder of this TP we provide a detailed specification; the next section provides the underlying assumptions for the TP, fluids, geometry details, boundary conditions (and alternative set-ups), initial conditions, and acceleration history (and ways to treat the acceleration ramp at the start of the experiment). This is followed by a section that defines the data to be collected from the simulations, with results from the experiments, DNS from Livescu using the CFDNS code, and ILES simulations from Youngs using the compressible TURMOIL code and Andrews using the incompressible RTI3D code. We close the TP with concluding remarks and Appendices that include details of the sister Case 115 and initial condition specifications for the density and pressure fields. The Tilted-Rig Test Problem is intended to serve as a validation problem for RANS models, and as such we have provided ILES and DNS simulations in support of the test problem definition. The generally good agreement between experiment, ILES and DNS supports our assertion that the Tilted-Rig is useful, and is the only 2-D TP that can be used to validate RANS models.

  17. Predicting longshore gradients in longshore transport: the CERC formula compared to Delft3D

    USGS Publications Warehouse

    List, Jeffrey H.; Hanes, Daniel M.; Ruggiero, Peter

    2007-01-01

    The prediction of longshore transport gradients is critical for forecasting shoreline change. We employ simple test cases consisting of shoreface pits at varying distances from the shoreline to compare the longshore transport gradients predicted by the CERC formula against results derived from the process-based model Delft3D. Results show that while in some cases the two approaches give very similar results, in many cases the results diverge greatly. Although neither approach is validated with field data here, the Delft3D-based transport gradients provide much more consistent predictions of erosional and accretionary zones as the pit location varies across the shoreface.
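    For context, the CERC formula referred to above is a bulk longshore-transport relation. One common Shore Protection Manual form is sketched below; the default parameter values (K ≈ 0.39 for significant breaker height, breaker index κ ≈ 0.78, quartz sand) are typical textbook assumptions, not values from the paper.

```python
import math

def cerc_transport(Hb, alpha_b_deg, K=0.39, kappa=0.78,
                   rho=1025.0, rho_s=2650.0, porosity=0.4, g=9.81):
    """Volumetric longshore transport rate Q (m^3/s) from breaker
    height Hb (m) and breaker angle alpha_b (degrees):
    Q = K * rho * sqrt(g) / (16 * sqrt(kappa) * (rho_s - rho) * (1 - n))
        * Hb**2.5 * sin(2 * alpha_b)
    """
    alpha = math.radians(alpha_b_deg)
    coeff = K * rho * math.sqrt(g) / (
        16 * math.sqrt(kappa) * (rho_s - rho) * (1 - porosity))
    return coeff * Hb ** 2.5 * math.sin(2 * alpha)
```

    By sediment continuity, a positive longshore gradient in Q marks an erosional zone and a negative gradient an accretionary zone, which is the quantity compared against Delft3D above.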

  18. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coffrin, Carleton James; Hijazi, Hassan L; Van Hentenryck, Pascal R

    This work revisits the Semidefinite Programming (SDP) relaxation of the AC power flow equations in light of recent results illustrating the benefits of bounds propagation, valid inequalities, and the Convex Quadratic (QC) relaxation. By integrating all of these results into the SDP model, a new hybrid relaxation is proposed that combines the benefits of these recent works. This strengthened SDP formulation is evaluated on 71 AC Optimal Power Flow test cases from the NESTA archive and is shown to have an optimality gap of less than 1% on 63 cases. The new hybrid relaxation closes 50% of the open cases considered, leaving only 8 for future investigation.
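    The 1% threshold above is an optimality gap between a feasible AC-OPF solution (an upper bound on the optimum) and the relaxation's objective (a lower bound). A minimal sketch with illustrative numbers, not NESTA results:

```python
def optimality_gap_pct(feasible_cost, relaxation_bound):
    """Percent gap between the best known feasible AC-OPF cost and the
    lower bound proved by a convex relaxation (SDP, QC, or hybrid)."""
    return 100.0 * (feasible_cost - relaxation_bound) / feasible_cost

# Illustrative values only: a tighter hybrid bound shrinks the gap.
gap_sdp = optimality_gap_pct(5812.6, 5750.0)
gap_hybrid = optimality_gap_pct(5812.6, 5790.0)
closed = gap_hybrid < 1.0  # the paper's "less than 1%" criterion
```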

  19. Preliminary findings on the reliability and validity of the Cantonese Birmingham Cognitive Screen in patients with acute ischemic stroke

    PubMed Central

    Pan, Xiaoping; Chen, Haobo; Bickerton, Wai-Ling; Lau, Johnny King Lam; Kong, Anthony Pak Hin; Rotshtein, Pia; Guo, Aihua; Hu, Jianxi; Humphreys, Glyn W

    2015-01-01

    Background There are no currently effective cognitive assessment tools for patients who have suffered stroke in the People’s Republic of China. The Birmingham Cognitive Screen (BCoS) has been shown to be a promising tool for revealing patients’ poststroke cognitive deficits in specific domains, which facilitates more individually designed rehabilitation in the long run. Hence we examined the reliability and validity of a Cantonese version BCoS in patients with acute ischemic stroke, in Guangzhou. Method A total of 98 patients with acute ischemic stroke were assessed with the Cantonese version of the BCoS, and an additional 133 healthy individuals were recruited as controls. Apart from the BCoS, the patients also completed a number of external cognitive tests, including the Montreal Cognitive Assessment Test (MoCA), Mini Mental State Examination (MMSE), Albert’s cancellation test, the Rey–Osterrieth Complex Figure Test, and six gesture matching tasks. Cutoff scores for failing each subtest, ie, deficits, were computed based on the performance of the controls. The validity and reliability of the Cantonese BCoS were examined, as well as interrater and test–retest reliability. We also compared the proportions of cases being classified as deficits in controlled attention, memory, character writing, and praxis, between patients with and without spoken language impairment. Results Analyses showed high test–retest reliability and agreement across independent raters on the qualitative aspects of measurement. Significant correlations were observed between the subtests of the Cantonese BCoS and the other external cognitive tests, providing evidence for convergent validity of the Cantonese BCoS. The screen was also able to generate measures of cognitive functions that were relatively uncontaminated by the presence of aphasia. Conclusion This study suggests good reliability and validity of the Cantonese version of the BCoS. 
The Cantonese BCoS is a very promising tool for the detection of cognitive problems in Cantonese speakers. PMID:26396522

  20. The Electromagnetic Field for a PEC Wedge Over a Grounded Dielectric Slab: 1. Formulation and Validation

    NASA Astrophysics Data System (ADS)

    Daniele, Vito G.; Lombardi, Guido; Zich, Rodolfo S.

    2017-12-01

    Complex scattering problems often involve composite structures in which wedges and penetrable substrates may interact in the near field. In this paper (Part 1), together with its companion paper (Part 2), we study the canonical problem of a Perfectly Electrically Conducting (PEC) wedge lying on a grounded dielectric slab with a comprehensive mathematical model based on the application of the Generalized Wiener-Hopf Technique (GWHT), with the help of equivalent circuit representations for linear homogeneous regions (angular and layered regions). The proposed procedure is valid for the general case, and the papers focus on E-polarization. The solution is obtained using analytical and semianalytical approaches that reduce the Wiener-Hopf factorization to integral equations. Several numerical test cases validate the proposed method. The scope of Part 1 is to present the method and its validation as applied to the problem. The companion paper (Part 2) focuses on the properties of the solution and presents physical and engineering insights such as Geometrical Theory of Diffraction (GTD)/Uniform Theory of Diffraction (UTD) coefficients, total far fields, modal fields, and the excitation of surface and leaky waves for different kinds of source. The structure is of interest in antenna technologies and electromagnetic compatibility (a tip on a substrate with guiding and antenna properties).

  1. Salivary diagnosis of measles: a study of notified cases in the United Kingdom, 1991-3.

    PubMed Central

    Brown, D. W.; Ramsay, M. E.; Richards, A. F.; Miller, E.

    1994-01-01

    OBJECTIVES--To validate a method for salivary diagnosis of measles and to assess the diagnostic accuracy of notified cases of measles. DESIGN--Blood and saliva samples were collected within 90 days of onset of symptoms from patients clinically diagnosed as having measles and tested for specific IgM by antibody capture radioimmunoassay. SETTING--17 districts in England and one in southern Ireland during August 1991 to February 1993. SUBJECTS--236 children and adults with measles notified by a general practitioner. RESULTS--Specific IgM was detected in serum in only 85 (36%) of the 236 cases. In cases associated with outbreaks and tested within six weeks of onset, 53/57 (93%) of samples were IgM positive, thereby confirming the sensitivity of serum IgM detection as a marker of recent infection. The serological confirmation rate was lower in cases with a documented history of vaccination (13/87; 15%) than in those without (70/149; 47%) and varied with age, being lowest in patients under a year, of whom only 4/36 (11%) were confirmed. Measles specific IgM was detected in 71/77 (92%) of adequate saliva samples collected from patients with serum positive for IgM. In cases where measles was not confirmed, 6/101 had rubella specific IgM and 5/132 had human parvovirus B19 specific IgM detected in serum. CONCLUSIONS--The existing national surveillance system for measles, which relies on clinically diagnosed cases, lacks the precision required for effective disease control. Saliva is a valid alternative to serum for IgM detection, and salivary diagnosis could play a major role in achieving measles elimination. Rubella and parvovirus B19 seem to be responsible for a minority of incorrectly diagnosed cases of measles in the United Kingdom and other infectious causes of measles-like illness need to be sought. PMID:8167513

  2. Development and Validation of a Novel Robotic Procedure Specific Simulation Platform: Partial Nephrectomy.

    PubMed

    Hung, Andrew J; Shah, Swar H; Dalag, Leonard; Shin, Daniel; Gill, Inderbir S

    2015-08-01

    We developed a novel procedure specific simulation platform for robotic partial nephrectomy. In this study we prospectively evaluate its face, content, construct and concurrent validity. This hybrid platform features augmented reality and virtual reality. Augmented reality involves 3-dimensional robotic partial nephrectomy surgical videos overlaid with virtual instruments to teach surgical anatomy, technical skills and operative steps. Advanced technical skills are assessed with an embedded full virtual reality renorrhaphy task. Participants were classified as novice (no surgical training, 15), intermediate (less than 100 robotic cases, 13) or expert (100 or more robotic cases, 14) and prospectively assessed. Cohort performance was compared with the Kruskal-Wallis test (construct validity). Post-study questionnaire was used to assess the realism of simulation (face validity) and usefulness for training (content validity). Concurrent validity evaluated correlation between virtual reality renorrhaphy task and a live porcine robotic partial nephrectomy performance (Spearman's analysis). Experts rated the augmented reality content as realistic (median 8/10) and helpful for resident/fellow training (8.0-8.2/10). Experts rated the platform highly for teaching anatomy (9/10) and operative steps (8.5/10) but moderately for technical skills (7.5/10). Experts and intermediates outperformed novices (construct validity) in efficiency (p=0.0002) and accuracy (p=0.002). For virtual reality renorrhaphy, experts outperformed intermediates on GEARS metrics (p=0.002). Virtual reality renorrhaphy and in vivo porcine robotic partial nephrectomy performance correlated significantly (r=0.8, p <0.0001) (concurrent validity). This augmented reality simulation platform displayed face, content and construct validity. Performance in the procedure specific virtual reality task correlated highly with a porcine model (concurrent validity). 
Future efforts will integrate procedure specific virtual reality tasks and their global assessment. Copyright © 2015 American Urological Association Education and Research, Inc. Published by Elsevier Inc. All rights reserved.

  3. Retrospective multicenter matched case-control study on the risk factors for narcolepsy with special focus on vaccinations (including pandemic influenza vaccination) and infections in Germany.

    PubMed

    Oberle, Doris; Pavel, Jutta; Mayer, Geert; Geisler, Peter; Keller-Stanislawski, Brigitte

    2017-06-01

    Studies have associated pandemic influenza vaccination with narcolepsy. In Germany, a retrospective, multicenter, matched case-control study was performed to identify risk factors for narcolepsy, particularly regarding vaccinations (seasonal and pandemic influenza vaccination) and infections (seasonal and pandemic influenza), and to quantify the detected risks. Patients with excessive daytime sleepiness who had been referred to a sleep center between April 2009 and December 2012 for a multiple sleep latency test (MSLT) were eligible. Case report forms were validated according to the criteria for narcolepsy defined by the Brighton Collaboration (BC). Confirmed cases of narcolepsy (BC level of diagnostic certainty 1-4a) were matched with population-based controls by year of birth, gender, and place of residence. A second control group was established, including patients in whom narcolepsy was definitely excluded (test-negative controls). A total of 103 validated cases of narcolepsy were matched with 264 population-based controls. The second control group included 29 test-negative controls. A significantly increased odds ratio (OR) of developing narcolepsy (crude OR [cOR] = 3.9, 95% confidence interval [CI] = 1.8-8.5; adjusted OR [aOR] = 4.5, 95% CI = 2.0-9.9) was detected in individuals immunized with pandemic influenza A/H1N1/v vaccine prior to symptom onset as compared to nonvaccinated individuals. Using test-negative controls, a nonsignificantly increased OR of narcolepsy was detected in individuals immunized with pandemic influenza A/H1N1/v vaccine prior to symptom onset compared to nonvaccinated individuals (whole study population, BC levels 1-4a: cOR = 1.9, 95% CI = 0.5-6.9; aOR = 1.8, 95% CI = 0.3-10.1). The findings of this study support an increased risk of narcolepsy after immunization with pandemic influenza A/H1N1/v vaccine. Copyright © 2017 Elsevier B.V. All rights reserved.
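    For reference, an unmatched ("crude") odds ratio with a Woolf 95% confidence interval is computed as below. The 2x2 counts are invented for illustration; the study's actual estimates (cOR = 3.9, aOR = 4.5) come from matched and adjusted analyses whose raw counts are not given in the abstract.

```python
import math

def odds_ratio_woolf(a, b, c, d, z=1.96):
    """a, b: exposed/unexposed cases; c, d: exposed/unexposed controls.
    Returns (OR, 95% CI low, 95% CI high) using the Woolf log-odds
    standard error sqrt(1/a + 1/b + 1/c + 1/d)."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    low = math.exp(math.log(or_) - z * se)
    high = math.exp(math.log(or_) + z * se)
    return or_, low, high

# Invented counts for illustration only.
or_, low, high = odds_ratio_woolf(a=20, b=10, c=10, d=20)
```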

  4. Dissolution of hypotheses in biochemistry: three case studies.

    PubMed

    Fry, Michael

    2016-12-01

    The history of biochemistry and molecular biology is replete with examples of erroneous theories that persisted for considerable lengths of time before they were rejected. This paper examines patterns of dissolution of three such erroneous hypotheses: The idea that nucleic acids are tetrads of the four nucleobases ('the tetranucleotide hypothesis'); the notion that proteins are collinear with their encoding genes in all branches of life; and the hypothesis that proteins are synthesized by reverse action of proteolytic enzymes. Analysis of these cases indicates that amassed contradictory empirical findings did not prompt critical experimental testing of the prevailing theories nor did they elicit alternative hypotheses. Rather, the incorrect models collapsed when experiments that were not purposely designed to test their validity exposed new facts.

  5. Efficient Computation of Atmospheric Flows with Tempest: Validation of Next-Generation Climate and Weather Prediction Algorithms at Non-Hydrostatic Scales

    NASA Astrophysics Data System (ADS)

    Guerra, Jorge; Ullrich, Paul

    2016-04-01

    Tempest is a next-generation global climate and weather simulation platform designed to allow experimentation with numerical methods for a wide range of spatial resolutions. The atmospheric fluid equations are discretized by continuous / discontinuous finite elements in the horizontal and by a staggered nodal finite element method (SNFEM) in the vertical, coupled with implicit/explicit time integration. At horizontal resolutions below 10km, many important questions remain on optimal techniques for solving the fluid equations. We present results from a suite of idealized test cases to validate the performance of the SNFEM applied in the vertical with an emphasis on flow features and dynamic behavior. Internal gravity wave, mountain wave, convective bubble, and Cartesian baroclinic instability tests will be shown at various vertical orders of accuracy and compared with known results.

  6. External Validation of a Case-Mix Adjustment Model for the Standardized Reporting of 30-Day Stroke Mortality Rates in China

    PubMed Central

    Yu, Ping; Pan, Yuesong; Wang, Yongjun; Wang, Xianwei; Liu, Liping; Ji, Ruijun; Meng, Xia; Jing, Jing; Tong, Xu; Guo, Li; Wang, Yilong

    2016-01-01

    Background and Purpose A case-mix adjustment model has been developed and externally validated, demonstrating promise. However, the model has not been thoroughly tested among populations in China. In our study, we evaluated the performance of the model in Chinese patients with acute stroke. Methods The case-mix adjustment model A includes items on age, presence of atrial fibrillation on admission, National Institutes of Health Stroke Severity Scale (NIHSS) score on admission, and stroke type. Model B is similar to Model A but includes only the consciousness component of the NIHSS score. Both model A and B were evaluated to predict 30-day mortality rates in 13,948 patients with acute stroke from the China National Stroke Registry. The discrimination of the models was quantified by c-statistic. Calibration was assessed using Pearson’s correlation coefficient. Results The c-statistic of model A in our external validation cohort was 0.80 (95% confidence interval, 0.79–0.82), and the c-statistic of model B was 0.82 (95% confidence interval, 0.81–0.84). Excellent calibration was reported in the two models with Pearson’s correlation coefficient (0.892 for model A, p<0.001; 0.927 for model B, p = 0.008). Conclusions The case-mix adjustment model could be used to effectively predict 30-day mortality rates in Chinese patients with acute stroke. PMID:27846282
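The c-statistic reported in this record is the probability that a randomly chosen patient who experienced the outcome was assigned a higher predicted risk than a randomly chosen patient who did not. A minimal self-contained sketch of that pairwise definition (illustrative only, not the authors' code; function and variable names are hypothetical):

```python
def c_statistic(risks, outcomes):
    """Compute the c-statistic (equivalently, the AUC) by pairwise comparison.

    risks    -- predicted risk scores (e.g. 30-day mortality probabilities)
    outcomes -- 1 if the patient experienced the outcome, else 0
    """
    pos = [r for r, y in zip(risks, outcomes) if y == 1]
    neg = [r for r, y in zip(risks, outcomes) if y == 0]
    if not pos or not neg:
        raise ValueError("need both outcome classes to compute a c-statistic")
    # Count concordant pairs; tied predictions count as half a pair.
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))
```

This direct pairwise form runs in quadratic time; production code would typically use a rank-based (Mann-Whitney U) formulation or a library routine instead.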

  7. Prediction of clinical response to drugs in ovarian cancer using the chemotherapy resistance test (CTR-test).

    PubMed

    Kischkel, Frank Christian; Meyer, Carina; Eich, Julia; Nassir, Mani; Mentze, Monika; Braicu, Ioana; Kopp-Schneider, Annette; Sehouli, Jalid

    2017-10-27

    In order to validate whether the test result of the Chemotherapy Resistance Test (CTR-Test) is able to predict the resistances or sensitivities of tumors in ovarian cancer patients to drugs, the CTR-Test result and the corresponding clinical response of individual patients were correlated retrospectively. Results were compared to previously recorded correlations. The CTR-Test was performed on tumor samples from 52 ovarian cancer patients for specific chemotherapeutic drugs. Patients were treated with monotherapies or drug combinations. Resistances were classified as extreme (ER), medium (MR) or slight (SR) resistance in the CTR-Test. Combination treatment resistances were transformed by a scoring system into these classifications. Accurate sensitivity prediction was accomplished in 79% of the cases and accurate prediction of resistance in 100% of the cases in the total data set. The data sets of single-agent treatment and drug combination treatment were analyzed individually. Single-agent treatment led to accurate sensitivity prediction in 44% of the cases, whereas drug combinations reached 95% accuracy. Resistance detection was 100% correct in both cases. ROC curve analysis indicates that the CTR-Test result correlates with the clinical response, at least for the combination chemotherapy. These values are similar to or better than the values from a publication from 1990. Chemotherapy resistance testing in vitro via the CTR-Test is able to accurately detect resistances in ovarian cancer patients. These numbers confirm and even exceed results published in 1990. Better sensitivity detection might be caused by a higher percentage of drug combinations tested in 2012 compared to 1990. Our study confirms the functionality of the CTR-Test to plan an efficient chemotherapeutic treatment for ovarian cancer patients.

  8. Experimental validation of an ultrasonic flowmeter for unsteady flows

    NASA Astrophysics Data System (ADS)

    Leontidis, V.; Cuvier, C.; Caignaert, G.; Dupont, P.; Roussette, O.; Fammery, S.; Nivet, P.; Dazin, A.

    2018-04-01

    An ultrasonic flowmeter was developed for further applications in cryogenic conditions and for measuring flow rate fluctuations in the range of 0 to 70 Hz. The prototype was installed in a flow test rig, and was validated experimentally both in steady and unsteady water flow conditions. A Coriolis flowmeter was used for the calibration under steady state conditions, whereas in the unsteady case the validation was done simultaneously against two methods: particle image velocimetry (PIV), and with pressure transducers installed flush on the wall of the pipe. The results show that the developed flowmeter and the proposed methodology can accurately measure the frequency and amplitude of unsteady fluctuations in the experimental range of 0-9 l s⁻¹ of the mean main flow rate and 0-70 Hz of the imposed disturbances.

  9. Analytical Validation of the ReEBOV Antigen Rapid Test for Point-of-Care Diagnosis of Ebola Virus Infection

    PubMed Central

    Cross, Robert W.; Boisen, Matthew L.; Millett, Molly M.; Nelson, Diana S.; Oottamasathien, Darin; Hartnett, Jessica N.; Jones, Abigal B.; Goba, Augustine; Momoh, Mambu; Fullah, Mohamed; Bornholdt, Zachary A.; Fusco, Marnie L.; Abelson, Dafna M.; Oda, Shunichiro; Brown, Bethany L.; Pham, Ha; Rowland, Megan M.; Agans, Krystle N.; Geisbert, Joan B.; Heinrich, Megan L.; Kulakosky, Peter C.; Shaffer, Jeffrey G.; Schieffelin, John S.; Kargbo, Brima; Gbetuwa, Momoh; Gevao, Sahr M.; Wilson, Russell B.; Saphire, Erica Ollmann; Pitts, Kelly R.; Khan, Sheik Humarr; Grant, Donald S.; Geisbert, Thomas W.; Branco, Luis M.; Garry, Robert F.

    2016-01-01

    Background. Ebola virus disease (EVD) is a severe viral illness caused by Ebola virus (EBOV). The 2013–2016 EVD outbreak in West Africa is the largest recorded, with >11 000 deaths. Development of the ReEBOV Antigen Rapid Test (ReEBOV RDT) was expedited to provide a point-of-care test for suspected EVD cases. Methods. Recombinant EBOV viral protein 40 antigen was used to derive polyclonal antibodies for RDT and enzyme-linked immunosorbent assay development. ReEBOV RDT limits of detection (LOD), specificity, and interference were analytically validated on the basis of Food and Drug Administration (FDA) guidance. Results. The ReEBOV RDT specificity estimate was 95% for donor serum panels and 97% for donor whole-blood specimens. The RDT demonstrated sensitivity to 3 species of Ebolavirus (Zaire ebolavirus, Sudan ebolavirus, and Bundibugyo ebolavirus) associated with human disease, with no cross-reactivity by pathogens associated with non-EBOV febrile illness, including malaria parasites. Interference testing exhibited no reactivity by medications in common use. The LOD for antigen was 4.7 ng/test in serum and 9.4 ng/test in whole blood. Quantitative reverse transcription–polymerase chain reaction testing of nonhuman primate samples determined the range to be equivalent to 3.0 × 105–9.0 × 108 genomes/mL. Conclusions. The analytical validation presented here contributed to the ReEBOV RDT being the first antigen-based assay to receive FDA and World Health Organization emergency use authorization for this EVD outbreak, in February 2015. PMID:27587634

  10. External validation of Vascular Study Group of New England risk predictive model of mortality after elective abdominal aorta aneurysm repair in the Vascular Quality Initiative and comparison against established models.

    PubMed

    Eslami, Mohammad H; Rybin, Denis V; Doros, Gheorghe; Siracuse, Jeffrey J; Farber, Alik

    2018-01-01

    The purpose of this study is to externally validate a recently reported Vascular Study Group of New England (VSGNE) risk predictive model of postoperative mortality after elective abdominal aortic aneurysm (AAA) repair and to compare its predictive ability across different patients' risk categories and against the established risk predictive models using the Vascular Quality Initiative (VQI) AAA sample. The VQI AAA database (2010-2015) was queried for patients who underwent elective AAA repair. The VSGNE cases were excluded from the VQI sample. The external validation of a recently published VSGNE AAA risk predictive model, which includes only preoperative variables (age, gender, history of coronary artery disease, chronic obstructive pulmonary disease, cerebrovascular disease, creatinine levels, and aneurysm size) and planned type of repair, was performed using the VQI elective AAA repair sample. The predictive value of the model was assessed via the C-statistic. The Hosmer-Lemeshow method was used to assess calibration and goodness of fit. This model was then compared with the Medicare model, the Vascular Governance Northwest model, and the Glasgow Aneurysm Score for predicting mortality in the VQI sample. The Vuong test was performed to compare the model fit between the models. Model discrimination was assessed across VQI risk-group quintiles. Data from 4431 cases from the VSGNE sample, with an overall mortality rate of 1.4%, were used to develop the model. The internally validated VSGNE model showed a very high discriminating ability in predicting mortality (C = 0.822) and good model fit (Hosmer-Lemeshow P = .309) among the VSGNE elective AAA repair sample. External validation on 16,989 VQI cases with an overall 0.9% mortality rate showed very robust predictive ability of mortality (C = 0.802). Vuong tests yielded a significant fit difference favoring the VSGNE over the Medicare model (C = 0.780), Vascular Governance Northwest (0.774), and Glasgow Aneurysm Score (0.639). 
Across the 5 risk quintiles, the VSGNE model predicted observed mortality with great accuracy. This simple VSGNE AAA risk predictive model showed very high discriminative ability in predicting mortality after elective AAA repair among a large external independent sample of AAA cases performed by a diverse array of physicians nationwide. The risk score based on this simple VSGNE model can reliably stratify patients according to their risk of mortality after elective AAA repair better than other established models. Copyright © 2017 Society for Vascular Surgery. Published by Elsevier Inc. All rights reserved.

  11. Reliability and validity of an accelerometric system for assessing vertical jumping performance.

    PubMed

    Choukou, M-A; Laffaye, G; Taiar, R

    2014-03-01

    The validity of an accelerometric system (Myotest©) for assessing vertical jump height, vertical force and power, leg stiffness and reactivity index was examined. 20 healthy males performed 3×"5 hops in place", 3×"1 squat jump" and 3× "1 countermovement jump" during 2 test-retest sessions. The variables were simultaneously assessed using an accelerometer and a force platform at a frequency of 0.5 and 1 kHz, respectively. Both reliability and validity of the accelerometric system were studied. No significant differences between test and retest data were found (p < 0.05), showing a high level of reliability. Besides, moderate to high intraclass correlation coefficients (ICCs) (from 0.74 to 0.96) were obtained for all variables whereas weak to moderate ICCs (from 0.29 to 0.79) were obtained for force and power during the countermovement jump. With regards to validity, the difference between the two devices was not significant for 5 hops in place height (1.8 cm), force during squat (-1.4 N · kg⁻¹) and countermovement (0.1 N · kg⁻¹) jumps, leg stiffness (7.8 kN · m⁻¹) and reactivity index (0.4). So, the measurements of these variables with this accelerometer are valid, which is not the case for the other variables. The main causes of non-validity for velocity, power and contact time assessment are temporal biases of the takeoff and touchdown moments detection.

  12. RELIABILITY AND VALIDITY OF AN ACCELEROMETRIC SYSTEM FOR ASSESSING VERTICAL JUMPING PERFORMANCE

    PubMed Central

    Laffaye, G.; Taiar, R.

    2014-01-01

    The validity of an accelerometric system (Myotest©) for assessing vertical jump height, vertical force and power, leg stiffness and reactivity index was examined. 20 healthy males performed 3ד5 hops in place”, 3ד1 squat jump” and 3× “1 countermovement jump” during 2 test-retest sessions. The variables were simultaneously assessed using an accelerometer and a force platform at a frequency of 0.5 and 1 kHz, respectively. Both reliability and validity of the accelerometric system were studied. No significant differences between test and retest data were found (p < 0.05), showing a high level of reliability. Besides, moderate to high intraclass correlation coefficients (ICCs) (from 0.74 to 0.96) were obtained for all variables whereas weak to moderate ICCs (from 0.29 to 0.79) were obtained for force and power during the countermovement jump. With regards to validity, the difference between the two devices was not significant for 5 hops in place height (1.8 cm), force during squat (-1.4 N · kg−1) and countermovement (0.1 N · kg−1) jumps, leg stiffness (7.8 kN · m−1) and reactivity index (0.4). So, the measurements of these variables with this accelerometer are valid, which is not the case for the other variables. The main causes of non-validity for velocity, power and contact time assessment are temporal biases of the takeoff and touchdown moments detection. PMID:24917690

  13. Risk of malnutrition (over and under-nutrition): validation of the JaNuS screening tool.

    PubMed

    Donini, Lorenzo M; Ricciardi, Laura Maria; Neri, Barbara; Lenzi, Andrea; Marchesini, Giulio

    2014-12-01

    Malnutrition (over and under-nutrition) is highly prevalent in patients admitted to hospital and it is a well-known risk factor for increased morbidity and mortality. Nutritional problems are often misdiagnosed, and especially the coexistence of over and undernutrition is not usually recognized. We aimed to develop and validate a screening tool for the easy detection and reporting of both undernutrition and overnutrition, specifically identifying the clinical conditions where the two types of malnutrition coexist. The study consisted of three phases: 1) selection of an appropriate study population (estimation sample) and of the hospital admission parameters to identify overnutrition and undernutrition; 2) combination of selected variables to create a screening tool to assess the nutritional risk in case of undernutrition, overnutrition, or the copresence of both conditions, to be used by non-specialist health care professionals; 3) validation of the screening tool in a different patient sample (validation sample). Two groups of variables (12 for undernutrition, 7 for overnutrition) were identified in separate logistic models for their correlation with the outcome variables. Both models showed high efficacy, sensitivity and specificity (overnutrition, 97.7%, 99.6%, 66.6%, respectively; undernutrition, 84.4%, 83.6%, 84.8%). The logistic models were used to construct a two-faced test (named JaNuS - Just A Nutritional Screening) fitting into a two-dimensional Cartesian coordinate system. In the validation sample the JaNuS test confirmed its predictive value. Internal consistency and test-retest analysis provide evidence for the reliability of the test. The study provides a screening tool for the assessment of nutritional risk, based on parameters that are easy for health care personnel without specialist nutritional training to use, and characterized by excellent predictive validity. 
The test might be confidently applied in the clinical setting to determine the importance of malnutrition (including the copresence of over and undernutrition) as a risk factor for morbidity and mortality. Copyright © 2013 Elsevier Ltd and European Society for Clinical Nutrition and Metabolism. All rights reserved.

  14. Optimization of active distribution networks: Design and analysis of significative case studies for enabling control actions of real infrastructure

    NASA Astrophysics Data System (ADS)

    Moneta, Diana; Mora, Paolo; Viganò, Giacomo; Alimonti, Gianluca

    2014-12-01

    The diffusion of Distributed Generation (DG) based on Renewable Energy Sources (RES) requires new strategies to ensure reliable and economic operation of the distribution networks and to support the diffusion of DG itself. An advanced algorithm (DISCoVER - DIStribution Company VoltagE Regulator) is being developed to optimize the operation of active networks by means of advanced voltage control based on several regulations. Starting from forecasted load and generation, real on-field measurements, technical constraints and costs for each resource, the algorithm generates for each time period a set of commands for controllable resources that achieves the technical goals while minimizing the overall cost. Before integrating the controller into the telecontrol system of the real networks, and in order to validate the proper behaviour of the algorithm and to identify possible critical conditions, a complete simulation phase has been started. The first step concerns the definition of a wide range of "case studies": combinations of network topology, technical constraints and targets, load and generation profiles, and resource "costs" that define a valid context in which to test the algorithm, with particular focus on battery and RES management. First results from the simulation activity on test networks (based on real MV grids) with actual battery characteristics are given, together with prospective performance on real case applications.

  15. [Usefulness of SCL-90-R and SIMS inventories for the detection of mental health malingering at workplace].

    PubMed

    Loskin, Ulises E; Bertone, Matías S; López-Regueira, Joaquín

    2017-03-01

    Mental illness is a common cause of work leave. This situation has a negative impact on labor productivity and costs, and may contribute to conflicts affecting the workplace environment. The purpose of this investigation is to describe the evaluation results of a total of 89 cases on sick leave for psychological and psychiatric reasons, and to test the convergent validity of the "Positive Symptom Total" (PST) and "Positive Symptom Distress Index" (PSDI) scales of the Symptom Checklist Revised (SCL-90-R) against the Structured Inventory of Malingered Symptomatology (SIMS). Taking a score higher than 16 in the SIMS as the cut-off point, the analysis focused on whether the PST and PSDI scales differed on average between malingerers and non-malingerers. Of the total number of cases, 66 were found to be likely cases of malingered mental illness, with different averages in PST (77.02) and PSDI (2.71). Statistical correlation tests confirmed the convergent validity and statistical significance between the PSDI and PST scales of the SCL-90-R inventory and the SIMS inventory, with a higher Spearman's rho for the PSDI scale (0.617) than for the PST scale (0.413). The results of the investigation confirm the usefulness of both instruments for the assessment of mental illness malingering in employees on sick leave due to mental disorders.

  16. Blood tests showing nonpaternity-conclusive or rebuttable evidence? The Chaplin case revisited.

    PubMed

    Benson, F

    1981-09-01

    A defendant accused of being the father of an illegitimate child denies responsibility. Blood samples from the child, mother, and alleged father are studied and the results reveal that the alleged father is excluded. What weight, if any, should the court (if a trial is held) or the jury give to the evidence of nonpaternity? Should the evidence be treated as conclusive proof of nonpaternity or should other evidence be admitted in the trial to overcome the nonpaternity evidence? A medical expert might conclude that a controversy exists because of the court's questioned trustworthiness of the paternity blood testing, while a legal expert might conclude that the controversy arises because of burdens of proof. Both conclusions are valid. The Berry v. Chaplin case held in California in 1946 illustrates this circumstance. In refreshing our memories on this case, we can review the problem in light of today's knowledge.

  17. Computational-experimental approach to drug-target interaction mapping: A case study on kinase inhibitors

    PubMed Central

    Ravikumar, Balaguru; Parri, Elina; Timonen, Sanna; Airola, Antti; Wennerberg, Krister

    2017-01-01

    Due to the relatively high costs and labor required for experimental profiling of the full target space of chemical compounds, various machine learning models have been proposed as cost-effective means to advance this process in terms of predicting the most potent compound-target interactions for subsequent verification. However, most of the model predictions lack direct experimental validation in the laboratory, making their practical benefits for drug discovery or repurposing applications largely unknown. Here, we therefore introduce and carefully test a systematic computational-experimental framework for the prediction and pre-clinical verification of drug-target interactions using a well-established kernel-based regression algorithm as the prediction model. To evaluate its performance, we first predicted unmeasured binding affinities in a large-scale kinase inhibitor profiling study, and then experimentally tested 100 compound-kinase pairs. The relatively high correlation of 0.77 (p < 0.0001) between the predicted and measured bioactivities supports the potential of the model for filling the experimental gaps in existing compound-target interaction maps. Further, we subjected the model to a more challenging task of predicting target interactions for a new candidate drug compound that lacks prior binding profile information. As a specific case study, we used tivozanib, an investigational VEGF receptor inhibitor with a currently unknown off-target profile. Among 7 kinases with high predicted affinity, we experimentally validated 4 new off-targets of tivozanib, namely the Src-family kinases FRK and FYN A, the non-receptor tyrosine kinase ABL1, and the serine/threonine kinase SLK. Our subsequent experimental validation protocol effectively avoids any possible information leakage between the training and validation data, and therefore enables rigorous model validation for practical applications. 
These results demonstrate that the kernel-based modeling approach offers practical benefits for probing novel insights into the mode of action of investigational compounds, and for the identification of new target selectivities for drug repurposing applications. PMID:28787438

  18. Numerical model validation using experimental data: Application of the area metric on a Francis runner

    NASA Astrophysics Data System (ADS)

    Chatenet, Q.; Tahan, A.; Gagnon, M.; Chamberland-Lauzon, J.

    2016-11-01

    Nowadays, engineers are able to solve complex equations thanks to the increase of computing capacity. Thus, finite element software is widely used, especially in the field of mechanics, to predict part behavior such as strain, stress and natural frequency. However, it can be difficult to determine how a model might be right or wrong, or whether one model is better than another. Nevertheless, during the design phase, it is very important to estimate how hydroelectric turbine blades will behave under the stresses to which they are subjected. Indeed, the static and dynamic stress levels will influence the blades' fatigue resistance and thus their lifetime, which is a significant feature. In industry, engineers generally use either graphic representation, hypothesis tests such as the Student test, or linear regressions in order to compare experimental to estimated data from the numerical model. Due to the variability in personal interpretation (reproducibility), graphical validation is not considered objective. For an objective assessment, it is essential to use a robust validation metric to measure the conformity of predictions against data. We propose to use the area metric in the case of a turbine blade that meets the key points of the ASME Standards and produces a quantitative measure of agreement between simulations and empirical data. This validation metric excludes any belief or criterion for accepting a model, which increases robustness. The present work is aimed at applying a validation method according to ASME V&V 10 recommendations. Firstly, the area metric is applied to the case of a real Francis runner whose geometry and boundary conditions are complex. Secondly, the area metric is compared to classical regression methods to evaluate the performance of the method. Finally, we discuss the use of the area metric as a tool to correct simulations.
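The area metric referred to in this record is commonly defined as the area between the empirical cumulative distribution functions (ECDFs) of the simulated and the measured quantities. A minimal sketch under that assumption (not the authors' implementation; function names are illustrative):

```python
def ecdf(samples):
    """Return the empirical CDF of a sample as a callable step function."""
    xs = sorted(samples)
    n = len(xs)
    def F(x):
        return sum(1 for v in xs if v <= x) / n
    return F

def area_metric(sim, exp):
    """Area between the ECDFs of simulated and experimental samples.

    ECDFs are right-continuous step functions, so they are constant on
    each interval between consecutive points of the pooled sample; the
    area is a finite sum of rectangle areas over those intervals.
    """
    F_sim, F_exp = ecdf(sim), ecdf(exp)
    grid = sorted(set(sim) | set(exp))
    area = 0.0
    for a, b in zip(grid, grid[1:]):
        area += abs(F_sim(a) - F_exp(a)) * (b - a)
    return area
```

A zero value indicates that the two samples have identical ECDFs; larger values quantify the mismatch in the units of the compared quantity.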

  19. An exploratory study into the effect of time-restricted internet access on face-validity, construct validity and reliability of postgraduate knowledge progress testing

    PubMed Central

    2013-01-01

    Background Yearly formative knowledge testing (also known as progress testing) was shown to have a limited construct-validity and reliability in postgraduate medical education. One way to improve construct-validity and reliability is to improve the authenticity of a test. As easily accessible internet has become inseparably linked to daily clinical practice, we hypothesized that allowing internet access for a limited amount of time during the progress test would improve the perception of authenticity (face-validity) of the test, which would in turn improve the construct-validity and reliability of postgraduate progress testing. Methods Postgraduate trainees taking the yearly knowledge progress test were asked to participate in a study where they could access the internet for 30 minutes at the end of a traditional pen and paper test. Before and after the test they were asked to complete a short questionnaire regarding the face-validity of the test. Results Mean test scores increased significantly for all training years. Trainees indicated that the face-validity of the test improved with internet access and that they would like to continue to have internet access during future testing. Internet access did not improve the construct-validity or reliability of the test. Conclusion Improving the face-validity of postgraduate progress testing, by adding the possibility to search the internet for a limited amount of time, positively influences test performance and face-validity. However, it did not change the reliability or the construct-validity of the test. PMID:24195696

  20. Diagnosing viral and bacterial respiratory infections in acute COPD exacerbations by an electronic nose: a pilot study.

    PubMed

    van Geffen, Wouter H; Bruins, Marcel; Kerstjens, Huib A M

    2016-06-16

    Respiratory infections, viral or bacterial, are a common cause of acute exacerbations of chronic obstructive pulmonary disease (AECOPD). A rapid, point-of-care, and easy-to-use tool distinguishing viral and bacterial from other causes would be valuable in routine clinical care. An electronic nose (e-nose) could fit this profile but has never been tested in this setting before. In a single-center registered trial (NTR 4601), patients admitted with AECOPD were tested with the Aeonose® electronic nose, and a diagnosis of viral or bacterial infection was obtained by bacterial culture on sputa and viral PCR on nose swabs. A neural network with leave-10%-out cross-validation was used to assess the e-nose data. Forty-three patients were included. In the bacterial infection model, 22 positive cases were tested against the negatives; similarly, 18 positive cases were tested in the viral infection model. The Aeonose was able to distinguish between COPD subjects suffering from a viral infection and COPD patients without infection, showing an area under the curve (AUC) of 0.74. Similarly, for bacterial infections, an AUC of 0.72 was obtained. The Aeonose e-nose yields promising results in 'smelling' the presence or absence of a viral or bacterial respiratory infection during an acute exacerbation of COPD. Validation of these results using a new and large cohort is required before introduction into clinical practice.
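Leave-10%-out cross-validation, as used in this record, holds out each tenth of the data once while a model is trained on the remaining 90%, so every sample receives exactly one out-of-fold prediction. A generic sketch (illustrative only, not the trial's actual pipeline; `fit` and `predict` are placeholders for any model interface):

```python
import random

def ten_fold_indices(n, seed=0):
    """Split range(n) into 10 disjoint, roughly equal hold-out index lists."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::10] for i in range(10)]

def cross_validate(fit, predict, X, y):
    """Collect one out-of-fold prediction per sample.

    fit(X_train, y_train) -> model; predict(model, x) -> prediction.
    """
    preds = [None] * len(y)
    for hold in ten_fold_indices(len(y)):
        held = set(hold)
        train = [i for i in range(len(y)) if i not in held]
        model = fit([X[i] for i in train], [y[i] for i in train])
        for i in hold:
            preds[i] = predict(model, X[i])
    return preds
```

The out-of-fold predictions can then be scored against the true labels (e.g. with an AUC) without any sample having contributed to the model that predicted it.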

  1. Appraisal of neurobehavioral methods in environmental health research: the developing brain as a target for neurotoxic chemicals.

    PubMed

    Winneke, Gerhard

    2007-10-01

    Psychological tests as developed and validated in the field of differential psychology have a longstanding tradition as tools to study individual differences. In clinical neuropsychology, global or more specific tests are used as neuropsychological tools in the differential diagnosis of various forms of brain damage or neurobehavioral dysfunction following chemical insults, such as mental sequelae of prenatal alcohol consumption by pregnant mothers (fetal alcohol syndrome) or of maternal thyroid deficiency during pregnancy. Psychometric tests are constructed to fulfill basic quality criteria, namely objectivity, reliability and validity. For strictly diagnostic purposes in individual cases they must also possess normative values based on representative reference groups. Intelligence tests or their developmental variants are often used as endpoints in environmental health research for studying neurodevelopmental adversity due to early exposure to neurotoxic chemicals in the environment. Intelligence as treated in psychology is a complex construct made up of specific cognitive functions which usually cover verbal, numerical and spatial skills, as well as perceptual speed, memory and reasoning. In this paper, case studies covering neurodevelopmental adversity of inorganic lead, of methylmercury and of polychlorinated biphenyls (PCBs) are reviewed, and the issue of postnatal behavioral sequelae of prenatal exposure is covered. In such observational studies precautions must be taken in order to avoid pitfalls of causative interpretation of associations between exposure and neurobehavioral outcome. This requires consideration of co-exposure and confounding. Important confounders considered in most modern developmental cohort studies are maternal intelligence and quality of the home environment.

  2. The Leukocyte Esterase Strip Test has Practical Value for Diagnosing Periprosthetic Joint Infection After Total Knee Arthroplasty: A Multicenter Study.

    PubMed

    Koh, In J; Han, Seung B; In, Yong; Oh, Kwang J; Lee, Dae H; Kim, Tae K

    2017-11-01

    Leukocyte esterase (LE) was recently reported to be an accurate marker for diagnosing periprosthetic joint infection (PJI) as defined by the Musculoskeletal Infection Society (MSIS) criteria. However, the diagnostic value of the LE test for PJI after total knee arthroplasty (TKA), the reliability of the subjective visual interpretation of the LE test, and the correlation between the LE test results and the current MSIS criteria remain unclear. This study prospectively enrolled 60 patients undergoing revision TKA for either PJI or aseptic failure. Serological marker, synovial fluid, and histological analyses were performed in all cases. The PJI group comprised 38 cases that met the MSIS criteria and the other 22 cases formed the aseptic group. All the LE tests were interpreted using both visual judgment and an automated colorimetric reader. When "++" results were considered to indicate a positive PJI, the sensitivity, specificity, positive and negative predictive value, and diagnostic accuracy were 84, 100, 100, 79, and 90%, respectively. The visual interpretation agreed with the automated colorimetric reader in 90% of cases (Cronbach α = 0.894). The grade of the LE test was strongly correlated with the synovial white blood cell count (ρ = 0.695) and polymorphonuclear leukocyte percentage (ρ = 0.638) and moderately correlated with the serum C-reactive protein and erythrocyte sedimentation rate. The LE test has high diagnostic value for diagnosing PJI after TKA. Subjective visual interpretation of the LE test was reliable and valid for the current battery of PJI diagnostic tests according to the MSIS criteria. Copyright © 2017. Published by Elsevier Inc.

  3. Public health consequences of a false-positive laboratory test result for Brucella--Florida, Georgia, and Michigan, 2005.

    PubMed

    2008-06-06

    Human brucellosis, a nationally notifiable disease, is uncommon in the United States. Most human cases have occurred in returned travelers or immigrants from regions where brucellosis is endemic, or were acquired domestically from eating illegally imported, unpasteurized fresh cheeses. In January 2005, a woman aged 35 years who lived in Nassau County, Florida, received a diagnosis of brucellosis, based on results of a Brucella immunoglobulin M (IgM) enzyme immunoassay (EIA) performed in a commercial laboratory using analyte specific reagents (ASRs); this diagnosis prompted an investigation of dairy products in two other states. Subsequent confirmatory antibody testing by Brucella microagglutination test (BMAT) performed at CDC on the patient's serum was negative. The case did not meet the CDC/Council of State and Territorial Epidemiologists' (CSTE) definition for a probable or confirmed brucellosis case, and the initial EIA result was determined to be a false positive. This report summarizes the case history, laboratory findings, and public health investigations. CDC recommends that Brucella serology testing only be performed using tests cleared or approved by the Food and Drug Administration (FDA) or validated under the Clinical Laboratory Improvement Amendments (CLIA) and shown to reliably detect the presence of Brucella infection. Results from these tests should be considered supportive evidence for recent infection only and interpreted in the context of a clinically compatible illness and exposure history. EIA is not considered a confirmatory Brucella antibody test; positive screening test results should be confirmed by Brucella-specific agglutination (i.e., BMAT or standard tube agglutination test) methods.

  4. Criminal profiling as expert witness evidence: The implications of the profiler validity research.

    PubMed

    Kocsis, Richard N; Palermo, George B

    The use and development of the investigative tool colloquially known as criminal profiling has steadily increased over the past five decades throughout the world. Coupled with this growth has been a diversification in the suggested range of applications for this technique. Possibly the most notable of these has been the attempted transition of the technique from a tool intended to assist police investigations into a form of expert witness evidence admissible in legal proceedings. Whilst case law in various jurisdictions has considered with mutual disinclination the evidentiary admissibility of criminal profiling, a disjunction has evolved between these judicial examinations and the scientifically vetted research testing the accuracy (i.e., validity) of the technique. This article offers an analysis of the research directly testing the validity of the criminal profiling technique and the extant legal principles considering its evidentiary admissibility. This analysis reveals that research findings concerning the validity of criminal profiling are surprisingly compatible with the extant legal principles. The overall conclusion is that a discrete form of crime behavioural analysis is supported by the profiler validity research and could be regarded as potentially admissible expert witness evidence. Finally, a number of theoretical connections are also identified concerning the skills and qualifications of individuals who may feasibly provide such expert testimony. Copyright © 2016 Elsevier Ltd. All rights reserved.

  5. The importance of establishing reliability and validity of assessment instruments for mental health problems: An example from Somali children and adolescents living in three refugee camps in Ethiopia

    PubMed Central

    Hall, Brian J.; Puffer, Eve; Murray, Laura K.; Ismael, Abdulkadir; Bass, Judith K.; Sim, Amanda; Bolton, Paul A.

    2014-01-01

    Assessing mental health problems cross-culturally for children exposed to war and violence presents a number of unique challenges. One of the most important issues is the lack of validated symptom measures to assess these problems. The present study sought to evaluate the psychometric properties of two measures to assess mental health problems: the Achenbach Youth Self-Report and the Child Posttraumatic Stress Disorder Symptom Scale. We conducted a validity study in three refugee camps in Eastern Ethiopia in the outskirts of Jijiga, the capital of the Somali region. A total of 147 child and caregiver pairs were assessed, and scores obtained were submitted to rigorous psychometric evaluation. Excellent internal consistency reliability was obtained for symptom measures for children and their caregivers. Validation of study instruments based on local case definitions was obtained for the caregivers but not consistently for the children. Sensitivity and specificity of study measures were generally low, indicating that these scales would not perform adequately as screening instruments. Combined test-retest and inter-rater reliability was low for all scales. This study illustrates the need for validation and testing of existing measures cross-culturally. Methodological implications for future cross-cultural research studies in low- and middle-income countries are discussed. PMID:24955147

  6. The exit interview as a proxy measure of malaria case management practice: sensitivity and specificity relative to direct observation.

    PubMed

    Pulford, Justin; Siba, Peter M; Mueller, Ivo; Hetzel, Manuel W

    2014-12-03

    This paper aims to assess the sensitivity and specificity of exit interviews as a measure of malaria case management practice as compared to direct observation. The malaria case management of 1654 febrile patients attending 110 health facilities from across Papua New Guinea was directly observed by a trained research officer as part of a repeated cross-sectional survey. Patient recall of 5 forms of clinical advice and 5 forms of clinical action was then assessed at service exit, and statistical analyses were conducted on the matched observation/exit interview data. The sensitivity of exit interviews with respect to clinical advice ranged from 36.2% to 96.4% and specificity from 53.5% to 98.6%. With respect to clinical actions, sensitivity of the exit interviews ranged from 83.9% to 98.3% and specificity from 70.6% to 98.1%. The exit interview appears to be a valid measure of objective malaria case management practices such as the completion of a diagnostic test or the provision of antimalarial medication, but may be a less valid measure of low-frequency, subjective practices such as the provision of malaria prevention advice.
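Validating a proxy measure against a gold standard amounts to tabulating matched pairs into a 2×2 table, with direct observation taken as truth. A minimal sketch with hypothetical matched pairs (the data are invented for illustration, not drawn from the study):

```python
def proxy_validity(pairs):
    """Sensitivity and specificity of a proxy measure (e.g. exit interview)
    against a gold standard (e.g. direct observation).

    pairs: iterable of (observed, recalled) booleans; observation is truth.
    """
    tp = sum(1 for obs, rec in pairs if obs and rec)
    fn = sum(1 for obs, rec in pairs if obs and not rec)
    tn = sum(1 for obs, rec in pairs if not obs and not rec)
    fp = sum(1 for obs, rec in pairs if not obs and rec)
    sensitivity = tp / (tp + fn) if tp + fn else float("nan")
    specificity = tn / (tn + fp) if tn + fp else float("nan")
    return sensitivity, specificity

# Hypothetical pairs: was advice observed vs. was it recalled at exit
pairs = [(True, True), (True, True), (True, False),
         (False, False), (False, False), (False, True)]
sens, spec = proxy_validity(pairs)
```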

  7. Constrained structural dynamic model verification using free vehicle suspension testing methods

    NASA Technical Reports Server (NTRS)

    Blair, Mark A.; Vadlamudi, Nagarjuna

    1988-01-01

    Verification of the validity of a spacecraft's structural dynamic math model used in computing ascent (or in the case of the STS, ascent and landing) loads is mandatory. This verification process requires that tests be carried out on both the payload and the math model such that the ensuing correlation may validate the flight loads calculations. To properly achieve this goal, the tests should be performed with the payload in the launch constraint (i.e., held fixed at only the payload-booster interface DOFs). The practical achievement of this set of boundary conditions is quite difficult, especially with larger payloads, such as the 12-ton Hubble Space Telescope. The development of equations in the paper will show that by exciting the payload at its booster interface while it is suspended in the 'free-free' state, a set of transfer functions can be produced that will have minima that are directly related to the fundamental modes of the payload when it is constrained in its launch configuration.

  8. Adaptation and validation of a Spanish-language version of the Frontotemporal Dementia Rating Scale (FTD-FRS).

    PubMed

    Turró-Garriga, O; Hermoso Contreras, C; Olives Cladera, J; Mioshi, E; Pelegrín Valero, C; Olivera Pueyo, J; Garre-Olmo, J; Sánchez-Valle, R

    2017-06-01

    The Frontotemporal Dementia Rating Scale (FTD-FRS) is a tool designed to aid with clinical staging and assessment of the progression of frontotemporal dementia (FTD). We present a multicentre adaptation and validation study of a Spanish version of the FRS. The adapted version was created using 2 translation-back-translation processes (English to Spanish, Spanish to English) and verified by the scale's original authors. We validated the adapted version in a sample of consecutive patients diagnosed with FTD. The procedure included evaluating internal consistency, testing unidimensionality with the Rasch model, analysing construct validity and discriminant validity, and calculating the degree of agreement between the Clinical Dementia Rating scale (CDR) and FTD-FRS for FTD cases. The study included 60 patients with FTD. The mean score on the FRS was 12.1 points (SD=6.5; range, 2-25) with inter-group differences (F=120.3; df=3; P<.001). Cronbach's alpha was 0.897 and principal component analysis of residuals delivered an acceptable eigenvalue for 5 contrasts (1.6-2.7) and 36.1% raw variance. The FRS was correlated with the Mini-Mental State Examination (r=0.572; P<.001) and functional capacity (DAD; r=0.790; P<.001). The FTD-FRS also showed a significant correlation with the CDR (r=-0.641; P<.001), but we did observe variability in the severity levels; cases appeared to be less severe according to the CDR than when measured with the FTD-FRS (kappa=0.055). This process of validating the Spanish translation of the FTD-FRS yielded satisfactory results for validity and unidimensionality (severity) in the assessment of patients with FTD. Copyright © 2016 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  9. Equally parsimonious pathways through an RNA sequence space are not equally likely

    NASA Technical Reports Server (NTRS)

    Lee, Y. H.; DSouza, L. M.; Fox, G. E.

    1997-01-01

    An experimental system for determining the potential ability of sequences resembling 5S ribosomal RNA (rRNA) to perform as functional 5S rRNAs in vivo in the Escherichia coli cellular environment was devised previously. Presumably, the only 5S rRNA sequences that would have been fixed by ancestral populations are ones that were functionally valid, and hence the actual historical paths taken through RNA sequence space during 5S rRNA evolution would have most likely utilized valid sequences. Herein, we examine the potential validity of all sequence intermediates along alternative equally parsimonious trajectories through RNA sequence space which connect two pairs of sequences that had previously been shown to behave as valid 5S rRNAs in E. coli. The first trajectory requires a total of four changes. The 14 sequence intermediates provide 24 apparently equally parsimonious paths by which the transition could occur. The second trajectory involves three changes, six intermediate sequences, and six potentially equally parsimonious paths. In total, only eight of the 20 sequence intermediates were found to be clearly invalid. As a consequence of the position of these invalid intermediates in the sequence space, seven of the 30 possible paths consisted of exclusively valid sequences. In several cases, the apparent validity/invalidity of the intermediate sequences could not be anticipated on the basis of current knowledge of the 5S rRNA structure. This suggests that the interdependencies in RNA sequence space may be more complex than currently appreciated. If ancestral sequences predicted by parsimony are to be regarded as actual historical sequences, then the present results would suggest that they should also satisfy a validity requirement and that, in at least limited cases, this conjecture can be tested experimentally.
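The combinatorics in this abstract can be checked directly: between two sequences differing at k sites there are k! equally parsimonious mutation orderings and 2^k − 2 intermediate sequences, and marking intermediates invalid prunes every path that passes through them. A small enumeration sketch (the site indices are abstract placeholders, not actual 5S rRNA positions):

```python
from itertools import permutations

def paths_and_intermediates(k, invalid=frozenset()):
    """Enumerate mutation orderings between two sequences differing at k sites.

    An intermediate is identified by the set of sites already mutated
    (neither empty nor complete). Paths that pass through an 'invalid'
    intermediate are discarded.
    """
    intermediates = {frozenset(p[:i]) for p in permutations(range(k))
                     for i in range(1, k)}
    valid_paths = [p for p in permutations(range(k))
                   if all(frozenset(p[:i]) not in invalid for i in range(1, k))]
    return len(intermediates), len(valid_paths)

n_inter4, n_paths4 = paths_and_intermediates(4)   # 14 intermediates, 24 paths
n_inter3, n_paths3 = paths_and_intermediates(3)   # 6 intermediates, 6 paths

# Declaring one single-site intermediate invalid removes every ordering
# that mutates that site first: 24 - 3! = 18 paths remain.
_, n_pruned = paths_and_intermediates(4, invalid={frozenset({0})})
```

The k=4 and k=3 cases reproduce the abstract's counts of 14 and 6 intermediates and 24 and 6 paths.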

  10. Employing Design and Development Research (DDR): Approaches in the Design and Development of Online Arabic Vocabulary Learning Games Prototype

    ERIC Educational Resources Information Center

    Sahrir, Muhammad Sabri; Alias, Nor Aziah; Ismail, Zawawi; Osman, Nurulhuda

    2012-01-01

    The design and development research, first proposed by Brown and Collins in the 1990s, is currently among the well-known methods in educational research to test theory and validate its practicality. The method is also known as developmental research, design research, design-based research, formative research and design-cased and possesses…

  11. Recent statistical methods for orientation data

    NASA Technical Reports Server (NTRS)

    Batschelet, E.

    1972-01-01

    The application of statistical methods to the study of animal orientation and navigation is discussed. The method employed is limited to the two-dimensional case. Various tests for determining the validity of the statistical analysis are presented. Mathematical models are included to support the theoretical considerations, and tables of data are developed to show the value of the information obtained by statistical analysis.
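A central tool in two-dimensional orientation statistics is the mean resultant vector and the Rayleigh test for directedness; the sketch below is an illustrative example of that standard technique, not Batschelet's specific procedures:

```python
import math

def rayleigh_test(angles_rad):
    """Mean resultant length R and Rayleigh statistic z = n * R^2 for
    circular (orientation) data.

    R ranges from 0 (directions spread uniformly) to 1 (all identical);
    a large z indicates a significant preferred direction.
    """
    n = len(angles_rad)
    c = sum(math.cos(a) for a in angles_rad) / n
    s = sum(math.sin(a) for a in angles_rad) / n
    r = math.hypot(c, s)
    return r, n * r * r

# Tightly clustered headings give R near 1; evenly spread headings give R near 0.
r_clustered, _ = rayleigh_test([0.1, 0.0, -0.1, 0.05])
r_uniform, _ = rayleigh_test([0.0, math.pi / 2, math.pi, 3 * math.pi / 2])
```

Note that ordinary linear statistics fail on angles (the arithmetic mean of 1° and 359° is 180°, not 0°), which is why circular methods are needed at all.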

  12. Perceptions of the Quality of School Life: A Case Study of Schools and Students.

    ERIC Educational Resources Information Center

    Batten, Margaret; Girling-Butcher, Sue

    In order to test the validity of a measure of Australian students' views on the quality of life within their schools, a small-scale study was conducted in seven secondary schools, including both public and private institutions. The 52-item survey instrument was administered to 651 students in grades 9-12. Followup interviews of students were held…

  13. [The requirements of standard and conditions of interchangeability of medical articles].

    PubMed

    Men'shikov, V V; Lukicheva, T I

    2013-11-01

    The article deals with possibility to apply specific approaches under evaluation of interchangeability of medical articles for laboratory analysis. The development of standardized analytical technologies of laboratory medicine and formulation of requirements of standards addressed to manufacturers of medical articles the clinically validated requirements are to be followed. These requirements include sensitivity and specificity of techniques, accuracy and precision of research results, stability of reagents' quality in particular conditions of their transportation and storage. The validity of requirements formulated in standards and addressed to manufacturers of medical articles can be proved using reference system, which includes master forms and standard samples, reference techniques and reference laboratories. This approach is supported by data of evaluation of testing systems for measurement of level of thyrotrophic hormone, thyroid hormones and glycated hemoglobin HB A1c. The versions of testing systems can be considered as interchangeable only in case of results corresponding to the results of reference technique and comparable with them. In case of absence of functioning reference system the possibilities of the Joined committee of traceability in laboratory medicine make it possible for manufacturers of reagent sets to apply the certified reference materials under development of manufacturing of sets for large listing of analytes.

  14. Validation of a numerical method for interface-resolving simulation of multicomponent gas-liquid mass transfer and evaluation of multicomponent diffusion models

    NASA Astrophysics Data System (ADS)

    Woo, Mino; Wörner, Martin; Tischer, Steffen; Deutschmann, Olaf

    2018-03-01

    The multicomponent model and the effective diffusivity model are well-established diffusion models for the numerical simulation of single-phase flows consisting of several components, but have so far seldom been used for two-phase flows. In this paper, a specific numerical model for interfacial mass transfer by means of a continuous single-field concentration formulation is combined with the multicomponent model and effective diffusivity model and is validated for multicomponent mass transfer. For this purpose, several test cases for one-dimensional physical or reactive mass transfer of ternary mixtures are considered. The numerical results are compared with analytical or numerical solutions of the Maxwell-Stefan equations and/or experimental data. The composition-dependent elements of the diffusivity matrix of the multicomponent and effective diffusivity model are found to substantially differ for non-dilute conditions. The species mole fraction or concentration profiles computed with both diffusion models are, however, for all test cases very similar and in good agreement with the analytical/numerical solutions or measurements. For practical computations, the effective diffusivity model is recommended due to its simplicity and lower computational costs.
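A common form of the effective diffusivity model is the Wilke approximation, D_i,eff = (1 − x_i) / Σ_{j≠i} (x_j / D_ij), built from the binary Maxwell-Stefan diffusivities. The sketch below illustrates that formula for a ternary mixture; the mole fractions and diffusivities are illustrative values, not the paper's test cases, and the paper's composition-dependent multicomponent matrix is not reproduced here:

```python
def effective_diffusivity(x, D):
    """Wilke effective diffusivity for each species in a mixture.

    x: mole fractions (length n); D: binary diffusivities keyed by (i, j)
    with i < j, in any consistent units (e.g. m^2/s).
    """
    n = len(x)
    def d_ij(i, j):
        return D[(i, j)] if (i, j) in D else D[(j, i)]
    return [(1.0 - x[i]) / sum(x[j] / d_ij(i, j) for j in range(n) if j != i)
            for i in range(n)]

# Sanity check: for an equimolar ternary mixture with all binary
# diffusivities equal, each effective diffusivity reduces to that value.
x = [1 / 3, 1 / 3, 1 / 3]
D = {(0, 1): 2e-5, (0, 2): 2e-5, (1, 2): 2e-5}
d_eff = effective_diffusivity(x, D)
```

For non-dilute, unequal binary diffusivities the effective values differ per species, which is where the two models in the paper begin to diverge.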

  15. Screening for depression with a brief questionnaire in a primary care setting: validation of the two questions with help question (Malay version).

    PubMed

    Mohd-Sidik, Sherina; Arroll, Bruce; Goodyear-Smith, Felicity; Zain, Azhar M D

    2011-01-01

    To determine the diagnostic accuracy of the two questions with help question (TQWHQ) in the Malay language. The two questions are case-finding questions on depression, and a question on whether help is needed was added to increase the specificity of the two questions. This cross-sectional validation study was conducted in a government-funded primary care clinic in Malaysia. The participants included 146 consecutive women patients receiving no psychotropic drugs and who were Malay speakers. The main outcome measures were sensitivity, specificity, and likelihood ratios of the two questions and help question. The two questions showed a sensitivity of 99% (95% confidence interval 88% to 99.9%) and a specificity of 70% (62% to 78%). The likelihood ratio for a positive test was 3.3 (2.5 to 4.5) and the likelihood ratio for a negative test was 0.01 (0.00 to 0.57). The addition of the help question to the two questions increased the specificity to 95% (89% to 98%). The two questions on depression detected most cases of depression in this study. The questions have the advantage of brevity. The addition of the help question increased the specificity of the two questions. Based on these findings, the TQWHQ can be strongly recommended for detection of depression in government primary care clinics in Malaysia. Translation did not appear to affect the validity of the TQWHQ.
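The likelihood ratios reported above follow directly from the sensitivity and specificity; a minimal sketch of that arithmetic using the reported values:

```python
def likelihood_ratios(sensitivity, specificity):
    """LR+ = sens / (1 - spec): how much a positive result raises the odds
    of disease. LR- = (1 - sens) / spec: how much a negative result lowers
    them. Both inputs are fractions in [0, 1]."""
    lr_pos = sensitivity / (1.0 - specificity)
    lr_neg = (1.0 - sensitivity) / specificity
    return lr_pos, lr_neg

# Reported values for the two questions: sensitivity 99%, specificity 70%
lr_pos, lr_neg = likelihood_ratios(0.99, 0.70)
```

An LR− near 0.01 is why the two questions work well as a rule-out screen: a negative answer makes depression very unlikely.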

  16. Development and validation of a novel questionnaire for self-determination of the range of motion of wrist and elbow.

    PubMed

    Schnetzke, Marc; Schüler, Svenja; Keil, Holger; Aytac, Sara; Studier-Fischer, Stefan; Grützner, Paul-Alfred; Guehring, Thorsten

    2016-07-26

    The aim of this study was to develop and validate a novel self-administered questionnaire for assessing the patient's own range of motion (ROM) of the wrist and the elbow. In a prospective clinical study from January 2015 to June 2015, 101 consecutive patients were evaluated with a novel, self-administered, diagram-based, wrist motion assessment score (W-MAS) and elbow motion assessment score (E-MAS). The questionnaire was statistically evaluated for test-retest reliability, patient-physician agreement, comparison with healthy population, and influence of covariates (age, gender, affected side and involvement in workers' compensation cases). Assessment of patient-physician agreement demonstrated almost perfect agreement (k > 0.80) with regard to six out of eight items. There was substantial agreement with regard to two items: elbow extension (k = 0.76) and pronation (k = 0.75). The assessment of the test-retest reliability revealed at least substantial agreement (k = 0.70). The questionnaire revealed a high discriminative power when comparing the healthy population with the study group (p = 0.007 or lower for every item). Age, gender, affected side and involvement in workers' compensation cases did not in general significantly influence the patient-physician agreement for the questionnaire. The W-MAS and E-MAS are valid and reliable self-administered questionnaires that provide a high level of patient-physician agreement for the assessments of wrist and elbow ROM. Diagnostic study, Level II.
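Agreement coefficients like the k values reported above are conventionally Cohen's kappa, which corrects observed agreement for agreement expected by chance. A minimal sketch with hypothetical patient and physician ratings on one dichotomous item (the data are invented, not from the study):

```python
def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same items (any hashable labels).

    kappa = (po - pe) / (1 - pe), where po is observed agreement and pe is
    the agreement expected by chance from each rater's marginal frequencies.
    """
    n = len(rater_a)
    labels = set(rater_a) | set(rater_b)
    po = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    pe = sum((rater_a.count(l) / n) * (rater_b.count(l) / n) for l in labels)
    return (po - pe) / (1.0 - pe)

# Hypothetical patient vs. physician judgments on one dichotomous ROM item
patient   = [1, 1, 0, 0, 1, 0, 1, 0]
physician = [1, 1, 0, 0, 1, 0, 0, 0]
kappa = cohens_kappa(patient, physician)
```

On the usual interpretation scale, 0.61 to 0.80 is "substantial" and above 0.80 "almost perfect" agreement, matching the thresholds used in the abstract.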

  17. A Vehicle Management End-to-End Testing and Analysis Platform for Validation of Mission and Fault Management Algorithms to Reduce Risk for NASA's Space Launch System

    NASA Technical Reports Server (NTRS)

    Trevino, Luis; Patterson, Jonathan; Teare, David; Johnson, Stephen

    2015-01-01

    The engineering development of the new Space Launch System (SLS) launch vehicle requires cross discipline teams with extensive knowledge of launch vehicle subsystems, information theory, and autonomous algorithms dealing with all operations from pre-launch through on orbit operations. The characteristics of these spacecraft systems must be matched with the autonomous algorithm monitoring and mitigation capabilities for accurate control and response to abnormal conditions throughout all vehicle mission flight phases, including precipitating safing actions and crew aborts. This presents a large and complex system engineering challenge, which is being addressed in part by focusing on the specific subsystems involved in the handling of off-nominal mission and fault tolerance with response management. Using traditional model based system and software engineering design principles from the Unified Modeling Language (UML) and Systems Modeling Language (SysML), the Mission and Fault Management (M&FM) algorithms for the vehicle are crafted and vetted in specialized Integrated Development Teams (IDTs) composed of multiple development disciplines such as Systems Engineering (SE), Flight Software (FSW), Safety and Mission Assurance (S&MA) and the major subsystems and vehicle elements such as Main Propulsion Systems (MPS), boosters, avionics, Guidance, Navigation, and Control (GNC), Thrust Vector Control (TVC), and liquid engines. These model based algorithms and their development lifecycle from inception through Flight Software certification are an important focus of this development effort to further ensure reliable detection and response to off-nominal vehicle states during all phases of vehicle operation from pre-launch through end of flight. NASA formed a dedicated M&FM team for addressing fault management early in the development lifecycle for the SLS initiative. 
As part of the development of the M&FM capabilities, this team has developed a dedicated testbed that integrates specific M&FM algorithms, specialized nominal and off-nominal test cases, and vendor-supplied physics-based launch vehicle subsystem models. Additionally, the team has developed processes for implementing and validating these algorithms for concept validation and risk reduction for the SLS program. The flexibility of the Vehicle Management End-to-end Testbed (VMET) enables thorough testing of the M&FM algorithms by providing configurable suites of both nominal and off-nominal test cases to validate the developed algorithms utilizing actual subsystem models such as MPS. The intent of VMET is to validate the M&FM algorithms and substantiate them with performance baselines for each of the target vehicle subsystems in an independent platform exterior to the flight software development infrastructure and its related testing entities. In any software development process there is inherent risk in the interpretation and implementation of concepts into software through requirements and test cases into flight software compounded with potential human errors throughout the development lifecycle. Risk reduction is addressed by the M&FM analysis group working with other organizations such as S&MA, Structures and Environments, GNC, Orion, the Crew Office, Flight Operations, and Ground Operations by assessing performance of the M&FM algorithms in terms of their ability to reduce Loss of Mission and Loss of Crew probabilities. In addition, through state machine and diagnostic modeling, analysis efforts investigate a broader suite of failure effects and associated detection and responses that can be tested in VMET to ensure that failures can be detected, and confirm that responses do not create additional risks or cause undesired states through interactive dynamic effects with other algorithms and systems. 
VMET further contributes to risk reduction by prototyping and exercising the M&FM algorithms early in their implementation and without any inherent hindrances such as meeting FSW processor scheduling constraints due to their target platform - ARINC 653 partitioned OS, resource limitations, and other factors related to integration with other subsystems not directly involved with M&FM such as telemetry packing and processing. The baseline plan for use of VMET encompasses testing the original M&FM algorithms coded in the same C++ language and state machine architectural concepts as that used by Flight Software. This enables the development of performance standards and test cases to characterize the M&FM algorithms and sets a benchmark from which to measure the effectiveness of M&FM algorithms performance in the FSW development and test processes.

  18. Definition and Demonstration of a Methodology for Validating Aircraft Trajectory Predictors

    NASA Technical Reports Server (NTRS)

    Vivona, Robert A.; Paglione, Mike M.; Cate, Karen T.; Enea, Gabriele

    2010-01-01

    This paper presents a new methodology for validating an aircraft trajectory predictor, inspired by the lessons learned from a number of field trials, flight tests and simulation experiments for the development of trajectory-predictor-based automation. The methodology introduces new techniques and a new multi-staged approach to reduce the effort in identifying and resolving validation failures, avoiding the potentially large costs associated with failures during a single-stage, pass/fail approach. As a case study, the validation effort performed by the Federal Aviation Administration for its En Route Automation Modernization (ERAM) system is analyzed to illustrate the real-world applicability of this methodology. During this validation effort, ERAM initially failed to achieve six of its eight requirements associated with trajectory prediction and conflict probe. The ERAM validation issues have since been addressed, but to illustrate how the methodology could have benefited the FAA effort, additional techniques are presented that could have been used to resolve some of these issues. Using data from the ERAM validation effort, it is demonstrated that these new techniques could have identified trajectory prediction error sources that contributed to several of the unmet ERAM requirements.

  19. The utility of the Total Neuropathy Score as an instrument to assess neuropathy severity in chronic kidney disease: A validation study.

    PubMed

    Issar, Tushar; Arnold, Ria; Kwai, Natalie C G; Pussell, Bruce A; Endre, Zoltan H; Poynten, Ann M; Kiernan, Matthew C; Krishnan, Arun V

    2018-05-01

    To demonstrate construct validity of the Total Neuropathy Score (TNS) in assessing peripheral neuropathy in subjects with chronic kidney disease (CKD). 113 subjects with CKD and 40 matched controls were assessed for peripheral neuropathy using the TNS. An exploratory factor analysis was conducted and internal consistency of the scale was evaluated using Cronbach's alpha. Construct validity of the TNS was tested by comparing scores between case and control groups. Factor analysis revealed valid item correlations and internal consistency of the TNS was good with a Cronbach's alpha of 0.897. Subjects with CKD scored significantly higher on the TNS (CKD: median, 6, interquartile range, 1-13; controls: median, 0, interquartile range, 0-1; p < 0.001). Subgroup analysis revealed construct validity was maintained for subjects with stages 3-5 CKD with and without diabetes. The TNS is a valid measure of peripheral neuropathy in patients with CKD. The TNS is the first neuropathy scale to be formally validated in patients with CKD. Copyright © 2018 International Federation of Clinical Neurophysiology. Published by Elsevier B.V. All rights reserved.
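Internal consistency figures like the Cronbach's alpha of 0.897 above come from α = k/(k−1) · (1 − Σσ²_item / σ²_total). A minimal sketch with hypothetical item scores (the data are invented; real TNS responses would give an alpha below the boundary value shown):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for k items scored on n subjects.

    items: list of k per-item score lists, each of length n.
    """
    k = len(items)
    n = len(items[0])
    def pvar(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1.0 - sum(pvar(it) for it in items) / pvar(totals))

# Boundary sanity check: perfectly correlated items give alpha = 1.0
items = [[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]]
alpha = cronbach_alpha(items)
```

Values around 0.9, as reported for the TNS, are conventionally taken to indicate good internal consistency for a clinical scale.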

  20. Validating workplace performance assessments in health sciences students: a case study from speech pathology.

    PubMed

    McAllister, Sue; Lincoln, Michelle; Ferguson, Allison; McAllister, Lindy

    2013-01-01

    Valid assessment of health science students' ability to perform in the real world of workplace practice is critical for promoting quality learning and ultimately certifying students as fit to enter the world of professional practice. Current practice in performance assessment in the health sciences field has been hampered by multiple issues regarding assessment content and process. Evidence for the validity of scores derived from assessment tools is usually evaluated against traditional validity categories, with reliability evidence privileged over validity, resulting in the paradoxical effect of compromising the assessment validity and learning processes the assessments seek to promote. Furthermore, the dominant statistical approaches used to validate scores from these assessments fall under the umbrella of classical test theory approaches. This paper reports on the successful national development and validation of measures derived from an assessment of Australian speech pathology students' performance in the workplace. Validation of these measures considered each of Messick's interrelated validity evidence categories and included using evidence generated through Rasch analyses to support score interpretation and related action. This research demonstrated that it is possible to develop an assessment of real, complex, work-based performance of speech pathology students that generates valid measures without compromising the learning processes the assessment seeks to promote. The process described provides a model for other health professional education programs to trial.
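The Rasch analyses mentioned above rest on a simple item response model: the probability that a person of ability θ succeeds on an item of difficulty b is a logistic function of θ − b. A minimal sketch of the dichotomous Rasch model (illustrative only; the study's polytomous workplace ratings would use an extended variant such as a rating scale or partial credit model):

```python
import math

def rasch_probability(theta, b):
    """Dichotomous Rasch model: P(success) = exp(theta - b) / (1 + exp(theta - b))."""
    return 1.0 / (1.0 + math.exp(-(theta - b)))

p_matched = rasch_probability(0.0, 0.0)   # ability equals difficulty
p_easy = rasch_probability(2.0, -1.0)     # capable student, easy item
```

The model places persons and items on one interval scale, which is what lets raw workplace ratings be converted into the "measures" the paper validates.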

  1. 10 CFR 26.131 - Cutoff levels for validity screening and initial validity tests.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 10 Energy 1 2010-01-01 2010-01-01 false Cutoff levels for validity screening and initial validity tests. 26.131 Section 26.131 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Licensee Testing Facilities § 26.131 Cutoff levels for validity screening and initial validity tests. (a) Each...

  2. 10 CFR 26.131 - Cutoff levels for validity screening and initial validity tests.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 1 2011-01-01 2011-01-01 false Cutoff levels for validity screening and initial validity tests. 26.131 Section 26.131 Energy NUCLEAR REGULATORY COMMISSION FITNESS FOR DUTY PROGRAMS Licensee Testing Facilities § 26.131 Cutoff levels for validity screening and initial validity tests. (a) Each...

  3. Integrating Symbolic and Statistical Methods for Testing Intelligent Systems Applications to Machine Learning and Computer Vision

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jha, Sumit Kumar; Pullum, Laura L; Ramanathan, Arvind

    Embedded intelligent systems ranging from tiny implantable biomedical devices to large swarms of autonomous unmanned aerial systems are becoming pervasive in our daily lives. While we depend on the flawless functioning of such intelligent systems, and often take their behavioral correctness and safety for granted, it is notoriously difficult to generate test cases that expose subtle errors in the implementations of machine learning algorithms. Hence, the validation of intelligent systems is usually achieved by studying their behavior on representative data sets, using methods such as cross-validation and bootstrapping. In this paper, we present a new testing methodology for studying the correctness of intelligent systems. Our approach uses symbolic decision procedures coupled with statistical hypothesis testing. We also use our algorithm to analyze the robustness of a human detection algorithm built using the OpenCV open-source computer vision library. We show that the human detection implementation can fail to detect humans in perturbed video frames even when the perturbations are so small that the corresponding frames look identical to the naked eye.
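The robustness property being tested above is metamorphic: an imperceptibly perturbed input should receive the same decision as the original. A generic sketch of such a check follows; the threshold "classifier" is a stand-in invented for illustration, not the paper's OpenCV human detector or its symbolic procedure:

```python
def perturbation_robust(classifier, sample, epsilon=1e-3):
    """Metamorphic robustness check: perturb each input coordinate by at most
    epsilon and verify the classifier's decision never changes.

    Returns False as soon as a tiny perturbation flips the decision.
    """
    baseline = classifier(sample)
    for i in range(len(sample)):
        for delta in (-epsilon, epsilon):
            perturbed = list(sample)
            perturbed[i] += delta
            if classifier(perturbed) != baseline:
                return False
    return True

# A brittle threshold classifier passes far from its decision boundary
# but fails when the input sits within epsilon of it.
brittle = lambda xs: sum(xs) > 1.0
far = perturbation_robust(brittle, [0.2, 0.2, 0.2])    # True: robust here
near = perturbation_robust(brittle, [0.5, 0.5001])     # False: boundary case
```

Failures found this way correspond to the paper's perturbed video frames that look identical to the eye yet change the detector's output.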

  4. Validation of the instrument of health literacy competencies for Chinese-speaking health professionals.

    PubMed

    Chang, Li-Chun; Chen, Yu-Chi; Liao, Li-Ling; Wu, Fei Ling; Hsieh, Pei-Lin; Chen, Hsiao-Jung

    2017-01-01

    The study aimed to illustrate the constructs and test the psychometric properties of an instrument of health literacy competencies (IOHLC) for health professionals. A multi-phase questionnaire development method was used to develop the scale. The categorization of the knowledge and practice domains achieved consensus through a modified Delphi process. To reduce the number of items, the 92-item IOHLC was psychometrically evaluated through internal consistency, Rasch modeling, and two-stage factor analysis. In total, 736 practitioners, including nurses, nurse practitioners, health educators, case managers, and dieticians, completed the 92-item IOHLC online from May 2012 to January 2013. The final version of the IOHLC covered 9 knowledge items and 40 skill items across 9 dimensions, with good model fit, explaining 72% of the total variance. All domains had acceptable internal consistency and discriminant validity. The tool in this study is the first to rigorously verify health literacy competencies. Moreover, through psychometric testing, the 49-item IOHLC demonstrates adequate reliability and validity. The IOHLC may serve as a reference for the theoretical and in-service training of Chinese-speaking individuals' health literacy competencies.

  5. Validating a benchmarking tool for audit of early outcomes after operations for head and neck cancer.

    PubMed

    Tighe, D; Sassoon, I; McGurk, M

    2017-04-01

    INTRODUCTION In 2013 all UK surgical specialties, with the exception of head and neck surgery, published outcome data adjusted for case mix for indicator operations. This paper reports a pilot study to validate a previously published risk adjustment score on patients from separate UK cancer centres. METHODS A case note audit was performed of 1,075 patients undergoing 1,218 operations for head and neck squamous cell carcinoma under general anaesthesia in 4 surgical centres. A logistic regression equation predicting for all complications, previously validated internally at sites A-C, was tested on a fourth external validation sample (site D, 172 operations) using receiver operating characteristic curves, Hosmer-Lemeshow goodness of fit analysis and Brier scores. RESULTS Thirty-day complication rates varied widely (34-51%) between the centres. The predictive score allowed imperfect risk adjustment (area under the curve: 0.70), with Hosmer-Lemeshow analysis suggesting good calibration. The Brier score changed from 0.19 for sites A-C to 0.23 when site D was also included, suggesting poor accuracy overall. CONCLUSIONS Marked differences in operative risk and patient case mix captured by the risk adjustment score do not explain all the differences in observed outcomes. Further investigation with different methods is recommended to improve modelling of risk. Morbidity is common, and usually has a major impact on patient recovery, ward occupancy, hospital finances and patient perception of quality of care. We hope comparative audit will highlight good performance and challenge underperformance where it exists.

  6. Validating a benchmarking tool for audit of early outcomes after operations for head and neck cancer

    PubMed Central

    Sassoon, I; McGurk, M

    2017-01-01

    INTRODUCTION In 2013 all UK surgical specialties, with the exception of head and neck surgery, published outcome data adjusted for case mix for indicator operations. This paper reports a pilot study to validate a previously published risk adjustment score on patients from separate UK cancer centres. METHODS A case note audit was performed of 1,075 patients undergoing 1,218 operations for head and neck squamous cell carcinoma under general anaesthesia in 4 surgical centres. A logistic regression equation predicting for all complications, previously validated internally at sites A–C, was tested on a fourth external validation sample (site D, 172 operations) using receiver operating characteristic curves, Hosmer–Lemeshow goodness of fit analysis and Brier scores. RESULTS Thirty-day complication rates varied widely (34–51%) between the centres. The predictive score allowed imperfect risk adjustment (area under the curve: 0.70), with Hosmer–Lemeshow analysis suggesting good calibration. The Brier score changed from 0.19 for sites A–C to 0.23 when site D was also included, suggesting poor accuracy overall. CONCLUSIONS Marked differences in operative risk and patient case mix captured by the risk adjustment score do not explain all the differences in observed outcomes. Further investigation with different methods is recommended to improve modelling of risk. Morbidity is common, and usually has a major impact on patient recovery, ward occupancy, hospital finances and patient perception of quality of care. We hope comparative audit will highlight good performance and challenge underperformance where it exists. PMID:27917662

  7. Assessing the validity and reliability of family factors on physical activity: A case study in Turkey.

    PubMed

    Steenson, Sharalyn; Özcebe, Hilal; Arslan, Umut; Konşuk Ünlü, Hande; Araz, Özgür M; Yardim, Mahmut; Üner, Sarp; Bilir, Nazmi; Huang, Terry T-K

    2018-01-01

    Childhood obesity rates have been rising rapidly in developing countries. A better understanding of the risk factors and social context is necessary to inform public health interventions and policies. This paper describes the validation of several measurement scales for use in Turkey, which relate to child and parent perceptions of physical activity (PA) and enablers and barriers of physical activity in the home environment. The aim of this study was to assess the validity and reliability of several measurement scales in Turkey using a population sample across three socio-economic strata in the Turkish capital, Ankara. Surveys were conducted in Grade 4 children (mean age = 9.7 years for boys; 9.9 years for girls), and their parents, across 6 randomly selected schools, stratified by SES (n = 641 students, 483 parents). Construct validity of the scales was evaluated through exploratory and confirmatory factor analysis. Internal consistency of scales and test-retest reliability were assessed by Cronbach's alpha and intra-class correlation. The scales as a whole were found to have acceptable-to-good model fit statistics (PA Barriers: RMSEA = 0.076, SRMR = 0.0577, AGFI = 0.901; PA Outcome Expectancies: RMSEA = 0.054, SRMR = 0.0545, AGFI = 0.916, and PA Home Environment: RMSEA = 0.038, SRMR = 0.0233, AGFI = 0.976). The PA Barriers subscales showed good internal consistency and poor to fair test-retest reliability (personal α = 0.79, ICC = 0.29, environmental α = 0.73, ICC = 0.59). The PA Outcome Expectancies subscales showed good internal consistency and test-retest reliability (negative α = 0.77, ICC = 0.56; positive α = 0.74, ICC = 0.49). Only the PA Home Environment subscale on support for PA was validated in the final confirmatory model; it showed moderate internal consistency and test-retest reliability (α = 0.61, ICC = 0.48). This study is the first to validate measures of perceptions of physical activity and the physical activity home environment in Turkey. 
Our results support the originally hypothesized two-factor structures for Physical Activity Barriers and Physical Activity Outcome Expectancies. However, we found the one-factor rather than two-factor structure for Physical Activity Home Environment had the best model fit. This study provides general support for the use of these scales in Turkey in terms of validity, but test-retest reliability warrants further research.
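
    Internal consistency in studies like this one is typically summarized with Cronbach's alpha, computed from the per-item score variances and the variance of each respondent's total score. A minimal pure-Python sketch follows; the four-respondent score lists are made-up illustrative data, not the study's survey results.

    ```python
    def sample_variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    def cronbach_alpha(items):
        """items: one list of scores per questionnaire item, aligned across
        respondents (items[i][j] = respondent j's score on item i)."""
        k = len(items)
        totals = [sum(scores) for scores in zip(*items)]  # per-respondent totals
        return k / (k - 1) * (1 - sum(sample_variance(i) for i in items)
                              / sample_variance(totals))

    # Perfectly correlated items yield alpha = 1.0.
    alpha_perfect = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
    ```

    Values of alpha around 0.7 or above, as reported for the subscales in this abstract, are conventionally read as acceptable internal consistency.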

  8. Development and psychometric testing of the 'Motivation and Self-Efficacy in Early Detection of Skin Lesions' index.

    PubMed

    Dyson, Judith; Cowdell, Fiona

    2014-12-01

    To develop and psychometrically test the Motivation and Self-Efficacy in Early Detection of Skin Lesions Index. Skin cancer is the most frequently diagnosed cancer worldwide. The primary strategy used to prevent skin cancer is promotion of sun avoidance and the use of sun protection. However, despite costly and extensive campaigns, cases of skin cancer continue to increase. If found and treated early, skin cancer is curable. Early detection is, therefore, very important. The study was conducted in 2013 using an instrument development design. A literature review and a survey identified barriers (factors that hinder) and levers (factors that help) to skin self-examination. These were categorized according to the Theoretical Domains Framework, and this formed the basis of an instrument, which was tested for validity and reliability using confirmatory factor analysis and Cronbach's alpha, respectively. The resulting five-factor, 20-item instrument tested well for reliability and construct validity. Test-retest reliability was good for all items and domains. The five factors were: (i) Outcome expectancies; (ii) Intention; (iii) Self-efficacy; (iv) Social influences; (v) Memory. The Motivation and Self-Efficacy in Early Detection of Skin Lesions Index provides a reliable and valid method of assessing barriers and levers to skin self-examination. The next step is to design a theory-based intervention that can be tailored according to the individual determinants of behaviour change identified by this instrument. © 2014 John Wiley & Sons Ltd.

  9. A uniformly valid approximation algorithm for nonlinear ordinary singular perturbation problems with boundary layer solutions.

    PubMed

    Cengizci, Süleyman; Atay, Mehmet Tarık; Eryılmaz, Aytekin

    2016-01-01

    This paper is concerned with two-point boundary value problems for singularly perturbed nonlinear ordinary differential equations. The case when the solution has only one boundary layer is examined. An efficient method, the so-called Successive Complementary Expansion Method (SCEM), is used to obtain uniformly valid approximations to such solutions. Four test problems are considered to check the efficiency and accuracy of the proposed method. The numerical results are found to be in good agreement with exact and existing solutions in the literature. The results confirm that SCEM is superior to other existing methods in terms of ease of application and effectiveness.

  10. A Multi-Scale Method for Dynamics Simulation in Continuum Solvent Models I: Finite-Difference Algorithm for Navier-Stokes Equation

    PubMed Central

    Xiao, Li; Cai, Qin; Li, Zhilin; Zhao, Hongkai; Luo, Ray

    2014-01-01

    A multi-scale framework is proposed for more realistic molecular dynamics simulations in continuum solvent models by coupling a molecular mechanics treatment of solute with a fluid mechanics treatment of solvent. This article reports our initial efforts to formulate the physical concepts necessary for coupling the two mechanics and develop a 3D numerical algorithm to simulate the solvent fluid via the Navier-Stokes equation. The numerical algorithm was validated with multiple test cases. The validation shows that the algorithm is effective and stable, with observed accuracy consistent with our design. PMID:25404761

  11. Partial Validation of Multibody Program to Optimize Simulated Trajectories II (POST II) Parachute Simulation With Interacting Forces

    NASA Technical Reports Server (NTRS)

    Raiszadeh, Ben; Queen, Eric M.

    2002-01-01

    A capability to simulate trajectories of multiple interacting rigid bodies has been developed. This capability uses the Program to Optimize Simulated Trajectories II (POST II). Previously, POST II had the ability to simulate multiple bodies without interacting forces. The current implementation is used for the simulation of parachute trajectories, in which the parachute and suspended bodies can be treated as rigid bodies. An arbitrary set of connecting lines can be included in the model and are treated as massless spring-dampers. This paper discusses details of the connection line modeling and results of several test cases used to validate the capability.

  12. A Selection of Test Cases for the Validation of Large-Eddy Simulations of Turbulent Flows (Quelques cas d’essai pour la validation de la simulation des gros tourbillons dans les ecoulements turbulents)

    DTIC Science & Technology

    1998-04-01

    Extract from the data sheets for Chapter 8, Complex Flows. Case CMP00, flow in a square duct: based on the experiments of Yokosawa, Fujita, Hirota and Iwata (1989), in which air was blown through a flow meter and a settling chamber into a square duct, where measurements were taken as the flows approached the more practically useful (higher) Reynolds numbers.

  13. Solving time-dependent two-dimensional eddy current problems

    NASA Technical Reports Server (NTRS)

    Lee, Min Eig; Hariharan, S. I.; Ida, Nathan

    1988-01-01

    Results of transient eddy current calculations are reported. For simplicity, a two-dimensional transverse magnetic field incident on an infinitely long conductor is considered. The conductor is assumed to be a good but not perfect conductor. The resulting problem is an interface initial boundary value problem, with the boundary of the conductor being the interface. A finite difference method is used to march the solution explicitly in time, and the method is described. Treatment of appropriate radiation conditions is given special consideration. Results are validated against approximate analytic solutions. Two stringent test cases, with high- and low-frequency incident waves, are considered to validate the results.

  14. The use of immunochromatographic rapid test for soft tissue remains identification in order to distinguish between human and non-human origin.

    PubMed

    Gascho, Dominic; Morf, Nadja V; Thali, Michael J; Schaerli, Sarah

    2017-05-01

    Clear identification of soft tissue remains as being of non-human origin may be visually difficult in some cases, e.g. due to decomposition; thus, an additional examination is required. An immunochromatographic rapid test (IRT) device can be an easy solution, with the additional advantage that it can be used directly at the site of discovery. The use of these test devices for detecting human blood at crime scenes is a common method; however, the IRT is specific not only for blood but can also differentiate between human and non-human soft tissue remains. In the following, this method is discussed and validated by means of two forensic cases and several samples from various animals. Copyright © 2017 The Chartered Society of Forensic Sciences. Published by Elsevier B.V. All rights reserved.

  15. Open source IPSEC software in manned and unmanned space missions

    NASA Astrophysics Data System (ADS)

    Edwards, Jacob

    Network security is a major topic of research because cyber attackers pose a threat to national security. Securing ground-space communications for NASA missions is important because attackers could endanger mission success and human lives. This thesis describes how an open source IPsec software package was used to create a secure and reliable channel for ground-space communications. A cost-efficient, reproducible hardware testbed was also created to simulate ground-space communications. The testbed enables simulation of low-bandwidth, high-latency communications links to examine how the open source IPsec software reacts to these network constraints. Test cases were built that allowed for validation of the testbed and the open source IPsec software. The test cases also simulate using an IPsec connection from mission control ground routers to points of interest in outer space. The tested open source IPsec software did not meet all the requirements, and software changes were suggested to meet them.

  16. RELAP5-3D Resolution of Known Restart/Backup Issues

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mesina, George L.; Anderson, Nolan A.

    2014-12-01

    The state-of-the-art nuclear reactor system safety analysis computer program developed at the Idaho National Laboratory (INL), RELAP5-3D, continues to adapt to changes in computer hardware and software and to develop to meet the ever-expanding needs of the nuclear industry. To continue at the forefront, code testing must evolve with both code and industry developments, and it must work correctly. To best ensure this, the processes of Software Verification and Validation (V&V) are applied. Verification compares coding against its documented algorithms and equations, and compares its calculations against analytical solutions and the method of manufactured solutions. A form of this, sequential verification, checks code specifications against coding only when originally written, then applies regression testing, which compares code calculations between consecutive updates or versions on a set of test cases to check that the performance does not change. A sequential verification testing system was specially constructed for RELAP5-3D to both detect errors with extreme accuracy and cover all nuclear-plant-relevant code features. Detection is provided through a "verification file" that records double-precision sums of key variables. Coverage is provided by a test suite of input decks that exercise code features and capabilities necessary to model a nuclear power plant. A matrix of test features and short-running cases that exercise them is presented. This testing system is used to test base cases (called null testing) as well as restart and backup cases. It can test RELAP5-3D performance in both standalone and coupled (through PVM to other codes) runs. Application of verification testing revealed numerous restart and backup issues in both standalone and coupled modes. This document reports the resolution of these issues.
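
    A regression check built on a verification file of double-precision sums, as the abstract describes, can be sketched as follows. The file format, variable names, and numbers here are assumptions for illustration only, not RELAP5-3D's actual format.

    ```python
    def parse_verification_file(lines):
        """Each line is assumed to hold a variable name and its
        double-precision sum, separated by whitespace."""
        sums = {}
        for line in lines:
            name, value = line.rsplit(None, 1)
            sums[name] = float(value)
        return sums

    def regression_diff(base_lines, new_lines, rel_tol=0.0):
        """List every variable whose sum differs between two code versions;
        rel_tol=0.0 demands exact agreement, as null testing would."""
        base = parse_verification_file(base_lines)
        new = parse_verification_file(new_lines)
        diffs = []
        for name in sorted(set(base) | set(new)):
            a, b = base.get(name), new.get(name)
            if a is None or b is None or abs(a - b) > rel_tol * max(abs(a), abs(b)):
                diffs.append((name, a, b))
        return diffs

    # Hypothetical verification files from two consecutive code versions.
    baseline = ["pressure_sum 1.00000000e5", "energy_sum 3.25000000e8"]
    candidate = ["pressure_sum 1.00000000e5", "energy_sum 3.25000001e8"]
    ```

    Because the sums accumulate every key variable over the run, even a tiny behavioral change in one time step perturbs some sum and shows up in the diff.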

  17. Case definitions for chronic fatigue syndrome/myalgic encephalomyelitis (CFS/ME): a systematic review

    PubMed Central

    Brurberg, Kjetil Gundro; Fønhus, Marita Sporstøl; Larun, Lillebeth; Flottorp, Signe; Malterud, Kirsti

    2014-01-01

    Objective To identify case definitions for chronic fatigue syndrome/myalgic encephalomyelitis (CFS/ME), and explore how the validity of case definitions can be evaluated in the absence of a reference standard. Design Systematic review. Setting International. Participants A literature search, updated as of November 2013, led to the identification of 20 case definitions and inclusion of 38 validation studies. Primary and secondary outcome measures Validation studies were assessed for risk of bias and categorised according to three validation models: (1) independent application of several case definitions on the same population, (2) sequential application of different case definitions on patients diagnosed with CFS/ME with one set of diagnostic criteria or (3) comparison of prevalence estimates from different case definitions applied on different populations. Results A total of 38 studies contributed data of sufficient quality and consistency for evaluation of validity, with CDC-1994/Fukuda as the most frequently applied case definition. No study rigorously assessed the reproducibility or feasibility of case definitions. Validation studies were small with methodological weaknesses and inconsistent results. No empirical data indicated that any case definition specifically identified patients with a neuroimmunological condition. Conclusions Classification of patients according to severity and symptom patterns, aiming to predict prognosis or effectiveness of therapy, seems useful. Development of further case definitions of CFS/ME should be given a low priority. Consistency in research can be achieved by applying diagnostic criteria that have been subjected to systematic evaluation. PMID:24508851

  18. What tests should you use to assess small intestinal bacterial overgrowth in systemic sclerosis?

    PubMed

    Braun-Moscovici, Yolanda; Braun, Marius; Khanna, Dinesh; Balbir-Gurman, Alexandra; Furst, Daniel E

    2015-01-01

    Small intestinal bacterial overgrowth (SIBO) plays a major role in the pathogenesis of malabsorption in SSc patients and is a source of great morbidity, and even mortality, in those patients. This manuscript reviews which tests are valid and should be used in SSc when evaluating SIBO. We performed systematic literature searches in PubMed, Embase and the Cochrane Library from 1966 up to November 2014 for English-language published articles examining bacterial overgrowth in SSc (e.g. malabsorption tests, breath tests, xylose test, etc.). Articles obtained from these searches were reviewed for additional references. The validity of the tests was evaluated according to the OMERACT principles of truth, discrimination and feasibility. From a total of 65 titles, 22 articles were reviewed and 20 were ultimately extracted to examine the validity of tests for GI morphology, bacterial overgrowth and malabsorption in SSc. Only 1 test (the hydrogen and methane breath test) is fully validated. Four tests are partially validated: jejunal cultures, the xylose test, the lactulose test, and the 72-hour fecal fat test. Thus only 1 of a total of 5 GI tests of bacterial overgrowth is fully validated in SSc. For clinical trials, fully validated tests are preferred, although some investigators use the 4 partially validated tests. Further validation of GI tests in SSc is needed.

  19. Assessing dose rate distributions in VMAT plans

    NASA Astrophysics Data System (ADS)

    Mackeprang, P.-H.; Volken, W.; Terribilini, D.; Frauchiger, D.; Zaugg, K.; Aebersold, D. M.; Fix, M. K.; Manser, P.

    2016-04-01

    Dose rate is an essential factor in radiobiology. As modern radiotherapy delivery techniques such as volumetric modulated arc therapy (VMAT) introduce dynamic modulation of the dose rate, it is important to assess the changes in dose rate. Both the rate of monitor units per minute (MU rate) and collimation are varied over the course of a fraction, leading to different dose rates in every voxel of the calculation volume at any point in time during dose delivery. Given the radiotherapy plan and machine-specific limitations, a VMAT treatment plan can be split into arc sectors between Digital Imaging and Communications in Medicine control points (CPs) of constant and known MU rate. By calculating dose distributions in each of these arc sectors independently and multiplying them by the MU rate, the dose rate in every single voxel at every time point during the fraction can be calculated. Independently calculated and then summed dose distributions per arc sector were compared to the whole-arc dose calculation for validation. Dose measurements and video analysis were performed to validate the calculated datasets. Clinical head and neck, cranial and liver cases were analyzed using the tool developed. Measurement validation of synthetic test cases showed linac agreement to precalculated arc sector times within  ±0.4 s and doses  ±0.1 MU (one standard deviation). Two methods for the visualization of dose rate datasets were developed: the first method plots a two-dimensional (2D) histogram of the number of voxels receiving a given dose rate over the course of the arc treatment delivery. In analogy to the treatment planning system's display of dose, the second method displays the dose rate as a color wash on top of the corresponding computed tomography image, allowing the user to scroll through the variation over time.
Examining clinical cases showed dose rates spread over a continuous spectrum, with mean dose rates hardly exceeding 100 cGy min-1 for conventional fractionation. A tool to analyze dose rate distributions in VMAT plans with sub-second accuracy was successfully developed and validated. Dose rates encountered in clinical VMAT test cases show a continuous spectrum with a mean less than or near 100 cGy min-1 for conventional fractionation.
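
    The dose-rate construction described above, computing each arc sector's dose distribution independently and multiplying it by that sector's MU rate, can be sketched as follows. The voxel counts and per-MU doses are made-up illustrative numbers, not clinical data or the authors' implementation.

    ```python
    def dose_rates_per_sector(sector_dose_per_mu, mu_rates):
        """sector_dose_per_mu[s][v]: dose per MU delivered to voxel v while the
        gantry sweeps arc sector s; mu_rates[s]: MU/min during that sector.
        Returns the dose rate (per minute) in every voxel for every sector."""
        return [[d * rate for d in voxels]
                for voxels, rate in zip(sector_dose_per_mu, mu_rates)]

    def peak_dose_rate(sector_dose_per_mu, mu_rates):
        """Highest instantaneous dose rate each voxel sees over the whole arc."""
        rates = dose_rates_per_sector(sector_dose_per_mu, mu_rates)
        return [max(sector[v] for sector in rates) for v in range(len(rates[0]))]

    # Two arc sectors, two voxels (hypothetical numbers).
    per_mu = [[0.25, 0.5],
              [0.75, 0.125]]
    mu_per_min = [100.0, 200.0]
    ```

    Collecting the per-sector rates voxel by voxel yields the continuous dose-rate spectrum the study visualizes, rather than a single machine-wide number.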

  20. Semi-automatic forensic approach using mandibular midline lingual structures as fingerprint: a pilot study.

    PubMed

    Shaheen, E; Mowafy, B; Politis, C; Jacobs, R

    2017-12-01

    Previous research proposed the use of the mandibular midline neurovascular canal structures as a forensic fingerprint. In that observer study, an average correct identification rate of 95% was reached, which triggered this study. The aim was to present a semi-automatic computer recognition approach to replace the observers and to validate the accuracy of this newly proposed method. Imaging data from Computed Tomography (CT) and Cone Beam Computed Tomography (CBCT) of mandibles scanned at two different moments were collected to simulate an AM and PM situation, where the first scan represented AM and the second scan was used to simulate PM. Ten cases with 20 scans were used to build a classifier that relies on voxel-based matching and classifies each comparison into one of two groups: "Unmatched" and "Matched". This protocol was then tested using five other scans from the database. Unpaired t-testing was applied and the accuracy of the computerized approach was determined. A significant difference was found between the "Unmatched" and "Matched" classes, with means of 0.41 and 0.86 respectively. Furthermore, the testing phase showed an accuracy of 100%. The validation of this method pushes the protocol further towards a fully automatic identification procedure for victim identification based on the mandibular midline canal structures alone, in cases with available AM and PM CBCT/CT data.

  1. Construct Validity of the Nepalese School Leaving English Reading Test

    ERIC Educational Resources Information Center

    Dawadi, Saraswati; Shrestha, Prithvi N.

    2018-01-01

    There has been a steady interest in investigating the validity of language tests in the last decades. Despite numerous studies on construct validity in language testing, there are not many studies examining the construct validity of a reading test. This paper reports on a study that explored the construct validity of the English reading test in…

  2. Mendel,MD: A user-friendly open-source web tool for analyzing WES and WGS in the diagnosis of patients with Mendelian disorders

    PubMed Central

    D. Linhares, Natália; Pena, Sérgio D. J.

    2017-01-01

    Whole exome and whole genome sequencing have both become widely adopted methods for investigating and diagnosing human Mendelian disorders. As pangenomic agnostic tests, they are capable of more accurate and agile diagnosis compared to traditional sequencing methods. This article describes new software called Mendel,MD, which combines multiple types of filter options and makes use of regularly updated databases to facilitate exome and genome annotation, the filtering process and the selection of candidate genes and variants for experimental validation and possible diagnosis. This tool offers a user-friendly interface, and leads clinicians through simple steps by limiting the number of candidates to achieve a final diagnosis of a medical genetics case. A useful innovation is the “1-click” method, which enables listing all the relevant variants in genes present at OMIM for perusal by clinicians. Mendel,MD was experimentally validated using clinical cases from the literature and was tested by students at the Universidade Federal de Minas Gerais, at GENE–Núcleo de Genética Médica in Brazil and at the Children’s University Hospital in Dublin, Ireland. We show in this article how it can simplify and increase the speed of identifying the culprit mutation in each of the clinical cases that were received for further investigation. Mendel,MD proved to be a reliable web-based tool, being open-source and time efficient for identifying the culprit mutation in different clinical cases of patients with Mendelian Disorders. It is also freely accessible for academic users on the following URL: https://mendelmd.org. PMID:28594829

  3. Prospective, multicenter clinical trial to validate new products for skin tests in the diagnosis of allergy to penicillin.

    PubMed

    Fernández, J; Torres, M J; Campos, J; Arribas-Poves, F; Blanca, M

    2013-01-01

    Allergy to penicillin is the most commonly reported type of drug hypersensitivity. Diagnosis is currently confirmed using skin tests with benzylpenicillin reagents, i.e., penicilloyl-polylysine (PPL) as the major determinant of benzylpenicillin, and benzylpenicillin, benzylpenicilloate and benzylpenilloate as a minor determinant mixture (MDM). To synthesize and assess the diagnostic capacity of 2 new benzylpenicillin reagents in patients with immediate hypersensitivity reactions to β-lactams: benzylpenicilloyl octa-L-lysine (BP-OL) as the major determinant and benzylpenilloate (penilloate) as the minor determinant. Prospective multicenter clinical trial performed in 18 Spanish centers. Efficacy was assessed by detection of positive skin test results in an allergic population and negative skin test results in a nonallergic, drug-exposed population. Sensitivity, specificity, and negative and positive predictive values were determined. The study sample comprised 94 allergic patients: 31 (35.23%) presented anaphylaxis, 4 (4.55%) anaphylactic shock, 51 (58.04%) urticaria, and 2 (2.27%) no specific condition. The culprit β-lactams were amoxicillin in 63 cases (71.60%), benzylpenicillin in 14 cases (15.89%), cephalosporins in 2 cases (2.27%), other drugs in 3 cases (3.42%), and unidentified agents in 6 cases (6.82%). The results of testing with BP-OL were positive in 46 cases (52.3%); the results of testing with penilloate were positive in 33 cases (37.5%). When both reagents were taken into consideration, sensitivity reached 61.36% and specificity 100%. Skin testing with penilloate was significantly more often negative when the interval between the reaction and the study was longer. The combined sensitivity of BP-OL and penilloate was 61%. Considering that amoxicillin was the culprit drug in 71% of reactions, these results indicate that most patients were allergic to the whole group of penicillins. 
    These data support the use of benzylpenicillin determinants in the diagnosis of allergy to β-lactams, even in predominantly amoxicillin-allergic populations.

  4. Simulation validation and management

    NASA Astrophysics Data System (ADS)

    Illgen, John D.

    1995-06-01

    Illgen Simulation Technologies, Inc., has been working on interactive verification and validation programs for the past six years. As a result, the company has evolved a methodology that has been adopted and successfully implemented by a number of different verification and validation programs. This methodology employs a unique set of computer-assisted software engineering (CASE) tools to reverse engineer source code and produce analytical outputs (flow charts and tables) that aid the engineer/analyst in the verification and validation process. We have found that the use of CASE tools saves time, which equates to improvements in both schedule and cost. This paper describes the ISTI-developed methodology and how CASE tools are used in its support. Case studies are also discussed.

  5. Automated Generation and Assessment of Autonomous Systems Test Cases

    NASA Technical Reports Server (NTRS)

    Barltrop, Kevin J.; Friberg, Kenneth H.; Horvath, Gregory A.

    2008-01-01

    This slide presentation reviews issues concerning verification and validation testing of autonomous spacecraft, which routinely culminates in the exploration of anomalous or faulted mission-like scenarios, using the work involved in the Dawn mission's tests as examples. Prioritizing which scenarios to develop usually comes down to focusing on the most vulnerable areas and ensuring the best return on investment of test time. Rules-of-thumb strategies often come into play, such as injecting applicable anomalies prior to, during, and after system state changes, or creating cases that ensure good safety-net algorithm coverage. Although experience and judgment in test selection can lead to high levels of confidence about the majority of a system's autonomy, it is likely that important test cases are overlooked. One method to fill in potential test coverage gaps is to automatically generate and execute test cases using algorithms that ensure desirable properties about the coverage, for example, generating cases for all possible fault monitors and across all state change boundaries. Of course, the scope of coverage is determined by the test environment capabilities: a faster-than-real-time, high-fidelity, software-only simulation would allow the broadest coverage, but even real-time systems that can be replicated and run in parallel, and that have reliable set-up and operations features, provide an excellent resource for automated testing. Making detailed predictions for the outcome of such tests can be difficult, and when algorithmic means are employed to produce hundreds or even thousands of cases, generating predictions individually is impractical, while generating predictions with tools requires executable models of the design and environment that themselves require a complete test program. Therefore, evaluating the results of a large number of mission scenario tests poses special challenges.
A good approach to address this problem is to automatically score the results based on a range of metrics. Although the specific means of scoring depends highly on the application, the use of formal scoring metrics has high value in identifying and prioritizing anomalies, and in presenting an overall picture of the state of the test program. In this paper we present a case study based on automatic generation and assessment of faulted test runs for the Dawn mission, and discuss its role in optimizing the allocation of resources for completing the test program.
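    The scoring-and-ranking approach described above can be sketched in code. Everything below is an illustrative assumption: the metric names, weights, and run records are invented, not drawn from the Dawn test program.

```python
# Hypothetical sketch: score a batch of autonomy test runs against
# weighted pass/fail metrics, then rank runs so the lowest-scoring
# (most anomalous) runs surface first for triage.
# Metric names, weights, and run records are illustrative assumptions.

def score_run(run, metrics):
    """Return a weighted score in [0, 1] for one test run."""
    total = sum(w for _, w, _ in metrics)
    earned = sum(w for name, w, check in metrics if check(run))
    return earned / total

def rank_runs(runs, metrics):
    """Lowest-scoring runs first: these are the anomalies to triage."""
    return sorted(runs, key=lambda r: score_run(r, metrics))

# Each metric: (name, weight, predicate over the run record).
metrics = [
    ("entered_safe_mode", 3.0, lambda r: r["safe_mode"]),
    ("fault_detected",    2.0, lambda r: r["fault_flagged"]),
    ("no_cmd_rejects",    1.0, lambda r: r["cmd_rejects"] == 0),
]

runs = [
    {"id": 1, "safe_mode": True,  "fault_flagged": True,  "cmd_rejects": 0},
    {"id": 2, "safe_mode": False, "fault_flagged": True,  "cmd_rejects": 4},
]

ranked = rank_runs(runs, metrics)
```

    In practice each predicate would parse telemetry or simulation logs; the value of the scheme is that hundreds of runs reduce to a prioritized worklist without hand-written per-run predictions.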

  6. Assessing Requirements Quality through Requirements Coverage

    NASA Technical Reports Server (NTRS)

    Rajan, Ajitha; Heimdahl, Mats; Woodham, Kurt

    2008-01-01

    In model-based development, the development effort is centered around a formal description of the proposed software system: the model. This model is derived from high-level requirements describing the expected behavior of the software. For validation and verification purposes, this model can then be subjected to various types of analysis, for example, completeness and consistency analysis [6], model checking [3], theorem proving [1], and test-case generation [4, 7]. This development paradigm is making rapid inroads in certain industries, e.g., automotive, avionics, space applications, and medical technology. This shift towards model-based development naturally leads to changes in the verification and validation (V&V) process. The model validation problem, determining that the model accurately captures the customer's high-level requirements, has received little attention, and the sufficiency of the validation activities has been largely determined through ad-hoc methods. Since the model serves as the central artifact, its correctness with respect to the user's needs is absolutely crucial. In our investigation, we attempt to answer the following two questions with respect to validation: (1) Are the requirements sufficiently defined for the system? and (2) How well does the model implement the behaviors specified by the requirements? The second question can be addressed using formal verification. Nevertheless, the size and complexity of many industrial systems make formal verification infeasible even if we have a formal model and formalized requirements. Thus, presently, there is no objective way of answering these two questions. To this end, we propose an approach based on testing that, when given a set of formal requirements, explores the relationship between requirements-based structural test-adequacy coverage and model-based structural test-adequacy coverage. 
The proposed technique uses requirements coverage metrics defined in [9] on formal high-level software requirements, together with existing model coverage metrics such as Modified Condition and Decision Coverage (MC/DC), which is used when testing highly critical software in the avionics industry [8]. Our work is related to Chockler et al. [2], but we base our work on traditional testing techniques as opposed to verification techniques.
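    As a concrete illustration of the MC/DC criterion mentioned above: MC/DC requires that each condition in a decision be shown to independently affect the decision's outcome. The brute-force sketch below finds, for each condition, a pair of test vectors that differ only in that condition and flip the outcome. The example decision is hypothetical, not taken from the cited work.

```python
from itertools import product

# Sketch: find MC/DC "independence pairs" for a boolean decision by
# brute force over its truth table. Feasible only for small numbers
# of conditions; the decision below is an illustrative example.

def mcdc_pairs(decision, n_conditions):
    """For each condition index, return a pair of input vectors that
    differ only in that condition and flip the decision outcome."""
    pairs = {}
    vectors = list(product([False, True], repeat=n_conditions))
    for i in range(n_conditions):
        for v in vectors:
            w = list(v)
            w[i] = not w[i]          # toggle only condition i
            w = tuple(w)
            if decision(*v) != decision(*w):
                pairs[i] = (v, w)    # condition i shown independent
                break
    return pairs

# Example decision with three conditions: (a and b) or c
pairs = mcdc_pairs(lambda a, b, c: (a and b) or c, 3)
```

    For n conditions, MC/DC can be satisfied with as few as n + 1 test vectors, far fewer than the 2**n required for exhaustive truth-table coverage, which is why it is the metric of choice for highly critical avionics software.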

  7. Validation of Student and Parent Reported Data on the Basic Grant Application Form, 1978-79 Comprehensive Validation Guide. Procedural Manual for: Validation of Cases Referred by Institutions; Validation of Cases Referred by the Office of Education; Recovery of Overpayments.

    ERIC Educational Resources Information Center

    Smith, Karen; And Others

    Procedures for validating data reported by students and parents on an application for Basic Educational Opportunity Grants were developed in 1978 for the U.S. Office of Education (OE). Validation activities include: validation of flagged Student Eligibility Reports (SERs) for students whose schools are part of the Alternate Disbursement System;…

  8. Traceability validation of a high speed short-pulse testing method used in LED production

    NASA Astrophysics Data System (ADS)

    Revtova, Elena; Vuelban, Edgar Moreno; Zhao, Dongsheng; Brenkman, Jacques; Ulden, Henk

    2017-12-01

    Industrial processes of LED (light-emitting diode) production include testing of LED light output performance. Most such processes are monitored and controlled by measuring LEDs optically, electrically and thermally with high-speed short-pulse measurement methods. However, these methods are not standardized, and so much of the information is proprietary that it is impossible for third parties, such as NMIs, to trace and validate them. These techniques are known to have traceability issues and metrological inadequacies. Often, because of these, the claimed performance specifications of LEDs are overstated, which consequently results in manufacturers experiencing customer dissatisfaction and a large percentage of failures in daily use of LEDs. In this research a traceable setup is developed to validate one of the high-speed testing techniques, investigate inadequacies and work out the traceability issues. A well-characterised short square pulse of 25 ms is applied to chip-on-board (CoB) LED modules to investigate the light output and colour content. We conclude that the short-pulse method is very efficient provided that a well-defined electrical current pulse is applied and the stabilization time of the device is accurately determined a priori. No colour shift is observed. The largest contributors to the measurement uncertainty are a poorly defined current pulse and an inaccurate calibration factor.

  9. Problems and challenges in the development and validation of human cell-based assays to determine nanoparticle-induced immunomodulatory effects

    PubMed Central

    2011-01-01

    Background With the increasing use of nanomaterials, the need for methods and assays to examine their immunosafety is becoming urgent, in particular for nanomaterials that are deliberately administered to human subjects (as in the case of nanomedicines). To obtain reliable results, standardised in vitro immunotoxicological tests should be used to determine the effects of engineered nanoparticles on human immune responses. However, before assays can be standardised, it is important that suitable methods are established and validated. Results In a collaborative work between European laboratories, existing immunological and toxicological in vitro assays were tested and compared for their suitability to test effects of nanoparticles on immune responses. The prototypical nanoparticles used were metal (oxide) particles, either custom-generated by wet synthesis or commercially available as powders. Several problems and challenges were encountered during assay validation, ranging from particle agglomeration in biological media and optical interference with assay systems, to chemical immunotoxicity of solvents and contamination with endotoxin. Conclusion The problems that were encountered in the immunological assay systems used in this study, such as chemical or endotoxin contamination and optical interference caused by the dense material, significantly affected the data obtained. These problems have to be solved to enable the development of reliable assays for the assessment of nano-immunosafety. PMID:21306632

  10. Drug and herb induced liver injury: Council for International Organizations of Medical Sciences scale for causality assessment

    PubMed Central

    Teschke, Rolf; Wolff, Albrecht; Frenzel, Christian; Schwarzenboeck, Alexander; Schulze, Johannes; Eickhoff, Axel

    2014-01-01

    Causality assessment of suspected drug induced liver injury (DILI) and herb induced liver injury (HILI) is hampered by the lack of a standardized approach for use by attending physicians and at the various subsequent evaluating levels. The aim of this review was to analyze the suitability of the liver-specific Council for International Organizations of Medical Sciences (CIOMS) scale as a standard tool for causality assessment in DILI and HILI cases. The PubMed database was searched for the following terms: drug induced liver injury; herb induced liver injury; DILI causality assessment; and HILI causality assessment. The strength of the CIOMS scale lies in its potential as a standardized scale for DILI and HILI causality assessment. Other advantages include its liver specificity and its validation for hepatotoxicity with excellent sensitivity, specificity and predictive validity, based on cases with a positive reexposure test. This scale allows prospective collection of all relevant data required for a valid causality assessment. It does not require expert knowledge in hepatotoxicity, and its results may subsequently be refined. Weaknesses of the CIOMS scale include the limited exclusion of alternative causes and the qualitatively graded risk factors. In conclusion, CIOMS appears suitable as a standard scale for attending physicians, regulatory agencies, expert panels and other scientists to provide a standardized, reproducible causality assessment in suspected DILI and HILI cases, applicable at all assessment levels involved. PMID:24653791

  11. Simplified Thermo-Chemical Modelling For Hypersonic Flow

    NASA Astrophysics Data System (ADS)

    Sancho, Jorge; Alvarez, Paula; Gonzalez, Ezequiel; Rodriguez, Manuel

    2011-05-01

    Hypersonic flows involve high temperatures, generally associated with the strong shock waves that appear in such flows. At high temperatures the vibrational degrees of freedom of the molecules may become excited, the molecules may dissociate into atoms, the molecules or free atoms may ionize, and molecular or ionic species that are unimportant at lower temperatures may be formed. To take these effects into account, a chemical model is needed; this model should be simple enough to be handled by a CFD code, yet sufficiently precise to capture the most important physics. This work concerns the validation of a chemical non-equilibrium model, implemented in a commercial CFD code, for obtaining the flow field around bodies in hypersonic flow. The selected non-equilibrium model is composed of seven species and six direct reactions together with their inverses. The commercial CFD code in which the non-equilibrium model has been implemented is FLUENT. For the validation, the X38/Sphynx Mach 20 case is rebuilt on a reduced geometry, including the 1/3 Lref forebody. This case has been run in the laminar regime, with a non-catalytic wall and radiative-equilibrium wall temperature. The validated non-equilibrium model is applied to the EXPERT (European Experimental Re-entry Test-bed) vehicle at a specified trajectory point (Mach number 14). This case has also been run in the laminar regime, with a non-catalytic wall and radiative-equilibrium wall temperature.

  12. Physical and composition characteristics of clinical secretions compared with test soils used for validation of flexible endoscope cleaning.

    PubMed

    Alfa, M J; Olson, N

    2016-05-01

    To determine which simulated-use test soils met the worst-case organic levels and viscosity of clinical secretions, and had the best adhesive characteristics. Levels of protein, carbohydrate and haemoglobin, and the vibrational viscosity of clinical endoscope secretions were compared with test soils including ATS, ATS2015, Edinburgh, Edinburgh-M (modified), Miles, 10% serum and coagulated whole blood. ASTM D3359 was used for adhesion testing. Cleaning of a single-channel flexible intubation endoscope was tested after simulated use. The worst-case levels of protein, carbohydrate and haemoglobin, and the viscosity of clinical material were 219,828 μg/mL, 9296 μg/mL, 9562 μg/mL and 6 cP, respectively. Whole blood, ATS2015 and Edinburgh-M were pipettable, with viscosities of 3.4 cP, 9.0 cP and 11.9 cP, respectively. ATS2015 and Edinburgh-M best matched the worst-case clinical parameters, but ATS had the best adhesion, with 7% removal (36.7% for Edinburgh-M). Edinburgh-M and ATS2015 showed similar soiling and removal characteristics on the surface and lumen of a flexible intubation endoscope. Of the test soils evaluated, ATS2015 and Edinburgh-M were found to be good choices for the simulated use of endoscopes, as their composition and viscosity most closely matched worst-case clinical material. Copyright © 2016 The Authors. Published by Elsevier Ltd. All rights reserved.

  13. Validation of Land Cover Maps Utilizing Astronaut Acquired Imagery

    NASA Technical Reports Server (NTRS)

    Estes, John E.; Gebelein, Jennifer

    1999-01-01

    This report is produced in accordance with the requirements outlined in NASA Research Grant NAG9-1032, "Validation of Land Cover Maps Utilizing Astronaut Acquired Imagery". This grant funds the Remote Sensing Research Unit of the University of California, Santa Barbara. This document summarizes the research progress and accomplishments to date and describes current on-going research activities. Even though this grant has technically expired in a contractual sense, work continues on this project; therefore, this summary includes all work done through 5 May 1999. The principal goal of this effort is to test the accuracy of a sub-regional portion of an AVHRR-based land cover product. Land cover mapped to three different classification systems in the southwestern United States has been subjected to two specific accuracy assessments: one utilizing astronaut-acquired photography, and a second employing Landsat Thematic Mapper imagery, augmented in some cases by high-altitude aerial photography. Validation of these three land cover products has proceeded using a stratified sampling methodology. We believe this research will provide an important initial test of the potential use of imagery acquired from the Shuttle and, ultimately, the International Space Station (ISS) for the operational validation of the Moderate Resolution Imaging Spectrometer (MODIS) land cover products.

  14. Assessment of Semi-Structured Clinical Interview for Mobile Phone Addiction Disorder.

    PubMed

    Alavi, Seyyed Salman; Mohammadi, Mohammad Reza; Jannatifard, Fereshteh; Mohammadi Kalhori, Soroush; Sepahbodi, Ghazal; BabaReisi, Mohammad; Sajedi, Sahar; Farshchi, Mojtaba; KhodaKarami, Rasul; Hatami Kasvaee, Vahid

    2016-04-01

    The Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition, Text Revision (DSM-IV-TR) classified mobile phone addiction disorder under "impulse control disorder not elsewhere classified". This study examined the diagnostic criteria of DSM-IV-TR for the diagnosis of mobile phone addiction in correspondence with Iranian society and culture. Two hundred fifty students of Tehran universities were entered into this descriptive-analytical, cross-sectional study; a quota sampling method was used. First, a semi-structured clinical interview (based on DSM-IV-TR) was performed for all cases, and another specialist re-evaluated the interviews. Data were analyzed using content validity, inter-scorer reliability (kappa coefficient) and test-retest reliability via SPSS 18 software. The content validity of the semi-structured clinical interview matched the DSM-IV-TR criteria for behavioral addiction. Moreover, its content was appropriate, and two items, "SMS pathological use" and "high monthly cost of using the mobile phone", were added to improve its validity. Inter-scorer reliability (kappa) and test-retest reliability were 0.55 and r = 0.4 (p < 0.01), respectively. The results of this study revealed that the semi-structured diagnostic criteria of DSM-IV-TR are valid and reliable for diagnosing mobile phone addiction, and this instrument is an effective tool to diagnose this disorder.
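    The inter-scorer reliability figure above is a Cohen's kappa, which corrects raw agreement between two raters for the agreement expected by chance. A minimal sketch of the computation follows; the paired ratings are fabricated for illustration, and only the formula mirrors the study's analysis.

```python
from collections import Counter

# Sketch of Cohen's kappa for two raters' paired diagnoses
# (e.g. addicted / not addicted). Ratings below are fabricated.

def cohens_kappa(rater_a, rater_b):
    """kappa = (observed agreement - chance agreement) / (1 - chance)."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    ca, cb = Counter(rater_a), Counter(rater_b)
    # Chance agreement from each rater's marginal category frequencies.
    expected = sum(ca[k] * cb[k] for k in set(ca) | set(cb)) / (n * n)
    return (observed - expected) / (1 - expected)

a = ["yes", "yes", "no", "no", "no", "yes", "no", "no"]
b = ["yes", "no",  "no", "no", "yes", "yes", "no", "no"]
kappa = cohens_kappa(a, b)
```

    A kappa of 0.55, as reported above, is conventionally read as moderate agreement beyond chance.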

  15. 14 CFR 91.1041 - Aircraft proving and validation tests.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 14 Aeronautics and Space 2 2014-01-01 2014-01-01 false Aircraft proving and validation tests. 91... Ownership Operations Program Management § 91.1041 Aircraft proving and validation tests. (a) No program... tests. However, pilot flight training may be conducted during the proving tests. (d) Validation testing...

  16. 14 CFR 91.1041 - Aircraft proving and validation tests.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 14 Aeronautics and Space 2 2012-01-01 2012-01-01 false Aircraft proving and validation tests. 91... Ownership Operations Program Management § 91.1041 Aircraft proving and validation tests. (a) No program... tests. However, pilot flight training may be conducted during the proving tests. (d) Validation testing...

  17. 14 CFR 91.1041 - Aircraft proving and validation tests.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 14 Aeronautics and Space 2 2013-01-01 2013-01-01 false Aircraft proving and validation tests. 91... Ownership Operations Program Management § 91.1041 Aircraft proving and validation tests. (a) No program... tests. However, pilot flight training may be conducted during the proving tests. (d) Validation testing...

  18. 14 CFR 91.1041 - Aircraft proving and validation tests.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 14 Aeronautics and Space 2 2011-01-01 2011-01-01 false Aircraft proving and validation tests. 91... Ownership Operations Program Management § 91.1041 Aircraft proving and validation tests. (a) No program... tests. However, pilot flight training may be conducted during the proving tests. (d) Validation testing...

  19. 14 CFR 91.1041 - Aircraft proving and validation tests.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... 14 Aeronautics and Space 2 2010-01-01 2010-01-01 false Aircraft proving and validation tests. 91... Ownership Operations Program Management § 91.1041 Aircraft proving and validation tests. (a) No program... tests. However, pilot flight training may be conducted during the proving tests. (d) Validation testing...

  20. Validation in the clinical process: four settings for objectification of the subjectivity of understanding.

    PubMed

    Beland, H

    1994-12-01

    Clinical material is presented for discussion with the aim of exemplifying the author's conceptions of validation in a number of sessions and in psychoanalytic research and of making them verifiable, susceptible to consensus and/or falsifiable. Since Freud's postscript to the Dora case, the first clinical validation in the history of psychoanalysis, validation has been group-related and society-related, that is to say, it combines the evidence of subjectivity with the consensus of the research community (the scientific community). Validation verifies the conformity of the unconscious transference meaning with the analyst's understanding. The deciding criterion is the patient's reaction to the interpretation. In terms of the theory of science, validation in the clinical process corresponds to experimental testing of truth in the sphere of inanimate nature. Four settings of validation can be distinguished: the analyst's self-supervision during the process of understanding, which goes from incomprehension to comprehension (container-contained, PS-->D, selected fact); the patient's reaction to the interpretation (insight) and the analyst's assessment of the reaction; supervision and second thoughts; and discussion in groups and publications leading to consensus. It is a peculiarity of psychoanalytic research that in the event of positive validation the three criteria of truth (evidence, consensus and utility) coincide.

  1. Application of CFE/POST2 for Simulation of Launch Vehicle Stage Separation

    NASA Technical Reports Server (NTRS)

    Pamadi, Bandu N.; Tartabini, Paul V.; Toniolo, Matthew D.; Roithmayr, Carlos M.; Karlgaard, Christopher D.; Samareh, Jamshid A.

    2009-01-01

    The constraint force equation (CFE) methodology provides a framework for modeling constraint forces and moments acting at joints that connect multiple vehicles. With implementation in Program to Optimize Simulated Trajectories II (POST 2), the CFE provides a capability to simulate end-to-end trajectories of launch vehicles, including stage separation. In this paper, the CFE/POST2 methodology is applied to the Shuttle-SRB separation problem as a test and validation case. The CFE/POST2 results are compared with STS-1 flight test data.

  2. Human-robot interaction tests on a novel robot for gait assistance.

    PubMed

    Tagliamonte, Nevio Luigi; Sergi, Fabrizio; Carpino, Giorgio; Accoto, Dino; Guglielmelli, Eugenio

    2013-06-01

    This paper presents tests on a treadmill-based non-anthropomorphic wearable robot that assists hip and knee flexion/extension movements using compliant actuation. Validation experiments were performed on the actuators and on the robot, with specific focus on the evaluation of intrinsic backdrivability and of assistance capability. Tests on a young healthy subject were conducted. With the robot completely unpowered, maximum backdriving torques were found to be on the order of 10 Nm, owing to the robot's design features (reduced swinging masses; low intrinsic mechanical impedance and high-efficiency reduction gears for the actuators). Assistance tests demonstrated that the robot can deliver torques attracting the subject towards a predicted kinematic status.

  3. Trends in testing behaviours for hepatitis C virus infection and associated determinants: results from population-based laboratory surveillance in Alberta, Canada (1998-2001).

    PubMed

    Jayaraman, G C; Lee, B; Singh, A E; Preiksaitis, J K

    2007-04-01

    Little is currently known about hepatitis C virus (HCV) test-seeking behaviours at the population level. Given the centralized nature of testing for HCV infection in the province of Alberta, Canada, we had an opportunity to examine HCV testing behaviour at the population level for all newly diagnosed HCV-positive cases, using laboratory data to validate the time and number of prior tests for each case. Record linkage identified 3323, 2937, 2660 and 2703 newly diagnosed cases of HCV infection in Alberta during 1998, 1999, 2000 and 2001, respectively, corresponding to age-adjusted rates of 149.8, 129, 114.3 and 113.7 per 100,000 population during these years. Results from secondary analyses of laboratory data suggest that the majority of HCV cases (95.3%) newly diagnosed between 1998 and 2001 were first-time testers for HCV infection. Among repeat testers, a negative test result within 1 year prior to the first positive test report suggests that 211 (38.4%) may be seroconverters. These findings suggest that 339, or 61.7%, of repeat testers may not have discovered their serostatus within 1 year of infection. Among this group, HCV testing was sought infrequently, with a median interval of 2.3 years between the last negative and first positive test. This finding is of concern given the risks of HCV transmission, particularly if risk-taking behaviours are not reduced because of unknown serostatus. These findings also reinforce the need to make the most of each test-seeking event with proper counselling and other appropriate support services.

  4. Development and validation of a registry-based definition of eosinophilic esophagitis in Denmark

    PubMed Central

    Dellon, Evan S; Erichsen, Rune; Pedersen, Lars; Shaheen, Nicholas J; Baron, John A; Sørensen, Henrik T; Vyberg, Mogens

    2013-01-01

    AIM: To develop and validate a case definition of eosinophilic esophagitis (EoE) in the linked Danish health registries. METHODS: For case definition development, we queried the Danish medical registries from 2006-2007 to identify candidate cases of EoE in Northern Denmark. All International Classification of Diseases-10 (ICD-10) and prescription codes were obtained, and archived pathology slides were obtained and re-reviewed to determine case status. We used an iterative process to select inclusion/exclusion codes, refine the case definition, and optimize sensitivity and specificity. We then re-queried the registries from 2008-2009 to yield a validation set. The case definition algorithm was applied, and sensitivity and specificity were calculated. RESULTS: Of the 51 and 49 candidate cases identified in both the development and validation sets, 21 and 24 had EoE, respectively. Characteristics of EoE cases in the development set [mean age 35 years; 76% male; 86% dysphagia; 103 eosinophils per high-power field (eos/hpf)] were similar to those in the validation set (mean age 42 years; 83% male; 67% dysphagia; 77 eos/hpf). Re-review of archived slides confirmed that the pathology coding for esophageal eosinophilia was correct in greater than 90% of cases. Two registry-based case algorithms based on pathology, ICD-10, and pharmacy codes were successfully generated in the development set, one that was sensitive (90%) and one that was specific (97%). When these algorithms were applied to the validation set, they remained sensitive (88%) and specific (96%). CONCLUSION: Two registry-based definitions, one highly sensitive and one highly specific, were developed and validated for the linked Danish national health databases, making future population-based studies feasible. PMID:23382628
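    The sensitive (90%/88%) versus specific (97%/96%) algorithm trade-off above comes down to standard 2x2 calculations of an algorithm's flags against confirmed case status. A minimal sketch, with fabricated records; only the calculation mirrors the validation approach.

```python
# Sketch: sensitivity and specificity of a registry case-finding
# algorithm against gold-standard case status (e.g. slide re-review).
# The flagged/truth vectors below are fabricated for illustration.

def sens_spec(flagged, truth):
    """Return (sensitivity, specificity) from paired booleans."""
    tp = sum(f and t for f, t in zip(flagged, truth))
    fn = sum((not f) and t for f, t in zip(flagged, truth))
    tn = sum((not f) and (not t) for f, t in zip(flagged, truth))
    fp = sum(f and (not t) for f, t in zip(flagged, truth))
    return tp / (tp + fn), tn / (tn + fp)

# flagged: algorithm says EoE; truth: review confirms EoE
flagged = [True, True, True, False, False, True, False, False]
truth   = [True, True, False, False, False, True, True, False]
sensitivity, specificity = sens_spec(flagged, truth)
```

    In registry studies a sensitive algorithm (few false negatives) suits case-finding for cohort assembly, while a specific one (few false positives) suits incidence estimation; developing one of each, as above, covers both uses.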

  5. Item development process and analysis of 50 case-based items for implementation on the Korean Nursing Licensing Examination.

    PubMed

    Park, In Sook; Suh, Yeon Ok; Park, Hae Sook; Kang, So Young; Kim, Kwang Sung; Kim, Gyung Hee; Choi, Yeon-Hee; Kim, Hyun-Ju

    2017-01-01

    The purpose of this study was to improve the quality of items on the Korean Nursing Licensing Examination by developing and evaluating case-based items that reflect integrated nursing knowledge. We conducted a cross-sectional observational study to develop new case-based items. The methods for developing test items included expert workshops, brainstorming, and verification of content validity. After a mock examination of undergraduate nursing students using the newly developed case-based items, we evaluated the appropriateness of the items through classical test theory and item response theory. A total of 50 case-based items were developed for the mock examination, and their content validity was evaluated. The items integrated 34 discrete elements of integrated nursing knowledge. The mock examination was taken by 741 baccalaureate students in their fourth year of study at 13 universities. Their average score on the mock examination was 57.4, and the examination showed a reliability of 0.40. According to classical test theory, the average item difficulty was 57.4% (80%-100% for 12 items; 60%-80% for 13 items; and less than 60% for 25 items). The mean discrimination index was 0.19; it was above 0.30 for 11 items and 0.20 to 0.29 for 15 items. According to item response theory, the item discrimination parameter (in the logistic model) was none for 10 items (0.00), very low for 20 items (0.01 to 0.34), low for 12 items (0.35 to 0.64), moderate for 6 items (0.65 to 1.34), high for 1 item (1.35 to 1.69), and very high for 1 item (above 1.70). The item difficulty was very easy for 24 items (below -2.0), easy for 8 items (-2.0 to -0.5), medium for 6 items (-0.5 to 0.5), hard for 3 items (0.5 to 2.0), and very hard for 9 items (2.0 or above). The goodness-of-fit test for the 2-parameter item response model revealed that 12 items in the range of 2.0 to 0.5 had an ideal correct answer rate. 
We surmised that the low reliability of the mock examination was influenced by the timing of the test for the examinees and the inappropriate difficulty of the items. Our study suggested a methodology for the development of future case-based items for the Korean Nursing Licensing Examination.
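    The classical-test-theory statistics reported above (item difficulty as the proportion answering correctly, and a discrimination index) can be sketched as follows. The upper/lower 27% split used for discrimination is a common convention and an assumption here, and the response matrix is fabricated for illustration.

```python
# Sketch of classical-test-theory item statistics:
#   difficulty  = proportion of examinees answering the item correctly
#   discrimination = (correct in top 27% by total score
#                     - correct in bottom 27%) / group size
# The 0/1 response matrix below is fabricated.

def item_stats(responses):
    """responses: list of per-examinee lists of 0/1 item scores.
    Returns [(difficulty, discrimination)] per item."""
    totals = [sum(r) for r in responses]
    order = sorted(range(len(responses)), key=lambda i: totals[i])
    k = max(1, round(0.27 * len(responses)))   # group size (>= 1)
    lower, upper = order[:k], order[-k:]
    stats = []
    for j in range(len(responses[0])):
        difficulty = sum(r[j] for r in responses) / len(responses)
        disc = (sum(responses[i][j] for i in upper)
                - sum(responses[i][j] for i in lower)) / k
        stats.append((difficulty, disc))
    return stats

stats = item_stats([[1, 1], [1, 0], [0, 1], [0, 0]])
```

    By the usual rules of thumb, items with discrimination below 0.20, like many of those reported above, are candidates for revision or removal.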

  6. The Immune System as a Model for Pattern Recognition and Classification

    PubMed Central

    Carter, Jerome H.

    2000-01-01

    Objective: To design a pattern recognition engine based on concepts derived from mammalian immune systems. Design: A supervised learning system (Immunos-81) was created using software abstractions of T cells, B cells, antibodies, and their interactions. Artificial T cells control the creation of B-cell populations (clones), which compete for recognition of “unknowns.” The B-cell clone with the “simple highest avidity” (SHA) or “relative highest avidity” (RHA) is considered to have successfully classified the unknown. Measurement: Two standard machine learning data sets, consisting of eight nominal and six continuous variables, were used to test the recognition capabilities of Immunos-81. The first set (Cleveland), consisting of 303 cases of patients with suspected coronary artery disease, was used to perform a ten-way cross-validation. After completing the validation runs, the Cleveland data set was used as a training set prior to presentation of the second data set, consisting of 200 unknown cases. Results: For cross-validation runs, correct recognition using SHA ranged from a high of 96 percent to a low of 63.2 percent. The average correct classification for all runs was 83.2 percent. Using the RHA metric, 11.2 percent were labeled “too close to determine” and no further attempt was made to classify them. Of the remaining cases, 85.5 percent were correctly classified. When the second data set was presented, correct classification occurred in 73.5 percent of cases when SHA was used and in 80.3 percent of cases when RHA was used. Conclusions: The immune system offers a viable paradigm for the design of pattern recognition systems. Additional research is required to fully exploit the nuances of immune computation. PMID:10641961
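    The ten-way cross-validation protocol described above can be sketched generically: split the case set into k folds, train on k-1, test on the held-out fold, and average per-fold accuracy. The classifier below is a trivial majority-class stand-in, an assumption for illustration, not the avidity-based Immunos-81 engine.

```python
# Sketch of k-fold cross-validation with a placeholder classifier.
# The majority-class "classifier" is a stand-in assumption; any
# train/predict pair could be slotted in.

def k_fold_indices(n, k):
    """Yield (train, test) index lists for k contiguous folds."""
    bounds = [round(i * n / k) for i in range(k + 1)]
    for i in range(k):
        test = list(range(bounds[i], bounds[i + 1]))
        held_out = set(test)
        train = [j for j in range(n) if j not in held_out]
        yield train, test

def majority_label(labels):
    """Most frequent label in the training labels."""
    return max(set(labels), key=labels.count)

def cross_validate(labels, k=10):
    """Average held-out accuracy of the majority-class predictor."""
    accs = []
    for train, test in k_fold_indices(len(labels), k):
        pred = majority_label([labels[i] for i in train])
        accs.append(sum(labels[i] == pred for i in test) / len(test))
    return sum(accs) / len(accs)
```

    In the study above the same idea is applied with the immune-inspired classifier in place of the majority-class stand-in, and the per-fold correct-classification rates (63.2% to 96%) are averaged to the reported 83.2%.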

  7. The InterFrost benchmark of Thermo-Hydraulic codes for cold regions hydrology - first inter-comparison results

    NASA Astrophysics Data System (ADS)

    Grenier, Christophe; Roux, Nicolas; Anbergen, Hauke; Collier, Nathaniel; Costard, Francois; Ferrry, Michel; Frampton, Andrew; Frederick, Jennifer; Holmen, Johan; Jost, Anne; Kokh, Samuel; Kurylyk, Barret; McKenzie, Jeffrey; Molson, John; Orgogozo, Laurent; Rivière, Agnès; Rühaak, Wolfram; Selroos, Jan-Olof; Therrien, René; Vidstrand, Patrik

    2015-04-01

    The impacts of climate change in boreal regions have received considerable attention recently due to the warming trends experienced in recent decades, which are expected to intensify in the future. Large portions of these regions, corresponding to permafrost areas, are covered by water bodies (lakes, rivers) that interact with the surrounding permafrost. For example, the thermal state of the surrounding soil influences the energy and water budget of the surface water bodies. Also, these water bodies generate taliks (unfrozen zones below them) that disturb the thermal regime of the permafrost and may play a key role in the context of climate change. Recent field studies and modeling exercises indicate that a fully coupled 2D or 3D Thermo-Hydraulic (TH) approach is required to understand and model the past and future evolution of landscapes, rivers, lakes and associated groundwater systems in a changing climate. However, there is presently a paucity of 3D numerical studies of permafrost thaw and associated hydrological changes, which can be partly attributed to the difficulty of verifying multi-dimensional results produced by numerical models. Numerical approaches can only be validated against analytical solutions for a purely thermal 1D equation with phase change (e.g. Neumann, Lunardini). When it comes to the coupled TH system (coupling two highly non-linear equations), the only possible approach is to compare the results from different codes on common test cases and/or to use controlled experiments for validation. Such inter-code comparisons can drive discussions on how to improve code performance. A benchmark exercise was initiated in 2014 with a kick-off meeting in Paris in November. Participants from the USA, Canada, Germany, Sweden and France convened, representing altogether 13 simulation codes. The benchmark exercises consist of several test cases inspired by the existing literature (e.g. McKenzie et al., 2007) as well as new ones. 
They range from simpler, purely thermal cases (benchmark T1) to more complex, coupled 2D TH cases (benchmarks TH1, TH2, and TH3). Experimental cases conducted in a cold room complement the validation approach. A web site hosted by LSCE (Laboratoire des Sciences du Climat et de l'Environnement) serves as an interaction platform for the participants and hosts the test-case database at the following address: https://wiki.lsce.ipsl.fr/interfrost. The results of the first stage of the benchmark exercise will be presented, focusing mainly on the inter-comparison of participant results for the coupled cases (TH1, TH2 and TH3). Further perspectives of the exercise will also be presented: extensions to more complex physical conditions (e.g. unsaturated conditions and geometrical deformations) are contemplated, and 1D vertical cases of interest to the climate modeling community will be proposed. Keywords: Permafrost; Numerical modeling; River-soil interaction; Arctic systems; soil freeze-thaw
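The purely thermal 1D benchmark cases can be checked against the classical Neumann (Stefan-type) analytical solution mentioned above. A minimal sketch of the one-phase Stefan solution, with all material parameters assumed for illustration, could look like:

```python
import math

def stefan_lambda(stefan_number, tol=1e-12):
    """Solve lam * exp(lam**2) * erf(lam) = St / sqrt(pi) by bisection
    (one-phase Stefan problem; St is the Stefan number)."""
    def f(lam):
        return (lam * math.exp(lam ** 2) * math.erf(lam)
                - stefan_number / math.sqrt(math.pi))
    lo, hi = 1e-9, 5.0
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (lo, mid) if f(mid) > 0 else (mid, hi)
    return 0.5 * (lo + hi)

def front_position(t, alpha, stefan_number):
    """Phase-change front depth X(t) = 2 * lam * sqrt(alpha * t),
    where alpha is the thermal diffusivity."""
    return 2.0 * stefan_lambda(stefan_number) * math.sqrt(alpha * t)
```

For small Stefan numbers, lam approaches sqrt(St/2), which gives a quick sanity check when comparing a simulated thaw front against the analytical one.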

  8. Development and validation of a multiplex reaction analyzing eight miniSTRs of the X chromosome for identity and kinship testing with degraded DNA.

    PubMed

    Castañeda, María; Odriozola, Adrián; Gómez, Javier; Zarrabeitia, María T

    2013-07-01

    We report the development of an effective system for analyzing X chromosome-linked mini short tandem repeat loci with reduced-size amplicons (less than 220 bp), useful for analyzing highly degraded DNA samples. To generate smaller amplicons, we redesigned primers for eight X-linked microsatellites (DXS7132, DXS10079, DXS10074, DXS10075, DXS6801, DXS6809, DXS6789, and DXS6799) and established efficient conditions for a multiplex PCR system (miniX). The validation tests confirmed that it has good sensitivity, requiring as little as 20 pg of DNA, and performs well with DNA from paraffin-embedded tissues, thus showing potential for improved analysis and identification of highly degraded and/or very limited DNA samples. Consequently, this system may help to solve complex forensic cases, particularly when autosomal markers convey insufficient information.

  9. Monte Carlo tests of the ELIPGRID-PC algorithm

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Davidson, J.R.

    1995-04-01

    The standard tool for calculating the probability of detecting pockets of contamination called hot spots has been the ELIPGRID computer code of Singer and Wickman. The ELIPGRID-PC program has recently made this algorithm available for an IBM® PC. However, no known independent validation of the ELIPGRID algorithm exists. This document describes a Monte Carlo simulation-based validation of a modified version of the ELIPGRID-PC code. The modified ELIPGRID-PC code is shown to match Monte Carlo-calculated hot-spot detection probabilities to within ±0.5% for 319 out of 320 test cases. The one exception, a very thin elliptical hot spot located within a rectangular sampling grid, differed from the Monte Carlo-calculated probability by about 1%. These results provide confidence in the ability of the modified ELIPGRID-PC code to accurately predict hot-spot detection probabilities within an acceptable range of error.
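The Monte Carlo validation idea is straightforward to illustrate. The sketch below is not the ELIPGRID algorithm itself, only a toy simulation for a circular hot spot and a square sampling grid with assumed dimensions:

```python
import math
import random

def detection_probability(radius, spacing, n_trials=200_000, seed=1):
    """Estimate P(a square sampling grid of the given spacing hits a
    circular hot spot whose center is uniformly located in a grid cell)."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_trials):
        # random hot-spot center inside one grid cell
        cx, cy = rng.uniform(0, spacing), rng.uniform(0, spacing)
        # the nearest grid nodes are the four cell corners; a "hit"
        # occurs if any corner lies within the hot-spot radius
        if any(math.hypot(cx - gx, cy - gy) <= radius
               for gx in (0.0, spacing) for gy in (0.0, spacing)):
            hits += 1
    return hits / n_trials
```

For radius r smaller than half the spacing G, the exact answer is pi * r**2 / G**2, which is the kind of closed-form check such a simulation can be validated against before being trusted on harder elliptical cases.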

  10. Assessment of Hybrid High-Order methods on curved meshes and comparison with discontinuous Galerkin methods

    NASA Astrophysics Data System (ADS)

    Botti, Lorenzo; Di Pietro, Daniele A.

    2018-10-01

    We propose and validate a novel extension of Hybrid High-Order (HHO) methods to meshes featuring curved elements. HHO methods are based on discrete unknowns that are broken polynomials on the mesh and its skeleton. We propose here the use of physical frame polynomials over mesh elements and reference frame polynomials over mesh faces. With this choice, the degree of face unknowns must be suitably selected in order to recover on curved meshes the same convergence rates as on straight meshes. We provide an estimate of the optimal face polynomial degree depending on the element polynomial degree and on the so-called effective mapping order. The estimate is numerically validated through specifically crafted numerical tests. All test cases are conducted considering two- and three-dimensional pure diffusion problems, and include comparisons with discontinuous Galerkin discretizations. The extension to agglomerated meshes with curved boundaries is also considered.

  11. [Validation of the Eating Attitudes Test as a screening instrument for eating disorders in general population].

    PubMed

    Peláez-Fernández, María Angeles; Ruiz-Lázaro, Pedro Manuel; Labrador, Francisco Javier; Raich, Rosa María

    2014-02-20

    To validate the best cut-off point of the Eating Attitudes Test (EAT-40), Spanish version, for the screening of eating disorders (ED) in the general population. This was a cross-sectional study. The EAT-40 Spanish version was administered to a representative sample of 1,543 students, aged 12 to 21 years, in the Region of Madrid. Six hundred and two participants (probable cases and a random sample of controls) were interviewed. The best diagnostic prediction was obtained with a cut-off point of 21, with sensitivity 88.2%, specificity 62.1%, positive predictive value 17.7%, and negative predictive value 62.1%. Use of a cut-off point of 21 is recommended in epidemiological studies of eating disorders in the Spanish general population. Copyright © 2012 Elsevier España, S.L. All rights reserved.

  12. Modelling of piezoelectric actuator dynamics for active structural control

    NASA Technical Reports Server (NTRS)

    Hagood, Nesbitt W.; Chung, Walter H.; Von Flotow, Andreas

    1990-01-01

    The paper models the effects of dynamic coupling between a structure and an electrical network through the piezoelectric effect. The coupled equations of motion of an arbitrary elastic structure with piezoelectric elements and passive electronics are derived. State space models are developed for three important cases: direct voltage driven electrodes, direct charge driven electrodes, and an indirect drive case where the piezoelectric electrodes are connected to an arbitrary electrical circuit with embedded voltage and current sources. The equations are applied to the case of a cantilevered beam with surface mounted piezoceramics and indirect voltage and current drive. The theoretical derivations are validated experimentally on an actively controlled cantilevered beam test article with indirect voltage drive.

  13. A validated case definition for chronic rhinosinusitis in administrative data: a Canadian perspective.

    PubMed

    Rudmik, Luke; Xu, Yuan; Kukec, Edward; Liu, Mingfu; Dean, Stafford; Quan, Hude

    2016-11-01

    Pharmacoepidemiological research using administrative databases has become increasingly popular for chronic rhinosinusitis (CRS); however, without a validated case definition the cohort evaluated may be inaccurate, resulting in biased and incorrect outcomes. The objective of this study was to develop and validate a generalizable administrative database case definition for CRS using International Classification of Diseases, 9th edition (ICD-9)-coded claims. A random sample of 100 patients with a guideline-based diagnosis of CRS and 100 control patients were selected and then linked to a Canadian physician claims database from March 31, 2010, to March 31, 2015. The proportion of CRS ICD-9-coded claims (473.x and 471.x) for each of these 200 patients was reviewed and the validity of 7 different ICD-9-based coding algorithms was evaluated. The CRS case definition of ≥2 claims with a CRS ICD-9 code (471.x or 473.x) within 2 years of the reference case provides balanced validity, with a sensitivity of 77% and specificity of 79%. Applying this CRS case definition to the claims database produced a CRS cohort of 51,000 patients with characteristics that were consistent with published demographics and rates of comorbid asthma, allergic rhinitis, and depression. This study has validated several coding algorithms; based on the results, a case definition of ≥2 physician claims of CRS (ICD-9 of 471.x or 473.x) within 2 years provides an optimal level of validity. Future studies will need to validate this administrative case definition from different health system perspectives and using larger retrospective chart reviews from multiple providers. © 2016 ARS-AAOA, LLC.
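The reported coding algorithm (≥2 claims coded 471.x or 473.x within 2 years) is easy to express as a filter over per-patient claims. The record shape below is an assumption for illustration, not the study's actual data model:

```python
from datetime import date

CRS_PREFIXES = ("471", "473")  # ICD-9 471.x / 473.x

def meets_crs_definition(claims, window_days=730, min_claims=2):
    """claims: list of (service_date, icd9_code) tuples for one patient.
    True if at least `min_claims` CRS-coded claims fall within any
    window of `window_days` days."""
    dates = sorted(d for d, code in claims
                   if code.startswith(CRS_PREFIXES))
    for i, d0 in enumerate(dates):
        n = sum(1 for d in dates[i:] if (d - d0).days <= window_days)
        if n >= min_claims:
            return True
    return False
```

Swapping the threshold and window reproduces the other candidate algorithms the study compared, which is how sensitivity/specificity trade-offs between definitions would be evaluated against a chart-review gold standard.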

  14. Gravitropic responses of the Avena coleoptile in space and on clinostats. II. Is reciprocity valid?

    NASA Technical Reports Server (NTRS)

    Johnsson, A.; Brown, A. H.; Chapman, D. K.; Heathcote, D.; Karlsson, C.

    1995-01-01

    Experiments were undertaken to determine whether the reciprocity rule is valid for gravitropic responses of oat coleoptiles in the acceleration region below 1 g. The rule predicts that the gravitropic response should be proportional to the product of the applied acceleration and the stimulation time. Seedlings were cultivated on 1 g centrifuges and transferred to test centrifuges to apply a transverse g-stimulation. Since the responses occurred in microgravity, the uncertainties about the validity of clinostat simulation of weightlessness were avoided. Plants at two stages of coleoptile development were tested. Plant responses were obtained using time-lapse video recordings that were analyzed after the flight. Stimulus intensities and durations were varied, ranging from 0.1 to 1.0 g and from 2 to 130 min, respectively. For threshold g-doses the reciprocity rule was obeyed. The threshold dose was of the order of 55 g s and 120 g s, respectively, for the two groups of plants investigated. Reciprocity was also studied for bending responses ranging from just above the detectable level to about 10 degrees. The validity of the rule could not be confirmed for higher g-doses, chiefly because the data were more variable. It was investigated whether the uniformity of the overall response data increased when the gravitropic dose was defined as (g^m × t) with m-values different from unity. This was not the case; the reciprocity concept is therefore valid also in the hypogravity region, and the concept of gravitropic dose, the product of the transverse acceleration and the stimulation time, is well-defined in the acceleration region studied. With the same hardware, tests were done on earth where the responses occurred on clinostats. The results did not contradict the reciprocity rule, but the scatter in the data was large.

  15. Development and clinical validation of the Genedrive point-of-care test for qualitative detection of hepatitis C virus.

    PubMed

    Llibre, Alba; Shimakawa, Yusuke; Mottez, Estelle; Ainsworth, Shaun; Buivan, Tan-Phuc; Firth, Rick; Harrison, Elliott; Rosenberg, Arielle R; Meritet, Jean-François; Fontanet, Arnaud; Castan, Pablo; Madejón, Antonio; Laverick, Mark; Glass, Allison; Viana, Raquel; Pol, Stanislas; McClure, C Patrick; Irving, William Lucien; Miele, Gino; Albert, Matthew L; Duffy, Darragh

    2018-04-03

    Recently approved direct acting antivirals provide transformative therapies for chronic hepatitis C virus (HCV) infection. The major clinical challenge remains to identify the undiagnosed patients worldwide, many of whom live in low-income and middle-income countries, where access to nucleic acid testing remains limited. The aim of this study was to develop and validate a point-of-care (PoC) assay for the qualitative detection of HCV RNA. We developed a PoC assay for the qualitative detection of HCV RNA on the PCR Genedrive instrument. We validated the Genedrive HCV assay through a case-control study comparing results with those obtained with the Abbott RealTime HCV test. The PoC assay identified all major HCV genotypes, with a limit of detection of 2362 IU/mL (95% CI 1966 to 2788). Using 422 patients chronically infected with HCV and 503 controls negative for anti-HCV and HCV RNA, the Genedrive HCV assay showed 98.6% sensitivity (95% CI 96.9% to 99.5%) and 100% specificity (95% CI 99.3% to 100%) to detect HCV. In addition, melting peak ratiometric analysis demonstrated proof-of-principle for semiquantification of HCV. The test was further validated in a real clinical setting in a resource-limited country. We report a rapid, simple, portable and accurate PoC molecular test for HCV, with sensitivity and specificity that fulfils the recent FIND/WHO Target Product Profile for HCV decentralised testing in low-income and middle-income countries. This Genedrive HCV assay may positively impact the continuum of HCV care from screening to cure by supporting real-time treatment decisions. NCT02992184. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  16. Efficient automatic OCR word validation using word partial format derivation and language model

    NASA Astrophysics Data System (ADS)

    Chen, Siyuan; Misra, Dharitri; Thoma, George R.

    2010-01-01

    In this paper we present an OCR validation module, implemented for the System for Preservation of Electronic Resources (SPER) developed at the U.S. National Library of Medicine. The module detects and corrects suspicious words in the OCR output of scanned textual documents through a procedure that derives partial formats for each suspicious word, retrieves candidate words by partial-match search from lexicons, and compares the joint probabilities of the N-gram language model and the OCR edit transformation for each candidate. The partial format derivation, based on OCR error analysis, efficiently and accurately generates candidate words from lexicons represented by ternary search trees. In our test case, comprising a historic medico-legal document collection, this OCR validation module yielded the correct words with 87% accuracy and reduced the overall OCR word errors by around 60%.
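A much-simplified version of the candidate-ranking step (not SPER's actual implementation, which uses partial formats and ternary search trees) can be sketched with plain edit distance and an assumed frequency lexicon:

```python
def edit_distance(a, b):
    """Standard Levenshtein distance via dynamic programming."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def best_candidate(token, lexicon_freq):
    """lexicon_freq: dict word -> corpus frequency (assumed available).
    Prefer the closest word; break ties by higher frequency."""
    return min(lexicon_freq,
               key=lambda w: (edit_distance(token, w), -lexicon_freq[w]))
```

The real module replaces the brute-force scan with partial-match search and weights edits by OCR-specific confusion probabilities, but the scoring intuition is the same.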

  17. Diagnosis of Concurrent Pulmonary Tuberculosis and Tuberculous Otitis Media Confirmed by Xpert MTB/RIF in the United States.

    PubMed

    Tompkins, Kathleen M; Reimers, Melissa A; White, Becky L; Herce, Michael E

    2016-05-01

    Tuberculosis (TB) remains an important cause of infectious morbidity in the United States (US), necessitating timely and accurate diagnosis. We report a case of concurrent pulmonary and extrapulmonary TB presenting as tuberculous otitis media in a hospitalized US patient admitted with cough, night sweats, and unilateral purulent otorrhea. Diagnosis was made by smear microscopy and rapidly confirmed by Xpert MTB/RIF-a novel, automated nucleic acid amplification test for the rapid detection of drug-susceptible and drug-resistant TB. This case adds to the growing body of evidence validating Xpert MTB/RIF as an effective tool for the rapid diagnosis of extrapulmonary TB, even in low TB-prevalence settings such as the US, when testing is performed on non-respiratory specimens.

  18. Diagnosis of Concurrent Pulmonary Tuberculosis and Tuberculous Otitis Media Confirmed by Xpert MTB/RIF in the United States

    PubMed Central

    Tompkins, Kathleen M.; Reimers, Melissa A.; White, Becky L.; Herce, Michael E.

    2015-01-01

    Tuberculosis (TB) remains an important cause of infectious morbidity in the United States (US), necessitating timely and accurate diagnosis. We report a case of concurrent pulmonary and extrapulmonary TB presenting as tuberculous otitis media in a hospitalized US patient admitted with cough, night sweats, and unilateral purulent otorrhea. Diagnosis was made by smear microscopy and rapidly confirmed by Xpert MTB/RIF—a novel, automated nucleic acid amplification test for the rapid detection of drug-susceptible and drug-resistant TB. This case adds to the growing body of evidence validating Xpert MTB/RIF as an effective tool for the rapid diagnosis of extrapulmonary TB, even in low TB-prevalence settings such as the US, when testing is performed on non-respiratory specimens. PMID:27346926

  19. 49 CFR 40.89 - What is validity testing, and are laboratories required to conduct it?

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Drug Testing Laboratories § 40.89 What is validity testing, and are laboratories required to conduct it? (a) Specimen validity testing is... 49 Transportation 1 2013-10-01 2013-10-01 false What is validity testing, and are laboratories...

  20. 49 CFR 40.89 - What is validity testing, and are laboratories required to conduct it?

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Drug Testing Laboratories § 40.89 What is validity testing, and are laboratories required to conduct it? (a) Specimen validity testing is... 49 Transportation 1 2011-10-01 2011-10-01 false What is validity testing, and are laboratories...

  1. 49 CFR 40.89 - What is validity testing, and are laboratories required to conduct it?

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Drug Testing Laboratories § 40.89 What is validity testing, and are laboratories required to conduct it? (a) Specimen validity testing is... 49 Transportation 1 2010-10-01 2010-10-01 false What is validity testing, and are laboratories...

  2. 49 CFR 40.89 - What is validity testing, and are laboratories required to conduct it?

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Drug Testing Laboratories § 40.89 What is validity testing, and are laboratories required to conduct it? (a) Specimen validity testing is... 49 Transportation 1 2012-10-01 2012-10-01 false What is validity testing, and are laboratories...

  3. 49 CFR 40.89 - What is validity testing, and are laboratories required to conduct it?

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... PROCEDURES FOR TRANSPORTATION WORKPLACE DRUG AND ALCOHOL TESTING PROGRAMS Drug Testing Laboratories § 40.89 What is validity testing, and are laboratories required to conduct it? (a) Specimen validity testing is... 49 Transportation 1 2014-10-01 2014-10-01 false What is validity testing, and are laboratories...

  4. Validity and Reproducibility of an Incremental Sit-To-Stand Exercise Test for Evaluating Anaerobic Threshold in Young, Healthy Individuals.

    PubMed

    Nakamura, Keisuke; Ohira, Masayoshi; Yokokawa, Yoshiharu; Nagasawa, Yuya

    2015-12-01

    Sit-to-stand exercise (STS) is a common activity of daily living. The objectives of the present study were: 1) to assess the validity of aerobic fitness measurements based on anaerobic thresholds (ATs), during incremental sit-to-stand exercise (ISTS) with and without arm support compared with an incremental cycle-ergometer (CE) test; and 2) to examine the reproducibility of the AT measured during the ISTSs. Twenty-six healthy individuals performed the ISTSs and the CE test in random order. Oxygen uptake at the AT (AT-VO2) and heart rate at the AT (AT-HR) were determined during the ISTSs and CE test, and repeated-measures analyses of variance and Tukey's post-hoc test were used to evaluate the differences between these variables. Pearson correlation coefficients were used to assess the strength of the relationship between AT-VO2 and AT-HR during the ISTSs and CE test. Data analysis yielded the following correlations: AT-VO2 during the ISTS with arm support and the CE test, r = 0.77 (p < 0.05); AT-VO2 during the ISTS without arm support and the CE test, r = 0.70 (p < 0.05); AT-HR during the ISTS with arm support and the CE test, r = 0.80 (p < 0.05); and AT-HR during the ISTS without arm support and the CE test, r = 0.66 (p < 0.05). The AT-VO2 values during the ISTS with arm support (18.5 ± 1.9 mL·min(-1)·kg(-1)) and the CE test (18.4 ± 1.8 mL·min(-1)·kg(-1)) were significantly higher than those during the ISTS without arm support (16.6 ± 1.8 mL·min(-1)·kg(-1); p < 0.05). The AT-HR values during the ISTS with arm support (126 ± 10 bpm) and the CE test (126 ± 13 bpm) were significantly higher than those during the ISTS without arm support (119 ± 9 bpm; p < 0.05). The ISTS with arm support may provide a cardiopulmonary function load equivalent to the CE test; therefore, it is a potentially valid test for evaluating AT-VO2 and AT-HR in healthy, young adults. 
Key points: The ISTS is a simple test that varies only according to the frequency of standing up, and requires only a small space and a chair. The ISTS with arm support is valid, reproducible, and safe for evaluating the AT in healthy young adults. For evaluating the AT, the ISTS may serve as a valid alternative to conventional CPX, using either a cycle ergometer or treadmill, in cases where the latter methods are difficult to implement.
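The validity comparisons above rest on Pearson correlation coefficients between AT measurements from the two tests. As a generic sketch (not the study's analysis code), r for paired measurements can be computed as:

```python
import math

def pearson_r(x, y):
    """Pearson product-moment correlation of two equal-length samples."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)
```

Values near +1, such as the r = 0.77-0.80 reported for the arm-supported ISTS against the CE test, indicate that the two tests rank participants' AT values similarly.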

  5. Aircraft Mishap Exercise at SLF

    NASA Image and Video Library

    2018-02-14

    NASA Kennedy Space Center's Flight Operations prepares to rehearse a helicopter crash-landing to test new and updated emergency procedures. Called the Aircraft Mishap Preparedness and Contingency Plan, the operation was designed to validate several updated techniques the center's first responders would follow, should they ever need to rescue a crew in case of a real accident. The mishap exercise took place at the center's Shuttle Landing Facility.

  6. Tri-Service Corrosion Conference

    DTIC Science & Technology

    2002-01-18

    Proceedings table-of-contents fragment (corrosion prevention / case studies), including: "Issues in the Measurement of Volatile Organic Compounds (VOCs) in New-Generation Low-VOC Marine Coatings for..."; "Bell Lab's Corrosion Preventive Compound (MIL-L-87177A Grade B)" (David H. Horne, ChE., P.E.); "The Operational Testing of the CPC ACF-50 on the..." (...A. Matzdorf); and "Low Volatile Organic Compound (VOC) Chemical Agent Resistant Coating (CARC) Application Demonstration/Validation" (Lisa Weiser).

  7. Thermal/Structural Tailoring of Engine Blades (T/STAEBL). Theoretical Manual

    NASA Technical Reports Server (NTRS)

    Brown, K. W.; Clevenger, W. B.

    1994-01-01

    The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a family of computer programs executed by a control program. The T/STAEBL system performs design optimizations of cooled, hollow turbine blades and vanes. This manual describes the T/STAEBL data block structure and system organization. The approximate analysis and optimization modules are detailed, and a validation test case is provided.

  8. Validation of an Interdisciplinary Food Safety Curriculum Targeted at Middle School Students and Correlated to State Educational Standards

    ERIC Educational Resources Information Center

    Richards, Jennifer; Skolits, Gary; Burney, Janie; Pedigo, Ashley; Draughon, F. Ann

    2008-01-01

    Providing effective food safety education to young consumers is a national health priority to combat the nearly 76 million cases of foodborne illness in the United States annually. With the tremendous pressures on teachers for accountability in core subject areas, the focus of classrooms is on covering concepts that are tested on state performance…

  9. Thermal/structural tailoring of engine blades (T/STAEBL). Theoretical manual

    NASA Astrophysics Data System (ADS)

    Brown, K. W.; Clevenger, W. B.

    1994-03-01

    The Thermal/Structural Tailoring of Engine Blades (T/STAEBL) system is a family of computer programs executed by a control program. The T/STAEBL system performs design optimizations of cooled, hollow turbine blades and vanes. This manual describes the T/STAEBL data block structure and system organization. The approximate analysis and optimization modules are detailed, and a validation test case is provided.

  10. Blade loss transient dynamic analysis of turbomachinery

    NASA Technical Reports Server (NTRS)

    Stallone, M. J.; Gallardo, V.; Storace, A. F.; Bach, L. J.; Black, G.; Gaffney, E. F.

    1982-01-01

    This paper reports on work completed to develop an analytical method for predicting the transient non-linear response of a complete aircraft engine system due to the loss of a fan blade, and to validate the analysis by comparing the results against actual blade loss test data. The solution, which is based on the component element method, accounts for rotor-to-casing rubs, high damping and the rapid deceleration rates associated with the blade loss event. A comparison of test results and predicted response shows good agreement except for an initial overshoot spike not observed in the test. The method is effective for the analysis of large systems.

  11. Beyond Corroboration: Strengthening Model Validation by Looking for Unexpected Patterns

    PubMed Central

    Chérel, Guillaume; Cottineau, Clémentine; Reuillon, Romain

    2015-01-01

    Models of emergent phenomena are designed to provide an explanation to global-scale phenomena from local-scale processes. Model validation is commonly done by verifying that the model is able to reproduce the patterns to be explained. We argue that robust validation must not only be based on corroboration, but also on attempting to falsify the model, i.e. making sure that the model behaves soundly for any reasonable input and parameter values. We propose an open-ended evolutionary method based on Novelty Search to look for the diverse patterns a model can produce. The Pattern Space Exploration method was tested on a model of collective motion and compared to three common a priori sampling experiment designs. The method successfully discovered all known qualitatively different kinds of collective motion, and performed much better than the a priori sampling methods. The method was then applied to a case study of city system dynamics to explore the model’s predicted values of city hierarchisation and population growth. This case study showed that the method can provide insights on potential predictive scenarios as well as falsifiers of the model when the simulated dynamics are highly unrealistic. PMID:26368917
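The Novelty Search idea behind the Pattern Space Exploration method can be caricatured as: archive a simulated pattern when its mean distance to the k nearest archived patterns exceeds a threshold, so the search is pushed toward outputs unlike anything seen so far. The skeleton below is an illustrative sketch with assumed parameters, not the authors' implementation:

```python
import math
import random

def novelty(pattern, archive, k=3):
    """Mean Euclidean distance to the k nearest archived patterns."""
    if not archive:
        return float("inf")
    dists = sorted(math.dist(pattern, p) for p in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(sample_params, simulate, iterations=200, seed=0):
    """sample_params(rng) draws a parameter vector; simulate() maps it
    to an output pattern. Archive the most novel patterns found."""
    rng = random.Random(seed)
    archive = []
    for _ in range(iterations):
        pattern = simulate(sample_params(rng))
        if novelty(pattern, archive) > 0.1:   # novelty threshold (assumed)
            archive.append(pattern)
    return archive
```

A full implementation would evolve the parameter samples toward high-novelty regions rather than drawing them blindly, which is what lets the method outperform a priori sampling designs.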

  12. A gas-kinetic BGK scheme for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Xu, Kun

    2000-01-01

    This paper presents an improved gas-kinetic scheme based on the Bhatnagar-Gross-Krook (BGK) model for the compressible Navier-Stokes equations. The current method extends the previous gas-kinetic Navier-Stokes solver developed by Xu and Prendergast by implementing a general nonequilibrium state to represent the gas distribution function at the beginning of each time step. As a result, the restriction of the previous scheme, namely that the particle collision time must be less than the time step for the BGK Navier-Stokes solution to be valid, is removed. Therefore, the applicable regime of the current method is much enlarged and the Navier-Stokes solution can be obtained accurately regardless of the ratio between the collision time and the time step. The gas-kinetic Navier-Stokes solver developed by Chou and Baganoff is the limiting case of the current method, and it is valid only under such a limiting condition. Also, in this paper, the appropriate implementation of boundary conditions for the kinetic scheme, different kinetic limiting cases, and the Prandtl number fix are presented. The connection among artificial dissipative central schemes, Godunov-type schemes, and the gas-kinetic BGK method is discussed. Many numerical tests are included to validate the current method.

  13. Prediction of Acoustic Loads Generated by Propulsion Systems

    NASA Technical Reports Server (NTRS)

    Perez, Linamaria; Allgood, Daniel C.

    2011-01-01

    NASA Stennis Space Center is one of the nation's premier facilities for conducting large-scale rocket engine testing. As liquid rocket engines vary in size, so do the acoustic loads that they produce. When these acoustic loads reach very high levels, they may cause damage both to personnel and to the structures surrounding the testing area. To prevent such damage, prediction tools are used to estimate the spectral content and levels of the acoustics generated by the rocket engine plumes and to model their propagation through the surrounding atmosphere. Prior to the current work, two different acoustic prediction tools were being used at Stennis Space Center, each having its own advantages and disadvantages depending on the application. Therefore, a new prediction tool was created, using the NASA SP-8072 handbook as a guide, which would replicate the same prediction methods as the previous codes but eliminate the drawbacks the individual codes had. Aside from replicating the previous modeling capability in a single framework, additional modeling functions were added, thereby expanding the current modeling capability. To verify that the new code could reproduce the same predictions as the previous codes, two verification test cases were defined. These verification test cases also served as validation cases, as the predicted results were compared to actual test data.

  14. Validation of the 3-day rule for stool bacterial tests in Japan.

    PubMed

    Kobayashi, Masanori; Sako, Akahito; Ogami, Toshiko; Nishimura, So; Asayama, Naoki; Yada, Tomoyuki; Nagata, Naoyoshi; Sakurai, Toshiyuki; Yokoi, Chizu; Kobayakawa, Masao; Yanase, Mikio; Masaki, Naohiko; Takeshita, Nozomi; Uemura, Naomi

    2014-01-01

    Stool cultures are expensive and time consuming, and the positive rate of enteric pathogens in cases of nosocomial diarrhea is low. The 3-day rule, whereby clinicians order a Clostridium difficile (CD) toxin test rather than a stool culture for inpatients developing diarrhea >3 days after admission, has been well studied in Western countries. The present study sought to validate the 3-day rule in an acute care hospital setting in Japan. Stool bacterial and CD toxin test results for adult patients hospitalized in an acute care hospital in 2008 were retrospectively analyzed. Specimens collected after an initial positive test were excluded. The positive rate and cost-effectiveness of the tests were compared among three patient groups. The adult patients were divided into three groups for comparison: outpatients, patients hospitalized for ≤3 days and patients hospitalized for ≥4 days. Over the 12-month period, 1,597 stool cultures were obtained from 992 patients, and 880 CD toxin tests were performed in 529 patients. In the outpatient, inpatient ≤3 days and inpatient ≥4 days groups, the rate of positive stool cultures was 14.2%, 3.6% and 1.3% and that of positive CD toxin tests was 1.9%, 7.1% and 8.5%, respectively. The medical costs required to obtain one positive result were 9,181, 36,075 and 103,600 JPY and 43,200, 11,333 and 9,410 JPY, respectively. The 3-day rule was validated for the first time in a setting other than a Western country. Our results revealed that the "3-day rule" is also useful and cost-effective in Japan.
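The decision rule and the cost-per-positive metric used above are simple to state in code. This is a minimal sketch (function names and the cost example are assumptions for illustration):

```python
def recommended_test(days_since_admission):
    """3-day rule: for diarrhea developing more than 3 days after
    admission, order a C. difficile toxin test rather than a stool
    culture, since enteric pathogens are rarely found that late."""
    return "CD toxin test" if days_since_admission > 3 else "stool culture"

def cost_per_positive(unit_cost, n_tests, n_positive):
    """Medical cost required to obtain one positive result, the
    cost-effectiveness measure compared across the three groups."""
    return unit_cost * n_tests / n_positive
```

The study's group-wise figures follow this arithmetic: as the positive rate of stool cultures falls with longer hospitalization, the cost per positive culture rises sharply, while the opposite holds for CD toxin tests.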

  15. Analytical Validation of the ReEBOV Antigen Rapid Test for Point-of-Care Diagnosis of Ebola Virus Infection.

    PubMed

    Cross, Robert W; Boisen, Matthew L; Millett, Molly M; Nelson, Diana S; Oottamasathien, Darin; Hartnett, Jessica N; Jones, Abigal B; Goba, Augustine; Momoh, Mambu; Fullah, Mohamed; Bornholdt, Zachary A; Fusco, Marnie L; Abelson, Dafna M; Oda, Shunichiro; Brown, Bethany L; Pham, Ha; Rowland, Megan M; Agans, Krystle N; Geisbert, Joan B; Heinrich, Megan L; Kulakosky, Peter C; Shaffer, Jeffrey G; Schieffelin, John S; Kargbo, Brima; Gbetuwa, Momoh; Gevao, Sahr M; Wilson, Russell B; Saphire, Erica Ollmann; Pitts, Kelly R; Khan, Sheik Humarr; Grant, Donald S; Geisbert, Thomas W; Branco, Luis M; Garry, Robert F

    2016-10-15

    Ebola virus disease (EVD) is a severe viral illness caused by Ebola virus (EBOV). The 2013-2016 EVD outbreak in West Africa is the largest recorded, with >11,000 deaths. Development of the ReEBOV Antigen Rapid Test (ReEBOV RDT) was expedited to provide a point-of-care test for suspected EVD cases. Recombinant EBOV viral protein 40 antigen was used to derive polyclonal antibodies for RDT and enzyme-linked immunosorbent assay development. ReEBOV RDT limits of detection (LOD), specificity, and interference were analytically validated on the basis of Food and Drug Administration (FDA) guidance. The ReEBOV RDT specificity estimate was 95% for donor serum panels and 97% for donor whole-blood specimens. The RDT demonstrated sensitivity to 3 species of Ebolavirus (Zaire ebolavirus, Sudan ebolavirus, and Bundibugyo ebolavirus) associated with human disease, with no cross-reactivity by pathogens associated with non-EBOV febrile illness, including malaria parasites. Interference testing exhibited no reactivity by medications in common use. The LOD for antigen was 4.7 ng/test in serum and 9.4 ng/test in whole blood. Quantitative reverse transcription-polymerase chain reaction testing of nonhuman primate samples determined the range to be equivalent to 3.0 × 10^5 to 9.0 × 10^8 genomes/mL. The analytical validation presented here contributed to the ReEBOV RDT being the first antigen-based assay to receive FDA and World Health Organization emergency use authorization for this EVD outbreak, in February 2015. © The Author 2016. Published by Oxford University Press for the Infectious Diseases Society of America. All rights reserved. For permissions, e-mail journals.permissions@oup.com.

  16. A wall interference assessment/correction system

    NASA Technical Reports Server (NTRS)

    Lo, Ching F.; Ulbrich, N.; Sickles, W. L.; Qian, Cathy X.

    1992-01-01

    A Wall Signature method, the Hackett method, has been selected to be adapted for the 12-ft Wind Tunnel wall interference assessment/correction (WIAC) system in the present phase. This method uses limited measurements of the static pressure at the wall, in conjunction with the solid wall boundary condition, to determine the strength and distribution of singularities representing the test article. The singularities are used in turn to estimate wall interference at the model location. The Wall Signature method will be formulated for application to the unique geometry of the 12-ft Tunnel. The development and implementation of a working prototype will be completed, delivered, and documented with a software manual. The WIAC code will be validated by conducting numerically simulated experiments rather than actual wind tunnel experiments. The simulations will be used to generate both free-air and confined wind-tunnel flow fields for each of the test articles over a range of test configurations. Specifically, the pressure signature at the test section wall will be computed for the tunnel case to provide the simulated 'measured' data. These data will serve as the input for the WIAC method (the Wall Signature method). The performance of the WIAC method may then be evaluated by comparing the corrected parameters with those from the free-air simulation. Each set of wind tunnel/test article numerical simulations provides data to validate the WIAC method. A numerical wind tunnel test simulation has been initiated to validate the WIAC methods developed in the project. In the present reporting period, the blockage correction has been developed and implemented for a rectangular tunnel as well as the 12-ft Pressure Tunnel. An improved wall interference assessment and correction method for three-dimensional wind tunnel testing is presented in the appendix.
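The core step of the Wall Signature approach, recovering singularity strengths from wall static-pressure measurements, can be sketched as a linear least-squares fit. The geometry, singularity locations, and data below are invented for illustration (a 2-D source model, not the actual Hackett formulation).

```python
# Sketch: fit singularity strengths to a wall "signature" by least squares.
# Assumed model: 2-D point sources on the tunnel axis; the wall-induced
# axial perturbation velocity is linear in the source strengths.

import numpy as np

x_w = np.linspace(-2.0, 2.0, 9)  # axial stations of wall pressure taps (hypothetical)
h = 1.0                          # wall distance from the tunnel axis (hypothetical)

def source_u(x, strength, x0):
    """Axial perturbation velocity at the wall from a 2-D point source at x0."""
    dx = x - x0
    return strength * dx / (2 * np.pi * (dx**2 + h**2))

x0s = [-0.5, 0.0, 0.5]                       # assumed singularity locations
true_strengths = np.array([0.3, 1.0, -0.3])  # "test article" represented by sources/sinks

# Synthetic "measured" wall signature (as in the numerically simulated experiments)
signature = sum(source_u(x_w, q, x0) for q, x0 in zip(true_strengths, x0s))

# Linear least-squares solve for the strengths from the signature
A = np.column_stack([source_u(x_w, 1.0, x0) for x0 in x0s])
strengths, *_ = np.linalg.lstsq(A, signature, rcond=None)
```

With the strengths recovered, the same singularities can be evaluated at the model location to estimate the interference velocities, which is the correction step the abstract describes.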

  17. A Tissue Systems Pathology Assay for High-Risk Barrett's Esophagus.

    PubMed

    Critchley-Thorne, Rebecca J; Duits, Lucas C; Prichard, Jeffrey W; Davison, Jon M; Jobe, Blair A; Campbell, Bruce B; Zhang, Yi; Repa, Kathleen A; Reese, Lia M; Li, Jinhong; Diehl, David L; Jhala, Nirag C; Ginsberg, Gregory; DeMarshall, Maureen; Foxwell, Tyler; Zaidi, Ali H; Lansing Taylor, D; Rustgi, Anil K; Bergman, Jacques J G H M; Falk, Gary W

    2016-06-01

    Better methods are needed to predict risk of progression for Barrett's esophagus. We aimed to determine whether a tissue systems pathology approach could predict progression in patients with nondysplastic Barrett's esophagus, indefinite for dysplasia, or low-grade dysplasia. We performed a nested case-control study to develop and validate a test that predicts progression of Barrett's esophagus to high-grade dysplasia (HGD) or esophageal adenocarcinoma (EAC), based upon quantification of epithelial and stromal variables in baseline biopsies. Data were collected from Barrett's esophagus patients at four institutions. Patients who progressed to HGD or EAC in ≥1 year (n = 79) were matched with patients who did not progress (n = 287). Biopsies were assigned randomly to training or validation sets. Immunofluorescence analyses were performed for 14 biomarkers and quantitative biomarker and morphometric features were analyzed. Prognostic features were selected in the training set and combined into classifiers. The top-performing classifier was assessed in the validation set. A 3-tier, 15-feature classifier was selected in the training set and tested in the validation set. The classifier stratified patients into low-, intermediate-, and high-risk classes [HR, 9.42; 95% confidence interval, 4.6-19.24 (high-risk vs. low-risk); P < 0.0001]. It also provided independent prognostic information that outperformed predictions based on pathology analysis, segment length, age, sex, or p53 overexpression. We developed a tissue systems pathology test that better predicts risk of progression in Barrett's esophagus than clinicopathologic variables. The test has the potential to improve upon histologic analysis as an objective method to risk stratify Barrett's esophagus patients. Cancer Epidemiol Biomarkers Prev; 25(6); 958-68. ©2016 AACR. ©2016 American Association for Cancer Research.

  18. Development of hybrid fog detection algorithm (FDA) using satellite and ground observation data for nighttime

    NASA Astrophysics Data System (ADS)

    Kim, So-Hyeong; Han, Ji-Hae; Suh, Myoung-Seok

    2017-04-01

    In this study, we developed a hybrid fog detection algorithm (FDA) using AHI/Himawari-8 satellite and ground observation data for nighttime. To detect fog at nighttime, the Dual Channel Difference (DCD) method, based on the emissivity difference between SWIR and IR1, is most widely used. DCD is good at discriminating fog from other features (middle/high clouds, clear sea and land). However, it is difficult to distinguish fog from low clouds. In order to separate low clouds from the pixels that satisfy the fog thresholds in the DCD test, we conducted supplementary tests such as the normalized local standard deviation (NLSD) of BT11 and the difference between fog top temperature (BT11) and air temperature (Ta) from NWP data (SST from OSTIA data). These tests are based on the greater homogeneity of fog tops compared to low cloud tops and the similarity of fog top temperature to Ta (SST). Threshold values for the three tests were optimized through ROC analysis for the selected fog cases. In addition, considering the spatial continuity of fog, post-processing was performed to detect missed pixels, in particular at the edges of fog or for sub-pixel-size fog. The final fog detection results are presented as a fog probability (0-100%). Validation was conducted by comparing the fog detection probability with ground-observed visibility data from KMA. The validation results showed that POD and FAR ranged from 0.70-0.94 and 0.45-0.72, respectively. The quantitative validation and visual inspection indicate that the current FDA has a tendency to over-detect fog, so more work is needed to reduce the FAR. In the future, we will also validate sea fog using CALIPSO data.
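The POD and FAR scores used in the validation above come from a standard 2x2 contingency table between detections and ground observations. A minimal sketch, with hypothetical counts (not the study's data):

```python
# Sketch: Probability of Detection (POD) and False Alarm Ratio (FAR)
# from a binary fog-detection contingency table, as used when comparing
# detected fog pixels against ground visibility observations.

def pod_far(hits, misses, false_alarms):
    """POD and FAR from hit/miss/false-alarm counts."""
    pod = hits / (hits + misses)                 # fraction of observed fog events detected
    far = false_alarms / (hits + false_alarms)   # fraction of detections that were wrong
    return pod, far

# Hypothetical counts, chosen to fall inside the reported score ranges
pod, far = pod_far(hits=70, misses=30, false_alarms=60)
print(round(pod, 2), round(far, 2))  # 0.7 0.46
```

Reducing the FAR, as the abstract notes, means tightening the thresholds so that fewer non-fog pixels land in the false-alarm cell, usually at some cost in POD.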

  19. Validity of the Patient Health Questionnaire-9 to screen for depression in a high-HIV burden primary healthcare clinic in Johannesburg, South Africa.

    PubMed

    Cholera, R; Gaynes, B N; Pence, B W; Bassett, J; Qangule, N; Macphail, C; Bernhardt, S; Pettifor, A; Miller, W C

    2014-01-01

    Integration of depression screening into primary care may increase access to mental health services in sub-Saharan Africa, but this approach requires validated screening instruments. We sought to validate the Patient Health Questionnaire-9 (PHQ-9) as a depression screening tool at a high HIV-burden primary care clinic in Johannesburg, South Africa. We conducted a validation study of an interviewer-administered PHQ-9 among 397 patients. Sensitivity and specificity of the PHQ-9 were calculated with the Mini International Neuropsychiatric Interview (MINI) as the reference standard; receiver operating characteristic (ROC) curve analyses were performed. The prevalence of depression was 11.8%. One-third of participants tested positive for HIV. HIV-infected patients were more likely to be depressed (15%) than uninfected patients (9%; p=0.08). Using the standard cutoff score of ≥10, the PHQ-9 had a sensitivity of 78.7% (95% CI: 64.3-89.3) and specificity of 83.4% (95% CI: 79.1-87.2). The area under the ROC curve was 0.88 (95% CI: 0.83-0.92). Test performance did not vary by HIV status or language. In sensitivity analyses, reference test bias associated with the MINI appeared unlikely. We were unable to conduct qualitative work to adapt the PHQ-9 to this cultural context. This is the first validation study of the PHQ-9 in a primary care clinic in sub-Saharan Africa. It highlights the potential for using primary care as an access point for identifying depressive symptoms during routine HIV testing. The PHQ-9 showed reasonable accuracy in classifying cases of depression, was easily implemented by lay health workers, and is a useful screening tool in this setting. Copyright © 2014 Elsevier B.V. All rights reserved.
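The sensitivity and specificity figures reported above are computed by dichotomizing the PHQ-9 score at the cutoff (≥10) and comparing against the MINI reference diagnosis. A minimal sketch with made-up scores and labels (not the study's data):

```python
# Sketch: sensitivity and specificity of a screening cutoff versus a
# gold-standard reference diagnosis, as in the PHQ-9 vs. MINI comparison.

def sens_spec(scores, reference, cutoff=10):
    """Sensitivity and specificity of `score >= cutoff` against boolean reference labels."""
    tp = sum(s >= cutoff and r for s, r in zip(scores, reference))
    fn = sum(s < cutoff and r for s, r in zip(scores, reference))
    tn = sum(s < cutoff and not r for s, r in zip(scores, reference))
    fp = sum(s >= cutoff and not r for s, r in zip(scores, reference))
    return tp / (tp + fn), tn / (tn + fp)

# Hypothetical PHQ-9 scores and MINI diagnoses for 8 patients
scores    = [12, 4, 15, 9, 10, 3, 8, 11]
depressed = [True, False, True, True, True, False, False, False]
sens, spec = sens_spec(scores, depressed)
print(sens, spec)  # 0.75 0.75
```

Sweeping the cutoff over all possible scores and plotting sensitivity against (1 - specificity) yields the ROC curve whose area (0.88 in the study) summarizes overall discrimination.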

  20. Application of a framework to assess the usefulness of alternative sepsis criteria

    PubMed Central

    Seymour, Christopher W.; Coopersmith, Craig M.; Deutschman, Clifford S; Gesten, Foster; Klompas, Michael; Levy, Mitchell; Martin, Gregory S.; Osborn, Tiffany M.; Rhee, Chanu; Warren, David; Watson, R. Scott; Angus, Derek C.

    2016-01-01

    The current definition for sepsis is life-threatening, acute organ dysfunction secondary to a dysregulated host response to infection. Criteria to operationalize this definition can be judged by 6 domains of usefulness (reliability; content, construct, and criterion validity; measurement burden; and timeliness). The relative importance of these 6 domains depends on the intended purpose for the criteria (clinical care, basic and clinical research, surveillance, or quality improvement (QI) and audit). For example, criteria for clinical care should have high content and construct validity, timeliness, and low measurement burden to facilitate prompt care. Criteria for surveillance or QI/audit place greater emphasis on reliability across individuals and sites and lower emphasis on timeliness. Criteria for clinical trials require timeliness to ensure prompt enrollment and reasonable reliability but can tolerate high measurement burden. Basic research also tolerates high measurement burden and may not need stability over time. In an illustrative case study, we compared examples of criteria designed for clinical care, surveillance, and QI/audit among 396,241 patients admitted to 12 academic and community hospitals in an integrated health system. Case rates differed 4-fold and mortality 3-fold. Predictably, clinical care criteria, which emphasized timeliness and low burden and therefore used vital signs and routine laboratory tests, had the highest case identification with the lowest mortality. QI/audit criteria, which emphasized reliability and criterion validity, used discharge information and had the lowest case identification with the highest mortality. Using this framework to identify the purpose and apply domains of usefulness can help with the evaluation of existing sepsis diagnostic criteria and provide a roadmap for future work. PMID:26901560
