Sample records for validation study design

  1. Design Characteristics Influence Performance of Clinical Prediction Rules in Validation: A Meta-Epidemiological Study.

    PubMed

    Ban, Jong-Wook; Emparanza, José Ignacio; Urreta, Iratxe; Burls, Amanda

    2016-01-01

    Many new clinical prediction rules are derived and validated, but the design and reporting quality of clinical prediction research has been less than optimal. We aimed to assess whether design characteristics of validation studies were associated with the overestimation of clinical prediction rules' performance. We also aimed to evaluate whether validation studies clearly reported important methodological characteristics. Electronic databases were searched for systematic reviews of clinical prediction rule studies published between 2006 and 2010. Data were extracted from the eligible validation studies included in the systematic reviews. A meta-analytic meta-epidemiological approach was used to assess the influence of design characteristics on predictive performance. From each validation study, it was assessed whether 7 design and 7 reporting characteristics were properly described. A total of 287 validation studies of clinical prediction rules were collected from 15 systematic reviews (31 meta-analyses). Validation studies using a case-control design produced a summary diagnostic odds ratio (DOR) 2.2 times (95% CI: 1.2–4.3) larger than validation studies using cohort or unclear designs. When differential verification was used, the summary DOR was overestimated by twofold (95% CI: 1.2–3.1) compared to complete, partial, and unclear verification. The summary relative DOR (RDOR) of validation studies with inadequate sample size was 1.9 (95% CI: 1.2–3.1) compared to studies with adequate sample size. Study site, reliability, and the clinical prediction rule were adequately described in 10.1%, 9.4%, and 7.0% of validation studies, respectively. Validation studies with design shortcomings may overestimate the performance of clinical prediction rules. The quality of reporting among studies validating clinical prediction rules needs to be improved.
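    As a hedged aside (not the authors' meta-analytic model), the arithmetic behind a relative diagnostic odds ratio can be sketched as follows; the 2x2 tables, the continuity correction, and the normal-approximation confidence interval are illustrative assumptions only.

```python
import math

def dor(tp, fp, fn, tn):
    """Diagnostic odds ratio from a 2x2 table (0.5 continuity correction)."""
    tp, fp, fn, tn = (x + 0.5 for x in (tp, fp, fn, tn))
    return (tp * tn) / (fp * fn)

def log_dor_se(tp, fp, fn, tn):
    """Large-sample standard error of log(DOR)."""
    return math.sqrt(sum(1.0 / (x + 0.5) for x in (tp, fp, fn, tn)))

# Hypothetical 2x2 tables (tp, fp, fn, tn) for a case-control and a cohort validation study.
cc = (45, 10, 5, 40)
co = (80, 60, 20, 140)

rdor = dor(*cc) / dor(*co)  # relative DOR: case-control vs. cohort
se = math.sqrt(log_dor_se(*cc) ** 2 + log_dor_se(*co) ** 2)
lo, hi = (math.exp(math.log(rdor) + z * se) for z in (-1.96, 1.96))
print(f"RDOR = {rdor:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```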

  2. Design Characteristics Influence Performance of Clinical Prediction Rules in Validation: A Meta-Epidemiological Study

    PubMed Central

    Ban, Jong-Wook; Emparanza, José Ignacio; Urreta, Iratxe; Burls, Amanda

    2016-01-01

    Background Many new clinical prediction rules are derived and validated, but the design and reporting quality of clinical prediction research has been less than optimal. We aimed to assess whether design characteristics of validation studies were associated with the overestimation of clinical prediction rules’ performance. We also aimed to evaluate whether validation studies clearly reported important methodological characteristics. Methods Electronic databases were searched for systematic reviews of clinical prediction rule studies published between 2006 and 2010. Data were extracted from the eligible validation studies included in the systematic reviews. A meta-analytic meta-epidemiological approach was used to assess the influence of design characteristics on predictive performance. From each validation study, it was assessed whether 7 design and 7 reporting characteristics were properly described. Results A total of 287 validation studies of clinical prediction rules were collected from 15 systematic reviews (31 meta-analyses). Validation studies using a case-control design produced a summary diagnostic odds ratio (DOR) 2.2 times (95% CI: 1.2–4.3) larger than validation studies using cohort or unclear designs. When differential verification was used, the summary DOR was overestimated by twofold (95% CI: 1.2–3.1) compared to complete, partial, and unclear verification. The summary relative DOR (RDOR) of validation studies with inadequate sample size was 1.9 (95% CI: 1.2–3.1) compared to studies with adequate sample size. Study site, reliability, and the clinical prediction rule were adequately described in 10.1%, 9.4%, and 7.0% of validation studies, respectively. Conclusion Validation studies with design shortcomings may overestimate the performance of clinical prediction rules. The quality of reporting among studies validating clinical prediction rules needs to be improved. PMID:26730980

  3. Excavator Design Validation

    NASA Technical Reports Server (NTRS)

    Pholsiri, Chalongrath; English, James; Seberino, Charles; Lim, Yi-Je

    2010-01-01

    The Excavator Design Validation tool verifies excavator designs by automatically generating control systems and modeling their performance in an accurate simulation of their expected environment. Part of this software design includes interfacing with human operations that can be included in simulation-based studies and validation. This is essential for assessing productivity, versatility, and reliability. This software combines automatic control system generation from CAD (computer-aided design) models, rapid validation of complex mechanism designs, and detailed models of the environment including soil, dust, temperature, remote supervision, and communication latency to create a system of high value. Unique algorithms have been created for controlling and simulating complex robotic mechanisms automatically from just a CAD description. These algorithms are implemented as a commercial cross-platform C++ software toolkit that is configurable using the Extensible Markup Language (XML). The algorithms work with virtually any mobile robotic mechanisms using module descriptions that adhere to the XML standard. In addition, high-fidelity, real-time physics-based simulation algorithms have also been developed that include models of internal forces and the forces produced when a mechanism interacts with the outside world. This capability is combined with an innovative organization for simulation algorithms, new regolith simulation methods, and a unique control and study architecture to make powerful tools with the potential to transform the way NASA verifies and compares excavator designs. Energid's Actin software has been leveraged for this design validation. The architecture includes parametric and Monte Carlo studies tailored for validation of excavator designs and their control by remote human operators. It also includes the ability to interface with third-party software and human-input devices. Two types of simulation models have been adapted: high-fidelity discrete

  4. Practical Aspects of Designing and Conducting Validation Studies Involving Multi-study Trials.

    PubMed

    Coecke, Sandra; Bernasconi, Camilla; Bowe, Gerard; Bostroem, Ann-Charlotte; Burton, Julien; Cole, Thomas; Fortaner, Salvador; Gouliarmou, Varvara; Gray, Andrew; Griesinger, Claudius; Louhimies, Susanna; Gyves, Emilio Mendoza-de; Joossens, Elisabeth; Prinz, Maurits-Jan; Milcamps, Anne; Parissis, Nicholaos; Wilk-Zasadna, Iwona; Barroso, João; Desprez, Bertrand; Langezaal, Ingrid; Liska, Roman; Morath, Siegfried; Reina, Vittorio; Zorzoli, Chiara; Zuang, Valérie

    This chapter focuses on practical aspects of conducting prospective in vitro validation studies, in particular by laboratories that are members of the European Union Network of Laboratories for the Validation of Alternative Methods (EU-NETVAL), which is coordinated by the EU Reference Laboratory for Alternatives to Animal Testing (EURL ECVAM). Prospective validation studies involving EU-NETVAL, comprising a multi-study trial involving several laboratories or "test facilities", typically consist of two main steps: (1) the design of the validation study by EURL ECVAM and (2) the execution of the multi-study trial by a number of qualified laboratories within EU-NETVAL, coordinated and supported by EURL ECVAM. The approach adopted in the conduct of these validation studies adheres to the principles described in the OECD Guidance Document on the Validation and International Acceptance of new or updated test methods for Hazard Assessment No. 34 (OECD 2005). The context and scope of conducting prospective in vitro validation studies is dealt with in Chap. 4. Here we focus mainly on the processes followed to carry out a prospective validation of in vitro methods involving different laboratories with the ultimate aim of generating a dataset that can support a decision in relation to the possible development of an international test guideline (e.g. by the OECD) or the establishment of performance standards.

  5. Quantification of construction waste prevented by BIM-based design validation: Case studies in South Korea.

    PubMed

    Won, Jongsung; Cheng, Jack C P; Lee, Ghang

    2016-03-01

    Waste generated in construction and demolition processes comprised around 50% of the solid waste in South Korea in 2013. Many cases show that design validation based on building information modeling (BIM) is an effective means to reduce the amount of construction waste, since construction waste is mainly generated due to improper design and unexpected changes in the design and construction phases. However, the amount of construction waste that could be avoided by adopting BIM-based design validation has been unknown. This paper aims to estimate the amount of construction waste prevented by a BIM-based design validation process based on the amount of construction waste that might be generated due to design errors. Two project cases in South Korea were studied in this paper, with 381 and 136 design errors detected, respectively, during the BIM-based design validation. Each design error was categorized according to its cause and the likelihood of detection before construction. The case studies show that BIM-based design validation could prevent 4.3–15.2% of the construction waste that might have been generated without using BIM. Copyright © 2015 Elsevier Ltd. All rights reserved.
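    A purely illustrative, hedged sketch of the kind of bookkeeping such a quantification implies: each detected design error is weighted by the waste it would cause and by the likelihood it would go undetected without BIM. The categories, waste quantities, and probabilities below are hypothetical assumptions, not the paper's data or model.

```python
# Hypothetical design errors detected during BIM-based design validation.
design_errors = [
    # waste if built as drawn (tonnes); likelihood the error escapes non-BIM review
    {"cause": "illogical design",        "waste_t": 12.0, "p_undetected": 0.8},
    {"cause": "discrepancy in drawings", "waste_t":  7.5, "p_undetected": 0.6},
    {"cause": "missing information",     "waste_t":  3.2, "p_undetected": 0.4},
]

total_project_waste_t = 450.0  # assumed baseline construction waste without BIM validation

# Expected waste prevented = sum over errors of (waste caused) x (probability it slips through).
prevented = sum(e["waste_t"] * e["p_undetected"] for e in design_errors)
share = 100.0 * prevented / total_project_waste_t
print(f"Estimated waste prevented: {prevented:.1f} t ({share:.1f}% of baseline)")
```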

  6. Quality Rating and Improvement System (QRIS) Validation Study Designs. CEELO FastFacts

    ERIC Educational Resources Information Center

    Schilder, D.

    2013-01-01

    In this "Fast Facts," a state has received Race to the Top Early Learning Challenge funds and is seeking information to inform the design of the Quality Rating and Improvement System (QRIS) validation study. The Center on Enhancing Early Learning Outcomes (CEELO) responds that according to Resnick (2012), validation of a QRIS is an…

  7. A Simulation Study of Threats to Validity in Quasi-Experimental Designs: Interrelationship between Design, Measurement, and Analysis.

    PubMed

    Holgado-Tello, Fco P; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A

    2016-01-01

    The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between the Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between both the models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity.
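    The following minimal simulation is not the authors' SEM models; under assumed data it only illustrates the range-restriction effect the abstract mentions: selecting on one variable attenuates the correlation that an analysis in the restricted (quasi-experimental) group can recover.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho = 5000, 0.6

# Latent "true" scores and an outcome correlated with them.
x = rng.standard_normal(n)
y = rho * x + np.sqrt(1 - rho**2) * rng.standard_normal(n)

full_r = np.corrcoef(x, y)[0, 1]

# Range restriction: keep only cases above the median of x
# (e.g., a non-randomly selected quasi-experimental group).
keep = x > np.median(x)
restricted_r = np.corrcoef(x[keep], y[keep])[0, 1]

print(f"correlation, full sample:       {full_r:.2f}")
print(f"correlation, restricted sample: {restricted_r:.2f}")  # noticeably attenuated
```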

  8. A Simulation Study of Threats to Validity in Quasi-Experimental Designs: Interrelationship between Design, Measurement, and Analysis

    PubMed Central

    Holgado-Tello, Fco. P.; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A.

    2016-01-01

    The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between the Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between both the models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity. PMID:27378991

  9. Curriculum Design Orientations Preference Scale of Teachers: Validity and Reliability Study

    ERIC Educational Resources Information Center

    Bas, Gokhan

    2013-01-01

    The purpose of this study was to develop a valid and reliable scale for preferences of teachers in regard of their curriculum design orientations. Because there was no scale development study similar to this one in Turkey, it was considered as an urgent need to develop such a scale in the study. The sample of the research consisted of 300…

  10. The Validity and Precision of the Comparative Interrupted Time-Series Design: Three Within-Study Comparisons

    ERIC Educational Resources Information Center

    St. Clair, Travis; Hallberg, Kelly; Cook, Thomas D.

    2016-01-01

    We explore the conditions under which short, comparative interrupted time-series (CITS) designs represent valid alternatives to randomized experiments in educational evaluations. To do so, we conduct three within-study comparisons, each of which uses a unique data set to test the validity of the CITS design by comparing its causal estimates to…

  11. Regression Discontinuity and Beyond: Options for Studying External Validity in an Internally Valid Design

    ERIC Educational Resources Information Center

    Wing, Coady; Bello-Gomez, Ricardo A.

    2018-01-01

    Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower and usually different from the population of substantive interest in evaluation research. The disconnect between RDD population and the…

  12. Design, development, testing and validation of a Photonics Virtual Laboratory for the study of LEDs

    NASA Astrophysics Data System (ADS)

    Naranjo, Francisco L.; Martínez, Guadalupe; Pérez, Ángel L.; Pardo, Pedro J.

    2014-07-01

    This work presents the design, development, testing and validation of a Photonics Virtual Laboratory, highlighting the study of LEDs. The study was conducted from a conceptual, experimental and didactic standpoint, using e-learning and m-learning platforms. Specifically, teaching tools that help ensure that our students achieve meaningful learning have been developed. The scientific aspect, such as the study of LEDs, has been brought together with techniques for the generation and transfer of knowledge through the selection, hierarchization and structuring of information using concept maps. For the validation of the didactic materials developed, procedures with various assessment tools were used for the collection and processing of data, applied in the context of an experimental design. Additionally, a statistical analysis was performed to determine the validity of the materials developed. The assessment was designed to validate the contributions of the new materials over the traditional method of teaching, and to quantify the learning achieved by students, in order to draw conclusions that serve as a reference for their application in the teaching and learning processes, and to comprehensively validate the work carried out.

  13. Design and validation of instruments to measure knowledge.

    PubMed

    Elliott, T E; Regal, R R; Elliott, B A; Renier, C M

    2001-01-01

    Measuring health care providers' learning after they have participated in educational interventions that use experimental designs requires valid, reliable, and practical instruments. A literature review was conducted. In addition, experience gained from designing and validating instruments for measuring the effect of an educational intervention informed this process. The eight main steps for designing, validating, and testing the reliability of instruments for measuring learning outcomes are presented. The key considerations and rationale for this process are discussed. Methods for critiquing and adapting existent instruments and creating new ones are offered. This study may help other investigators in developing valid, reliable, and practical instruments for measuring the outcomes of educational activities.

  14. The Universal Design for Play Tool: Establishing Validity and Reliability

    ERIC Educational Resources Information Center

    Ruffino, Amy Goetz; Mistrett, Susan G.; Tomita, Machiko; Hajare, Poonam

    2006-01-01

    The Universal Design for Play (UDP) Tool is an instrument designed to evaluate the presence of universal design (UD) features in toys. This study evaluated its psychometric properties, including content validity, construct validity, and test-retest reliability. The UDP tool was designed to assist in selecting toys most appropriate for children…

  15. Recommendations of the VAC2VAC workshop on the design of multi-centre validation studies.

    PubMed

    Halder, Marlies; Depraetere, Hilde; Delannois, Frédérique; Akkermans, Arnoud; Behr-Gross, Marie-Emmanuelle; Bruysters, Martijn; Dierick, Jean-François; Jungbäck, Carmen; Kross, Imke; Metz, Bernard; Pennings, Jeroen; Rigsby, Peter; Riou, Patrice; Balks, Elisabeth; Dobly, Alexandre; Leroy, Odile; Stirling, Catrina

    2018-03-01

    Within the Innovative Medicines Initiative 2 (IMI 2) project VAC2VAC (Vaccine batch to vaccine batch comparison by consistency testing), a workshop has been organised to discuss ways of improving the design of multi-centre validation studies and use the data generated for product-specific validation purposes. Moreover, aspects of validation within the consistency approach context were addressed. This report summarises the discussions and outlines the conclusions and recommendations agreed on by the workshop participants. Copyright © 2018.

  16. 24 CFR 597.402 - Validation of designation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 3 2014-04-01 2013-04-01 true Validation of designation. 597.402 Section 597.402 Housing and Urban Development Regulations Relating to Housing and Urban Development... DESIGNATIONS Post-Designation Requirements § 597.402 Validation of designation. (a) Reevaluation of...

  17. 24 CFR 597.402 - Validation of designation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 3 2013-04-01 2013-04-01 false Validation of designation. 597.402 Section 597.402 Housing and Urban Development Regulations Relating to Housing and Urban Development... DESIGNATIONS Post-Designation Requirements § 597.402 Validation of designation. (a) Reevaluation of...

  18. 24 CFR 597.402 - Validation of designation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 3 2012-04-01 2012-04-01 false Validation of designation. 597.402 Section 597.402 Housing and Urban Development Regulations Relating to Housing and Urban Development... DESIGNATIONS Post-Designation Requirements § 597.402 Validation of designation. (a) Reevaluation of...

  19. 24 CFR 597.402 - Validation of designation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 3 2011-04-01 2010-04-01 true Validation of designation. 597.402 Section 597.402 Housing and Urban Development Regulations Relating to Housing and Urban Development... DESIGNATIONS Post-Designation Requirements § 597.402 Validation of designation. (a) Reevaluation of...

  20. Design for validation: An approach to systems validation

    NASA Technical Reports Server (NTRS)

    Carter, William C.; Dunham, Janet R.; Laprie, Jean-Claude; Williams, Thomas; Howden, William; Smith, Brian; Lewis, Carl M. (Editor)

    1989-01-01

    Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of ad hoc methods by many individuals. As the cost of the changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in future digital-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformation are presented. Basic working definitions of two pivotal ideas (validation and system life-cycle) are provided, and it is shown how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.

  1. Designing and Validating a Measure of Teacher Knowledge of Universal Design for Assessment (UDA)

    ERIC Educational Resources Information Center

    Jamgochian, Elisa Megan

    2010-01-01

    The primary purpose of this study was to design and validate a measure of teacher knowledge of Universal Design for Assessment (TK-UDA). Guided by a validity framework, a number of inferences, assumptions, and evidences supported this investigation. By addressing a series of research questions, evidence was garnered for the use of the measure to…

  2. 7 CFR 25.404 - Validation of designation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 1 2011-01-01 2011-01-01 false Validation of designation. 25.404 Section 25.404 Agriculture Office of the Secretary of Agriculture RURAL EMPOWERMENT ZONES AND ENTERPRISE COMMUNITIES Post-Designation Requirements § 25.404 Validation of designation. (a) Maintaining the principles of the program...

  3. 7 CFR 25.404 - Validation of designation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 1 2014-01-01 2014-01-01 false Validation of designation. 25.404 Section 25.404 Agriculture Office of the Secretary of Agriculture RURAL EMPOWERMENT ZONES AND ENTERPRISE COMMUNITIES Post-Designation Requirements § 25.404 Validation of designation. (a) Maintaining the principles of the program...

  4. 7 CFR 25.404 - Validation of designation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 1 2012-01-01 2012-01-01 false Validation of designation. 25.404 Section 25.404 Agriculture Office of the Secretary of Agriculture RURAL EMPOWERMENT ZONES AND ENTERPRISE COMMUNITIES Post-Designation Requirements § 25.404 Validation of designation. (a) Maintaining the principles of the program...

  5. 7 CFR 25.404 - Validation of designation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 1 2013-01-01 2013-01-01 false Validation of designation. 25.404 Section 25.404 Agriculture Office of the Secretary of Agriculture RURAL EMPOWERMENT ZONES AND ENTERPRISE COMMUNITIES Post-Designation Requirements § 25.404 Validation of designation. (a) Maintaining the principles of the program...

  6. 24 CFR 598.425 - Validation of designation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 3 2011-04-01 2010-04-01 true Validation of designation. 598.425 Section 598.425 Housing and Urban Development Regulations Relating to Housing and Urban Development...-Designation Requirements § 598.425 Validation of designation. (a) On the basis of the periodic progress...

  7. 24 CFR 598.425 - Validation of designation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 3 2012-04-01 2012-04-01 false Validation of designation. 598.425 Section 598.425 Housing and Urban Development Regulations Relating to Housing and Urban Development...-Designation Requirements § 598.425 Validation of designation. (a) On the basis of the periodic progress...

  8. 24 CFR 598.425 - Validation of designation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 3 2014-04-01 2013-04-01 true Validation of designation. 598.425 Section 598.425 Housing and Urban Development Regulations Relating to Housing and Urban Development...-Designation Requirements § 598.425 Validation of designation. (a) On the basis of the periodic progress...

  9. 24 CFR 598.425 - Validation of designation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 3 2013-04-01 2013-04-01 false Validation of designation. 598.425 Section 598.425 Housing and Urban Development Regulations Relating to Housing and Urban Development...-Designation Requirements § 598.425 Validation of designation. (a) On the basis of the periodic progress...

  10. Creating and validating GIS measures of urban design for health research.

    PubMed

    Purciel, Marnie; Neckerman, Kathryn M; Lovasi, Gina S; Quinn, James W; Weiss, Christopher; Bader, Michael D M; Ewing, Reid; Rundle, Andrew

    2009-12-01

    Studies relating urban design to health have been impeded by the unfeasibility of conducting field observations across large areas and the lack of validated objective measures of urban design. This study describes measures for five dimensions of urban design - imageability, enclosure, human scale, transparency, and complexity - created using public geographic information systems (GIS) data from the US Census and city and state government. GIS measures were validated for a sample of 588 New York City block faces using a well-documented field observation protocol. Correlations between GIS and observed measures ranged from 0.28 to 0.89. Results show valid urban design measures can be constructed from digital sources.
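    A hedged sketch of the kind of concurrent-validity check this record reports (correlating a GIS-derived score with a field-observation score for the same block faces); the data are simulated and the variable names are assumptions, not the study's measures.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical scores for one urban-design dimension on 200 block faces:
# a GIS-derived measure and the corresponding field-observation score.
gis_score = rng.normal(size=200)
observed = 0.7 * gis_score + 0.7 * rng.normal(size=200)

r, p_r = stats.pearsonr(gis_score, observed)
rho, p_rho = stats.spearmanr(gis_score, observed)
print(f"Pearson r = {r:.2f} (p = {p_r:.3g}); Spearman rho = {rho:.2f}")
```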

  11. Statistical methodology: II. Reliability and validity assessment in study design, Part B.

    PubMed

    Karras, D J

    1997-02-01

    Validity measures the correspondence between a test and other purported measures of the same or similar qualities. When a reference standard exists, a criterion-based validity coefficient can be calculated. If no such standard is available, the concepts of content and construct validity may be used, but quantitative analysis may not be possible. The Pearson and Spearman tests of correlation are often used to assess the correspondence between tests, but do not account for measurement biases and may yield misleading results. Techniques that measure intertest differences may be more meaningful in validity assessment, and the kappa statistic is useful for analyzing categorical variables. Questionnaires often can be designed to allow quantitative assessment of reliability and validity, although this may be difficult. Inclusion of homogeneous questions is necessary to assess reliability. Analysis is enhanced by using Likert scales or similar techniques that yield ordinal data. Validity assessment of questionnaires requires careful definition of the scope of the test and comparison with previously validated tools.
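    As a small illustration of the kappa statistic mentioned for categorical variables, a self-contained sketch with hypothetical ratings (not data from the article):

```python
from collections import Counter

def cohen_kappa(rater_a, rater_b):
    """Cohen's kappa: chance-corrected agreement between two categorical ratings."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    pa, pb = Counter(rater_a), Counter(rater_b)
    expected = sum(pa[c] * pb[c] for c in set(pa) | set(pb)) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical classifications of the same 12 cases by a test and a reference standard.
test = ["pos", "pos", "neg", "neg", "pos", "neg", "neg", "pos", "neg", "neg", "pos", "neg"]
ref  = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos", "neg", "neg", "pos", "neg"]
print(f"kappa = {cohen_kappa(test, ref):.2f}")
```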

  12. Continual Response Measurement: Design and Validation.

    ERIC Educational Resources Information Center

    Baggaley, Jon

    1987-01-01

    Discusses reliability and validity of continual response measurement (CRM), a computer-based measurement technique, and its use in social science research. Highlights include the importance of criterion-referencing the data, guidelines for designing studies using CRM, examples typifying their deductive and inductive functions, and a discussion of…

  13. Design validation and labeling comprehension study for a new epinephrine autoinjector.

    PubMed

    Edwards, Eric S; Edwards, Evan T; Gunn, Ronald; Patterson, Patricia; North, Robert

    2013-03-01

    To facilitate the correct use of epinephrine autoinjectors (EAIs) by patients and caregivers, a novel EAI (Auvi-Q) was designed to help minimize use-related hazards. To support validation of Auvi-Q final design and assess whether the instructions for use in the patient information leaflet (PIL) are effective in training participants on proper use of Auvi-Q. Healthy participants, 20 adult and 20 pediatric, were assessed for their ability to complete a simulated injection by following the Auvi-Q instructions for use. Participants relied only on the contents of the PIL and other labeling features (device labeling and its instructions for use, electronic voice instructions and visual prompts). The mean ± SD age of the adult and pediatric participants was 39.4 ± 11.6 and 10.9 ± 2.3 years, respectively. In total, 80% of adult and 35% of pediatric participants had prior experience with EAIs. All adults and 95% of pediatric participants completed a simulated injection on the first attempt; 1 pediatric participant required parental training and a second attempt. Three adult and 4 pediatric participants exhibited a noncritical issue while successfully completing the simulated injection. Most participants agreed that the injection steps were easy to follow and the PIL facilitated understanding on using Auvi-Q safely and effectively. The PIL and other labeling features were effective in communicating instructions for successful use of Auvi-Q. This study provided validation support for the final design and anticipated instructions for use of Auvi-Q. Copyright © 2013 American College of Allergy, Asthma & Immunology. Published by Elsevier Inc. All rights reserved.

  14. Creating and validating GIS measures of urban design for health research

    PubMed Central

    Purciel, Marnie; Neckerman, Kathryn M.; Lovasi, Gina S.; Quinn, James W.; Weiss, Christopher; Bader, Michael D.M.; Ewing, Reid; Rundle, Andrew

    2012-01-01

    Studies relating urban design to health have been impeded by the unfeasibility of conducting field observations across large areas and the lack of validated objective measures of urban design. This study describes measures for five dimensions of urban design – imageability, enclosure, human scale, transparency, and complexity – created using public geographic information systems (GIS) data from the US Census and city and state government. GIS measures were validated for a sample of 588 New York City block faces using a well-documented field observation protocol. Correlations between GIS and observed measures ranged from 0.28 to 0.89. Results show valid urban design measures can be constructed from digital sources. PMID:22956856

  15. Design and Implementation Content Validity Study: Development of an instrument for measuring Patient-Centered Communication

    PubMed Central

    Zamanzadeh, Vahid; Ghahramanian, Akram; Rassouli, Maryam; Abbaszadeh, Abbas; Alavi-Majd, Hamid; Nikanfar, Ali-Reza

    2015-01-01

    Introduction: The importance of content validity in instrument psychometrics and its relevance to reliability have made it an essential step in instrument development. This article attempts to give an overview of the content validity process and to explain the complexity of this process by introducing an example. Methods: We carried out a methodological study to examine the content validity of a patient-centered communication instrument through a two-step process (development and judgment). In the first step, domain determination, sampling (item generation) and instrument formation were performed; in the second step, the content validity ratio, content validity index and modified kappa statistic were computed. Suggestions of the expert panel and item impact scores were used to examine the instrument's face validity. Results: From a set of 188 items, the content validity process identified seven dimensions, including trust building (eight items), informational support (seven items), emotional support (five items), problem solving (seven items), patient activation (10 items), intimacy/friendship (six items) and spirituality strengthening (14 items). The content validity study revealed that this instrument enjoys an appropriate level of content validity. The overall content validity index of the instrument using the universal agreement approach was low; however, the instrument can be advocated with respect to the high number of content experts, which makes consensus difficult, and the high value of the S-CVI with the average approach, which was equal to 0.93. Conclusion: This article illustrates acceptable quantitative indices for the content validity of a new instrument and outlines them during the design and psychometric evaluation of a patient-centered communication measuring instrument. PMID:26161370
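    The indices named here (item-level CVI and scale-level CVI under the average versus universal-agreement approach) follow standard formulas; the sketch below uses hypothetical expert ratings, not the study's data.

```python
# Hypothetical relevance ratings (1-4 scale) from 10 content experts for 3 items.
ratings = {
    "item_1": [4, 4, 3, 4, 3, 4, 4, 3, 4, 4],
    "item_2": [4, 3, 2, 4, 3, 4, 3, 2, 4, 3],
    "item_3": [4, 4, 4, 4, 3, 4, 4, 4, 4, 3],
}

def i_cvi(item_ratings):
    """Item-level CVI: proportion of experts rating the item 3 or 4 (relevant)."""
    return sum(r >= 3 for r in item_ratings) / len(item_ratings)

i_cvis = {item: i_cvi(r) for item, r in ratings.items()}
s_cvi_ave = sum(i_cvis.values()) / len(i_cvis)                   # average approach
s_cvi_ua = sum(v == 1.0 for v in i_cvis.values()) / len(i_cvis)  # universal agreement

print(i_cvis)
print(f"S-CVI/Ave = {s_cvi_ave:.2f}, S-CVI/UA = {s_cvi_ua:.2f}")
```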

  16. A unified approach to validation, reliability, and education study design for surgical technical skills training.

    PubMed

    Sweet, Robert M; Hananel, David; Lawrenz, Frances

    2010-02-01

    To present modern educational psychology theory and apply these concepts to validity and reliability of surgical skills training and assessment. In a series of cross-disciplinary meetings, we applied a unified approach of behavioral science principles and theory to medical technical skills education given the recent advances in the theories in the field of behavioral psychology and statistics. While validation of the individual simulation tools is important, it is only one piece of a multimodal curriculum that in and of itself deserves examination and study. We propose concurrent validation throughout the design of simulation-based curriculum rather than once it is complete. We embrace the concept that validity and curriculum development are interdependent, ongoing processes that are never truly complete. Individual predictive, construct, content, and face validity aspects should not be considered separately but as interdependent and complementary toward an end application. Such an approach could help guide our acceptance and appropriate application of these exciting new training and assessment tools for technical skills training in medicine.

  17. A newly developed tool for classifying study designs in systematic reviews of interventions and exposures showed substantial reliability and validity.

    PubMed

    Seo, Hyun-Ju; Kim, Soo Young; Lee, Yoon Jae; Jang, Bo-Hyoung; Park, Ji-Eun; Sheen, Seung-Soo; Hahn, Seo Kyung

    2016-02-01

    To develop a study Design Algorithm for Medical Literature on Intervention (DAMI) and test its interrater reliability, construct validity, and ease of use. We developed and then revised the DAMI to include detailed instructions. To test the DAMI's reliability, we used a purposive sample of 134 primary, mainly nonrandomized studies. We then compared the study designs as classified by the original authors and through the DAMI. Unweighted kappa statistics were computed to test interrater reliability and construct validity based on the level of agreement between the original and DAMI classifications. Assessment time was also recorded to evaluate ease of use. The DAMI includes 13 study designs, including experimental and observational studies of interventions and exposures. Both the interrater reliability (unweighted kappa = 0.67; 95% CI [0.64-0.75]) and construct validity (unweighted kappa = 0.63, 95% CI [0.52-0.67]) were substantial. Mean classification time using the DAMI was 4.08 ± 2.44 minutes (range, 0.51-10.92). The DAMI showed substantial interrater reliability and construct validity. Furthermore, given its ease of use, it could be used to accurately classify medical literature for systematic reviews of interventions while minimizing disagreement between authors of such reviews. Copyright © 2016 Elsevier Inc. All rights reserved.

  18. Development and validation of a building design waste reduction model.

    PubMed

    Llatas, C; Osmani, M

    2016-10-01

    Reduction in construction waste is a pressing need in many countries. The design of building elements is considered a pivotal process to achieve waste reduction at source, which enables an informed prediction of their wastage reduction levels. However the lack of quantitative methods linking design strategies to waste reduction hinders designing out waste practice in building projects. Therefore, this paper addresses this knowledge gap through the design and validation of a Building Design Waste Reduction Strategies (Waste ReSt) model that aims to investigate the relationships between design variables and their impact on onsite waste reduction. The Waste ReSt model was validated in a real-world case study involving 20 residential buildings in Spain. The validation process comprises three stages. Firstly, design waste causes were analyzed. Secondly, design strategies were applied leading to several alternative low waste building elements. Finally, their potential source reduction levels were quantified and discussed within the context of the literature. The Waste ReSt model could serve as an instrumental tool to simulate designing out strategies in building projects. The knowledge provided by the model could help project stakeholders to better understand the correlation between the design process and waste sources and subsequently implement design practices for low-waste buildings. Copyright © 2016 Elsevier Ltd. All rights reserved.

  19. Effects of Newly Designed Hospital Buildings on Staff Perceptions: A Pre-Post Study to Validate Design Decisions.

    PubMed

    Schreuder, Eliane; van Heel, Liesbeth; Goedhart, Rien; Dusseldorp, Elise; Schraagen, Jan Maarten; Burdorf, Alex

    2015-01-01

    This study investigates effects of the newly built nonpatient-related buildings of a large university medical center on staff perceptions and whether the design objectives were achieved. The medical center is gradually renewing its hospital building area of 200,000 m². This redevelopment is carefully planned, and because lessons learned can guide design decisions of the next phase, the medical center is keen to evaluate the performance of the new buildings. A pre- and post-study with a control group was conducted. Prior to the move to the new buildings, an occupancy evaluation was carried out in the old setting (n = 729) (pre-study). After occupation of the new buildings, another occupancy evaluation (post-study) was carried out in the new setting (intervention group) and again in some old settings (control group) (n = 664). The occupancy evaluation consisted of an online survey that measured the perceived performance of different aspects of the building. Longitudinal multilevel analysis was used to compare the performance of the old buildings with the new buildings. Significant improvements were found in indoor climate, perceived safety, working environment, well-being, facilities, sustainability, and overall satisfaction. Commitment to the employer, working atmosphere, orientation, work performance, and knowledge sharing did not improve. The results were interpreted by relating them to specific design choices. We showed that it is possible to measure the performance improvements of a complex intervention, being a new building design, and to validate design decisions. A focused design process aiming for a safe, pleasant and sustainable building resulted in actual improvements in some of the related performance measures. © The Author(s) 2015.
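    A hedged sketch of a longitudinal multilevel (mixed-effects) comparison like the one described, using simulated pre/post ratings for intervention and control staff; variable names, the random-intercept structure, and effect sizes are assumptions, not the study's data or exact model.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(2)
n_staff = 200

# Hypothetical repeated measures: a satisfaction rating pre and post, for staff who
# moved to the new building (intervention) and staff who stayed (control).
df = pd.DataFrame({
    "staff_id": np.repeat(np.arange(n_staff), 2),
    "time": np.tile([0, 1], n_staff),                    # 0 = pre, 1 = post
    "moved": np.repeat(rng.integers(0, 2, n_staff), 2),  # 1 = moved to new building
})
df["satisfaction"] = 5 + 0.6 * df["time"] * df["moved"] + rng.normal(0, 1, len(df))

# Random intercept per staff member; the time x moved interaction is the effect of interest.
model = smf.mixedlm("satisfaction ~ time * moved", df, groups=df["staff_id"]).fit()
print(model.summary())
```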

  20. Design and validation of general biology learning program based on scientific inquiry skills

    NASA Astrophysics Data System (ADS)

    Cahyani, R.; Mardiana, D.; Noviantoro, N.

    2018-03-01

    Scientific inquiry is highly recommended for teaching science. The reality in schools and colleges is that many educators still have not implemented inquiry learning because of their lack of understanding. The study aims to 1) analyze students' difficulties in learning General Biology, 2) design a General Biology learning program based on multimedia-assisted scientific inquiry learning, and 3) validate the proposed design. The method used was Research and Development. The subjects of the study were 27 pre-service students of general elementary schools/Islamic elementary schools. The workflow of the program design includes identifying learning difficulties in General Biology, designing course programs, and designing instruments and assessment rubrics. The program design is made for four lecture sessions. Validation of all learning tools was performed by expert judges. The results showed that: 1) there are some problems identified in General Biology lectures; 2) the designed products include learning programs, multimedia characteristics, worksheet characteristics, and scientific attitudes; and 3) expert validation shows that all program designs are valid and can be used with minor revisions.

  1. Designing and validation of a yoga-based intervention for schizophrenia.

    PubMed

    Govindaraj, Ramajayam; Varambally, Shivarama; Sharma, Manjunath; Gangadhar, Bangalore Nanjundaiah

    2016-06-01

    Schizophrenia is a chronic mental illness which causes significant distress and dysfunction. Yoga has been found to be effective as an add-on therapy in schizophrenia. Modules of yoga used in previous studies were based on individual researcher's experience. This study aimed to develop and validate a specific generic yoga-based intervention module for patients with schizophrenia. The study was conducted at NIMHANS Integrated Centre for Yoga (NICY). A yoga module was designed based on traditional and contemporary yoga literature as well as published studies. The yoga module along with three case vignettes of adult patients with schizophrenia was sent to 10 yoga experts for their validation. Experts (n = 10) gave their opinion on the usefulness of a yoga module for patients with schizophrenia with some modifications. In total, 87% (13 of 15 items) of the items in the initial module were retained, with modification in the remainder as suggested by the experts. A specific yoga-based module for schizophrenia was designed and validated by experts. Further studies are needed to confirm efficacy and clinical utility of the module. Additional clinical validation is suggested.

  2. Automatic control system generation for robot design validation

    NASA Technical Reports Server (NTRS)

    Bacon, James A. (Inventor); English, James D. (Inventor)

    2012-01-01

    The specification and drawings present a new method, system, software product, and apparatus for generating a robotic validation system for a robot design. The robotic validation system for the robot design of a robotic system is automatically generated by converting the robot design into a generic robotic description using a predetermined format, then generating a control system from the generic robotic description, and finally updating robot design parameters of the robotic system with an analysis tool using both the generic robotic description and the control system.

  3. Seaworthy Quantum Key Distribution Design and Validation (SEAKEY)

    DTIC Science & Technology

    2015-05-27

    Report documentation fragment (only partially recoverable): project title "Seaworthy Quantum Key Distribution Design and Validation (SEAKEY)"; address: 10 Moulton Street, Cambridge, MA 02138; Kathryn Carson, Program Manager, Quantum Information Processing.

  4. The impact of registration accuracy on imaging validation study design: A novel statistical power calculation.

    PubMed

    Gibson, Eli; Fenster, Aaron; Ward, Aaron D

    2013-10-01

    Novel imaging modalities are pushing the boundaries of what is possible in medical imaging, but their signal properties are not always well understood. The evaluation of these novel imaging modalities is critical to achieving their research and clinical potential. Image registration of novel modalities to accepted reference standard modalities is an important part of characterizing the modalities and elucidating the effect of underlying focal disease on the imaging signal. The strengths of the conclusions drawn from these analyses are limited by statistical power. Based on the observation that in this context, statistical power depends in part on uncertainty arising from registration error, we derive a power calculation formula relating registration error, number of subjects, and the minimum detectable difference between normal and pathologic regions on imaging, for an imaging validation study design that accommodates signal correlations within image regions. Monte Carlo simulations were used to evaluate the derived models and test the strength of their assumptions, showing that the model yielded predictions of the power, the number of subjects, and the minimum detectable difference of simulated experiments accurate to within a maximum error of 1% when the assumptions of the derivation were met, and characterizing sensitivities of the model to violations of the assumptions. The use of these formulae is illustrated through a calculation of the number of subjects required for a case study, modeled closely after a prostate cancer imaging validation study currently taking place at our institution. The power calculation formulae address three central questions in the design of imaging validation studies: (1) What is the maximum acceptable registration error? (2) How many subjects are needed? (3) What is the minimum detectable difference between normal and pathologic image regions? Copyright © 2013 Elsevier B.V. All rights reserved.
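    The paper's power formula is not reproduced here; as a hedged sketch of the underlying idea, registration error can be treated as extra variance feeding a standard two-sample sample-size approximation. The specific formula, parameters, and numbers below are illustrative assumptions.

```python
from scipy.stats import norm

def n_per_group(delta, sigma_signal, sigma_registration, alpha=0.05, power=0.8):
    """
    Subjects per group to detect a mean imaging-signal difference `delta` between
    normal and pathologic regions, when registration error adds variance on top of
    the signal variance. A simplified two-sample normal approximation, not the
    paper's derivation.
    """
    sigma2 = sigma_signal**2 + sigma_registration**2
    z = norm.ppf(1 - alpha / 2) + norm.ppf(power)
    return 2 * z**2 * sigma2 / delta**2

# Larger registration-induced variability drives up the required sample size.
for reg_err in (0.0, 0.5, 1.0):  # registration-induced SD, in signal units
    n = n_per_group(delta=1.0, sigma_signal=1.0, sigma_registration=reg_err)
    print(f"registration SD = {reg_err:.1f} -> n per group ~ {n:.0f}")
```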

  5. Design and validation of a comprehensive fecal incontinence questionnaire.

    PubMed

    Macmillan, Alexandra K; Merrie, Arend E H; Marshall, Roger J; Parry, Bryan R

    2008-10-01

    Fecal incontinence can have a profound effect on quality of life. Its prevalence remains uncertain because of stigma, lack of consistent definition, and dearth of validated measures. This study was designed to develop a valid clinical and epidemiologic questionnaire, building on current literature and expertise. Patients and experts undertook face validity testing. Construct validity, criterion validity, and test-retest reliability was undertaken. Construct validity comprised factor analysis and internal consistency of the quality of life scale. The validity of known groups was tested against 77 control subjects by using regression models. Questionnaire results were compared with a stool diary for criterion validity. Test-retest reliability was calculated from repeated questionnaire completion. The questionnaire achieved good face validity. It was completed by 104 patients. The quality of life scale had four underlying traits (factor analysis) and high internal consistency (overall Cronbach alpha = 0.97). Patients and control subjects answered the questionnaire significantly differently (P < 0.01) in known-groups validity testing. Criterion validity assessment found mean differences close to zero. Median reliability for the whole questionnaire was 0.79 (range, 0.35-1). This questionnaire compares favorably with other available instruments, although the interpretation of stool consistency requires further research. Its sensitivity to treatment still needs to be investigated.
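    A minimal sketch of the internal-consistency statistic reported (Cronbach's alpha), computed on hypothetical item scores rather than the questionnaire's data:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point responses from 6 respondents to 4 quality-of-life items.
scores = [
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
]
print(f"alpha = {cronbach_alpha(scores):.2f}")
```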

  6. 24 CFR 597.402 - Validation of designation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... URBAN DEVELOPMENT COMMUNITY FACILITIES URBAN EMPOWERMENT ZONES AND ENTERPRISE COMMUNITIES: ROUND ONE... eligibility for and the validity of the designation of any Empowerment Zone or Enterprise Community. Determinations of whether any designated Empowerment Zone or Enterprise Community remains in good standing shall...

  7. 42 CFR 71.3 - Designation of yellow fever vaccination centers; Validation stamps.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...; Validation stamps. 71.3 Section 71.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN... Designation of yellow fever vaccination centers; Validation stamps. (a) Designation of yellow fever... health department, may revoke designation. (b) Validation stamps. International Certificates of...

  8. 42 CFR 71.3 - Designation of yellow fever vaccination centers; Validation stamps.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...; Validation stamps. 71.3 Section 71.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN... Designation of yellow fever vaccination centers; Validation stamps. (a) Designation of yellow fever... health department, may revoke designation. (b) Validation stamps. International Certificates of...

  9. 42 CFR 71.3 - Designation of yellow fever vaccination centers; Validation stamps.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...; Validation stamps. 71.3 Section 71.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN... Designation of yellow fever vaccination centers; Validation stamps. (a) Designation of yellow fever... health department, may revoke designation. (b) Validation stamps. International Certificates of...

  10. 42 CFR 71.3 - Designation of yellow fever vaccination centers; Validation stamps.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...; Validation stamps. 71.3 Section 71.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN... Designation of yellow fever vaccination centers; Validation stamps. (a) Designation of yellow fever... health department, may revoke designation. (b) Validation stamps. International Certificates of...

  11. Performance Ratings: Designs for Evaluating Their Validity and Accuracy.

    DTIC Science & Technology

    1986-07-01

    ratees with substantial validity and with little bias due to the method for rating. Convergent validity and discriminant validity account for approximately... The expanded research design suggests that purpose for the ratings has little influence on the multitrait-multimethod properties of the ratings... Convergent and discriminant validity again account for substantial differences in the ratings of performance. Little method bias is present; both methods of

  12. Using wound care algorithms: a content validation study.

    PubMed

    Beitz, J M; van Rijswijk, L

    1999-09-01

    Valid and reliable heuristic devices facilitating optimal wound care are lacking. The objectives of this study were to establish content validation data for a set of wound care algorithms, to identify their associated strengths and weaknesses, and to gain insight into the wound care decision-making process. Forty-four registered nurse wound care experts were surveyed and interviewed at national and regional educational meetings. Using a cross-sectional study design and an 83-item, 4-point Likert-type scale, this purposive sample was asked to quantify the degree of validity of the algorithms' decisions and components. Participants' comments were tape-recorded, transcribed, and themes were derived. On a scale of 1 to 4, the mean score of the entire instrument was 3.47 (SD +/- 0.87), the instrument's Content Validity Index was 0.86, and the individual Content Validity Index of 34 of 44 participants was > 0.8. Item scores were lower for those related to packing deep wounds (P < .001). No other significant differences were observed. Qualitative data analysis revealed themes of difficulty associated with wound assessment and care issues, that is, the absence of valid and reliable definitions. The wound care algorithms studied proved valid. However, the lack of valid and reliable wound assessment and care definitions hinders optimal use of these instruments. Further research documenting their clinical use is warranted. Research-based practice recommendations should direct the development of future valid and reliable algorithms designed to help nurses provide optimal wound care.

  13. Bayesian cross-entropy methodology for optimal design of validation experiments

    NASA Astrophysics Data System (ADS)

    Jiang, X.; Mahadevan, S.

    2006-07-01

    An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
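    A heavily simplified sketch of the two ingredients named in this abstract: a cross-entropy distance between a (Gaussian-assumed) model-prediction distribution and an experimental-output distribution, minimized by plain simulated annealing over one input variable. The response model, bounds, and observation parameters are hypothetical, and the Bayesian updating step of the actual methodology is not shown.

```python
import math
import random

def gaussian_cross_entropy(mu_p, s_p, mu_q, s_q):
    """Cross entropy H(p, q) for two normal distributions (closed form)."""
    return 0.5 * math.log(2 * math.pi * s_q**2) + (s_p**2 + (mu_p - mu_q)**2) / (2 * s_q**2)

# Hypothetical response model: predicted output distribution as a function of the
# experiment's input setting x, and an assumed experimental-output distribution.
def predicted(x):
    return 2.0 * x - 1.0, 0.5 + 0.1 * abs(x)  # (mean, sd) of the model prediction at x

obs_mu, obs_sd = 3.0, 0.6

def objective(x):
    mu, sd = predicted(x)
    return gaussian_cross_entropy(mu, sd, obs_mu, obs_sd)

# Plain simulated annealing over a single input variable constrained to [0, 5].
random.seed(0)
x, best, temp = 2.5, None, 1.0
for step in range(2000):
    cand = min(5.0, max(0.0, x + random.gauss(0, 0.2)))
    delta = objective(cand) - objective(x)
    if delta < 0 or random.random() < math.exp(-delta / temp):
        x = cand
    if best is None or objective(x) < objective(best):
        best = x
    temp *= 0.998

print(f"selected input x = {best:.3f}, cross-entropy = {objective(best):.3f}")
```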

  14. 41 CFR 60-3.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... likely to affect validity differences; or that these factors are included in the design of the study and... construct validity is both an extensive and arduous effort involving a series of research studies, which... validity studies. 60-3.14 Section 60-3.14 Public Contracts and Property Management Other Provisions...

  15. 41 CFR 60-3.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... likely to affect validity differences; or that these factors are included in the design of the study and... construct validity is both an extensive and arduous effort involving a series of research studies, which... validity studies. 60-3.14 Section 60-3.14 Public Contracts and Property Management Other Provisions...

  16. 41 CFR 60-3.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... likely to affect validity differences; or that these factors are included in the design of the study and... construct validity is both an extensive and arduous effort involving a series of research studies, which... validity studies. 60-3.14 Section 60-3.14 Public Contracts and Property Management Other Provisions...

  17. 41 CFR 60-3.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... likely to affect validity differences; or that these factors are included in the design of the study and... construct validity is both an extensive and arduous effort involving a series of research studies, which... validity studies. 60-3.14 Section 60-3.14 Public Contracts and Property Management Other Provisions...

  18. 41 CFR 60-3.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... likely to affect validity differences; or that these factors are included in the design of the study and... construct validity is both an extensive and arduous effort involving a series of research studies, which... validity studies. 60-3.14 Section 60-3.14 Public Contracts and Property Management Other Provisions...

  19. Design and validity of a clinic-based case-control study on the molecular epidemiology of lymphoma

    PubMed Central

    Cerhan, James R; Fredericksen, Zachary S; Wang, Alice H; Habermann, Thomas M; Kay, Neil E; Macon, William R; Cunningham, Julie M; Shanafelt, Tait D; Ansell, Stephen M; Call, Timothy G; Witzig, Thomas E; Slager, Susan L; Liebow, Mark

    2011-01-01

    We present the design features and implementation of a clinic-based case-control study on the molecular epidemiology of lymphoma conducted at the Mayo Clinic (Rochester, Minnesota, USA), and then assess the internal and external validity of the study. Cases were newly diagnosed lymphoma patients from Minnesota, Iowa and Wisconsin seen at Mayo and controls were patients from the same region without lymphoma who had a pre-scheduled general medical examination, frequency matched on age, sex and residence. Overall response rates were 67% for cases and 70% for controls; response rates were lower for cases and controls over age 70 years, cases with more aggressive disease, and controls from the local area, although absolute differences were modest. Cases and controls were well-balanced on age, sex, and residence characteristics. Demographic and disease characteristics of NHL cases were similar to population-based cancer registry data. Control distributions were similar to population-based data on lifestyle factors and minor allele frequencies of over 500 SNPs, although smoking rates were slightly lower. Associations with NHL in the Mayo study for smoking, alcohol use, family history of lymphoma, autoimmune disease, asthma, eczema, body mass index, and single nucleotide polymorphisms in TNF (rs1800629), LTA (rs909253), and IL10 (rs1800896) were at a magnitude consistent with estimates from pooled studies in InterLymph, with history of any allergy the only directly discordant result in the Mayo study. These data suggest that this study should have strong internal and external validity. This framework may be useful to others who are designing a similar study. PMID:21686124

  20. Conceptual Design of a Hypervelocity Asteroid Intercept Vehicle (HAIV) Flight Validation Mission

    NASA Technical Reports Server (NTRS)

    Barbee, Brent W.; Wie, Bong; Steiner, Mark; Getzandanner, Kenneth

    2013-01-01

In this paper we present a detailed overview of the MDL study results and subsequent advances in the design of GNC algorithms for accurate terminal guidance during hypervelocity NEO intercept. The MDL study produced a conceptual configuration of the two-body HAIV and its subsystems; a mission scenario and trajectory design for a notional flight validation mission to a selected candidate target NEO; GNC results regarding the ability of the HAIV to reliably intercept small (50 m) NEOs at hypervelocity (typically greater than 10 km/s); candidate launch vehicle selection; a notional operations concept and cost estimate for the flight validation mission; and a list of topics to address during the remainder of our NIAC Phase II study.

  1. Design and validation of a microfluidic device for blood-brain barrier monitoring and transport studies

    NASA Astrophysics Data System (ADS)

    Ugolini, Giovanni Stefano; Occhetta, Paola; Saccani, Alessandra; Re, Francesca; Krol, Silke; Rasponi, Marco; Redaelli, Alberto

    2018-04-01

In vitro blood-brain barrier models are highly relevant for drug screening and drug development studies, due to the challenging task of understanding the transport mechanism of drug molecules through the blood-brain barrier towards the brain tissue. In this respect, microfluidics holds potential for providing microsystems that require low amounts of cells and reagents and can potentially be multiplexed to increase the ease and throughput of the drug screening process. We here describe the design, development and validation of a microfluidic device for endothelial blood-brain barrier cell transport studies. The device comprises two microstructured layers (top culture chamber and bottom collection chamber) sandwiching a porous membrane for the cell culture. Microstructured layers include two pairs of physical electrodes, embedded into the device layers by geometrically defined guiding channels with computationally optimized positions. These electrodes allow the use of commercial electrical measurement systems for monitoring trans-endothelial electrical resistance (TEER). We employed the designed device to perform a preliminary assessment of endothelial barrier formation with murine brain endothelial cells (Br-bEnd5). Results demonstrate that cellular junctional complexes effectively form in the cultures (expression of VE-Cadherin and ZO-1) and that the TEER monitoring system effectively detects an increase in resistance of the cultured cell layers, indicative of tight junction formation. Finally, we validate the use of the described microsystem for drug transport studies, demonstrating that Br-bEnd5 cells significantly hinder the transport of molecules (40 kDa and 4 kDa dextran) from the top culture chamber to the bottom collection chamber.
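
    For orientation, the TEER value reported from such electrode pairs is conventionally the measured trans-membrane resistance minus the resistance of a blank (cell-free) device, normalized by the culture area so values are comparable across chip geometries; the numbers below are purely illustrative and are not taken from the paper.

        def teer_ohm_cm2(r_measured_ohm, r_blank_ohm, culture_area_cm2):
            # unit-area barrier resistance in ohm * cm^2
            return (r_measured_ohm - r_blank_ohm) * culture_area_cm2

        # assumed example values for a small microfluidic culture chamber
        print(teer_ohm_cm2(r_measured_ohm=1850.0, r_blank_ohm=1600.0, culture_area_cm2=0.1))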

  2. Analytical procedure validation and the quality by design paradigm.

    PubMed

    Rozet, Eric; Lebrun, Pierre; Michiels, Jean-François; Sondag, Perceval; Scherder, Tara; Boulanger, Bruno

    2015-01-01

Since the adoption of the ICH Q8 document concerning the development of pharmaceutical processes following a quality by design (QbD) approach, there have been many discussions on the opportunity for analytical procedure developments to follow a similar approach. While development and optimization of analytical procedures following QbD principles have been largely discussed and described, the place of analytical procedure validation in this framework has not been clarified. This article aims to show that analytical procedure validation is fully integrated into the QbD paradigm and is an essential step in developing analytical procedures that are effectively fit for purpose. Adequate statistical methodologies also have a role to play, such as design of experiments, statistical modeling, and probabilistic statements. The outcome of analytical procedure validation is also an analytical procedure design space, and from it, a control strategy can be set.

  3. Human factors engineering and design validation for the redesigned follitropin alfa pen injection device.

    PubMed

    Mahony, Mary C; Patterson, Patricia; Hayward, Brooke; North, Robert; Green, Dawne

    2015-05-01

To demonstrate, using human factors engineering (HFE), that a redesigned, pre-filled, ready-to-use, pre-assembled follitropin alfa pen can be used to administer prescribed follitropin alfa doses safely and accurately. A failure modes and effects analysis identified hazards and harms potentially caused by use errors; risk-control measures were implemented to ensure acceptable device use risk management. Participants were women with infertility, their significant others, and fertility nurse (FN) professionals. Preliminary testing included 'Instructions for Use' (IFU) and pre-validation studies. Validation studies used simulated injections in a representative use environment; participants received prior training on pen use. User performance in preliminary testing led to IFU revisions and a change to the outer needle cap design to mitigate needle-stick potential. In the first validation study (49 users, 343 simulated injections), in the FN group, one observed critical use error resulted in a device design modification and another in an IFU change. A second validation study tested the mitigation strategies; previously reported use errors were not repeated. Through an iterative process involving a series of studies, modifications were made to the pen design and IFU. Simulated-use testing demonstrated that the redesigned pen can be used to administer follitropin alfa effectively and safely.

  4. An extended protocol for usability validation of medical devices: Research design and reference model.

    PubMed

    Schmettow, Martin; Schnittker, Raphaela; Schraagen, Jan Maarten

    2017-05-01

This paper proposes and demonstrates an extended protocol for usability validation testing of medical devices. A review of currently used methods for the usability evaluation of medical devices revealed two main shortcomings: first, a lack of methods to closely trace interaction sequences and derive performance measures; second, a prevailing focus on cross-sectional validation studies, which ignores the issues of learnability and training. The U.S. Food and Drug Administration's recent proposal for a validation testing protocol for medical devices is then extended to address these shortcomings: (1) a novel process measure, 'normative path deviations', is introduced that is useful for both quantitative and qualitative usability studies, and (2) a longitudinal, completely within-subject study design is presented that assesses learnability and training effects and allows analysis of the diversity of users. A reference regression model is introduced to analyze data from this and similar studies, drawing upon generalized linear mixed-effects models and a Bayesian estimation approach. The extended protocol is implemented and demonstrated in a study comparing a novel syringe infusion pump prototype to an existing design with a sample of 25 healthcare professionals. Strong performance differences between designs were observed with a variety of usability measures, as well as varying training-on-the-job effects. We discuss our findings with regard to validation testing guidelines, reflect on the extensions and discuss the perspectives they add to the validation process. Copyright © 2017 Elsevier Inc. All rights reserved.
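
    A minimal sketch of the kind of longitudinal, within-subject analysis such a protocol calls for is shown below: a linear mixed-effects model of log task time with a random intercept per participant, fitted on simulated data. All variable names, effect sizes, and noise levels are assumptions for illustration and are not the study's data or its reference model.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(1)
        rows = []
        for pid in range(25):                          # 25 simulated healthcare professionals
            u = rng.normal(0.0, 0.2)                   # random intercept per participant
            for design in ("existing", "prototype"):
                for session in (1, 2, 3):              # repeated sessions capture learnability
                    log_time = (3.0 - 0.3 * (design == "prototype")
                                - 0.1 * (session - 1) + u + rng.normal(0.0, 0.15))
                    rows.append({"participant": pid, "design": design,
                                 "session": session, "log_time": log_time})
        df = pd.DataFrame(rows)

        model = smf.mixedlm("log_time ~ design * session", df, groups=df["participant"])
        print(model.fit().summary())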

  5. 29 CFR 1607.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... in the design of the study and their effects identified. (5) Statistical relationships. The degree of...; or such factors should be included in the design of the study and their effects identified. (f... arduous effort involving a series of research studies, which include criterion related validity studies...

  6. 29 CFR 1607.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... in the design of the study and their effects identified. (5) Statistical relationships. The degree of...; or such factors should be included in the design of the study and their effects identified. (f... arduous effort involving a series of research studies, which include criterion related validity studies...

  7. 29 CFR 1607.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... in the design of the study and their effects identified. (5) Statistical relationships. The degree of...; or such factors should be included in the design of the study and their effects identified. (f... arduous effort involving a series of research studies, which include criterion related validity studies...

  8. 29 CFR 1607.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... in the design of the study and their effects identified. (5) Statistical relationships. The degree of...; or such factors should be included in the design of the study and their effects identified. (f... arduous effort involving a series of research studies, which include criterion related validity studies...

  9. 29 CFR 1607.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... in the design of the study and their effects identified. (5) Statistical relationships. The degree of...; or such factors should be included in the design of the study and their effects identified. (f... arduous effort involving a series of research studies, which include criterion related validity studies...

  10. The Validity and Precision of the Comparative Interrupted Time Series Design and the Difference-in-Difference Design in Educational Evaluation

    ERIC Educational Resources Information Center

    Somers, Marie-Andrée; Zhu, Pei; Jacob, Robin; Bloom, Howard

    2013-01-01

    In this paper, we examine the validity and precision of two nonexperimental study designs (NXDs) that can be used in educational evaluation: the comparative interrupted time series (CITS) design and the difference-in-difference (DD) design. In a CITS design, program impacts are evaluated by looking at whether the treatment group deviates from its…

  11. Identification and Validation of ESP Teacher Competencies: A Research Design

    ERIC Educational Resources Information Center

    Venkatraman, G.; Prema, P.

    2013-01-01

    The paper presents the research design used for identifying and validating a set of competencies required of ESP (English for Specific Purposes) teachers. The identification of the competencies and the three-stage validation process are also discussed. The observation of classes of ESP teachers for field-testing the validated competencies and…

  12. A Design to Improve Internal Validity of Assessments of Teaching Demonstrations

    ERIC Educational Resources Information Center

    Bartsch, Robert A.; Engelhardt Bittner, Wendy M.; Moreno, Jesse E., Jr.

    2008-01-01

    Internal validity is important in assessing teaching demonstrations both for one's knowledge and for quality assessment demanded by outside sources. We describe a method to improve the internal validity of assessments of teaching demonstrations: a 1-group pretest-posttest design with alternative forms. This design is often more practical and…

  13. System design from mission definition to flight validation

    NASA Technical Reports Server (NTRS)

    Batill, S. M.

    1992-01-01

Considerations related to the engineering systems design process and an approach taken to introduce undergraduate students to that process are presented. The paper includes details on a particular capstone design course. This course is a team-oriented aircraft design project which requires the students to participate in many phases of the system design process, from mission definition to validation of their design through flight testing. To accomplish this in a single course requires special types of flight vehicles. Relatively small-scale, remotely piloted vehicles have provided the class of aircraft considered in this course.

  14. Seaworthy Quantum Key Distribution Design and Validation (SEAKEY)

    DTIC Science & Technology

    2016-03-10

Contractor Address: 10 Moulton Street, Cambridge, MA 02138. Title of the Project: Seaworthy Quantum Key Distribution Design and Validation (SEAKEY). ... We have continued work calculating the key rates achievable parametrically with receiver performance. In addition, we describe the initial designs

  15. Design and validation of an automated hydrostatic weighing system.

    PubMed

    McClenaghan, B A; Rocchio, L

    1986-08-01

    The purpose of this study was to design and evaluate the validity of an automated technique to assess body density using a computerized hydrostatic weighing system. An existing hydrostatic tank was modified and interfaced with a microcomputer equipped with an analog-to-digital converter. Software was designed to input variables, control the collection of data, calculate selected measurements, and provide a summary of the results of each session. Validity of the data obtained utilizing the automated hydrostatic weighing system was estimated by: evaluating the reliability of the transducer/computer interface to measure objects of known underwater weight; comparing the data against a criterion measure; and determining inter-session subject reliability. Values obtained from the automated system were found to be highly correlated with known underwater weights (r = 0.99, SEE = 0.0060 kg). Data concurrently obtained utilizing the automated system and a manual chart recorder were also found to be highly correlated (r = 0.99, SEE = 0.0606 kg). Inter-session subject reliability was determined utilizing data collected on subjects (N = 16) tested on two occasions approximately 24 h apart. Correlations revealed high relationships between measures of underwater weight (r = 0.99, SEE = 0.1399 kg) and body density (r = 0.98, SEE = 0.00244 g X cm-1). Results indicate that a computerized hydrostatic weighing system is a valid and reliable method for determining underwater weight.
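
    A minimal worked example of the standard hydrostatic-weighing computation such a system automates is given below: body density from dry and underwater mass, followed by percent body fat via the Siri equation. The water-density lookup, residual lung volume, and gastrointestinal gas allowance are assumed illustrative values, not parameters from the study.

        def water_density_kg_per_l(temp_c):
            # abbreviated lookup; a full temperature table would be used in practice
            table = {34: 0.99437, 35: 0.99406, 36: 0.99371}
            return table.get(round(temp_c), 0.99406)

        def body_density_g_cc(dry_mass_kg, underwater_mass_kg, water_temp_c=35.0,
                              residual_volume_l=1.2, gi_gas_l=0.1):
            displaced_volume_l = (dry_mass_kg - underwater_mass_kg) / water_density_kg_per_l(water_temp_c)
            return dry_mass_kg / (displaced_volume_l - residual_volume_l - gi_gas_l)

        def percent_fat_siri(density_g_cc):
            return 495.0 / density_g_cc - 450.0

        db = body_density_g_cc(dry_mass_kg=75.0, underwater_mass_kg=3.2)
        print(f"body density = {db:.4f} g/cc, body fat = {percent_fat_siri(db):.1f}%")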

  16. The Validity of the Comparative Interrupted Time Series Design for Evaluating the Effect of School-Level Interventions.

    PubMed

    Jacob, Robin; Somers, Marie-Andree; Zhu, Pei; Bloom, Howard

    2016-06-01

    In this article, we examine whether a well-executed comparative interrupted time series (CITS) design can produce valid inferences about the effectiveness of a school-level intervention. This article also explores the trade-off between bias reduction and precision loss across different methods of selecting comparison groups for the CITS design and assesses whether choosing matched comparison schools based only on preintervention test scores is sufficient to produce internally valid impact estimates. We conduct a validation study of the CITS design based on the federal Reading First program as implemented in one state using results from a regression discontinuity design as a causal benchmark. Our results contribute to the growing base of evidence regarding the validity of nonexperimental designs. We demonstrate that the CITS design can, in our example, produce internally valid estimates of program impacts when multiple years of preintervention outcome data (test scores in the present case) are available and when a set of reasonable criteria are used to select comparison organizations (schools in the present case). © The Author(s) 2016.
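
    As a sketch of the CITS impact model being validated (simulated data, with assumed effect sizes and variable names), the program impact can be read off the treat:post interaction after allowing both groups a common baseline trend:

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(2)
        years = np.arange(2005, 2013)
        cutoff = 2009                                   # assumed intervention year
        rows = []
        for school in range(60):
            treat = int(school < 30)                    # 30 program schools, 30 comparison schools
            base = rng.normal(250.0, 5.0)
            for y in years:
                post = int(y >= cutoff)
                score = (base + 1.0 * (y - years[0])    # common secular trend
                         + 3.0 * treat * post           # true program impact (assumed)
                         + rng.normal(0.0, 2.0))
                rows.append({"school": school, "year": int(y - years[0]),
                             "treat": treat, "post": post, "score": score})
        df = pd.DataFrame(rows)

        fit = smf.ols("score ~ year + post + treat + treat:year + treat:post", df).fit(
            cov_type="cluster", cov_kwds={"groups": df["school"]})
        print(fit.params["treat:post"])                 # estimated program impact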

  17. Towards a full integration of optimization and validation phases: An analytical-quality-by-design approach.

    PubMed

    Hubert, C; Houari, S; Rozet, E; Lebrun, P; Hubert, Ph

    2015-05-22

    When using an analytical method, defining an analytical target profile (ATP) focused on quantitative performance represents a key input, and this will drive the method development process. In this context, two case studies were selected in order to demonstrate the potential of a quality-by-design (QbD) strategy when applied to two specific phases of the method lifecycle: the pre-validation study and the validation step. The first case study focused on the improvement of a liquid chromatography (LC) coupled to mass spectrometry (MS) stability-indicating method by the means of the QbD concept. The design of experiments (DoE) conducted during the optimization step (i.e. determination of the qualitative design space (DS)) was performed a posteriori. Additional experiments were performed in order to simultaneously conduct the pre-validation study to assist in defining the DoE to be conducted during the formal validation step. This predicted protocol was compared to the one used during the formal validation. A second case study based on the LC/MS-MS determination of glucosamine and galactosamine in human plasma was considered in order to illustrate an innovative strategy allowing the QbD methodology to be incorporated during the validation phase. An operational space, defined by the qualitative DS, was considered during the validation process rather than a specific set of working conditions as conventionally performed. Results of all the validation parameters conventionally studied were compared to those obtained with this innovative approach for glucosamine and galactosamine. Using this strategy, qualitative and quantitative information were obtained. Consequently, an analyst using this approach would be able to select with great confidence several working conditions within the operational space rather than a given condition for the routine use of the method. This innovative strategy combines both a learning process and a thorough assessment of the risk involved

  18. Design of a Hydro-Turbine Blade for Acoustic and Performance Validation Studies

    NASA Astrophysics Data System (ADS)

    Johnson, E.; Barone, M.

    2011-12-01

To meet growing global energy demands, governments and industry have recently begun to focus on marine hydrokinetic (MHK) devices as an additional form of power generation. Water turbines have become a popular design choice since they are able to leverage experience from the decades-old wind industry in the hope of decreasing time-to-market. However, the difference in environments poses challenges that need to be addressed. In particular, little research has addressed the acoustic effects of common aerofoils in a marine setting. This has a potential impact on marine life and may also cause early fatigue by exciting new structural modes. An initial blade design is presented, which has been used to begin characterization of any structural and acoustic issues that may arise from a direct one-to-one swap of wind technologies into MHK devices. The blade was optimized for performance using blade-element momentum theory while requiring that it not exceed the allowable stress under a specified extreme operating design condition. This limited the maximum power generated, while ensuring a realizable blade. A stress analysis within ANSYS was performed to validate the structural integrity of the design. Additionally, predictions of the radiated noise from the MHK rotor will be made using boundary element modeling based on flow results from ANSYS CFX, a computational fluid dynamics (CFD) code. The FEA and CFD results demonstrate good agreement with the expected design. Determining a range for the anticipated noise produced by an MHK turbine provides a look at the environmental impact these devices will have. Future efforts will focus on the design constraints noise generation places on MHK devices.
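
    For reference, a bare-bones blade-element momentum iteration of the sort used for such performance optimization is sketched below for a single blade element, with assumed constant lift and drag coefficients and no tip-loss or high-induction corrections; all numbers are illustrative and are not the study's design values.

        import numpy as np

        def bem_element(r, B=3, chord=0.12, V=2.0, omega=4.0,
                        cl=1.0, cd=0.015, tol=1e-6, max_iter=200):
            sigma = B * chord / (2.0 * np.pi * r)          # local solidity
            a, ap = 0.3, 0.0                               # axial / tangential induction factors
            for _ in range(max_iter):
                phi = np.arctan2((1.0 - a) * V, (1.0 + ap) * omega * r)   # inflow angle
                cn = cl * np.cos(phi) + cd * np.sin(phi)
                ct = cl * np.sin(phi) - cd * np.cos(phi)
                a_new = 1.0 / (4.0 * np.sin(phi) ** 2 / (sigma * cn) + 1.0)
                ap_new = 1.0 / (4.0 * np.sin(phi) * np.cos(phi) / (sigma * ct) - 1.0)
                if abs(a_new - a) < tol and abs(ap_new - ap) < tol:
                    break
                a, ap = a_new, ap_new
            return a, ap, float(np.degrees(phi))

        print(bem_element(r=1.0))   # induction factors and inflow angle at one radial station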

  19. Design, Development, and Validation of Learning Objects

    ERIC Educational Resources Information Center

    Nugent, Gwen; Soh, Leen-Kiat; Samal, Ashok

    2006-01-01

    A learning object is a small, stand-alone, mediated content resource that can be reused in multiple instructional contexts. In this article, we describe our approach to design, develop, and validate Shareable Content Object Reference Model (SCORM) compliant learning objects for undergraduate computer science education. We discuss the advantages of…

  20. Design and validation of a multimethod assessment of metacognition and study of the effectiveness of Metacognitive Interventions

    NASA Astrophysics Data System (ADS)

    Sandi-Urena, Guillermo Santiago

The central role of metacognition in learning and problem solving, in general and in chemistry in particular, has been substantially demonstrated and has raised pronounced interest in its study. However, the intrinsic difficulties associated with the inner processes of such a non-overt behavior have delayed the development of appropriate assessment instruments. The first research question addressed in this work originates from this observation: Is it possible to reliably assess metacognition use in chemistry problem solving? This study presents the development, validation, and application of a multimethod instrument for the assessment of metacognition use in chemistry problem solving. This multimethod is composed of two independent methods used at different times in relation to the task performance: (1) the prospective Metacognitive Activities Inventory, MCA-I; and (2) the concurrent Interactive MultiMedia Exercises software package, IMMEX. This work also includes the design, development, and validation of the MCA-I; evidence is discussed that supports its robustness, reliability and validity. Even though IMMEX is well-developed, its utilization as a metacognition assessment tool is novel and explained within this work. Among the benefits of utilizing IMMEX are: the automation of concurrent evidence collection and analysis, which allows for the participation of large cohorts; the elimination of subjective assessments; and the collection of data in the absence of observers, which presumably favors a more realistic deployment of skills by the participants. The independent instruments produced convergent results and the multimethod designed was proven to be reliable, robust and valid for the intended purpose. The second guiding question refers to the development of metacognition: Can regulatory metacognition use be enhanced by learning environments? Two interventions were utilized to explore this inquiry: a Collaborative Metacognitive Intervention and a Cooperative

  1. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation

    PubMed Central

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-01-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis. PMID:20011037

  2. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    PubMed

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis.
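
    The simulation logic can be sketched roughly as follows (Python, with an assumed beta-binomial cluster model, decision rule, and prevalence thresholds chosen only for illustration): draw correlated clusters, apply the LQAS decision rule, and tally how often the classification disagrees with the true prevalence category.

        import numpy as np

        rng = np.random.default_rng(3)

        def classification_error(prev, icc=0.05, n_clusters=67, cluster_size=3,
                                 decision_rule=25, upper_threshold=0.15, n_sims=5000):
            # beta-binomial clusters: cluster prevalences ~ Beta(a, b) with mean `prev`
            # and intracluster correlation icc = 1 / (a + b + 1)
            k = (1.0 - icc) / icc
            a, b = prev * k, (1.0 - prev) * k
            errors = 0
            for _ in range(n_sims):
                p_clusters = rng.beta(a, b, size=n_clusters)
                cases = rng.binomial(cluster_size, p_clusters).sum()
                classified_high = cases > decision_rule        # LQAS decision rule
                truly_high = prev >= upper_threshold
                errors += classified_high != truly_high
            return errors / n_sims

        for prev in (0.05, 0.10, 0.15, 0.20):
            print(prev, classification_error(prev))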

  3. Validation of a Low-Thrust Mission Design Tool Using Operational Navigation Software

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Knittel, Jeremy M.; Williams, Ken; Stanbridge, Dale; Ellison, Donald H.

    2017-01-01

    Design of flight trajectories for missions employing solar electric propulsion requires a suitably high-fidelity design tool. In this work, the Evolutionary Mission Trajectory Generator (EMTG) is presented as a medium-high fidelity design tool that is suitable for mission proposals. EMTG is validated against the high-heritage deep-space navigation tool MIRAGE, demonstrating both the accuracy of EMTG's model and an operational mission design and navigation procedure using both tools. The validation is performed using a benchmark mission to the Jupiter Trojans.

  4. Development and Validation of a Hypersonic Vehicle Design Tool Based On Waverider Design Technique

    NASA Astrophysics Data System (ADS)

    Dasque, Nastassja

Methodologies for a tool capable of assisting design initiatives for practical waverider-based hypersonic vehicles were developed and validated. The design space for vehicle surfaces was formed using an algorithm that coupled directional derivatives with the conservation laws to determine a flow field defined by a set of post-shock streamlines. The design space is used to construct an ideal waverider with a sharp leading edge. A blunting method was developed to modify the ideal shapes to a more practical geometry for real-world application. Empirical and analytical relations were then systematically applied to the resulting geometries to determine local pressure, skin friction and heat flux. For the ideal portion of the geometry, flat-plate relations for compressible flow were applied. For the blunted portion of the geometry, modified Newtonian theory, Fay-Riddell theory and modified Reynolds analogy were applied. The design and analysis methods were validated using analytical solutions as well as empirical and numerical data. The streamline solution for the flow field generation technique was compared with a Taylor-Maccoll solution and showed very good agreement. The relationship between the local Stanton number and skin friction coefficient with local Reynolds number along the ideal portion of the body showed good agreement with experimental data. In addition, an automated grid generation routine was formulated to construct a structured mesh around resulting geometries in preparation for computational fluid dynamics (CFD) analysis. The overall analysis of the waverider body using the tool was then compared to CFD studies. The CFD flow field showed very good agreement with the design space. However, the predicted distribution of surface properties was close to the CFD results but did not show the same level of agreement.
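
    A small sketch of the modified Newtonian surface-pressure estimate applied to the blunted portion of such a geometry is given below: Cp = Cp_max sin^2(theta), with Cp_max set from the stagnation pressure behind a normal shock (Rayleigh pitot formula). The freestream Mach number here is an arbitrary illustrative value, not one from the study.

        import numpy as np

        def cp_max(mach, gamma=1.4):
            # stagnation-to-freestream pressure ratio behind a normal shock (Rayleigh pitot)
            p02_pinf = (((gamma + 1.0) ** 2 * mach ** 2
                         / (4.0 * gamma * mach ** 2 - 2.0 * (gamma - 1.0))) ** (gamma / (gamma - 1.0))
                        * (1.0 - gamma + 2.0 * gamma * mach ** 2) / (gamma + 1.0))
            return 2.0 / (gamma * mach ** 2) * (p02_pinf - 1.0)

        def cp_modified_newtonian(theta_deg, mach):
            # theta is the local surface inclination relative to the freestream
            return cp_max(mach) * np.sin(np.radians(theta_deg)) ** 2

        for theta in (90, 60, 30, 10):
            print(theta, round(cp_modified_newtonian(theta, mach=8.0), 3))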

  5. A Validation of Object-Oriented Design Metrics

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Briand, Lionel; Melo, Walcelio L.

    1995-01-01

This paper presents the results of a study conducted at the University of Maryland in which we experimentally investigated the suite of Object-Oriented (OO) design metrics introduced by [Chidamber and Kemerer, 1994]. In order to do this, we assessed these metrics as predictors of fault-prone classes. This study is complementary to [Li and Henry, 1993], where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on experimental results, the advantages and drawbacks of these OO metrics are discussed and suggestions for improvement are provided. Several of Chidamber and Kemerer's OO metrics appear to be adequate to predict class fault-proneness during the early phases of the life-cycle. We also showed that they are, on our data set, better predictors than "traditional" code metrics, which can only be collected at a later phase of the software development process.
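
    The validation analysis described (design metrics as predictors of fault-prone classes) amounts to a classification model along the following lines; the data here are synthetic and the coefficients assumed, purely to show the shape of the analysis rather than reproduce the study.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(4)
        n = 180
        df = pd.DataFrame({
            "wmc": rng.poisson(10, n),       # weighted methods per class
            "dit": rng.integers(0, 6, n),    # depth of inheritance tree
            "cbo": rng.poisson(5, n),        # coupling between object classes
            "rfc": rng.poisson(20, n),       # response for a class
        })
        logit = -3.0 + 0.08 * df["wmc"] + 0.15 * df["cbo"] + 0.03 * df["rfc"]
        df["faulty"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

        fit = smf.logit("faulty ~ wmc + dit + cbo + rfc", df).fit(disp=0)
        print(fit.summary())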

  6. Seaworthy Quantum Key Distribution Design and Validation (SEAKEY)

    DTIC Science & Technology

    2015-11-12

... polarization control, and the CV state and the LO state are separated at a polarizing beam splitter. The CV state is delayed relative to the LO state, and ... splitter or loss imperfections. We have identified a number of risks associated with implementing this design. The two most critical risks are: ...

  7. On validation of the rain climatic zone designations for Nigeria

    NASA Astrophysics Data System (ADS)

    Obiyemi, O. O.; Ibiyemi, T. S.; Ojo, J. S.

    2017-07-01

    In this paper, validation of rain climatic zone classifications for Nigeria is presented based on global radio-climatic models by the International Telecommunication Union-Radiocommunication (ITU-R) and Crane. Rain rate estimates deduced from several ground-based measurements and those earlier estimated from the precipitation index on the Tropical Rain Measurement Mission (TRMM) were employed for the validation exercise. Although earlier classifications indicated that Nigeria falls into zones P, Q, N, and K for the ITU-R designations, and zones E and H for Crane's climatic zone designations, the results however confirmed that the rain climatic zones across Nigeria can only be classified into four, namely P, Q, M, and N for the ITU-R designations, while the designations by Crane exhibited only three zones, namely E, G, and H. The ITU-R classification was found to be more suitable for planning microwave and millimeter wave links across Nigeria. The research outcomes are vital in boosting the confidence level of system designers in using the ITU-R designations as presented in the map developed for the rain zone designations for estimating the attenuation induced by rain along satellite and terrestrial microwave links over Nigeria.

  8. Design-validation of a hand exoskeleton using musculoskeletal modeling.

    PubMed

    Hansen, Clint; Gosselin, Florian; Ben Mansour, Khalil; Devos, Pierre; Marin, Frederic

    2018-04-01

Exoskeletons are progressively reaching homes and workplaces, allowing interaction with virtual environments, remote control of robots, or assisting human operators in carrying heavy loads. Their design is however still a challenge as these robots, being mechanically linked to the operators who wear them, have to meet ergonomic constraints besides usual robotic requirements in terms of workspace, speed, or efforts. They have in particular to fit the anthropometry and mobility of their users. This traditionally results in numerous prototypes which are progressively fitted to each individual person. In this paper, we propose instead to validate the design of a hand exoskeleton in a fully digital environment, without the need for a physical prototype. The purpose of this study is thus to examine whether finger kinematics are altered when using a given hand exoskeleton. Therefore, user-specific musculoskeletal models were created and driven by a motion capture system to evaluate the fingers' joint kinematics when performing two industry-related tasks. The kinematic chain of the exoskeleton was added to the musculoskeletal models and its compliance with the hand movements was evaluated. Our results show that the proposed exoskeleton design does not influence fingers' joint angles, the coefficient of determination between the model with and without exoskeleton being consistently high (mean R² = 0.93) and the nRMSE consistently low (mean nRMSE = 5.42°). These results are promising and this approach combining musculoskeletal and robotic modeling driven by motion capture data could be a key factor in the ergonomics validation of the design of orthotic devices and exoskeletons prior to manufacturing. Copyright © 2017 Elsevier Ltd. All rights reserved.
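
    The two agreement metrics quoted (coefficient of determination and normalized RMSE between joint-angle trajectories with and without the exoskeleton) can be computed as in the sketch below. The traces are synthetic, and the normalization convention (percent of the reference range versus raw degrees) is an assumption, since conventions vary between studies.

        import numpy as np

        def r_squared(reference, candidate):
            ss_res = np.sum((reference - candidate) ** 2)
            ss_tot = np.sum((reference - reference.mean()) ** 2)
            return 1.0 - ss_res / ss_tot

        def rmse_deg(reference, candidate):
            return float(np.sqrt(np.mean((reference - candidate) ** 2)))

        def nrmse_percent(reference, candidate):
            # RMSE as a percentage of the reference range (one common convention)
            return 100.0 * rmse_deg(reference, candidate) / (reference.max() - reference.min())

        t = np.linspace(0.0, 2.0, 200)
        angle_free = 45.0 + 30.0 * np.sin(2.0 * np.pi * t)   # joint angle without exoskeleton (deg)
        angle_exo = angle_free + np.random.default_rng(5).normal(0.0, 2.0, t.size)

        print(r_squared(angle_free, angle_exo), rmse_deg(angle_free, angle_exo),
              nrmse_percent(angle_free, angle_exo))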

  9. Regression discontinuity was a valid design for dichotomous outcomes in three randomized trials.

    PubMed

    van Leeuwen, Nikki; Lingsma, Hester F; Mooijaart, Simon P; Nieboer, Daan; Trompet, Stella; Steyerberg, Ewout W

    2018-06-01

    Regression discontinuity (RD) is a quasi-experimental design that may provide valid estimates of treatment effects in case of continuous outcomes. We aimed to evaluate validity and precision in the RD design for dichotomous outcomes. We performed validation studies in three large randomized controlled trials (RCTs) (Corticosteroid Randomization After Significant Head injury [CRASH], the Global Utilization of Streptokinase and Tissue Plasminogen Activator for Occluded Coronary Arteries [GUSTO], and PROspective Study of Pravastatin in elderly individuals at risk of vascular disease [PROSPER]). To mimic the RD design, we selected patients above and below a cutoff (e.g., age 75 years) randomized to treatment and control, respectively. Adjusted logistic regression models using restricted cubic splines (RCS) and polynomials and local logistic regression models estimated the odds ratio (OR) for treatment, with 95% confidence intervals (CIs) to indicate precision. In CRASH, treatment increased mortality with OR 1.22 [95% CI 1.06-1.40] in the RCT. The RD estimates were 1.42 (0.94-2.16) and 1.13 (0.90-1.40) with RCS adjustment and local regression, respectively. In GUSTO, treatment reduced mortality (OR 0.83 [0.72-0.95]), with more extreme estimates in the RD analysis (OR 0.57 [0.35; 0.92] and 0.67 [0.51; 0.86]). In PROSPER, similar RCT and RD estimates were found, again with less precision in RD designs. We conclude that the RD design provides similar but substantially less precise treatment effect estimates compared with an RCT, with local regression being the preferred method of analysis. Copyright © 2018 Elsevier Inc. All rights reserved.
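
    A compact sketch of the RD-style analysis described (simulated data, with an assumed cutoff and effect size): treatment is assigned by an age cutoff, and the treatment odds ratio is estimated from a logistic model that adjusts flexibly for the assignment variable with a spline.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        rng = np.random.default_rng(6)
        n = 4000
        age = rng.uniform(60.0, 90.0, n)
        treat = (age >= 75).astype(int)                   # assignment by the age cutoff
        logit = -1.5 + 0.04 * (age - 75.0) - 0.3 * treat  # assumed protective treatment effect
        death = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))
        df = pd.DataFrame({"age": age, "treat": treat, "death": death})

        fit = smf.logit("death ~ treat + bs(age, df=4)", df).fit(disp=0)
        print(np.exp(fit.params["treat"]))                # odds ratio for treatment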

  10. Relative validity of a semiquantitative food frequency questionnaire designed for schoolchildren in western Greece

    PubMed Central

    Roumelioti, Maria; Leotsinidis, Michalis

    2009-01-01

Background: The use of food frequency questionnaires (FFQs) has become increasingly important in epidemiologic studies. During the past few decades, a wide variety of nutritional studies have used the semiquantitative FFQ as a tool for assessing and evaluating dietary intake. One of the main concerns in a dietary analysis is the validity of the collected dietary data. Methods: This paper discusses several methodological and statistical issues related to the validation of a semiquantitative FFQ. This questionnaire was used to assess the nutritional habits of schoolchildren in western Greece. For validation purposes, we selected 200 schoolchildren and contacted their respective parents. We evaluated the relative validity of 400 FFQs (200 children's FFQs and 200 parents' FFQs). Results: The correlations between the children's and the parents' questionnaire responses showed that the questionnaire we designed was appropriate for fulfilling the purposes of our study and in ranking subjects according to food group intake. Conclusion: Our study shows that the semiquantitative FFQ provides a reasonably reliable measure of dietary intake and corroborates the relative validity of our questionnaire. PMID:19196469

  11. Bayesian Adaptive Trial Design for a Newly Validated Surrogate Endpoint

    PubMed Central

    Renfro, Lindsay A.; Carlin, Bradley P.; Sargent, Daniel J.

    2011-01-01

    Summary The evaluation of surrogate endpoints for primary use in future clinical trials is an increasingly important research area, due to demands for more efficient trials coupled with recent regulatory acceptance of some surrogates as ‘valid.’ However, little consideration has been given to how a trial which utilizes a newly-validated surrogate endpoint as its primary endpoint might be appropriately designed. We propose a novel Bayesian adaptive trial design that allows the new surrogate endpoint to play a dominant role in assessing the effect of an intervention, while remaining realistically cautious about its use. By incorporating multi-trial historical information on the validated relationship between the surrogate and clinical endpoints, then subsequently evaluating accumulating data against this relationship as the new trial progresses, we adaptively guard against an erroneous assessment of treatment based upon a truly invalid surrogate. When the joint outcomes in the new trial seem plausible given similar historical trials, we proceed with the surrogate endpoint as the primary endpoint, and do so adaptively–perhaps stopping the trial for early success or inferiority of the experimental treatment, or for futility. Otherwise, we discard the surrogate and switch adaptive determinations to the original primary endpoint. We use simulation to test the operating characteristics of this new design compared to a standard O’Brien-Fleming approach, as well as the ability of our design to discriminate trustworthy from untrustworthy surrogates in hypothetical future trials. Furthermore, we investigate possible benefits using patient-level data from 18 adjuvant therapy trials in colon cancer, where disease-free survival is considered a newly-validated surrogate endpoint for overall survival. PMID:21838811

  12. NDARC - NASA Design and Analysis of Rotorcraft Validation and Demonstration

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2010-01-01

    Validation and demonstration results from the development of the conceptual design tool NDARC (NASA Design and Analysis of Rotorcraft) are presented. The principal tasks of NDARC are to design a rotorcraft to satisfy specified design conditions and missions, and then analyze the performance of the aircraft for a set of off-design missions and point operating conditions. The aircraft chosen as NDARC development test cases are the UH-60A single main-rotor and tail-rotor helicopter, the CH-47D tandem helicopter, the XH-59A coaxial lift-offset helicopter, and the XV-15 tiltrotor. These aircraft were selected because flight performance data, a weight statement, detailed geometry information, and a correlated comprehensive analysis model are available for each. Validation consists of developing the NDARC models for these aircraft by using geometry and weight information, airframe wind tunnel test data, engine decks, rotor performance tests, and comprehensive analysis results; and then comparing the NDARC results for aircraft and component performance with flight test data. Based on the calibrated models, the capability of the code to size rotorcraft is explored.

  13. BioNetCAD: design, simulation and experimental validation of synthetic biochemical networks

    PubMed Central

    Rialle, Stéphanie; Felicori, Liza; Dias-Lopes, Camila; Pérès, Sabine; El Atia, Sanaâ; Thierry, Alain R.; Amar, Patrick; Molina, Franck

    2010-01-01

Motivation: Synthetic biology studies how to design and construct biological systems with functions that do not exist in nature. Biochemical networks, although easier to control, have been used less frequently than genetic networks as a base to build a synthetic system. To date, no clear engineering principles exist to design such cell-free biochemical networks. Results: We describe a methodology for the construction of synthetic biochemical networks based on three main steps: design, simulation and experimental validation. We developed BioNetCAD to help users to go through these steps. BioNetCAD allows designing abstract networks that can be implemented thanks to CompuBioTicDB, a database of parts for synthetic biology. BioNetCAD also enables simulations with the HSim software and with classical ordinary differential equations (ODEs). We demonstrate with a case study that BioNetCAD can rationalize and reduce further experimental validation during the construction of a biochemical network. Availability and implementation: BioNetCAD is freely available at http://www.sysdiag.cnrs.fr/BioNetCAD. It is implemented in Java and supported on MS Windows. CompuBioTicDB is freely accessible at http://compubiotic.sysdiag.cnrs.fr/ Contact: stephanie.rialle@sysdiag.cnrs.fr; franck.molina@sysdiag.cnrs.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20628073
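
    The classical ODE simulation route mentioned above can be illustrated with a toy mass-action network; the reactions and rate constants are hypothetical and are not parts from CompuBioTicDB.

        from scipy.integrate import solve_ivp

        k1, k2 = 0.5, 0.1            # assumed rate constants

        def rhs(t, y):
            a, b, c, d = y
            v1 = k1 * a * b          # A + B -> C  (mass action)
            v2 = k2 * c              # C -> D
            return [-v1, -v1, v1 - v2, v2]

        sol = solve_ivp(rhs, (0.0, 60.0), [1.0, 0.8, 0.0, 0.0])
        print(sol.y[:, -1])          # final concentrations of A, B, C, D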

  14. Validation of Design and Analysis Techniques of Tailored Composite Structures

    NASA Technical Reports Server (NTRS)

    Jegley, Dawn C. (Technical Monitor); Wijayratne, Dulnath D.

    2004-01-01

Aeroelasticity is the relationship between the elasticity of an aircraft structure and its aerodynamics. This relationship can cause instabilities such as flutter in a wing. Engineers have long studied aeroelasticity to ensure such instabilities do not become a problem within normal operating conditions. In recent decades structural tailoring has been used to take advantage of aeroelasticity. It is possible to tailor an aircraft structure to respond favorably to multiple different flight regimes such as takeoff, landing, cruise, 2-g pull up, etc. Structures can be designed so that these responses provide an aerodynamic advantage. This research investigates the ability to design and analyze tailored structures made from filamentary composites. Specifically, the accuracy of tailored composite analysis must be verified if this design technique is to become feasible. To pursue this idea, a validation experiment has been performed on a small-scale filamentary composite wing box. The box is tailored such that its cover panels induce a global bend-twist coupling under an applied load. Two types of analysis were chosen for the experiment. The first is a closed-form analysis based on a theoretical model of a single-cell tailored box beam and the second is a finite element analysis. The predicted results are compared with the measured data to validate the analyses. The comparison of results shows that the finite element analysis is capable of predicting displacements and strains to within 10% on the small-scale structure. The closed-form code is consistently able to predict the wing box bending to within 25% of the measured value. This error is expected due to simplifying assumptions in the closed-form analysis. Differences between the closed-form code representation and the wing box specimen caused large errors in the twist prediction. The closed-form analysis prediction of twist has not been validated by this test.

  15. A business rules design framework for a pharmaceutical validation and alert system.

    PubMed

    Boussadi, A; Bousquet, C; Sabatier, B; Caruba, T; Durieux, P; Degoulet, P

    2011-01-01

Several alert systems have been developed to improve the patient safety aspects of clinical information systems (CIS). Most studies have focused on the evaluation of these systems, with little information provided about the methodology leading to system implementation. We propose here an 'agile' business rule design framework (BRDF) supporting both the design of alerts for the validation of drug prescriptions and the incorporation of the end user into the design process. We analyzed the unified process (UP) design life cycle and defined the activities, subactivities, actors and UML artifacts that could be used to enhance the agility of the proposed framework. We then applied the proposed framework to two different sets of data in the context of the Georges Pompidou University Hospital (HEGP) CIS. We introduced two new subactivities into UP: business rule specification and business rule instantiation. The pharmacist made an effective contribution to five of the eight BRDF design activities. Validation of the two new subactivities was effected in the context of drug dosage adaptation to the patients' clinical and biological contexts. A pilot experiment shows that business rules modeled with BRDF and implemented as an alert system triggered an alert for 5,824 of the 71,413 prescriptions considered (8.16%). A business rule design framework approach meets one of the strategic objectives for decision support design by taking into account three important criteria posing a particular challenge to system designers: 1) business processes, 2) knowledge modeling of the context of application, and 3) the agility of the various design steps.
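
    To make the idea of an instantiated business rule concrete, the sketch below encodes a single hypothetical dosage-range check contextualized by renal function; the drug, thresholds, and rule structure are illustrative assumptions, not rules from the HEGP system.

        from dataclasses import dataclass
        from typing import Optional

        @dataclass
        class Prescription:
            drug: str
            daily_dose_mg: float
            creatinine_clearance_ml_min: float

        def dose_alert(rx: Prescription) -> Optional[str]:
            # illustrative rule instance: limits are assumptions, not a real formulary
            max_dose = 3000.0 if rx.creatinine_clearance_ml_min >= 50 else 1500.0
            if rx.drug == "metformin" and rx.daily_dose_mg > max_dose:
                return (f"ALERT: {rx.drug} {rx.daily_dose_mg:.0f} mg/day exceeds "
                        f"{max_dose:.0f} mg/day for CrCl {rx.creatinine_clearance_ml_min:.0f} mL/min")
            return None

        print(dose_alert(Prescription("metformin", 2550, 38)))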

  16. Design and Validation of Implantable Passive Mechanisms for Orthopedic Surgery

    DTIC Science & Technology

    2017-10-01

... When comparing to the force applied by the index finger, what percentage ... system, when compared with using the direct suture. This concept is inspired by the use of such mechanisms in the design of "underactuated" robotic ... (Award Number: W81XWH-16-1-0794)

  17. Laboratory Experimental Design for a Glycomic Study.

    PubMed

    Ugrina, Ivo; Campbell, Harry; Vučković, Frano

    2017-01-01

    Proper attention to study design before, careful conduct of procedures during, and appropriate inference from results after scientific experiments are important in all scientific studies in order to ensure valid and sometimes definitive conclusions can be made. The design of experiments, also called experimental design, addresses the challenge of structuring and conducting experiments to answer the questions of interest as clearly and efficiently as possible.

  18. [Design and Validation of a Questionnaire on Vaccination in Students of Health Sciences, Spain].

    PubMed

    Fernández-Prada, María; Ramos-Martín, Pedro; Madroñal-Menéndez, Jaime; Martínez-Ortega, Carmen; González-Cabrera, Joaquín

    2016-11-07

Immunization rates among medicine and nursing students -and among health professionals in general- during hospital training are low. It is necessary to investigate the causes of these low immunization rates. The objective of this study was to design and validate a questionnaire for exploring the attitudes and behaviours of medicine and nursing students toward immunization against vaccine-preventable diseases. This was an instrument validation study. The sample included 646 nursing and medicine students at the University of Oviedo, Spain, selected by non-random sampling. After the content validation process, a 24-item questionnaire was designed to assess attitudes and behaviours/behavioural intentions. Reliability (ordinal alpha), internal validity (exploratory factor analysis with parallel analysis), ANOVA and mediational model tests were performed. Exploratory factor analysis yielded two factors which accounted for 48.8% of total variance. Ordinal alpha for the total score was 0.92. Differences were observed across academic years in the dimensions of attitudes (F(5,447) = 3.728) and knowledge (F(5,448) = 65.59), but not in behaviours/behavioural intentions (F(5,461) = 1.680). Attitudes were shown to be a moderating variable between knowledge and behaviours/behavioural intentions (indirect effect B = 0.15; SD = 0.3; 95% CI: 0.09-0.19). We developed a questionnaire with sufficient evidence of reliability and internal validity. Scores on attitudes and knowledge increase with the academic year. Attitudes act as a moderating variable between knowledge and behaviours/behavioural intentions.
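
    For reference, a sketch of an internal-consistency computation of the sort reported: standard Cronbach's alpha on raw item scores. The study's ordinal alpha applies the same formula to a polychoric correlation matrix, a step omitted here for brevity; the response data below are simulated.

        import numpy as np

        def cronbach_alpha(items):
            # items: respondents x items matrix of scores
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_var_sum = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_var_sum / total_var)

        rng = np.random.default_rng(7)
        latent = rng.normal(size=(200, 1))                                    # simulated attitude trait
        responses = np.clip(np.round(3.0 + latent + rng.normal(0.0, 0.8, (200, 8))), 1, 5)
        print(cronbach_alpha(responses))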

  19. Engineering Software Suite Validates System Design

    NASA Technical Reports Server (NTRS)

    2007-01-01

EDAptive Computing Inc.'s (ECI) EDAstar engineering software tool suite, created to capture and validate system design requirements, was significantly funded by NASA's Ames Research Center through five Small Business Innovation Research (SBIR) contracts. These programs specifically developed Syscape, used to capture executable specifications of multi-disciplinary systems, and VectorGen, used to automatically generate tests to ensure system implementations meet specifications. According to the company, the VectorGen tests considerably reduce the time and effort required to validate implementation of components, thereby ensuring their safe and reliable operation. EDASHIELD, an additional product offering from ECI, can be used to diagnose, predict, and correct errors after a system has been deployed using EDASTAR-created models. Initial commercialization for EDASTAR included application by a large prime contractor in a military setting, and customers include various branches within the U.S. Department of Defense, industry giants like the Lockheed Martin Corporation, Science Applications International Corporation, and Ball Aerospace and Technologies Corporation, as well as NASA's Langley and Glenn Research Centers.

  20. Experimental Design and Some Threats to Experimental Validity: A Primer

    ERIC Educational Resources Information Center

    Skidmore, Susan

    2008-01-01

    Experimental designs are distinguished as the best method to respond to questions involving causality. The purpose of the present paper is to explicate the logic of experimental design and why it is so vital to questions that demand causal conclusions. In addition, types of internal and external validity threats are discussed. To emphasize the…

  1. The development and validation of three videos designed to psychologically prepare patients for coronary bypass surgery.

    PubMed

    Mahler, H I; Kulik, J A

    1995-02-01

    The purpose of this study was to demonstrate the validation of videotape interventions that were designed to prepare patients for coronary artery bypass graft (CABG) surgery. First, three videotapes were developed. Two of the tapes featured the experiences of three actual CABG patients and were constructed to present either an optimistic portrayal of the recovery period (mastery tape) or a portrayal designed to inoculate patients against potential problems (coping tape). The third videotape contained the more general nurse scenes and narration used in the other two tapes, but did not include the experiences of particular patients. We then conducted a study to establish the convergent and discriminant validity of the three tapes. That is, we sought to demonstrate both that the tapes did differ along the mastery-coping dimension, and that they did not differ in other respects (such as in the degree of information provided or the perceived credibility of the narrator). The validation study, conducted with 42 males who had previously undergone CABG, demonstrated that the intended equivalences and differences between the tapes were achieved. The importance of establishing the validity of health-related interventions is discussed.

  2. Active-comparator design and new-user design in observational studies

    PubMed Central

    Yoshida, Kazuki; Solomon, Daniel H.; Kim, Seoyoung C.

    2015-01-01

SUMMARY Over the past decade, an increasing number of observational studies have examined the effectiveness or safety of rheumatoid arthritis treatments. However, unlike randomized controlled trials (RCTs), observational studies of drug effects face methodological challenges, including confounding by indication. Two design principles - the active-comparator design and the new-user design - can help mitigate such challenges in observational studies. To improve the validity of study findings, observational studies should be designed in such a way that makes them more closely approximate RCTs. The active-comparator design compares the drug of interest to another commonly used agent for the same indication, rather than to a 'non-user' group. This principle helps select treatment groups that are similar in treatment indications (both measured and unmeasured characteristics). The new-user design includes a cohort of patients from the time of treatment initiation, so that it can assess patients' pretreatment characteristics and capture all events occurring anytime during follow-up. PMID:25800216
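
    In code, the two principles amount to cohort-construction steps like the following sketch (hypothetical dispensing records; the drug names and the simplified washout logic are illustrative): each patient enters the cohort at their first-ever dispensing of either study drug, and initiators of the drug of interest are compared with initiators of the active comparator rather than with non-users.

        import pandas as pd

        # hypothetical dispensing records
        rx = pd.DataFrame({
            "patient": [1, 1, 2, 3, 3, 4],
            "drug":    ["A", "A", "B", "A", "B", "B"],
            "date":    pd.to_datetime(["2014-01-05", "2014-06-01", "2014-03-10",
                                       "2013-12-20", "2014-02-01", "2014-04-15"]),
        })

        # new-user design: the index date is each patient's first-ever dispensing of
        # either study drug (no prior exposure to A or B in the available history)
        first_rx = (rx.sort_values("date")
                      .groupby("patient", as_index=False)
                      .first()
                      .rename(columns={"drug": "index_drug", "date": "index_date"}))

        # active-comparator design: initiators of A are compared with initiators of B
        print(first_rx.groupby("index_drug")["patient"].count())
        print(first_rx)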

  3. Conceptual Design of a Flight Validation Mission for a Hypervelocity Asteroid Intercept Vehicle

    NASA Technical Reports Server (NTRS)

    Barbee, Brent W.; Wie, Bong; Steiner, Mark; Getzandanner, Kenneth

    2013-01-01

Near-Earth Objects (NEOs) are asteroids and comets whose orbits approach or cross Earth's orbit. NEOs have collided with our planet in the past, sometimes to devastating effect, and continue to do so today. Collisions with NEOs large enough to do significant damage to the ground are fortunately infrequent, but such events can occur at any time and we therefore need to develop and validate the techniques and technologies necessary to prevent the Earth impact of an incoming NEO. In this paper we provide background on the hazard posed to Earth by NEOs and present the results of a recent study performed by the NASA/Goddard Space Flight Center's Mission Design Lab (MDL) in collaboration with Iowa State University's Asteroid Deflection Research Center (ADRC) to design a flight validation mission for a Hypervelocity Asteroid Intercept Vehicle (HAIV) as part of a Phase 2 NASA Innovative Advanced Concepts (NIAC) research project. The HAIV is a two-body vehicle consisting of a leading kinetic impactor and trailing follower carrying a Nuclear Explosive Device (NED) payload. The HAIV detonates the NED inside the crater in the NEO's surface created by the lead kinetic impactor portion of the vehicle, effecting a powerful subsurface detonation to disrupt the NEO. For the flight validation mission, only a simple mass proxy for the NED is carried in the HAIV. Ongoing and future research topics are discussed following the presentation of the detailed flight validation mission design results produced in the MDL.

  4. Design and validation of a critical pathway for hospital management of patients with severe traumatic brain injury.

    PubMed

    Espinosa-Aguilar, Amilcar; Reyes-Morales, Hortensia; Huerta-Posada, Carlos E; de León, Itzcoatl Limón-Pérez; López-López, Fernando; Mejía-Hernández, Margarita; Mondragón-Martínez, María A; Calderón-Téllez, Ligia M; Amezcua-Cuevas, Rosa L; Rebollar-González, Jorge A

    2008-05-01

    Critical pathways for the management of patients with severe traumatic brain injury (STBI) may contribute to reducing the incidence of hospital complications, length of hospitalization stay, and cost of care. Such pathways have previously been developed for departments with significant resource availability. In Mexico, STBI is the most important cause of complications and length of stay in neurotrauma services at public hospitals. Although current treatment is designed basically in accordance with the Brain Trauma Foundation guidelines, shortfalls in the availability of local resources make it difficult to comply with these standards, and no critical pathway is available that accords with the resources of public hospitals. The purpose of the present study was to design and to validate a critical pathway for managing STBI patients that would be suitable for implementation in neurotrauma departments of middle-income level countries. The study comprised two phases: design (through literature review and design plan) and validation (content, construct, and appearance) of the critical pathway. The validated critical pathway for managing STBI patients entails four sequential subprocesses summarizing the hospital's care procedures, and includes three components: (1) nodes and criteria (in some cases, indicators are also included); (2) health team members in charge of the patient; (3) maximum estimated time for compliance with recommendations. This validated critical pathway is based on the current scientific evidence and accords with the availability of resources of middle-income countries.

  5. Students' Initial Knowledge State and Test Design: Towards a Valid and Reliable Test Instrument

    ERIC Educational Resources Information Center

    CoPo, Antonio Roland I.

    2015-01-01

    Designing a good test instrument involves specifications, test construction, validation, try-out, analysis and revision. The initial knowledge state of forty (40) tertiary students enrolled in a Business Statistics course was determined and the same test instrument underwent validation. The designed test instrument did not only reveal the baseline…

  6. Computational design and experimental validation of new thermal barrier systems

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Guo, Shengmin

    2015-03-31

    The focus of this project is on the development of a reliable and efficient ab initio based computational high-temperature material design method which can be used to assist Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations of the new TBCs are conducted to confirm their properties. Southern University is the subcontractor on this project, with a focus on the development of the computational simulation method. We applied the ab initio density functional theory (DFT) method and molecular dynamics simulations to screen top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For experimental validation, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr2.75O8 and confirmed its hot corrosion performance.

  7. LATUX: An Iterative Workflow for Designing, Validating, and Deploying Learning Analytics Visualizations

    ERIC Educational Resources Information Center

    Martinez-Maldonado, Roberto; Pardo, Abelardo; Mirriahi, Negin; Yacef, Kalina; Kay, Judy; Clayphan, Andrew

    2015-01-01

    Designing, validating, and deploying learning analytics tools for instructors or students is a challenge that requires techniques and methods from different disciplines, such as software engineering, human-computer interaction, computer graphics, educational design, and psychology. Whilst each has established its own design methodologies, we now…

  8. Validation of scaffold design optimization in bone tissue engineering: finite element modeling versus designed experiments.

    PubMed

    Uth, Nicholas; Mueller, Jens; Smucker, Byran; Yousefi, Azizeh-Mitra

    2017-02-21

    This study reports the development of biological/synthetic scaffolds for bone tissue engineering (TE) via 3D bioplotting. These scaffolds were composed of poly(L-lactic-co-glycolic acid) (PLGA), type I collagen, and nano-hydroxyapatite (nHA) in an attempt to mimic the extracellular matrix of bone. The solvent used for processing the scaffolds was 1,1,1,3,3,3-hexafluoro-2-propanol. The produced scaffolds were characterized by scanning electron microscopy, microcomputed tomography, thermogravimetric analysis, and unconfined compression test. This study also sought to validate the use of finite-element optimization in COMSOL Multiphysics for scaffold design. Scaffold topology was simplified to three factors: nHA content, strand diameter, and strand spacing. These factors affect the ability of the scaffold to bear mechanical loads and how porous the structure can be. Twenty-four scaffolds were constructed according to an I-optimal, split-plot designed experiment (DE) in order to generate experimental models of the factor-response relationships. Within the design region, the DE and COMSOL models agreed in their recommended optimal nHA (30%) and strand diameter (460 μm). However, the two methods disagreed by more than 30% in strand spacing (908 μm for DE; 601 μm for COMSOL). Seven scaffolds were 3D-bioplotted to validate the predictions of DE and COMSOL models (4.5-9.9 MPa measured moduli). The predictions for these scaffolds showed relative agreement for scaffold porosity (mean absolute percentage error of 4% for DE and 13% for COMSOL), but were substantially poorer for scaffold modulus (51% for DE; 21% for COMSOL), partly due to some simplifying assumptions made by the models. Expanding the design region in future experiments (e.g., higher nHA content and strand diameter), developing an efficient solvent evaporation method, and exerting a greater control over layer overlap could allow developing PLGA-nHA-collagen scaffolds to meet the mechanical requirements for
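
    A minimal sketch of the mean absolute percentage error (MAPE) used above to compare model predictions with measured scaffold properties; the numbers are illustrative, not the study's data, and the helper name is hypothetical.

        import numpy as np

        def mape(measured, predicted):
            """Mean absolute percentage error, in percent."""
            measured = np.asarray(measured, dtype=float)
            predicted = np.asarray(predicted, dtype=float)
            return 100.0 * np.mean(np.abs((predicted - measured) / measured))

        # illustrative moduli (MPa): measured vs. predicted by two models
        measured = [4.5, 6.1, 7.3, 9.9]
        model_a = [6.8, 8.9, 11.2, 14.6]
        model_b = [5.4, 7.2, 8.8, 12.0]
        print(f"model A MAPE: {mape(measured, model_a):.0f}%")
        print(f"model B MAPE: {mape(measured, model_b):.0f}%")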

  9. Design and validation of pictograms in a pediatric anaphylaxis action plan.

    PubMed

    Mok, Garrick; Vaillancourt, Régis; Irwin, Danica; Wong, Alexandre; Zemek, Roger; Alqurashi, Waleed

    2015-05-01

    Current anaphylaxis action plans (AAPs) are based on written instructions without inclusion of pictograms. To develop an AAP with pictorial aids and to prospectively validate the pictogram components of this plan. Participants recruited from the emergency department and allergy clinic completed a questionnaire to validate pictograms depicting key counseling points of an anaphylactic reaction. Children ≥ 10 years of age and caregivers of children < 10 years with acute anaphylaxis or who carried an epinephrine auto-injector for confirmed allergy were eligible. Guessability, translucency, and recall were assessed for 11 pictogram designs. Pictograms identified as correct or partially correct by at least 85% of participants were considered valid. Three independent reviewers assessed these outcome measures. Of the 115 total participants, 73 (63%) were female, 76 (66%) were parents/guardians, and 39 (34%) were children aged 10-17. Overall, 10 pictograms (91%) reached ≥ 85% for correct guessability, translucency, and recall. Four pictograms were redesigned to reach the preset validation target. One pictogram depicting symptom management (5-min wait time after first epinephrine treatment) reached 82% translucency after redesign. However, it reached 98% and 100% for correct guessability and recall, respectively. We prospectively designed and validated a set of pictograms to be included in an AAP. The incorporation of validated pictograms into an AAP may potentially increase comprehension of the triggers, signs and symptoms, and management of an anaphylactic reaction. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
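
    A short sketch, under the stated assumptions, of the ≥85% "correct or partially correct" criterion used above to declare a pictogram valid; the counts are invented for illustration.

        def pictogram_valid(n_correct, n_partial, n_total, threshold=0.85):
            """Return the combined proportion and whether it meets the validity cut-off."""
            proportion = (n_correct + n_partial) / n_total
            return proportion, proportion >= threshold

        prop, ok = pictogram_valid(n_correct=95, n_partial=8, n_total=115)
        print(f"guessability = {prop:.0%}, valid = {ok}")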

  10. Validation Study of a Gatekeeping Attitude Index for Social Work Education

    ERIC Educational Resources Information Center

    Tam, Dora M. Y.; Coleman, Heather

    2011-01-01

    This article reports on a study designed to validate the Gatekeeping Attitude Index, a 14-item Likert scaling index. The authors collected data from a convenience sample of social work field instructors (N = 188) with a response rate of 74.0%. Construct validation by exploratory factor analysis identified a 2-factor solution on the index after…

  11. Validation of GC and HPLC systems for residue studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, M.

    1995-12-01

    For residue studies, GC and HPLC system performance must be validated prior to and during use. One excellent measure of system performance is the standard curve and associated chromatograms used to construct that curve. The standard curve is a model of system response to an analyte over a specific time period, and is prima facie evidence of system performance beginning at the autosampler and proceeding through the injector, column, detector, electronics, data-capture device, and printer/plotter. This tool measures the performance of the entire chromatographic system; its power negates most of the benefits associated with costly and time-consuming validation of individual system components. Other measures of instrument and method validation will be discussed, including quality control charts and experimental designs for method validation.
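
    A hedged illustration of the standard-curve idea described above: fit a least-squares response curve from analyte standards and check linearity, treating the curve as evidence of whole-system performance. The concentrations and peak areas are fabricated.

        import numpy as np

        conc = np.array([0.1, 0.5, 1.0, 2.0, 5.0])          # standard concentrations (ug/mL)
        area = np.array([1020, 5150, 10100, 20300, 50600])  # detector peak areas

        slope, intercept = np.polyfit(conc, area, 1)
        fitted = slope * conc + intercept
        r2 = 1 - np.sum((area - fitted) ** 2) / np.sum((area - area.mean()) ** 2)
        print(f"response = {slope:.0f} * conc + {intercept:.0f}, R^2 = {r2:.4f}")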

  12. Design and control of compliant tensegrity robots through simulation and hardware validation

    PubMed Central

    Caluwaerts, Ken; Despraz, Jérémie; Işçen, Atıl; Sabelhaus, Andrew P.; Bruce, Jonathan; Schrauwen, Benjamin; SunSpiral, Vytas

    2014-01-01

    To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center, Moffett Field, CA, USA, has developed and validated two software environments for the analysis, simulation and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced from their appearance in many biological systems, tensegrity (‘tensile–integrity’) structures have unique physical properties that make them ideal for interaction with uncertain environments. Yet, these characteristics make design and control of bioinspired tensegrity robots extremely challenging. This work presents the progress our tools have made in tackling the design and control challenges of spherical tensegrity structures. We focus on this shape since it lends itself to rolling locomotion. The results of our analyses include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures that have been tested in simulation. A hardware prototype of a spherical six-bar tensegrity, the Reservoir Compliant Tensegrity Robot, is used to empirically validate the accuracy of simulation. PMID:24990292

  13. Design and Control of Compliant Tensegrity Robots Through Simulation and Hardware Validation

    NASA Technical Reports Server (NTRS)

    Caluwaerts, Ken; Despraz, Jeremie; Iscen, Atil; Sabelhaus, Andrew P.; Bruce, Jonathan; Schrauwen, Benjamin; Sunspiral, Vytas

    2014-01-01

    To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center has developed and validated two different software environments for the analysis, simulation, and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced from their appearance in many biological systems, tensegrity ("tensile-integrity") structures have unique physical properties which make them ideal for interaction with uncertain environments. Yet these characteristics, such as variable structural compliance and global multi-path load distribution through the tension network, make design and control of bio-inspired tensegrity robots extremely challenging. This work presents the progress in using these two tools in tackling the design and control challenges. The results of this analysis include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures. The current hardware prototype of a six-bar tensegrity, code-named ReCTeR, is presented in the context of this validation.

  14. Design and validation of an advanced entrained flow reactor system for studies of rapid solid biomass fuel particle conversion and ash formation reactions

    NASA Astrophysics Data System (ADS)

    Wagner, David R.; Holmgren, Per; Skoglund, Nils; Broström, Markus

    2018-06-01

    The design and validation of a newly commissioned entrained flow reactor is described in the present paper. The reactor was designed for advanced studies of fuel conversion and ash formation in powder flames, and the capabilities of the reactor were experimentally validated using two different solid biomass fuels. The drop tube geometry was equipped with a flat flame burner to heat and support the powder flame, optical access ports, a particle image velocimetry (PIV) system for in situ conversion monitoring, and probes for extraction of gases and particulate matter. A detailed description of the system is provided based on simulations and measurements, establishing the detailed temperature distribution and gas flow profiles. Mass balance closures of approximately 98% were achieved by combining gas analysis and particle extraction. Biomass fuel particles were successfully tracked using shadow imaging PIV, and the resulting data were used to determine the size, shape, velocity, and residence time of converting particles. Successful extractive sampling of coarse and fine particles during combustion while retaining their morphology was demonstrated, opening up detailed, time-resolved studies of rapid ash transformation reactions; in the validation experiments, clear and systematic fractionation trends for K, Cl, S, and Si were observed for the two fuels tested. The combination of in situ access, accurate residence time estimations, and precise particle sampling for subsequent chemical analysis allows for a wide range of future studies, with implications and possibilities discussed in the paper.

  15. Design and validation of an oral health questionnaire for preoperative anaesthetic evaluation.

    PubMed

    Ruíz-López Del Prado, Gema; Blaya-Nováková, Vendula; Saz-Parkinson, Zuleika; Álvarez-Montero, Óscar Luis; Ayala, Alba; Muñoz-Moreno, Maria Fe; Forjaz, Maria João

    Dental injuries incurred during endotracheal intubation are more frequent in patients with previous oral pathology. The study objectives were to develop an oral health questionnaire for preanaesthesia evaluation, easy to apply by personnel without special dental training, and to establish a cut-off value for detecting persons with poor oral health. Validation study of a self-administered questionnaire, designed according to a literature review and an expert group's recommendations. The questionnaire was applied to a sample of patients evaluated in a preanaesthesia consultation. Rasch analysis of the questionnaire psychometric properties included viability, acceptability, content validity and reliability of the scale. The sample included 115 individuals, 50.4% men, with a median age of 58 years (range: 38-71). The final analysis of 11 items presented a Person Separation Index of 0.861 and good adjustment of data to the Rasch model. The scale was unidimensional and its items were not biased by sex, age or nationality. The oral health linear measure presented good construct validity. The cut-off value was set at 52 points. The questionnaire showed sufficient psychometric properties to be considered a reliable tool, valid for measuring the state of oral health in preoperative anaesthetic evaluations. Copyright © 2016 Sociedade Brasileira de Anestesiologia. Published by Elsevier Editora Ltda. All rights reserved.

  16. [Design and validation of an oral health questionnaire for preoperative anaesthetic evaluation].

    PubMed

    Ruíz-López Del Prado, Gema; Blaya-Nováková, Vendula; Saz-Parkinson, Zuleika; Álvarez-Montero, Óscar Luis; Ayala, Alba; Muñoz-Moreno, Maria Fe; Forjaz, Maria João

    Dental injuries incurred during endotracheal intubation are more frequent in patients with previous oral pathology. The study objectives were to develop an oral health questionnaire for preanaesthesia evaluation, easy to apply by personnel without special dental training, and to establish a cut-off value for detecting persons with poor oral health. Validation study of a self-administered questionnaire, designed according to a literature review and an expert group's recommendations. The questionnaire was applied to a sample of patients evaluated in a preanaesthesia consultation. Rasch analysis of the questionnaire psychometric properties included viability, acceptability, content validity and reliability of the scale. The sample included 115 individuals, 50.4% men, with a median age of 58 years (range: 38-71). The final analysis of 11 items presented a Person Separation Index of 0.861 and good adjustment of data to the Rasch model. The scale was unidimensional and its items were not biased by sex, age or nationality. The oral health linear measure presented good construct validity. The cut-off value was set at 52 points. The questionnaire showed sufficient psychometric properties to be considered a reliable tool, valid for measuring the state of oral health in preoperative anaesthetic evaluations. Copyright © 2016 Sociedade Brasileira de Anestesiologia. Published by Elsevier Editora Ltda. All rights reserved.

  17. [Design and validation of scales to measure adolescent attitude toward eating and toward physical activity].

    PubMed

    Lima-Serrano, Marta; Lima-Rodríguez, Joaquín Salvador; Sáez-Bueno, Africa

    2012-01-01

    Different authors suggest that attitude is a mediator in behavior change, and thus a predictor of behavior. The aim of this study was to design and validate two scales measuring adolescent attitudes toward healthy eating and toward healthy physical activity. The scales were designed based on a literature review. They were then validated using an online Delphi panel of eighteen experts, a pretest, and a pilot test with a sample of 188 high school students. Comprehensibility, content validity, adequacy, reliability (Cronbach's alpha), and construct validity (exploratory factor analysis) of the scales were tested. The scales validated by the experts were considered appropriate in the pretest. In the pilot test, the ten-item Attitude to Eating Scale obtained α=0.72 and the eight-item Attitude to Physical Activity Scale obtained α=0.86. Both showed evidence of a one-dimensional structure after factor analysis: a) all items had loadings r>0.30 on the first factor before rotation, b) the first factor explained a significant proportion of variance before rotation, and c) the total variance explained by the main factors extracted was greater than 50%. The scales showed reliability and validity. They could be employed to assess attitudes toward these priority intervention areas in Spanish adolescents, and to evaluate this intermediate outcome of health interventions and health programs.
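
    A minimal sketch of Cronbach's alpha, the reliability coefficient reported for both scales; the Likert response matrix below is fabricated and the function name is ours.

        import numpy as np

        def cronbach_alpha(items):
            """items: respondents x items matrix of Likert scores."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_variances = items.var(axis=0, ddof=1).sum()
            total_variance = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_variances / total_variance)

        responses = np.array([[4, 5, 4, 3],
                              [3, 3, 4, 2],
                              [5, 5, 5, 4],
                              [2, 3, 2, 2],
                              [4, 4, 3, 3]])
        print(f"alpha = {cronbach_alpha(responses):.2f}")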

  18. Design and Technical Validation of a Telemedicine Service for Rural Healthcare in Ecuador.

    PubMed

    Vasquez-Cevallos, Leonel A; Bobokova, Jana; González-Granda, Patricia V; Iniesta, José M; Gómez, Enrique J; Hernando, M Elena

    2017-12-12

    Telemedicine is becoming increasingly important in Ecuador, especially in areas such as rural primary healthcare and medical education. Rural telemedicine programs in the country need to be strengthened by means of a technological platform adapted to local surroundings and offering advantages such as access to specialized care, continuing education, and so on, combined with modest investment requirements. This article presents the design of a Telemedicine Platform (TMP) for rural healthcare services in Ecuador and a preliminary technical validation with medical students and teachers. An initial field study was designed to capture the requirements of the TMP. In a second phase, the TMP was validated in an academic environment over three consecutive academic courses. Assessment was by means of user polls and analysis of user interactions registered automatically by the platform. The TMP was developed using Web-based technology and open-source software. One hundred twenty-four students and 6 specialized faculty members participated in the study, conducting a total of 262 teleconsultations of clinical cases and 226 responses, respectively. The validation results show that the TMP is a useful communication tool for the documentation and discussion of clinical cases. Moreover, its usage may be recommended as a teaching methodology, to strengthen the skills of medical undergraduates. The results indicate that implementing the system in rural healthcare services in Ecuador would be feasible.

  19. CANFOR Portuguese version: validation study.

    PubMed

    Talina, Miguel; Thomas, Stuart; Cardoso, Ana; Aguiar, Pedro; Caldas de Almeida, Jose M; Xavier, Miguel

    2013-05-30

    The increase in prisoner population is a troublesome reality in several regions of the world. Along with this growth there is increasing evidence that prisoners have a higher proportion of mental illnesses and suicide than the general population. In order to implement strategies that address criminal recidivism and the health and social status of prisoners, particularly mentally disordered offenders, it is necessary to assess their care needs from a comprehensive but individual perspective. This assessment must include potentially harmful areas such as comorbid personality disorder, substance misuse and offending behaviours. The Camberwell Assessment of Need - Forensic Version (CANFOR) has proved to be a reliable tool designed to accomplish such aims. The present study aimed to validate the CANFOR Portuguese version. The translation, adaptation to the Portuguese context, back-translation and revision followed the usual procedures. The sample comprised all detainees receiving psychiatric care in four forensic facilities over a one-year period. A total of 143 subjects, and their respective case managers, were selected. The forensic facilities were chosen by convenience: one prison hospital psychiatric ward (n=68; 47.6%), one male (n=24; 16.8%) and one female (n=22; 15.4%) psychiatric clinic and one civil security ward (n=29; 20.3%), all located near Lisbon. Basic descriptive statistics and weighted Kappa coefficients were calculated for the inter-rater and test-retest reliability studies. Convergent validity was evaluated using the Global Assessment of Functioning and the Brief Psychiatric Rating Scale scores. The majority of the participants were male and single, with short school attendance, and accused of a crime involving violence against persons. The most frequent diagnosis was major depression (56.1%) and almost half presented positive suicide risk. The reliability study showed average weighted Kappa coefficients of 0.884 and 0.445 for inter-rater and test
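
    A sketch of a weighted Cohen's kappa of the kind reported above for the CANFOR reliability analyses, computed here with scikit-learn's quadratic weighting; the paired ratings are invented (e.g. 0 = no problem, 1 = met need, 2 = unmet need).

        from sklearn.metrics import cohen_kappa_score

        rater_a = [0, 1, 2, 2, 1, 0, 2, 1, 0, 2]
        rater_b = [0, 1, 2, 1, 1, 0, 2, 2, 0, 2]
        kappa_w = cohen_kappa_score(rater_a, rater_b, weights="quadratic")
        print(f"weighted kappa = {kappa_w:.3f}")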

  20. Experimental Validation of an Integrated Controls-Structures Design Methodology

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliot, Kenny B.; Walz, Joseph E.

    1996-01-01

    The first experimental validation of an integrated controls-structures design methodology for a class of large order, flexible space structures is described. Integrated redesign of the controls-structures-interaction evolutionary model, a laboratory testbed at NASA Langley, was described earlier. The redesigned structure was fabricated, assembled in the laboratory, and experimentally tested against the original structure. Experimental results indicate that the structure redesigned using the integrated design methodology requires significantly less average control power than the nominal structure with control-optimized designs, while maintaining the required line-of-sight pointing performance. Thus, the superiority of the integrated design methodology over the conventional design approach is experimentally demonstrated. Furthermore, amenability of the integrated design structure to other control strategies is evaluated, both analytically and experimentally. Using Linear-Quadratic-Gaussian optimal dissipative controllers, it is observed that the redesigned structure leads to significantly improved performance with alternate controllers as well.

  1. Design and Validation of the Quantum Mechanics Conceptual Survey

    ERIC Educational Resources Information Center

    McKagan, S. B.; Perkins, K. K.; Wieman, C. E.

    2010-01-01

    The Quantum Mechanics Conceptual Survey (QMCS) is a 12-question survey of students' conceptual understanding of quantum mechanics. It is intended to be used to measure the relative effectiveness of different instructional methods in modern physics courses. In this paper, we describe the design and validation of the survey, a process that included…

  2. Validation of newly developed and redesigned key indicator methods for assessment of different working conditions with physical workloads based on mixed-methods design: a study protocol

    PubMed Central

    Liebers, Falk; Brandstädt, Felix; Schust, Marianne; Serafin, Patrick; Schäfer, Andreas; Gebhardt, Hansjürgen; Hartmann, Bernd; Steinberg, Ulf

    2017-01-01

    Introduction The impact of work-related musculoskeletal disorders is considerable. The assessment of work tasks with physical workloads is crucial to estimate the work-related health risks of exposed employees. Three key indicator methods (KIMs) are available for risk assessment regarding manual lifting, holding and carrying of loads; manual pulling and pushing of loads; and manual handling operations. Three further KIMs for risk assessment regarding whole-body forces, awkward body postures and body movement have been developed de novo. In addition, the development of a newly drafted combined method for mixed exposures is planned. All methods will be validated regarding face validity, reliability, convergent validity, criterion validity and further aspects of utility under practical conditions. Methods and analysis As part of the joint project MEGAPHYS (multilevel risk assessment of physical workloads), a mixed-methods study is being designed for the validation of KIMs and conducted in companies of different sizes and branches in Germany. Workplaces are documented and analysed by observations, applying KIMs, interviews and assessment of environmental conditions. Furthermore, a survey among the employees at the respective workplaces takes place with standardised questionnaires, interviews and physical examinations. It is intended to include 1200 employees at 120 different workplaces. For analysis of the quality criteria, recommendations of the COSMIN checklist (COnsensus-based Standards for the selection of health Measurement INstruments) will be taken into account. Ethics and dissemination The study was planned and conducted in accordance with the German Medical Professional Code and the Declaration of Helsinki as well as the German Federal Data Protection Act. The design of the study was approved by ethics committees. We intend to publish the validated KIMs in 2018. Results will be published in peer-reviewed journals, presented at international meetings and disseminated

  3. Intent inferencing by an intelligent operator's associate - A validation study

    NASA Technical Reports Server (NTRS)

    Jones, Patricia M.

    1988-01-01

    In the supervisory control of a complex, dynamic system, one potential form of aiding for the human operator is a computer-based operator's associate. The design philosophy of the operator's associate is that of 'amplifying' rather than automating human skills. In particular, the associate possesses understanding and control properties. Understanding allows it to infer operator intentions and thus form the basis for context-dependent advice and reminders; control properties allow the human operator to dynamically delegate individual tasks or subfunctions to the associate. This paper focuses on the design, implementation, and validation of the intent inferencing function. Two validation studies are described which empirically demonstrate the viability of the proposed approach to intent inferencing.

  4. [Design and validation of a questionnaire for psychosocial nursing diagnosis in Primary Care].

    PubMed

    Brito-Brito, Pedro Ruymán; Rodríguez-Álvarez, Cristobalina; Sierra-López, Antonio; Rodríguez-Gómez, José Ángel; Aguirre-Jaime, Armando

    2012-01-01

    To develop a valid, reliable and easy-to-use questionnaire for a psychosocial nursing diagnosis. The study was performed in two phases: first phase, questionnaire design and construction; second phase, validity and reliability tests. A bank of items was constructed using the NANDA classification as a theoretical framework. Each item was assigned a Likert scale or dichotomous response. The combination of responses to the items constituted the diagnostic rules to assign up to 28 labels. A group of experts carried out the validity test for content. Other validated scales were used as reference standards for the criterion validity tests. Forty-five nurses provided the questionnaire to the patients on three separate occasions over a period of three weeks, and the other validated scales only once to 188 randomly selected patients in Primary Care centres in Tenerife (Spain). Validity tests for construct confirmed the six dimensions of the questionnaire with 91% of total variance explained. Validity tests for criterion showed a specificity of 66%-100%, and showed high correlations with the reference scales when the questionnaire was assigning nursing diagnoses. Reliability tests showed agreement of 56%-91% (P<.001), and a 93% internal consistency. The Questionnaire for Psychosocial Nursing Diagnosis was called CdePS, and included 61 items. The CdePS is a valid, reliable and easy-to-use tool in Primary Care centres to improve the assigning of a psychosocial nursing diagnosis. Copyright © 2011 Elsevier España, S.L. All rights reserved.

  5. Design and control of compliant tensegrity robots through simulation and hardware validation.

    PubMed

    Caluwaerts, Ken; Despraz, Jérémie; Işçen, Atıl; Sabelhaus, Andrew P; Bruce, Jonathan; Schrauwen, Benjamin; SunSpiral, Vytas

    2014-09-06

    To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center, Moffett Field, CA, USA, has developed and validated two software environments for the analysis, simulation and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced from their appearance in many biological systems, tensegrity ('tensile-integrity') structures have unique physical properties that make them ideal for interaction with uncertain environments. Yet, these characteristics make design and control of bioinspired tensegrity robots extremely challenging. This work presents the progress our tools have made in tackling the design and control challenges of spherical tensegrity structures. We focus on this shape since it lends itself to rolling locomotion. The results of our analyses include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures that have been tested in simulation. A hardware prototype of a spherical six-bar tensegrity, the Reservoir Compliant Tensegrity Robot, is used to empirically validate the accuracy of simulation. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  6. Clinical audit project in undergraduate medical education curriculum: an assessment validation study

    PubMed Central

    Steketee, Carole; Mak, Donna

    2016-01-01

    Objectives To evaluate the merit of the Clinical Audit Project (CAP) in an assessment program for undergraduate medical education using a systematic assessment validation framework. Methods A cross-sectional assessment validation study at one medical school in Western Australia, with retrospective qualitative analysis of the design, development, implementation and outcomes of the CAP, and quantitative analysis of assessment data from four cohorts of medical students (2011- 2014). Results The CAP is fit for purpose with clear external and internal alignment to expected medical graduate outcomes.  Substantive validity in students’ and examiners’ response processes is ensured through relevant methodological and cognitive processes. Multiple validity features are built-in to the design, planning and implementation process of the CAP.  There is evidence of high internal consistency reliability of CAP scores (Cronbach’s alpha > 0.8) and inter-examiner consistency reliability (intra-class correlation>0.7). Aggregation of CAP scores is psychometrically sound, with high internal consistency indicating one common underlying construct.  Significant but moderate correlations between CAP scores and scores from other assessment modalities indicate validity of extrapolation and alignment between the CAP and the overall target outcomes of medical graduates.  Standard setting, score equating and fair decision rules justify consequential validity of CAP scores interpretation and use. Conclusions This study provides evidence demonstrating that the CAP is a meaningful and valid component in the assessment program. This systematic framework of validation can be adopted for all levels of assessment in medical education, from individual assessment modality, to the validation of an assessment program as a whole.  PMID:27716612

  7. Clinical audit project in undergraduate medical education curriculum: an assessment validation study.

    PubMed

    Tor, Elina; Steketee, Carole; Mak, Donna

    2016-09-24

    To evaluate the merit of the Clinical Audit Project (CAP) in an assessment program for undergraduate medical education using a systematic assessment validation framework. A cross-sectional assessment validation study at one medical school in Western Australia, with retrospective qualitative analysis of the design, development, implementation and outcomes of the CAP, and quantitative analysis of assessment data from four cohorts of medical students (2011- 2014). The CAP is fit for purpose with clear external and internal alignment to expected medical graduate outcomes.  Substantive validity in students' and examiners' response processes is ensured through relevant methodological and cognitive processes. Multiple validity features are built-in to the design, planning and implementation process of the CAP.  There is evidence of high internal consistency reliability of CAP scores (Cronbach's alpha > 0.8) and inter-examiner consistency reliability (intra-class correlation>0.7). Aggregation of CAP scores is psychometrically sound, with high internal consistency indicating one common underlying construct.  Significant but moderate correlations between CAP scores and scores from other assessment modalities indicate validity of extrapolation and alignment between the CAP and the overall target outcomes of medical graduates.  Standard setting, score equating and fair decision rules justify consequential validity of CAP scores interpretation and use. This study provides evidence demonstrating that the CAP is a meaningful and valid component in the assessment program. This systematic framework of validation can be adopted for all levels of assessment in medical education, from individual assessment modality, to the validation of an assessment program as a whole.

  8. Sexual behavioral abstine HIV/AIDS questionnaire: Validation study of an Iranian questionnaire.

    PubMed

    Najarkolaei, Fatemeh Rahmati; Niknami, Shamsaddin; Shokravi, Farkhondeh Amin; Tavafian, Sedigheh Sadat; Fesharaki, Mohammad Gholami; Jafari, Mohammad Reza

    2014-01-01

    This study was designed to assess the validity and reliability of the designed sexual, behavioral abstinence, and avoidance of high-risk situation questionnaire (SBAHAQ), with an aim to construct an appropriate development tool in the Iranian population. A descriptive-analytic study was conducted among female undergraduate students of Tehran University, who were selected through cluster random sampling. After reviewing the questionnaires and investigating face and content validity, internal consistency of the questionnaire was assessed by Cronbach's alpha. Exploratory and confirmatory factor analysis was conducted using SPSS and AMOS 16 Software, respectively. The sample consisted of 348 female university students with a mean age of 20.69 ± 1.63 years. The content validity ratio (CVR) coefficient was 0.85 and the reliability of each section of the questionnaire was as follows: Perceived benefit (PB; 0.87), behavioral intention (BI; 0.77), and self-efficacy (SE; 0.85) (overall Cronbach's alpha of 0.83). Exploratory factor analysis showed three factors, including SE, PB, and BI, with the total variance of 61% and Kaiser-Meyer-Olkin (KMO) index of 88%. These factors were also confirmed by confirmatory factor analysis [adjusted goodness-of-fit index (AGFI) = 0.939, root mean square error of approximation (RMSEA) = 0.039]. This study showed the designed questionnaire provided adequate construct validity and reliability, and could be adequately used to measure sexual abstinence and avoidance of high-risk situations among female students.
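
    A minimal sketch of Lawshe's content validity ratio (CVR), the statistic reported above (CVR = 0.85); the expert panel counts below are purely illustrative.

        def content_validity_ratio(n_essential, n_experts):
            """CVR = (n_e - N/2) / (N/2), where n_e experts rate the item 'essential'."""
            return (n_essential - n_experts / 2) / (n_experts / 2)

        # e.g. 9 of 10 experts rate an item essential
        print(f"CVR = {content_validity_ratio(n_essential=9, n_experts=10):.2f}")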

  9. Sexual behavioral abstine HIV/AIDS questionnaire: Validation study of an Iranian questionnaire

    PubMed Central

    Najarkolaei, Fatemeh Rahmati; Niknami, Shamsaddin; Shokravi, Farkhondeh Amin; Tavafian, Sedigheh Sadat; Fesharaki, Mohammad Gholami; Jafari, Mohammad Reza

    2014-01-01

    Background: This study was designed to assess the validity and reliability of the designed sexual, behavioral abstinence, and avoidance of high-risk situation questionnaire (SBAHAQ), with an aim to construct an appropriate development tool in the Iranian population. Materials and Methods: A descriptive–analytic study was conducted among female undergraduate students of Tehran University, who were selected through cluster random sampling. After reviewing the questionnaires and investigating face and content validity, internal consistency of the questionnaire was assessed by Cronbach's alpha. Exploratory and confirmatory factor analysis was conducted using SPSS and AMOS 16 Software, respectively. Results: The sample consisted of 348 female university students with a mean age of 20.69 ± 1.63 years. The content validity ratio (CVR) coefficient was 0.85 and the reliability of each section of the questionnaire was as follows: Perceived benefit (PB; 0.87), behavioral intention (BI; 0.77), and self-efficacy (SE; 0.85) (overall Cronbach's alpha of 0.83). Exploratory factor analysis showed three factors, including SE, PB, and BI, with the total variance of 61% and Kaiser–Meyer–Olkin (KMO) index of 88%. These factors were also confirmed by confirmatory factor analysis [adjusted goodness-of-fit index (AGFI) = 0.939, root mean square error of approximation (RMSEA) = 0.039]. Conclusion: This study showed the designed questionnaire provided adequate construct validity and reliability, and could be adequately used to measure sexual abstinence and avoidance of high-risk situations among female students. PMID:24741650

  10. Design and validation of a model to predict early mortality in haemodialysis patients.

    PubMed

    Mauri, Joan M; Clèries, Montse; Vela, Emili

    2008-05-01

    Mortality and morbidity rates are higher in patients receiving haemodialysis therapy than in the general population. Detection of risk factors related to early death in these patients could aid clinical and administrative decision making. The aims of this study were (1) to identify risk factors (comorbidity and variables specific to haemodialysis) associated with death in the first year following the start of haemodialysis and (2) to design and validate a prognostic model to quantify the probability of death for each patient. An analysis was carried out on all patients starting haemodialysis treatment in Catalonia during the period 1997-2003 (n = 5738). The data source was the Renal Registry of Catalonia, a mandatory population registry. Patients were randomly divided into two samples: 60% (n = 3455) of the total were used to develop the prognostic model and the remaining 40% (n = 2283) to validate the model. Logistic regression analysis was used to construct the model. One-year mortality in the total study population was 16.5%. The predictive model included the following variables: age, sex, primary renal disease, grade of functional autonomy, chronic obstructive pulmonary disease, malignant processes, chronic liver disease, cardiovascular disease, initial vascular access and malnutrition. The analyses showed adequate calibration for both the development sample and the validation sample (Hosmer-Lemeshow statistic 0.97 and P = 0.49, respectively) as well as adequate discrimination (area under the ROC curve 0.78 in both cases). Risk factors implicated in mortality at one year following the start of haemodialysis have been determined and a prognostic model designed. The validated, easy-to-apply model quantifies individual patient risk attributable to various factors, some of them amenable to correction by directed interventions.
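
    An illustrative sketch, assuming a workflow like the one described: split the cohort 60/40, fit a logistic regression on the development sample, and check discrimination on the validation sample via the area under the ROC curve. All data are simulated; the three predictors merely stand in for the listed risk factors.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        n = 5000
        age = rng.normal(65, 12, n)
        malnutrition = rng.integers(0, 2, n)
        cvd = rng.integers(0, 2, n)
        X = np.column_stack([age, malnutrition, cvd])

        # simulated one-year mortality driven by the predictors
        logit = -4.5 + 0.04 * age + 0.8 * malnutrition + 0.6 * cvd
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-logit)))

        X_dev, X_val, y_dev, y_val = train_test_split(X, y, test_size=0.4, random_state=0)
        model = LogisticRegression(max_iter=1000).fit(X_dev, y_dev)
        auc = roc_auc_score(y_val, model.predict_proba(X_val)[:, 1])
        print(f"validation ROC area = {auc:.2f}")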

  11. Validating a new methodology for optical probe design and image registration in fNIRS studies

    PubMed Central

    Wijeakumar, Sobanawartiny; Spencer, John P.; Bohache, Kevin; Boas, David A.; Magnotta, Vincent A.

    2015-01-01

    Functional near-infrared spectroscopy (fNIRS) is an imaging technique that relies on the principle of shining near-infrared light through tissue to detect changes in hemodynamic activation. An important methodological issue encountered is the creation of optimized probe geometry for fNIRS recordings. Here, across three experiments, we describe and validate a processing pipeline designed to create an optimized, yet scalable probe geometry based on selected regions of interest (ROIs) from the functional magnetic resonance imaging (fMRI) literature. In experiment 1, we created a probe geometry optimized to record changes in activation from target ROIs important for visual working memory. Positions of the sources and detectors of the probe geometry on an adult head were digitized using a motion sensor and projected onto a generic adult atlas and a segmented head obtained from the subject's MRI scan. In experiment 2, the same probe geometry was scaled down to fit a child's head and later digitized and projected onto the generic adult atlas and a segmented volume obtained from the child's MRI scan. Using visualization tools and by quantifying the amount of intersection between target ROIs and channels, we show that out of 21 ROIs, 17 and 19 ROIs intersected with fNIRS channels from the adult and child probe geometries, respectively. Further, both the adult atlas and adult subject-specific MRI approaches yielded similar results and can be used interchangeably. However, results suggest that segmented heads obtained from MRI scans be used for registering children's data. Finally, in experiment 3, we further validated our processing pipeline by creating a different probe geometry designed to record from target ROIs involved in language and motor processing. PMID:25705757

  12. [Design and validation of a satisfaction survey with pharmaceutical care received in hospital pharmacy consultation].

    PubMed

    Monje-Agudo, Patricia; Borrego-Izquierdo, Yolanda; Robustillo-Cortés, Ma de Las Aguas; Jiménez-Galán, Rocio; Almeida-González, Carmen V; Morillo-Verdugo, Ramón A

    2015-05-01

    To design and validate a questionnaire to assess satisfaction with the pharmaceutical care (PC) received at the hospital pharmacy. Multicentre study in five Andalusian hospitals in January 2013. A bibliographic search was performed in PubMed using the MeSH terms pharmaceutical services, patient satisfaction and questionnaire. Next, a ten-item questionnaire was produced using the Delphi methodology, covering demographic, social, pharmacological and clinical variables, in which patients were asked about the consequences of PC for their treatment and illness and about their acceptance of the service received. Patients could answer from one (very insufficient) to five (excellent). Before the validation phase, a pilot phase was carried out. Descriptive analysis, Cronbach's alpha coefficient and intraclass correlation coefficient (ICC) were performed in both phases. Data analysis was conducted using the SPSS statistical software package release 20.0. The pilot phase included 21 questionnaires and the validation phase 154 (response rate of 100%). In the latter phase, 62% (N=96) of patients were men. More than 50% of patients answered "excellent" to all items of the questionnaire in both phases. The Cronbach's alpha coefficient and ICC were 0.921 and 0.915 (95% CI: 0.847-0.961) and 0.916 and 0.910 (95% CI: 0.886-0.931) in the pilot and validation phases, respectively. A highly reliable instrument was designed and validated to evaluate patient satisfaction with the PC received at the hospital pharmacy. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  13. Designing a valid randomized pragmatic primary care implementation trial: the my own health report (MOHR) project.

    PubMed

    Krist, Alex H; Glenn, Beth A; Glasgow, Russell E; Balasubramanian, Bijal A; Chambers, David A; Fernandez, Maria E; Heurtin-Roberts, Suzanne; Kessler, Rodger; Ory, Marcia G; Phillips, Siobhan M; Ritzwoller, Debra P; Roby, Dylan H; Rodriguez, Hector P; Sabo, Roy T; Sheinfeld Gorin, Sherri N; Stange, Kurt C

    2013-06-25

    There is a pressing need for greater attention to patient-centered health behavior and psychosocial issues in primary care, and for practical tools, study designs and results of clinical and policy relevance. Our goal is to design a scientifically rigorous and valid pragmatic trial to test whether primary care practices can systematically implement the collection of patient-reported information and provide patients with needed advice, goal setting, and counseling in response. This manuscript reports on the iterative design of the My Own Health Report (MOHR) study, a cluster randomized delayed intervention trial. Nine pairs of diverse primary care practices will be randomized to early or delayed intervention four months later. The intervention consists of fielding the MOHR assessment--which addresses 10 domains of health behaviors and psychosocial issues--and subsequent provision of needed counseling and support for patients presenting for wellness or chronic care. As a pragmatic participatory trial, stakeholder groups including practice partners and patients have been engaged throughout the study design to account for local resources and characteristics. Participatory tasks include identifying MOHR assessment content, refining the study design, providing input on outcomes measures, and designing the implementation workflow. Study outcomes include the intervention reach (percent of patients offered and completing the MOHR assessment), effectiveness (patients reporting being asked about topics, setting change goals, and receiving assistance in early versus delayed intervention practices), contextual factors influencing outcomes, and intervention costs. The MOHR study shows how a participatory design can be used to promote the consistent collection and use of patient-reported health behavior and psychosocial assessments in a broad range of primary care settings. While pragmatic in nature, the study design will allow valid comparisons to answer the posed research question, and

  14. Designing and validation of a yoga-based intervention for obsessive compulsive disorder.

    PubMed

    Bhat, Shubha; Varambally, Shivarama; Karmani, Sneha; Govindaraj, Ramajayam; Gangadhar, B N

    2016-06-01

    Some yoga-based practices have been found to be useful for patients with obsessive compulsive disorder (OCD). The authors could not find a validated yoga therapy module available for OCD. This study attempted to formulate a generic yoga-based intervention module for OCD. A yoga module was designed based on traditional and contemporary yoga literature. The module was sent to 10 yoga experts for content validation. The experts rated the usefulness of the practices on a scale of 1-5 (5 = extremely useful). The final version of the module was pilot-tested on patients with OCD (n = 17) for both feasibility and effect on symptoms. Eighty-eight per cent (22 out of 25) of the items in the initial module were retained, with modifications in the module as suggested by the experts along with patients' inputs and authors' experience. The module was found to be feasible and showed an improvement in symptoms of OCD on total Yale-Brown Obsessive-Compulsive Scale (YBOCS) score (p = 0.001). A generic yoga therapy module for OCD was validated by experts in the field and found feasible to practice in patients. A decrease in the symptom scores was also found following yoga practice of 2 weeks. Further clinical validation is warranted to confirm efficacy.

  15. Validation of newly developed and redesigned key indicator methods for assessment of different working conditions with physical workloads based on mixed-methods design: a study protocol.

    PubMed

    Klussmann, Andre; Liebers, Falk; Brandstädt, Felix; Schust, Marianne; Serafin, Patrick; Schäfer, Andreas; Gebhardt, Hansjürgen; Hartmann, Bernd; Steinberg, Ulf

    2017-08-21

    The impact of work-related musculoskeletal disorders is considerable. The assessment of work tasks with physical workloads is crucial to estimate the work-related health risks of exposed employees. Three key indicator methods (KIMs) are available for risk assessment regarding manual lifting, holding and carrying of loads; manual pulling and pushing of loads; and manual handling operations. Three further KIMs for risk assessment regarding whole-body forces, awkward body postures and body movement have been developed de novo. In addition, the development of a newly drafted combined method for mixed exposures is planned. All methods will be validated regarding face validity, reliability, convergent validity, criterion validity and further aspects of utility under practical conditions. As part of the joint project MEGAPHYS (multilevel risk assessment of physical workloads), a mixed-methods study is being designed for the validation of KIMs and conducted in companies of different sizes and branches in Germany. Workplaces are documented and analysed by observations, applying KIMs, interviews and assessment of environmental conditions. Furthermore, a survey among the employees at the respective workplaces takes place with standardised questionnaires, interviews and physical examinations. It is intended to include 1200 employees at 120 different workplaces. For analysis of the quality criteria, recommendations of the COSMIN checklist (COnsensus-based Standards for the selection of health Measurement INstruments) will be taken into account. The study was planned and conducted in accordance with the German Medical Professional Code and the Declaration of Helsinki as well as the German Federal Data Protection Act. The design of the study was approved by ethics committees. We intend to publish the validated KIMs in 2018. Results will be published in peer-reviewed journals, presented at international meetings and disseminated to actual users for practical application. © Article

  16. The Design and Validation of the Colorado Learning Attitudes about Science Survey

    NASA Astrophysics Data System (ADS)

    Adams, W. K.; Perkins, K. K.; Dubson, M.; Finkelstein, N. D.; Wieman, C. E.

    2005-09-01

    The Colorado Learning Attitudes about Science Survey (CLASS) is a new instrument designed to measure various facets of student attitudes and beliefs about learning physics. This instrument extends previous work by probing additional facets of student attitudes and beliefs. It has been written to be suitably worded for students in a variety of different courses. This paper introduces the CLASS and its design and validation studies, which include analyzing results from over 2400 students, interviews and factor analyses. Methodology used to determine categories and how to analyze the robustness of categories for probing various facets of student learning are also described. This paper serves as the foundation for the results and conclusions from the analysis of our survey data.

  17. Sensor data validation and reconstruction. Phase 1: System architecture study

    NASA Technical Reports Server (NTRS)

    1991-01-01

    The sensor validation and data reconstruction task reviewed relevant literature and selected applicable validation and reconstruction techniques for further study; analyzed the selected techniques and emphasized those which could be used for both validation and reconstruction; analyzed Space Shuttle Main Engine (SSME) hot fire test data to determine statistical and physical relationships between various parameters; developed statistical and empirical correlations between parameters to perform validation and reconstruction tasks, using a computer aided engineering (CAE) package; and conceptually designed an expert system based knowledge fusion tool, which allows the user to relate diverse types of information when validating sensor data. The host hardware for the system is intended to be a Sun SPARCstation, but could be any RISC workstation with a UNIX operating system and a windowing/graphics system such as Motif or Dataviews. The information fusion tool is intended to be developed using the NEXPERT Object expert system shell, and the C programming language.
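
    A hedged sketch of the statistical/empirical-correlation idea described above: reconstruct a suspect sensor reading from a correlated parameter using a linear fit to historical hot-fire data. The parameter names and telemetry values are invented for illustration.

        import numpy as np

        # historical hot-fire data: a correlated pump speed vs. chamber pressure
        pump_speed = np.array([33100.0, 34200.0, 35400.0, 36500.0, 37700.0])
        chamber_pressure = np.array([2800.0, 2900.0, 3000.0, 3100.0, 3200.0])

        coef = np.polyfit(pump_speed, chamber_pressure, 1)    # empirical correlation
        healthy_reading = 35900.0                             # from the still-trusted sensor
        reconstructed = np.polyval(coef, healthy_reading)
        print(f"reconstructed chamber pressure ~ {reconstructed:.0f}")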

  18. [Strengthening the methodology of study designs in scientific researches].

    PubMed

    Ren, Ze-qin

    2010-06-01

    Many problems in study design have seriously affected the validity of scientific research. We must understand research methodology, especially clinical epidemiology and biostatistics, and recognize the urgency of selecting and implementing the right study design. Only then can we strengthen research capability and improve the overall quality of scientific research.

  19. Rap-Music Attitude and Perception Scale: A Validation Study

    ERIC Educational Resources Information Center

    Tyson, Edgar H.

    2006-01-01

    Objective: This study tests the validity of the Rap-music Attitude and Perception (RAP) Scale, a 1-page, 24-item measure of a person's thoughts and feelings surrounding the effects and content of rap music. The RAP was designed as a rapid assessment instrument for youth programs and practitioners using rap music and hip hop culture in their work…

  20. Objectifying Content Validity: Conducting a Content Validity Study in Social Work Research.

    ERIC Educational Resources Information Center

    Rubio, Doris McGartland; Berg-Weger, Marla; Tebb, Susan S.; Lee, E. Suzanne; Rauch, Shannon

    2003-01-01

    The purpose of this article is to demonstrate how to conduct a content validity study. Instructions on how to calculate a content validity index, factorial validity index, and an interrater reliability index and guide for interpreting these indices are included. Implications regarding the value of conducting a content validity study for…
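
    A minimal sketch, under common definitions, of the content validity index mentioned above: the item-level CVI is the proportion of experts rating an item relevant (3 or 4 on a 4-point scale), and the scale-level CVI is the average across items. The ratings are invented.

        def item_cvi(ratings, relevant=(3, 4)):
            return sum(r in relevant for r in ratings) / len(ratings)

        expert_ratings = {            # item -> one relevance rating per expert
            "item_1": [4, 4, 3, 4, 3],
            "item_2": [4, 2, 3, 4, 4],
            "item_3": [3, 3, 4, 4, 4],
        }
        item_cvis = {item: item_cvi(r) for item, r in expert_ratings.items()}
        scale_cvi = sum(item_cvis.values()) / len(item_cvis)
        print(item_cvis, f"S-CVI/Ave = {scale_cvi:.2f}")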

  1. Better cancer biomarker discovery through better study design.

    PubMed

    Rundle, Andrew; Ahsan, Habibul; Vineis, Paolo

    2012-12-01

    High-throughput laboratory technologies coupled with sophisticated bioinformatics algorithms have tremendous potential for discovering novel biomarkers, or profiles of biomarkers, that could serve as predictors of disease risk, response to treatment or prognosis. We discuss methodological issues in wedding high-throughput approaches for biomarker discovery with the case-control study designs typically used in biomarker discovery studies, focusing especially on nested case-control designs. We review principles of nested case-control study design in relation to biomarker discovery studies and describe how the efficiency of biomarker discovery can be affected by study design choices. We develop a simulated prostate cancer cohort data set and a series of biomarker discovery case-control studies nested within the cohort to illustrate how study design choices can influence the biomarker discovery process. Common elements of nested case-control design, incidence density sampling and matching of controls to cases, are not typically factored correctly into biomarker discovery analyses, inducing bias in the discovery process. We illustrate how incidence density sampling and matching of controls to cases reduce the apparent specificity of truly valid biomarkers 'discovered' in a nested case-control study. We also propose and demonstrate a new case-control matching protocol, which we call 'antimatching', that improves the efficiency of biomarker discovery studies. For a valid, but as yet undiscovered, biomarker, disjunctions between correctly designed epidemiologic studies and the practice of biomarker discovery reduce the likelihood that the true biomarker will be discovered and increase the false-positive discovery rate. © 2012 The Authors. European Journal of Clinical Investigation © 2012 Stichting European Society for Clinical Investigation Journal Foundation.
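
    The incidence density sampling step discussed above can be made concrete with a few lines of code: for each case, controls are drawn from cohort members still at risk at that case's event time. The cohort, event-time model and number of controls per case below are hypothetical and only illustrate the sampling mechanics, not the authors' simulation.

```python
# Minimal sketch (illustrative only): incidence density sampling for a nested
# case-control study -- controls for each case are drawn from cohort members
# still at risk at the case's event time.
import random

random.seed(1)

# Hypothetical cohort: (subject_id, event_time, is_case). Non-cases are
# censored at the end of follow-up (10 years).
cohort = []
for sid in range(1000):
    is_case = random.random() < 0.05
    time = random.uniform(0, 10) if is_case else 10.0
    cohort.append((sid, time, is_case))

cases = [(sid, t) for sid, t, c in cohort if c]

def sample_risk_set(case_time, case_id, n_controls=2):
    """Pick controls among subjects still under observation at case_time."""
    at_risk = [sid for sid, t, c in cohort if sid != case_id and t >= case_time]
    return random.sample(at_risk, n_controls)

matched_sets = {cid: sample_risk_set(t, cid) for cid, t in cases}
print(len(matched_sets), "matched sets, e.g.", next(iter(matched_sets.items())))
```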

  2. New Office Technology: A Study on Curriculum Design.

    ERIC Educational Resources Information Center

    Mulder, Martin

    1989-01-01

    A study collected information about office automation trends, office personnel job profiles, and existing curricula. A curriculum conference was held to design and validate a modular curriculum for office automation. (SK)

  3. Absorption in Sport: A Cross-Validation Study

    PubMed Central

    Koehn, Stefan; Stavrou, Nektarios A. M.; Cogley, Jeremy; Morris, Tony; Mosek, Erez; Watt, Anthony P.

    2017-01-01

    Absorption has been identified as a readiness for experiences of deep involvement in the task. Conceptually, absorption is a key psychological construct, incorporating experiential, cognitive, and motivational components. As no operationalization of the construct has been provided to facilitate research in this area, the purpose of this research was the development and examination of the psychometric properties of a sport-specific measure of absorption that evolved from the use of the modified Tellegen Absorption Scale (MODTAS; Jamieson, 2005) in mainstream psychology. The study aimed to provide evidence of the psychometric properties, reliability, and validity of the Measure of Absorption in Sport Contexts (MASC). The psychometric examination included a calibration sample from Scotland and a cross-validation sample from Australia using a cross-sectional design. The item pool was developed based on existing items from the modified Tellegen Absorption Scale (Jamieson, 2005). The MODTAS items were reworded and translated into a sport context. The Scottish sample consisted of 292 participants and the Australian sample of 314 participants. Congeneric model testing and confirmatory factor analysis for both samples and multi-group invariance testing across samples were used. In the cross-validation sample the MASC subscales showed acceptable internal consistency and construct reliability (≥0.70). Excellent fit indices were found for the final 18-item, six-factor measure in the cross-validation sample, χ²(120) = 197.486, p < 0.001; CFI = 0.957; TLI = 0.945; RMSEA = 0.045; SRMR = 0.044. Multi-group invariance testing revealed no differences in item meaning, except for two items. The MASC and the Dispositional Flow Scale-2 showed moderate-to-strong positive correlations in both samples, r = 0.38, p < 0.001 and r = 0.42, p < 0.001, supporting the external validity of the MASC. This article provides initial evidence in support of the psychometric properties

  4. Designing, validation and feasibility of a yoga-based intervention for elderly

    PubMed Central

    Hariprasad, V. R.; Varambally, S.; Varambally, P. T.; Thirthalli, J.; Basavaraddi, I. V.; Gangadhar, B. N.

    2013-01-01

    Context: Ageing is an unavoidable facet of life. Yogic practices have been reported to promote healthy aging. Previous studies have used either yoga therapy interventions derived from a particular school of yoga or have tested specific yogic practices like meditation. Aims: This study reports the development, validation and feasibility of a yoga-based intervention for the elderly with or without mild cognitive impairment. Settings and Design: The study was conducted at the Advanced Centre for Yoga, National Institute for Mental Health and Neurosciences, Bangalore. The module was developed, validated, and then pilot-tested on volunteers. Materials and Methods: The first part of the study consisted of the design of a yoga module based on traditional and contemporary yogic literature. This yoga module, along with three case vignettes of elderly persons with cognitive impairment, was sent to 10 yoga experts to help develop the intended yoga-based intervention. In the second part, the feasibility of the developed yoga-based intervention was tested. Results: Experts (n=10) opined that the yoga-based intervention would be useful in improving cognition in the elderly, but with some modifications. Frequent supervised yoga sessions, regular follow-ups, and addition/deletion/modification of yoga postures were some of the suggestions. Ten elderly volunteers consented and eight completed the pilot testing of the intervention. All of them were able to perform most of the Sukṣmavyayāma, Prāṇāyāma and Nādānusaṇdhāna (meditation) techniques without difficulty. Some of the participants (n=3) experienced difficulty in performing postures seated on the ground. Most of the older adults experienced difficulty in remembering and completing the entire sequence of the yoga-based intervention independently. Conclusions: The yoga-based intervention is feasible in the elderly with cognitive impairment. Testing with a larger sample of older adults is warranted. PMID:24049197

  5. A Cross-Validation Study of the School Attitude Assessment Survey (SAAS).

    ERIC Educational Resources Information Center

    McCoach, D. Betsy

    Factors commonly associated with underachievement in the research literature include low self-concept, low self-motivation/self-regulation, negative attitude toward school, and negative peer influence. This study attempts to isolate these four factors within a secondary school population. The purpose of the study was to design a valid and reliable…

  6. VALUE - A Framework to Validate Downscaling Approaches for Climate Change Studies

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Widmann, Martin; Gutiérrez, José M.; Kotlarski, Sven; Chandler, Richard E.; Hertig, Elke; Wibig, Joanna; Huth, Radan; Wilke, Renate A. I.

    2015-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. VALUE aims to foster collaboration and knowledge exchange between climatologists, impact modellers, statisticians, and stakeholders to establish an interdisciplinary downscaling community. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. Here, we present the key ingredients of this framework. VALUE's main approach to validation is user-focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur: what is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Do methods fail in representing regional climate change? How is the overall representation of regional climate, including errors inherited from global climate models? The framework will be the basis for a comprehensive community-open downscaling intercomparison study, but is intended also to provide general guidance for other validation studies.

  7. VALUE: A framework to validate downscaling approaches for climate change studies

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Widmann, Martin; Gutiérrez, José M.; Kotlarski, Sven; Chandler, Richard E.; Hertig, Elke; Wibig, Joanna; Huth, Radan; Wilcke, Renate A. I.

    2015-01-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. VALUE aims to foster collaboration and knowledge exchange between climatologists, impact modellers, statisticians, and stakeholders to establish an interdisciplinary downscaling community. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. In this paper, we present the key ingredients of this framework. VALUE's main approach to validation is user-focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur: what is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Do methods fail in representing regional climate change? How is the overall representation of regional climate, including errors inherited from global climate models? The framework will be the basis for a comprehensive community-open downscaling intercomparison study, but is intended also to provide general guidance for other validation studies.

  8. Design of experiments in medical physics: Application to the AAA beam model validation.

    PubMed

    Dufreneix, S; Legrand, C; Di Bartolo, C; Bremaud, M; Mesgouez, J; Tiplica, T; Autret, D

    2017-09-01

    The purpose of this study is to evaluate the usefulness of the design of experiments in the analysis of multiparametric problems related to quality assurance in radiotherapy. The main motivation is to use this statistical method to optimize the quality assurance processes in the validation of beam models. Considering the Varian Eclipse system, eight parameters with several levels were selected: energy, MLC, depth, X, Y1 and Y2 jaw dimensions, wedge and wedge jaw. A Taguchi table was used to define 72 validation tests. Measurements were conducted in water using a CC04 on a TrueBeam STx, a TrueBeam Tx, a Trilogy and a 2300IX accelerator matched by the vendor. Dose was computed using the AAA algorithm. The same raw data were used for all accelerators during the beam modelling. The mean difference between computed and measured doses was 0.1±0.5% for all beams and all accelerators, with a maximum difference of 2.4% (under the 3% tolerance level). For all beams, the measured doses were within 0.6% for all accelerators. The energy was found to be an influencing parameter, but the deviations observed were smaller than 1% and not considered clinically significant. Design of experiments can help define the optimal measurement set to validate a beam model. The proposed method can be used to identify the prognostic factors of dose accuracy. The beam models were validated for the 4 accelerators, which were found to be dosimetrically equivalent even though the accelerator characteristics differ. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
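
    A design-of-experiments analysis of this kind typically screens factors by comparing the mean response across the levels of each factor. The sketch below shows that main-effects calculation on a made-up fragment of a test matrix; the factor names, levels and deviations are hypothetical and do not reproduce the study's Taguchi table or results.

```python
# Minimal sketch (illustrative data): screening which beam parameters influence
# the computed-vs-measured dose difference by comparing mean deviations per
# factor level, in the spirit of a Taguchi-style main-effects analysis.
import pandas as pd

# Hypothetical subset of the designed test matrix with measured deviations (%).
runs = pd.DataFrame({
    "energy":    ["6X", "6X", "15X", "15X", "6X", "15X"],
    "wedge":     ["open", "60deg", "open", "60deg", "60deg", "open"],
    "depth_cm":  [5, 10, 5, 10, 20, 20],
    "deviation": [0.1, 0.4, -0.6, -0.2, 0.5, -0.8],
})

# Main effect of a factor: spread of the mean deviation across its levels.
for factor in ["energy", "wedge", "depth_cm"]:
    means = runs.groupby(factor)["deviation"].mean()
    print(f"{factor}: level means = {means.to_dict()}, "
          f"effect span = {means.max() - means.min():.2f}")
```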

  9. A methodology for the validated design space exploration of fuel cell powered unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Moffitt, Blake Almy

    Unmanned Aerial Vehicles (UAVs) are the most dynamic growth sector of the aerospace industry today. The need to provide persistent intelligence, surveillance, and reconnaissance for military operations is driving the planned acquisition of over 5,000 UAVs over the next five years. The most pressing need is for quiet, small UAVs with endurance beyond what is capable with advanced batteries or small internal combustion propulsion systems. Fuel cell systems demonstrate high efficiency, high specific energy, low noise, low temperature operation, modularity, and rapid refuelability making them a promising enabler of the small, quiet, and persistent UAVs that military planners are seeking. Despite the perceived benefits, the actual near-term performance of fuel cell powered UAVs is unknown. Until the auto industry began spending billions of dollars in research, fuel cell systems were too heavy for useful flight applications. However, the last decade has seen rapid development with fuel cell gravimetric and volumetric power density nearly doubling every 2--3 years. As a result, a few design studies and demonstrator aircraft have appeared, but overall the design methodology and vehicles are still in their infancy. The design of fuel cell aircraft poses many challenges. Fuel cells differ fundamentally from combustion based propulsion in how they generate power and interact with other aircraft subsystems. As a result, traditional multidisciplinary analysis (MDA) codes are inappropriate. Building new MDAs is difficult since fuel cells are rapidly changing in design, and various competitive architectures exist for balance of plant, hydrogen storage, and all electric aircraft subsystems. In addition, fuel cell design and performance data is closely protected which makes validation difficult and uncertainty significant. Finally, low specific power and high volumes compared to traditional combustion based propulsion result in more highly constrained design spaces that are

  10. Supersonic, nonlinear, attached-flow wing design for high lift with experimental validation

    NASA Technical Reports Server (NTRS)

    Pittman, J. L.; Miller, D. S.; Mason, W. H.

    1984-01-01

    Results of the experimental validation are presented for the three-dimensional cambered wing which was designed to achieve attached supercritical cross flow for lifting conditions typical of supersonic maneuver. The design point was a lift coefficient of 0.4 at Mach 1.62 and 12 deg angle of attack. Results from the nonlinear full potential method are presented to show the validity of the design process along with results from linear theory codes. Longitudinal force and moment data and static pressure data were obtained in the Langley Unitary Plan Wind Tunnel at Mach numbers of 1.58, 1.62, 1.66, 1.70, and 2.00 over an angle of attack range of 0 to 14 deg at a Reynolds number of 2.0 × 10^6 per foot. Oil flow photographs of the upper surface were obtained at M = 1.62 for angles of attack of approximately 8, 10, 12, and 14 deg.

  11. Threats to validity in the design and conduct of preclinical efficacy studies: a systematic review of guidelines for in vivo animal experiments.

    PubMed

    Henderson, Valerie C; Kimmelman, Jonathan; Fergusson, Dean; Grimshaw, Jeremy M; Hackam, Dan G

    2013-01-01

    The vast majority of medical interventions introduced into clinical development prove unsafe or ineffective. One prominent explanation for the dismal success rate is flawed preclinical research. We conducted a systematic review of preclinical research guidelines and organized recommendations according to the type of validity threat (internal, construct, or external) or programmatic research activity they primarily address. We searched MEDLINE, Google Scholar, Google, and the EQUATOR Network website for all preclinical guideline documents published up to April 9, 2013 that addressed the design and conduct of in vivo animal experiments aimed at supporting clinical translation. To be eligible, documents had to provide guidance on the design or execution of preclinical animal experiments and represent the aggregated consensus of four or more investigators. Data from included guidelines were independently extracted by two individuals for discrete recommendations on the design and implementation of preclinical efficacy studies. These recommendations were then organized according to the type of validity threat they addressed. A total of 2,029 citations were identified through our search strategy. From these, we identified 26 guidelines that met our eligibility criteria--most of which were directed at neurological or cerebrovascular drug development. Together, these guidelines offered 55 different recommendations. Some of the most common recommendations included performance of a power calculation to determine sample size, randomized treatment allocation, and characterization of disease phenotype in the animal model prior to experimentation. By identifying the most recurrent recommendations among preclinical guidelines, we provide a starting point for developing preclinical guidelines in other disease domains. We also provide a basis for the study and evaluation of preclinical research practice. Please see later in the article for the Editors' Summary.
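
    One of the most recurrent recommendations, an a priori power calculation to set the number of animals per group, can be illustrated as follows; the effect size, alpha and power values below are hypothetical placeholders, not values endorsed by the guidelines.

```python
# Minimal sketch of an a priori power calculation for a two-group comparison;
# all numeric inputs are hypothetical.
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.8,  # expected standardized difference
                                   alpha=0.05,       # two-sided type I error
                                   power=0.8)        # desired power
print(f"animals needed per group: {n_per_group:.1f}")
```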

  12. Block 2 SRM conceptual design studies. Volume 1, Book 1: Conceptual design package

    NASA Technical Reports Server (NTRS)

    Smith, Brad; Williams, Neal; Miller, John; Ralston, Joe; Richardson, Jennifer; Moore, Walt; Doll, Dan; Maughan, Jeff; Hayes, Fred

    1986-01-01

    The conceptual design studies of a Block 2 Solid Rocket Motor (SRM) required the elimination of asbestos-filled insulation and were open to alternate designs, such as case changes, different propellants, and modified burn rates, to improve reliability and performance. Limitations were placed on SRM changes such that the outside geometry should not impact the physical interfaces with other Space Shuttle elements and should require minimum changes to the aerodynamic and dynamic characteristics of the Space Shuttle vehicle. Previous Space Shuttle SRM experience was assessed and new design concepts combined to define a valid approach to assured flight success and economic operation of the STS. Trade studies, preliminary designs, analyses, plans, and cost estimates are documented.

  13. [Valuating public health in some zoos in Colombia. Phase 1: designing and validating instruments].

    PubMed

    Agudelo-Suárez, Angela N; Villamil-Jiménez, Luis C

    2009-10-01

    The aim was to design and validate instruments for identifying public health problems in some zoological parks in Colombia, thereby allowing the parks to be evaluated. Four instruments were designed and validated with the participation of five zoos. The instruments were validated for appearance and content, sensitivity to change, and reliability, and their usefulness was determined. An evaluation scale was created which assigned a maximum of 400 points, with the following evaluation intervals: 350-400 points meant good public health management, 100-349 points regular management, and 0-99 points deficient management. The instruments were applied to the five zoos as part of the validation, forming a baseline for future evaluation of public health in them. Four valid and useful instruments were obtained for evaluating public health in zoos in Colombia. The five zoos presented regular public health management. The baseline obtained when validating the instruments led to identifying strengths and weaknesses in public health management in the zoos. The instruments evaluated public health management both generally and specifically; they led to diagnosing, identifying, quantifying and scoring zoos in Colombia in terms of public health. The baseline provided a starting point for making comparisons and enabling future follow-up of public health in Colombian zoos.
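
    The 400-point evaluation scale described above maps directly to a small classification rule. The sketch below encodes the reported intervals; the example scores are invented.

```python
# Minimal sketch of the reported 400-point evaluation scale: mapping a zoo's
# total score to the management category defined in the abstract.
def classify_public_health_management(score: int) -> str:
    if not 0 <= score <= 400:
        raise ValueError("score must be between 0 and 400")
    if score >= 350:
        return "good public health management"
    if score >= 100:
        return "regular management"
    return "deficient management"

for s in (372, 210, 45):   # hypothetical totals
    print(s, "->", classify_public_health_management(s))
```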

  14. Development, calibration, and validation of performance prediction models for the Texas M-E flexible pavement design system.

    DOT National Transportation Integrated Search

    2010-08-01

    This study was intended to recommend future directions for the development of TxDOT's Mechanistic-Empirical (TexME) design system. For stress predictions, a multi-layer linear elastic system was evaluated and its validity was verified by compar...

  15. A Validation of Object-Oriented Design Metrics as Quality Indicators

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Briand, Lionel C.; Melo, Walcelio

    1997-01-01

    This paper presents the results of a study in which we empirically investigated the suite of object-oriented (OO) design metrics introduced in another work. More specifically, our goal is to assess these metrics as predictors of fault-prone classes and, therefore, determine whether they can be used as early quality indicators. This study is complementary to the work described where the same suite of metrics had been used to assess frequencies of maintenance changes to classes. To perform our validation accurately, we collected data on the development of eight medium-sized information management systems based on identical requirements. All eight projects were developed using a sequential life cycle model, a well-known OO analysis/design method and the C++ programming language. Based on empirical and quantitative analysis, the advantages and drawbacks of these OO metrics are discussed. Several of Chidamber and Kemerer's OO metrics appear to be useful for predicting class fault-proneness during the early phases of the life-cycle. Also, on our data set, they are better predictors than 'traditional' code metrics, which can only be collected at a later phase of the software development process.

  16. Design and validation of a questionnaire on nursing competence in the notification of medication incidents.

    PubMed

    Salcedo-Diego, Isabel; de Andrés-Gimeno, Begoña; Ruiz-Antorán, Belén; Layunta, Rocío; Serrano-Gallardo, Pilar

    To design and perform a face and content validation of a questionnaire to measure the competence of hospital RNs to report medication incidents. Content and face questionnaire validation descriptive study. A review of the literature was performed for the creation of items. A panel of six experts assessed the relevance of including each item in the questionnaire by calculating the position index; items with a position index >0.70 were selected. The questionnaire was piloted with 59 RNs. Finally, a meeting was convened with the experts in order to reduce the length of the piloted questionnaire through review, discussion and decision by consensus on each item. From the literature review, a battery of 151 items grouped into three elements of competence (attitudes, knowledge and skills) was created. 52.9% (n=80) of the items received a position index >0.70. The response rate in the pilot study was 40.65%. The median time to complete the questionnaire was 23:35 minutes. After reduction by the experts, the final questionnaire comprised 45 items grouped into 32 questions. The NORMA questionnaire, designed to explore the competence of hospital RNs to report medication incidents, has adequate face and content validity and is easy to administer, enabling its institutional implementation. Copyright © 2017 Elsevier España, S.L.U. All rights reserved.

  17. Work design and management in the manufacturing sector: development and validation of the Work Organisation Assessment Questionnaire.

    PubMed

    Griffiths, A; Cox, T; Karanika, M; Khan, S; Tomás, J M

    2006-10-01

    To examine the factor structure, reliability, and validity of a new context-specific questionnaire for the assessment of work and organisational factors. The Work Organisation Assessment Questionnaire (WOAQ) was developed as part of a risk assessment and risk reduction methodology for hazards inherent in the design and management of work in the manufacturing sector. Two studies were conducted. Data were collected from 524 white- and blue-collar employees from a range of manufacturing companies. Exploratory factor analysis was carried out on 28 items that described the most commonly reported failures of work design and management in companies in the manufacturing sector. Concurrent validity data were also collected. A reliability study was conducted with a further 156 employees. Principal component analysis, with varimax rotation, revealed a strong 28-item, five factor structure. The factors were named: quality of relationships with management, reward and recognition, workload, quality of relationships with colleagues, and quality of physical environment. Analyses also revealed a more general summative factor. Results indicated that the questionnaire has good internal consistency and test-retest reliability and validity. Being associated with poor employee health and changes in health related behaviour, the WOAQ factors are possible hazards. It is argued that the strength of those associations offers some estimation of risk. Feedback from the organisations involved indicated that the WOAQ was easy to use and meaningful for them as part of their risk assessment procedures. The studies reported here describe a model of the hazards to employee health and health related behaviour inherent in the design and management of work in the manufacturing sector. It offers an instrument for their assessment. The scales derived which form the WOAQ were shown to be reliable, valid, and meaningful to the user population.

  18. Work design and management in the manufacturing sector: development and validation of the Work Organisation Assessment Questionnaire

    PubMed Central

    Griffiths, A; Cox, T; Karanika, M; Khan, S; Tomás, J‐M

    2006-01-01

    Objectives To examine the factor structure, reliability, and validity of a new context‐specific questionnaire for the assessment of work and organisational factors. The Work Organisation Assessment Questionnaire (WOAQ) was developed as part of a risk assessment and risk reduction methodology for hazards inherent in the design and management of work in the manufacturing sector. Method Two studies were conducted. Data were collected from 524 white‐ and blue‐collar employees from a range of manufacturing companies. Exploratory factor analysis was carried out on 28 items that described the most commonly reported failures of work design and management in companies in the manufacturing sector. Concurrent validity data were also collected. A reliability study was conducted with a further 156 employees. Results Principal component analysis, with varimax rotation, revealed a strong 28‐item, five factor structure. The factors were named: quality of relationships with management, reward and recognition, workload, quality of relationships with colleagues, and quality of physical environment. Analyses also revealed a more general summative factor. Results indicated that the questionnaire has good internal consistency and test‐retest reliability and validity. Being associated with poor employee health and changes in health related behaviour, the WOAQ factors are possible hazards. It is argued that the strength of those associations offers some estimation of risk. Feedback from the organisations involved indicated that the WOAQ was easy to use and meaningful for them as part of their risk assessment procedures. Conclusions The studies reported here describe a model of the hazards to employee health and health related behaviour inherent in the design and management of work in the manufacturing sector. It offers an instrument for their assessment. The scales derived which form the WOAQ were shown to be reliable, valid, and meaningful to the user population. PMID:16858081

  19. Design of an Axisymmetric Afterbody Test Case for CFD Validation

    NASA Technical Reports Server (NTRS)

    Disotell, Kevin J.; Rumsey, Christopher L.

    2017-01-01

    As identified in the CFD Vision 2030 Study commissioned by NASA, validation of advanced RANS models and scale-resolving methods for computing turbulent flow fields must be supported by continuous improvements in fundamental, high-fidelity experiments designed specifically for CFD implementation. In accordance with this effort, the underpinnings of a new test platform referred to herein as the NASA Axisymmetric Afterbody are presented. The devised body-of-revolution is a modular platform consisting of a forebody section and afterbody section, allowing for a range of flow behaviors to be studied on interchangeable afterbody geometries. A body-of-revolution offers advantages in shape definition and fabrication, in avoiding direct contact with wind tunnel sidewalls, and in tail-sting integration to facilitate access to higher Reynolds number tunnels. The current work is focused on validation of smooth-body turbulent flow separation, for which a six-parameter body has been developed. A priori RANS computations are reported for a risk-reduction test configuration in order to demonstrate critical variation among turbulence model results for a given afterbody, ranging from barely-attached to mild separated flow. RANS studies of the effects of forebody nose (with/without) and wind tunnel boundary (slip/no-slip) on the selected afterbody are presented. Representative modeling issues that can be explored with this configuration are the effect of higher Reynolds number on separation behavior, flow physics of the progression from attached to increasingly-separated afterbody flows, and the effect of embedded longitudinal vortices on turbulence structure.

  20. Design and validation of a wind tunnel system for odour sampling on liquid area sources.

    PubMed

    Capelli, L; Sironi, S; Del Rosso, R; Céntola, P

    2009-01-01

    The aim of this study is to describe the methods adopted for the design and the experimental validation of a wind tunnel, a sampling system suitable for the collection of gaseous samples on passive area sources, which allows the simulation of wind action on the surface to be monitored. The first step of the work was the study of the air velocity profiles. The second step consisted in the validation of the sampling system. For this purpose, the odour concentration of some air samples collected by means of the wind tunnel was measured by dynamic olfactometry. The results of the air velocity measurements show that the wind tunnel design features enabled the achievement of a uniform and homogeneous air flow through the hood. Moreover, the laboratory tests showed a very good correspondence between the odour concentration values measured at the wind tunnel outlet and the odour concentration values predicted by the application of a specific volatilization model, based on the Prandtl boundary layer theory. The agreement between experimental and theoretical trends demonstrates that the studied wind tunnel represents a suitable sampling system for the simulation of specific odour emission rates from liquid area sources without outward flow.

  1. Thermodynamically optimal whole-genome tiling microarray design and validation.

    PubMed

    Cho, Hyejin; Chou, Hui-Hsien

    2016-06-13

    Microarrays are an efficient apparatus for interrogating the whole transcriptome of a species. Microarrays can be designed according to annotated gene sets, but the resulting microarrays cannot be used to identify novel transcripts, and this design method is not applicable to unannotated species. Alternatively, a whole-genome tiling microarray can be designed using only genomic sequences without gene annotations, and it can be used to detect novel RNA transcripts as well as known genes. The difficulty with tiling microarray design lies in the tradeoff between probe specificity and coverage of the genome. Sequence comparison methods based on BLAST or similar software are commonly employed in microarray design, but they cannot precisely determine the subtle thermodynamic competition between probe targets and partially matched probe nontargets during hybridizations. Using the whole-genome thermodynamic analysis software PICKY to design tiling microarrays, we can achieve maximum whole-genome coverage allowable under the thermodynamic constraints of each target genome. The resulting tiling microarrays are thermodynamically optimal in the sense that all selected probes share the same melting temperature separation range between their targets and closest nontargets, and no additional probes can be added without violating the specificity of the microarray to the target genome. This new design method was used to create two whole-genome tiling microarrays for Escherichia coli MG1655 and Agrobacterium tumefaciens C58, and the experimental results validated the design.
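
    The selection criterion described, keeping probes whose targets are separated from their closest nontargets by a melting temperature margin, can be sketched as follows. The Wallace-rule Tm estimate, the 10 °C separation and the example sequences are simplifications for illustration; PICKY's actual thermodynamic model is far more detailed.

```python
# Minimal sketch (not PICKY itself): keep a candidate tiling probe only when the
# melting temperature of its intended target match exceeds that of the closest
# nontarget match by a fixed separation. Tm uses the simple Wallace rule here.
def wallace_tm(seq: str) -> float:
    """Rough Tm estimate: 2*(A+T) + 4*(G+C), reasonable only for short oligos."""
    s = seq.upper()
    return 2 * (s.count("A") + s.count("T")) + 4 * (s.count("G") + s.count("C"))

def keep_probe(target_match: str, closest_nontarget_match: str,
               separation: float = 10.0) -> bool:
    return wallace_tm(target_match) - wallace_tm(closest_nontarget_match) >= separation

# Hypothetical probe: perfect 20-mer target match vs a 12-base partial nontarget match.
print(keep_probe("ATGCGTACGTTAGCCGATCC", "ATGCGTACGTTA"))  # True: large Tm gap
```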

  2. Value-Eroding Teacher Behaviors Scale: A Validity and Reliability Study

    ERIC Educational Resources Information Center

    Arseven, Zeynep; Kiliç, Abdurrahman; Sahin, Seyma

    2016-01-01

    In the present study, it is aimed to develop a valid and reliable scale for determining value-eroding behaviors of teachers, hence their values of judgment. The items of the "Value-eroding Teacher Behaviors Scale" were designed in the form of 5-point likert type rating scale. The exploratory factor analysis (EFA) was conducted to…

  3. Design and validation of the Health Professionals' Attitudes Toward the Homeless Inventory (HPATHI).

    PubMed

    Buck, David S; Monteiro, F Marconi; Kneuper, Suzanne; Rochon, Donna; Clark, Dana L; Melillo, Allegra; Volk, Robert J

    2005-01-10

    Recent literature has called for humanistic care of patients and for medical schools to begin incorporating humanism into medical education. To assess the attitudes of health-care professionals toward homeless patients and to demonstrate how those attitudes might impact optimal care, we developed and validated a new survey instrument, the Health Professional Attitudes Toward the Homeless Inventory (HPATHI). An instrument that measures providers' attitudes toward the homeless could offer meaningful information for the design and implementation of educational activities that foster more compassionate homeless health care. Our intention was to describe the process of designing and validating the new instrument and to discuss the usefulness of the instrument for assessing the impact of educational experiences that involve working directly with the homeless on the attitudes, interest, and confidence of medical students and other health-care professionals. The study consisted of three phases: identifying items for the instrument; pilot testing the initial instrument with a group of 72 third-year medical students; and modifying and administering the instrument in its revised form to 160 health-care professionals and third-year medical students. The instrument was analyzed for reliability and validity throughout the process. A 19-item version of the HPATHI had good internal consistency with a Cronbach's alpha of 0.88 and a test-retest reliability coefficient of 0.69. The HPATHI showed good concurrent validity, and respondents with more than one year of experience with homeless patients scored significantly higher than did those with less experience. Factor analysis yielded three subscales: Personal Advocacy, Social Advocacy, and Cynicism. The HPATHI demonstrated strong reliability for the total scale and satisfactory test-retest reliability. Extreme group comparisons suggested that experience with the homeless rather than medical training itself could affect health

  4. Design and Validation of a Virtual Player for Studying Interpersonal Coordination in the Mirror Game.

    PubMed

    Zhai, Chao; Alderisio, Francesco; Slowinski, Piotr; Tsaneva-Atanasova, Krasimira; di Bernardo, Mario

    2018-03-01

    The mirror game has been recently proposed as a simple, yet powerful paradigm for studying interpersonal interactions. It has been suggested that a virtual partner able to play the game with human subjects can be an effective tool to affect the underlying neural processes needed to establish the necessary connections between the players, and also to provide new clinical interventions for rehabilitation of patients suffering from social disorders. Inspired by the motor processes of the central nervous system (CNS) and the musculoskeletal system in the human body, in this paper we develop a novel interactive cognitive architecture based on nonlinear control theory to drive a virtual player (VP) to play the mirror game with a human player (HP) in different configurations. Specifically, we consider two cases: 1) the VP acts as leader and 2) the VP acts as follower. The crucial problem is to design a feedback control architecture capable of imitating and following or leading an HP in a joint action task. The movement of the end-effector of the VP is modeled by means of a feedback controlled Haken-Kelso-Bunz (HKB) oscillator, which is coupled with the observed motion of the HP measured in real time. To this aim, two types of control algorithms (adaptive control and optimal control) are used and implemented on the HKB model so that the VP can generate a human-like motion while satisfying certain kinematic constraints. A proof of convergence of the control algorithms is presented together with an extensive numerical and experimental validation of their effectiveness. A comparison with other existing designs is also discussed, showing the flexibility and the advantages of our control-based approach.
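
    To make the virtual-player architecture more tangible, the sketch below integrates a feedback-coupled HKB oscillator whose input nudges it toward the human player's position, roughly the follower configuration. The HKB form used is the commonly cited one, and the simple proportional coupling and parameter values are assumptions standing in for the paper's adaptive and optimal controllers.

```python
# Minimal sketch (assumptions flagged in the lead-in): Euler integration of a
# feedback-coupled HKB oscillator as the virtual player's end-effector model.
import numpy as np

alpha, beta, gamma, omega = 1.0, 1.0, 1.0, 2.0 * np.pi  # illustrative parameters
coupling_gain = 5.0
dt, T = 0.001, 5.0
steps = int(T / dt)

def human_position(t):
    """Stand-in for the measured human trajectory (here a smooth oscillation)."""
    return 0.5 * np.sin(2.0 * np.pi * 0.8 * t)

x, v = 0.0, 0.0   # virtual player end-effector position and velocity
trajectory = np.empty(steps)
for k in range(steps):
    t = k * dt
    u = coupling_gain * (human_position(t) - x)               # follower-like coupling
    a = -(alpha * x**2 + beta * v**2 - gamma) * v - omega**2 * x + u
    x += v * dt
    v += a * dt
    trajectory[k] = x

print("final virtual-player position:", round(trajectory[-1], 3))
```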

  5. Experimental validation of systematically designed acoustic hyperbolic metamaterial slab exhibiting negative refraction

    NASA Astrophysics Data System (ADS)

    Christiansen, Rasmus E.; Sigmund, Ole

    2016-09-01

    This Letter reports on the experimental validation of a two-dimensional acoustic hyperbolic metamaterial slab optimized to exhibit negative refractive behavior. The slab was designed using a topology optimization based systematic design method allowing for tailoring the refractive behavior. The experimental results confirm the predicted refractive capability as well as the predicted transmission at an interface. The study simultaneously provides an estimate of the attenuation inside the slab stemming from the boundary layer effects—insight which can be utilized in the further design of the metamaterial slabs. The capability of tailoring the refractive behavior opens possibilities for different applications. For instance, a slab exhibiting zero refraction across a wide angular range is capable of funneling acoustic energy through it, while a material exhibiting the negative refractive behavior across a wide angular range provides lensing and collimating capabilities.

  6. PSI-Center Validation Studies

    NASA Astrophysics Data System (ADS)

    Nelson, B. A.; Akcay, C.; Glasser, A. H.; Hansen, C. J.; Jarboe, T. R.; Marklin, G. J.; Milroy, R. D.; Morgan, K. D.; Norgaard, P. C.; Shumlak, U.; Sutherland, D. A.; Victor, B. S.; Sovinec, C. R.; O'Bryan, J. B.; Held, E. D.; Ji, J.-Y.; Lukin, V. S.

    2014-10-01

    The Plasma Science and Innovation Center (PSI-Center - http://www.psicenter.org) supports collaborating validation platform experiments with 3D extended MHD simulations using the NIMROD, HiFi, and PSI-TET codes. Collaborators include the Bellan Plasma Group (Caltech), CTH (Auburn U), HBT-EP (Columbia), HIT-SI (U Wash-UW), LTX (PPPL), MAST (Culham), Pegasus (U Wisc-Madison), SSX (Swarthmore College), TCSU (UW), and ZaP/ZaP-HD (UW). The PSI-Center is exploring application of validation metrics between experimental data and simulation results. Biorthogonal decomposition (BOD) is used to compare experiments with simulations. BOD separates data sets into spatial and temporal structures, giving greater weight to dominant structures. Several BOD metrics are being formulated with the goal of quantitative validation. Results from these simulation and validation studies, as well as an overview of the PSI-Center status, will be presented.
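
    Biorthogonal decomposition of a space-time data set amounts to a singular value decomposition into spatial and temporal modes weighted by their energy. The sketch below shows that calculation on synthetic probe data; the signal model and mode count are illustrative, not PSI-Center data.

```python
# Minimal sketch: biorthogonal decomposition (BOD) of a space-time matrix via
# SVD, separating spatial ("topos") and temporal ("chronos") structures and
# ranking them by their energy fraction. Data here are synthetic.
import numpy as np

rng = np.random.default_rng(0)
n_probes, n_times = 32, 400
space = np.linspace(0, 2 * np.pi, n_probes)
time = np.linspace(0, 1, n_times)

# Synthetic "measurement": two coherent structures plus noise.
data = (np.outer(np.sin(space), np.cos(2 * np.pi * 5 * time))
        + 0.3 * np.outer(np.sin(2 * space), np.sin(2 * np.pi * 9 * time))
        + 0.05 * rng.standard_normal((n_probes, n_times)))

topos, sing_vals, chronos = np.linalg.svd(data, full_matrices=False)
energy = sing_vals**2 / np.sum(sing_vals**2)
print("energy fraction of first three BOD modes:", np.round(energy[:3], 3))
```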

  7. Design and validation of Segment--freely available software for cardiovascular image analysis.

    PubMed

    Heiberg, Einar; Sjögren, Jane; Ugander, Martin; Carlsson, Marcus; Engblom, Henrik; Arheden, Håkan

    2010-01-11

    Commercially available software for cardiovascular image analysis often has limited functionality and frequently lacks the careful validation that is required for clinical studies. We have already implemented a cardiovascular image analysis software package and released it as freeware for the research community. However, it was distributed as a stand-alone application and other researchers could not extend it by writing their own custom image analysis algorithms. We believe that the work required to make a clinically applicable prototype can be reduced by making the software extensible, so that researchers can develop their own modules or improvements. Such an initiative might then serve as a bridge between image analysis research and cardiovascular research. The aim of this article is therefore to present the design and validation of a cardiovascular image analysis software package (Segment) and to announce its release in a source code format. Segment can be used for image analysis in magnetic resonance imaging (MRI), computed tomography (CT), single photon emission computed tomography (SPECT) and positron emission tomography (PET). Some of its main features include loading of DICOM images from all major scanner vendors, simultaneous display of multiple image stacks and plane intersections, automated segmentation of the left ventricle, quantification of MRI flow, tools for manual and general object segmentation, quantitative regional wall motion analysis, myocardial viability analysis and image fusion tools. Here we present an overview of the validation results and validation procedures for the functionality of the software. We describe a technique to ensure continued accuracy and validity of the software by implementing and using a test script that tests the functionality of the software and validates the output. The software has been made freely available for research purposes in a source code format on the project home page http://segment.heiberg.se. Segment is

  8. Validity and power of association testing in family-based sampling designs: evidence for and against the common wisdom.

    PubMed

    Knight, Stacey; Camp, Nicola J

    2011-04-01

    Current common wisdom posits that association analyses using family-based designs have inflated type 1 error rates (if relationships are ignored) and that independent controls are more powerful than familial controls. We explore these suppositions. We show theoretically that family-based designs can have deflated type 1 error rates. Through simulation, we examine the validity and power of family designs for several scenarios: cases from randomly or selectively ascertained pedigrees; and familial or independent controls. Family structures considered are as follows: sibships, nuclear families, moderate-sized and extended pedigrees. Three methods were considered with the χ² test for trend: variance correction (VC), weighted (weights assigned to account for genetic similarity), and naïve (ignoring relatedness), as well as the Modified Quasi-likelihood Score (MQLS) test. Selectively ascertained pedigrees had similar levels of disease enrichment; random ascertainment had no such restriction. Data for 1,000 cases and 1,000 controls were created under the null and alternate models. The VC and MQLS methods were always valid. The naïve method was anti-conservative if independent controls were used and valid or conservative in designs with familial controls. The weighted association method was generally valid for independent controls, and was conservative for familial controls. With regard to power, independent controls were more powerful for small-to-moderate selectively ascertained pedigrees, but familial and independent controls were equivalent in the extended pedigrees and familial controls were consistently more powerful for all randomly ascertained pedigrees. These results suggest a more complex situation than previously assumed, which has important implications for study design and analysis. © 2011 Wiley-Liss, Inc.
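
    For orientation, the naive analysis discussed above is essentially a chi-square test for trend that ignores relatedness; the sketch below computes it as N·r² between genotype dose and case status on simulated unrelated data. The variance-correction and MQLS methods additionally rescale the test using kinship information, which is not shown here; the allele frequency and sample size are invented.

```python
# Minimal sketch (illustrative, not the authors' code): the naive chi-square
# test for trend as N * r^2 between genotype dose (0/1/2) and case status,
# ignoring relatedness among subjects.
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(2)
genotype = rng.choice([0, 1, 2], size=2000, p=[0.49, 0.42, 0.09])  # allele freq ~0.3
status = rng.binomial(1, 0.5, size=2000)                            # roughly 1000 cases

r = np.corrcoef(genotype, status)[0, 1]
trend_stat = len(status) * r**2                  # ~chi-square with 1 df under the null
p_value = chi2.sf(trend_stat, df=1)
print(f"trend statistic = {trend_stat:.2f}, p = {p_value:.3f}")
```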

  9. Validation of The Scenarios Designed For The Eu Registration of Pesticides

    NASA Astrophysics Data System (ADS)

    Piñeros Garcet, J. D.; de Nie, D.; Vanclooster, M.; Tiktak, A.; Klein, M.

    As part of recent efforts to harmonise registration procedures for pesticides within the EU, a set of uniform principles were developed, setting out the detailed evaluation and decision making criteria for pesticide registration. The EU directive 91/414/EEC places great importance on the use of validated models to calculate Predicted Environmental Concentrations (PECs), as a basis for assessing the environmental risks and health effects. To be used in a harmonised registration process, the quality of PEC modelling needs to be assured. Quality assurance of mathematical modelling implies, amongst others, the validation of the environmental modelling scenarios. The FOrum for the CO-ordination of pesticide fate models and their USe (FOCUS) is the current platform where common modelling methodologies are designed and subjected for approval to the European authorities. In 2000, the FOCUS groundwater scenarios working group defined the procedures for realising tier 1 PEC groundwater calculations for the active substances of plant protection products at the pan-European level. The procedures and guidelines were approved by the Standing Committee on Plant Health, and are now recommended for tier 1 PEC groundwater calculations in the registration dossier. Yet, the working group also identified a range of uncertainties related to the validity of the present leaching scenarios. To mitigate some of these problems, the EU R&D project APECOP was designed and approved for support in the framework of the EU-FP5-Quality of Life Programme. One of the objectives of the project is to evaluate the appropriateness of the current Tier 1 groundwater scenarios. In this paper, we summarise the methodology and results of the scenarios validation.

  10. Validation of The Scenarios Designed For The Eu Registration of Pesticides

    NASA Astrophysics Data System (ADS)

    Piñeros Garcet, J. D.; de Nie, D.; Vanclooster, M.; Tiktak, A.; Klein, M.; Jones, A.

    As part of recent efforts to harmonise registration procedures for pesticides within the EU, a set of uniform principles were developed, setting out the detailed evaluation and decision making criteria for pesticide registration. The EU directive 91/414/EEC places great importance on the use of validated models to calculate Predicted Environmental Concentrations (PECs), as a basis for assessing the environmental risks and health effects. To be used in a harmonised registration process, the quality of PEC modelling needs to be assured. Quality assurance of mathematical modelling implies, amongst others, the validation of the environmental modelling scenarios. The FOrum for the CO-ordination of pesticide fate models and their USe (FOCUS) is the current platform where common modelling methodologies are designed and subjected for approval to the European authorities. In 2000, the FOCUS groundwater scenarios working group defined the procedures for realising tier 1 PEC groundwater calculations for the active substances of plant protection products at the pan-European level. The procedures and guidelines were approved by the Standing Committee on Plant Health, and are now recommended for tier 1 PEC groundwater calculations in the registration dossier. Yet, the working group also identified a range of uncertainties related to the validity of the present leaching scenarios. To mitigate some of these problems, the EU R&D project APECOP was designed and approved for support in the framework of the EU-FP5-Quality of Life Programme. One of the objectives of the project is to evaluate the appropriateness of the current Tier 1 groundwater scenarios. In this paper, we summarise the methodology and results of the scenarios validation.

  11. Design and Validation of an Infrared Badal Optometer for Laser Speckle (IBOLS)

    PubMed Central

    Teel, Danielle F. W.; Copland, R. James; Jacobs, Robert J.; Wells, Thad; Neal, Daniel R.; Thibos, Larry N.

    2009-01-01

    Purpose To validate the design of an infrared wavefront aberrometer with a Badal optometer employing the principle of laser speckle generated by a spinning disk and infrared light. The instrument was designed for subjective meridional refraction in infrared light by human patients. Methods Validation employed a model eye with known refractive error determined with an objective infrared wavefront aberrometer. The model eye was used to produce a speckle pattern on an artificial retina with controlled amounts of ametropia introduced with auxiliary ophthalmic lenses. A human observer performed the psychophysical task of observing the speckle pattern (with the aid of a video camera sensitive to infrared radiation) formed on the artificial retina. Refraction was performed by adjusting the vergence of incident light with the Badal optometer to nullify the motion of laser speckle. Validation of the method was performed for different levels of spherical ametropia and for various configurations of an astigmatic model eye. Results Subjective measurements of meridional refractive error over the range −4D to + 4D agreed with astigmatic refractive errors predicted by the power of the model eye in the meridian of motion of the spinning disk. Conclusions Use of a Badal optometer to control laser speckle is a valid method for determining subjective refractive error at infrared wavelengths. Such an instrument will be useful for comparing objective measures of refractive error obtained for the human eye with autorefractors and wavefront aberrometers that employ infrared radiation. PMID:18772719

  12. Design and Validation of a Photographic Expressive Persian Grammar Test for Children Aged 4-6 Years

    ERIC Educational Resources Information Center

    Haresabadi, Fatemeh; Ebadi, Abbas; Shirazi, Tahereh Sima; Dastjerdi Kazemi, Mehdi

    2016-01-01

    Syntax has a high importance among linguistic parameters, and syntax-related problems are the most common in language disorders. Therefore, the present study aimed to design a Photographic Expressive Persian Grammar Test for Iranian children in the age group of 4-6 years and to determine its validity and reliability. First, the target…

  13. Population Health Metrics Research Consortium gold standard verbal autopsy validation study: design, implementation, and development of analysis datasets

    PubMed Central

    2011-01-01

    Background Verbal autopsy methods are critically important for evaluating the leading causes of death in populations without adequate vital registration systems. With a myriad of analytical and data collection approaches, it is essential to create a high quality validation dataset from different populations to evaluate comparative method performance and make recommendations for future verbal autopsy implementation. This study was undertaken to compile a set of strictly defined gold standard deaths for which verbal autopsies were collected to validate the accuracy of different methods of verbal autopsy cause of death assignment. Methods Data collection was implemented in six sites in four countries: Andhra Pradesh, India; Bohol, Philippines; Dar es Salaam, Tanzania; Mexico City, Mexico; Pemba Island, Tanzania; and Uttar Pradesh, India. The Population Health Metrics Research Consortium (PHMRC) developed stringent diagnostic criteria including laboratory, pathology, and medical imaging findings to identify gold standard deaths in health facilities as well as an enhanced verbal autopsy instrument based on World Health Organization (WHO) standards. A cause list was constructed based on the WHO Global Burden of Disease estimates of the leading causes of death, potential to identify unique signs and symptoms, and the likely existence of sufficient medical technology to ascertain gold standard cases. Blinded verbal autopsies were collected on all gold standard deaths. Results Over 12,000 verbal autopsies on deaths with gold standard diagnoses were collected (7,836 adults, 2,075 children, 1,629 neonates, and 1,002 stillbirths). Difficulties in finding sufficient cases to meet gold standard criteria as well as problems with misclassification for certain causes meant that the target list of causes for analysis was reduced to 34 for adults, 21 for children, and 10 for neonates, excluding stillbirths. To ensure strict independence for the validation of methods and assessment of

  14. Study design and "evidence" in patient-oriented research.

    PubMed

    Concato, John

    2013-06-01

    Individual studies in patient-oriented research, whether described as "comparative effectiveness" or using other terms, are based on underlying methodological designs. A simple taxonomy of study designs includes randomized controlled trials on the one hand, and observational studies (such as case series, cohort studies, and case-control studies) on the other. A rigid hierarchy of these design types is a fairly recent phenomenon, promoted as a tenet of "evidence-based medicine," with randomized controlled trials receiving gold-standard status in terms of producing valid results. Although randomized trials have many strengths, and contribute substantially to the evidence base in clinical care, making presumptions about the quality of a study based solely on category of research design is unscientific. Both the limitations of randomized trials as well as the strengths of observational studies tend to be overlooked when a priori assumptions are made. This essay presents an argument in support of a more balanced approach to evaluating evidence, and discusses representative examples from the general medical as well as pulmonary and critical care literature. The simultaneous consideration of validity (whether results are correct "internally") and generalizability (how well results apply to "external" populations) is warranted in assessing whether a study's results are accurate for patients likely to receive the intervention-examining the intersection of clinical and methodological issues in what can be called a medicine-based evidence approach. Examination of cause-effect associations in patient-oriented research should recognize both the strengths and limitations of randomized trials as well as observational studies.

  15. Alternative Fistula Risk Score for Pancreatoduodenectomy (a-FRS): Design and International External Validation.

    PubMed

    Mungroop, Timothy H; van Rijssen, L Bengt; van Klaveren, David; Smits, F Jasmijn; van Woerden, Victor; Linnemann, Ralph J; de Pastena, Matteo; Klompmaker, Sjors; Marchegiani, Giovanni; Ecker, Brett L; van Dieren, Susan; Bonsing, Bert; Busch, Olivier R; van Dam, Ronald M; Erdmann, Joris; van Eijck, Casper H; Gerhards, Michael F; van Goor, Harry; van der Harst, Erwin; de Hingh, Ignace H; de Jong, Koert P; Kazemier, Geert; Luyer, Misha; Shamali, Awad; Barbaro, Salvatore; Armstrong, Thomas; Takhar, Arjun; Hamady, Zaed; Klaase, Joost; Lips, Daan J; Molenaar, I Quintus; Nieuwenhuijs, Vincent B; Rupert, Coen; van Santvoort, Hjalmar C; Scheepers, Joris J; van der Schelling, George P; Bassi, Claudio; Vollmer, Charles M; Steyerberg, Ewout W; Abu Hilal, Mohammed; Groot Koerkamp, Bas; Besselink, Marc G

    2017-12-12

    The aim of this study was to develop an alternative fistula risk score (a-FRS) for postoperative pancreatic fistula (POPF) after pancreatoduodenectomy, without blood loss as a predictor. Blood loss, one of the predictors of the original-FRS, was not a significant factor during 2 recent external validations. The a-FRS was developed in 2 databases: the Dutch Pancreatic Cancer Audit (18 centers) and the University Hospital Southampton NHS. Primary outcome was grade B/C POPF according to the 2005 International Study Group on Pancreatic Surgery (ISGPS) definition. The score was externally validated in 2 independent databases (University Hospital of Verona and University Hospital of Pennsylvania), using both 2005 and 2016 ISGPS definitions. The a-FRS was also compared with the original-FRS. For model design, 1924 patients were included of whom 12% developed POPF. Three predictors were strongly associated with POPF: soft pancreatic texture [odds ratio (OR) 2.58, 95% confidence interval (95% CI) 1.80-3.69], small pancreatic duct diameter (per mm increase, OR: 0.68, 95% CI: 0.61-0.76), and high body mass index (BMI) (per kg/m² increase, OR: 1.07, 95% CI: 1.04-1.11). Discrimination was adequate with an area under curve (AUC) of 0.75 (95% CI: 0.71-0.78) after internal validation, and 0.78 (0.74-0.82) after external validation. The predictive capacity of a-FRS was comparable with the original-FRS, both for the 2005 definition (AUC 0.78 vs 0.75, P = 0.03), and 2016 definition (AUC 0.72 vs 0.70, P = 0.05). The a-FRS predicts POPF after pancreatoduodenectomy based on 3 easily available variables (pancreatic texture, duct diameter, BMI) without blood loss and pathology, and was successfully validated for both the 2005 and 2016 POPF definition.
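
    To see how the three reported predictors would combine into an individual risk, the sketch below evaluates a logistic model whose coefficients are the log odds ratios quoted in the abstract. The intercept is a purely hypothetical placeholder (the published a-FRS formula is not reproduced here), so only the relative ordering of the two example patients is meaningful.

```python
# Minimal sketch (not the published a-FRS formula): a logistic model built from
# the three reported predictors; the intercept is a hypothetical placeholder.
import math

COEF_SOFT_TEXTURE = math.log(2.58)   # soft vs hard pancreas
COEF_DUCT_MM      = math.log(0.68)   # per mm of pancreatic duct diameter
COEF_BMI          = math.log(1.07)   # per kg/m^2 of BMI
INTERCEPT = -3.0                     # hypothetical placeholder, not the published value

def popf_risk(soft_texture: bool, duct_mm: float, bmi: float) -> float:
    lp = (INTERCEPT
          + COEF_SOFT_TEXTURE * int(soft_texture)
          + COEF_DUCT_MM * duct_mm
          + COEF_BMI * bmi)
    return 1.0 / (1.0 + math.exp(-lp))

# Higher-risk profile (soft gland, narrow duct, high BMI) vs lower-risk profile.
print(f"{popf_risk(True, 2.0, 30.0):.2f} vs {popf_risk(False, 6.0, 22.0):.2f}")
```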

  16. MRPrimer: a MapReduce-based method for the thorough design of valid and ranked primers for PCR

    PubMed Central

    Kim, Hyerin; Kang, NaNa; Chon, Kang-Wook; Kim, Seonho; Lee, NaHye; Koo, JaeHyung; Kim, Min-Soo

    2015-01-01

    Primer design is a fundamental technique that is widely used for polymerase chain reaction (PCR). Although many methods have been proposed for primer design, they require a great deal of manual effort to generate feasible and valid primers, including homology tests on off-target sequences using BLAST-like tools. That approach becomes impractical when primers must be designed for many quantitative PCR (qPCR) target sequences under the same stringent and allele-invariant constraints. To address this issue, we propose an entirely new method called MRPrimer that can design all feasible and valid primer pairs existing in a DNA database at once, while simultaneously checking a multitude of filtering constraints and validating primer specificity. Furthermore, MRPrimer suggests the best primer pair for each target sequence, based on a ranking method. Through qPCR analysis using 343 primer pairs and the corresponding sequencing and comparative analyses, we showed that the primer pairs designed by MRPrimer are very stable and effective for qPCR. In addition, MRPrimer is computationally efficient and scalable and therefore useful for quickly constructing an entire collection of feasible and valid primers for frequently updated databases like RefSeq. Furthermore, we suggest that MRPrimer can be utilized conveniently for experiments requiring primer design, especially real-time qPCR. PMID:26109350
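
    MRPrimer itself distributes this filtering over a MapReduce cluster; the single-machine Python sketch below only illustrates the general idea of mapping candidate primers through filtering constraints and then keeping valid pairs. The thresholds and helper functions are invented for illustration and are not MRPrimer's actual constraints or code.

      from itertools import product

      def gc_content(seq: str) -> float:
          return sum(base in "GC" for base in seq) / len(seq)

      def melting_temp(seq: str) -> float:
          # Wallace rule: a rough Tm estimate for short oligonucleotides
          return 2 * (seq.count("A") + seq.count("T")) + 4 * (seq.count("G") + seq.count("C"))

      def passes_single_filters(seq: str) -> bool:
          # Illustrative thresholds only, not MRPrimer's published constraints
          return (18 <= len(seq) <= 25
                  and 0.4 <= gc_content(seq) <= 0.6
                  and 55 <= melting_temp(seq) <= 65)

      def valid_pairs(forward_cands, reverse_cands, max_tm_diff=5.0):
          """'Map' each candidate through single-primer filters, then 'reduce' to pairs."""
          fwd = [p for p in forward_cands if passes_single_filters(p)]
          rev = [p for p in reverse_cands if passes_single_filters(p)]
          return [(f, r) for f, r in product(fwd, rev)
                  if abs(melting_temp(f) - melting_temp(r)) <= max_tm_diff]

      print(valid_pairs(["ATGCGTACGTTAGCATCGAT"], ["CGTAGCTAGGCTAACGTGCA"]))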

  17. Validation studies and proficiency testing.

    PubMed

    Ankilam, Elke; Heinze, Petra; Kay, Simon; Van den Eede, Guy; Popping, Bert

    2002-01-01

    Genetically modified organisms (GMOs) entered the European food market in 1996. Current legislation demands the labeling of food products if they contain >1% GMO, as assessed for each ingredient of the product. To create confidence in the testing methods and to complement enforcement requirements, there is an urgent need for internationally validated methods, which could serve as reference methods. To date, several methods have been submitted to validation trials at an international level; approaches now exist that can be used in different circumstances and for different food matrixes. Moreover, the requirement for the formal validation of methods is clearly accepted; several national and international bodies are active in organizing studies. Further validation studies, especially on the quantitative polymerase chain reaction methods, need to be performed to cover the rising demand for new extraction methods and other background matrixes, as well as for novel GMO constructs.

  18. [Design and validation of a brief questionnaire to assess young people's sexual knowledge].

    PubMed

    Leon-Larios, Fátima; Gómez-Baya, Diego

    2018-06-01

    Only very few instruments have been developed to assess sexual knowledge and practices. Most of the research to date has been carried out with adolescent samples, but not with university students, who are also at a particularly risky stage. The aim of this study was to design and validate a brief questionnaire assessing young people's sexual knowledge, practices and behaviors, in order to design health education programs in the university context. We created a specific questionnaire about sexual patterns in university students and a brief questionnaire consisting of 9 true/false items about contraception, sexuality and sexually transmitted diseases. We carried out a pilot study, reliability analysis (KR-20) and validity analyses using factorial analysis and examining associations with other variables. 566 students from the University of Seville participated during 2015/16. One item was eliminated because of poor comprehension (only 13.9% of correct answers) and weak or non-significant associations (p > 0.05). The final scale comprised 8 items and showed good internal consistency reliability (KR-20 = 0.57), as well as factorial and external validity. A three-factor model showed good data fit, χ2 (14, N=566)=17.48, p= 0.232, Comparative Fit Index (CFI) = 0.97, root mean square error of approximation (RMSEA) = 0.02. Participants with less knowledge about sexuality were those who did not receive any information (M=6.82, SD=1.41), had no partner (M=6.87, SD=1.35), had had an abortion (M=6.43, SD=1.95), did not use any contraceptive method (M=6.66, SD=0.58) or used coitus interruptus (M=6.55, SD=1.39), and had less frequent sexual relationships, e.g., once or twice a year (M=6.49, SD=1.70). This questionnaire is a short instrument to assess students' practices and knowledge about sexuality and contraception. The reliability and validity analyses have shown the good psychometric properties of this instrument.
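
    KR-20 is the internal-consistency coefficient for dichotomous (true/false) items; a minimal Python sketch of the calculation, using made-up responses rather than the study's data, is shown below.

      import numpy as np

      def kr20(responses: np.ndarray) -> float:
          """Kuder-Richardson 20 for a matrix of 0/1 item responses (rows = respondents)."""
          k = responses.shape[1]                         # number of items
          p = responses.mean(axis=0)                     # proportion answering each item correctly
          item_var_sum = np.sum(p * (1 - p))
          total_var = responses.sum(axis=1).var(ddof=1)  # variance of total scores
          return (k / (k - 1)) * (1 - item_var_sum / total_var)

      # Made-up data: 6 respondents answering 8 true/false items (1 = correct)
      data = np.array([
          [1, 1, 1, 0, 1, 1, 1, 0],
          [1, 1, 0, 0, 1, 0, 1, 0],
          [1, 0, 0, 0, 0, 0, 1, 0],
          [1, 1, 1, 1, 1, 1, 1, 1],
          [0, 0, 0, 0, 0, 0, 0, 0],
          [1, 1, 1, 0, 1, 1, 0, 0],
      ])
      print(round(kr20(data), 3))   # roughly 0.92 for this toy matrix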

  19. 40 CFR 761.395 - A validation study.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 31 2011-07-01 2011-07-01 false A validation study. 761.395 Section... PROHIBITIONS Comparison Study for Validating a New Performance-Based Decontamination Solvent Under § 761.79(d)(4) § 761.395 A validation study. (a) Decontaminate the following prepared sample surfaces using the...

  20. 40 CFR 761.395 - A validation study.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false A validation study. 761.395 Section... PROHIBITIONS Comparison Study for Validating a New Performance-Based Decontamination Solvent Under § 761.79(d)(4) § 761.395 A validation study. (a) Decontaminate the following prepared sample surfaces using the...

  1. The Contribution of Rubrics to the Validity of Performance Assessment: A Study of the Conservation-Restoration and Design Undergraduate Degrees

    ERIC Educational Resources Information Center

    Menéndez-Varela, José-Luis; Gregori-Giralt, Eva

    2016-01-01

    Rubrics have attained considerable importance in the authentic and sustainable assessment paradigm; nevertheless, few studies have examined their contribution to validity, especially outside the domain of educational studies. This empirical study used a quantitative approach to analyse the validity of a rubrics-based performance assessment. Raters…

  2. A model-based design and validation approach with OMEGA-UML and the IF toolset

    NASA Astrophysics Data System (ADS)

    Ben-hafaiedh, Imene; Constant, Olivier; Graf, Susanne; Robbana, Riadh

    2009-03-01

    Intelligent embedded systems such as autonomous robots and other industrial systems are becoming increasingly heterogeneous with respect to the platforms on which they are implemented, and their software architectures are consequently more complex to design and analyse. In this context, it is important to have well-defined design methodologies supported by (1) high-level design concepts that allow designers to master this complexity, (2) concepts for expressing non-functional requirements, and (3) analysis tools that can verify, or refute, that the system under development will conform to its requirements. We illustrate such an approach for the design of complex embedded systems by means of a small case study used as a running example. We briefly present the main concepts of the OMEGA-RT UML profile, show how we use this profile in a modelling approach, and explain how these concepts are used in the IFx verification toolbox to integrate validation into the design flow and make scalable verification possible.

  3. Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation

    NASA Technical Reports Server (NTRS)

    DePriest, Douglas; Morgan, Carolyn

    2003-01-01

    The cost and safety goals for NASA's next generation of reusable launch vehicle (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase is focused on assessing the performance of these models to accurately predict the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.

  4. 40 CFR 761.395 - A validation study.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... 40 Protection of Environment 32 2013-07-01 2013-07-01 false A validation study. 761.395 Section...)(4) § 761.395 A validation study. (a) Decontaminate the following prepared sample surfaces using the... must be 10 µg/100 cm2, then the validation study failed and the solvent may not be used for...

  5. 40 CFR 761.395 - A validation study.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 31 2014-07-01 2014-07-01 false A validation study. 761.395 Section...)(4) § 761.395 A validation study. (a) Decontaminate the following prepared sample surfaces using the... must be 10 µg/100 cm2, then the validation study failed and the solvent may not be used for...

  6. 40 CFR 761.395 - A validation study.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 32 2012-07-01 2012-07-01 false A validation study. 761.395 Section...)(4) § 761.395 A validation study. (a) Decontaminate the following prepared sample surfaces using the... must be 10 µg/100 cm2, then the validation study failed and the solvent may not be used for...

  7. An exploratory sequential design to validate measures of moral emotions.

    PubMed

    Márquez, Margarita G; Delgado, Ana R

    2017-05-01

    This paper presents an exploratory and sequential mixed methods approach in validating measures of knowledge of the moral emotions of contempt, anger and disgust. The sample comprised 60 participants in the qualitative phase when a measurement instrument was designed. Item stems, response options and correction keys were planned following the results obtained in a descriptive phenomenological analysis of the interviews. In the quantitative phase, the scale was used with a sample of 102 Spanish participants, and the results were analysed with the Rasch model. In the qualitative phase, salient themes included reasons, objects and action tendencies. In the quantitative phase, good psychometric properties were obtained. The model fit was adequate. However, some changes had to be made to the scale in order to improve the proportion of variance explained. Substantive and methodological implications of this mixed-methods study are discussed. Had the study used a single research method in isolation, aspects of the global understanding of contempt, anger and disgust would have been lost.

  8. Designing and Validation a Visual Fatigue Questionnaire for Video Display Terminals Operators

    PubMed Central

    Rajabi-Vardanjani, Hassan; Habibi, Ehsanollah; Pourabdian, Siyamak; Dehghan, Habibollah; Maracy, Mohammad Reza

    2014-01-01

    Background: Along with the rapid growth of technology, related tools such as computers, monitors and video display terminals (VDTs) have grown as well. Previous studies indicate that visual fatigue is the most common complaint reported by VDT users. Methods: This study attempts to design a proper tool to assess the visual fatigue of VDT users. A first draft of the questionnaire was prepared after a thorough review of books, papers and similar questionnaires. The validity and reliability of the questionnaire were confirmed using the content validity index (CVI) alongside Cronbach's coefficient alpha. Then, a cross-sectional study was carried out on 248 VDT users in different professions. A theoretical model with four categories of visual fatigue symptoms was derived from previous studies and questionnaires. Using the AMOS 16 software, the construct validity of the questionnaire was evaluated by confirmatory factor analysis. The correlation coefficients of the internal domains were calculated using SPSS 11.5. To assess the quality-check index and to determine visual fatigue levels, visual fatigue of the VDT users was measured by the questionnaire and by a visual fatigue meter (VFM) device. Cut-off points were identified by receiver operating characteristic curves. Results: The CVI and the reliability coefficient were both 0.75. Model fit indices, including the root mean square error of approximation, goodness-of-fit index and adjusted goodness-of-fit index, were 0.026, 0.96 and 0.92 respectively. The correlation between the results measured with the questionnaire and the VFM-90.1 device was −0.87. Cut-off points of the questionnaire were 0.65, 2.36 and 3.88. The final questionnaire consists of four main areas: eye strain (4 questions), visual impairment (5 questions), ocular surface impairment (3 questions) and problems outside the eye (3 questions). Conclusions: The visual fatigue questionnaire contains 15 questions and
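
    Cut-off points such as those reported here are commonly chosen from a receiver operating characteristic (ROC) curve, for example by maximizing Youden's J (sensitivity + specificity - 1). The Python sketch below uses invented questionnaire scores and reference labels, not the study's data.

      import numpy as np
      from sklearn.metrics import roc_curve

      # Invented questionnaire scores and reference labels (1 = fatigued per the VFM device)
      scores = np.array([0.2, 0.5, 0.9, 1.4, 2.1, 2.5, 3.0, 3.6, 4.1, 4.8])
      labels = np.array([0, 0, 0, 0, 1, 0, 1, 1, 1, 1])

      fpr, tpr, thresholds = roc_curve(labels, scores)
      youden_j = tpr - fpr                  # sensitivity + specificity - 1 at each threshold
      best = np.argmax(youden_j)
      print(f"cut-off = {thresholds[best]:.2f}, "
            f"sensitivity = {tpr[best]:.2f}, specificity = {1 - fpr[best]:.2f}")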

  9. Design, Validation, and Use of an Evaluation Instrument for Monitoring Systemic Reform.

    ERIC Educational Resources Information Center

    Scantlebury, Kathryn; Boone, William; Kahle, Jane Butler; Fraser, Barry J.

    2001-01-01

    Describes the design, development, validation, and use of an instrument that measures student attitudes and several environmental dimensions (i.e., standards-based teaching, home support, and peer support). Indicates that the classroom environment (standards-based teaching practices) was the strongest independent predictor of both achievement and…

  10. MRPrimer: a MapReduce-based method for the thorough design of valid and ranked primers for PCR.

    PubMed

    Kim, Hyerin; Kang, NaNa; Chon, Kang-Wook; Kim, Seonho; Lee, NaHye; Koo, JaeHyung; Kim, Min-Soo

    2015-11-16

    Primer design is a fundamental technique that is widely used for polymerase chain reaction (PCR). Although many methods have been proposed for primer design, they require a great deal of manual effort to generate feasible and valid primers, including homology tests on off-target sequences using BLAST-like tools. That approach becomes impractical when primers must be designed for many quantitative PCR (qPCR) target sequences under the same stringent and allele-invariant constraints. To address this issue, we propose an entirely new method called MRPrimer that can design all feasible and valid primer pairs existing in a DNA database at once, while simultaneously checking a multitude of filtering constraints and validating primer specificity. Furthermore, MRPrimer suggests the best primer pair for each target sequence, based on a ranking method. Through qPCR analysis using 343 primer pairs and the corresponding sequencing and comparative analyses, we showed that the primer pairs designed by MRPrimer are very stable and effective for qPCR. In addition, MRPrimer is computationally efficient and scalable and therefore useful for quickly constructing an entire collection of feasible and valid primers for frequently updated databases like RefSeq. Furthermore, we suggest that MRPrimer can be utilized conveniently for experiments requiring primer design, especially real-time qPCR. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. Study design elements for rigorous quasi-experimental comparative effectiveness research.

    PubMed

    Maciejewski, Matthew L; Curtis, Lesley H; Dowd, Bryan

    2013-03-01

    Quasi-experiments are likely to be the workhorse study design used to generate evidence about the comparative effectiveness of alternative treatments, because of their feasibility, timeliness, affordability and external validity compared with randomized trials. In this review, we outline potential sources of discordance in results between quasi-experiments and experiments, review study design choices that can improve the internal validity of quasi-experiments, and outline innovative data linkage strategies that may be particularly useful in quasi-experimental comparative effectiveness research. There is an urgent need to resolve the debate about the evidentiary value of quasi-experiments since equal consideration of rigorous quasi-experiments will broaden the base of evidence that can be brought to bear in clinical decision-making and governmental policy-making.

  12. 40 CFR 761.392 - Preparing validation study samples.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 30 2010-07-01 2010-07-01 false Preparing validation study samples..., AND USE PROHIBITIONS Comparison Study for Validating a New Performance-Based Decontamination Solvent Under § 761.79(d)(4) § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to...

  13. 40 CFR 761.392 - Preparing validation study samples.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 31 2011-07-01 2011-07-01 false Preparing validation study samples..., AND USE PROHIBITIONS Comparison Study for Validating a New Performance-Based Decontamination Solvent Under § 761.79(d)(4) § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to...

  14. Open and Distance Education Accreditation Standards Scale: Validity and Reliability Studies

    ERIC Educational Resources Information Center

    Can, Ertug

    2016-01-01

    The purpose of this study is to develop, and test the validity and reliability of a scale for the use of researchers to determine the accreditation standards of open and distance education based on the views of administrators, teachers, staff and students. This research was designed according to the general descriptive survey model since it aims…

  15. Reliability, Validity, and Usability of Data Extraction Programs for Single-Case Research Designs.

    PubMed

    Moeyaert, Mariola; Maggin, Daniel; Verkuilen, Jay

    2016-11-01

    Single-case experimental designs (SCEDs) have been increasingly used in recent years to inform the development and validation of effective interventions in the behavioral sciences. An important aspect of this work has been the extension of meta-analytic and other statistical innovations to SCED data. Standard practice within SCED methods is to display data graphically, which requires subsequent users to extract the data, either manually or using data extraction programs. Previous research has examined the reliability and validity of data extraction programs, but typically at an aggregate level. Little is known, however, about the coding of individual data points. We focused on four different software programs that can be used for this purpose (i.e., Ungraph, DataThief, WebPlotDigitizer, and XYit), and examined the reliability of numeric coding, the validity compared with real data, and overall program usability. This study indicates that the reliability and validity of the retrieved data are independent of the specific software program, but are dependent on the individual single-case study graphs. Differences were found in program usability in terms of user friendliness, data retrieval time, and license costs. Ungraph and WebPlotDigitizer received the highest usability scores. DataThief was perceived as unacceptable and the time needed to retrieve the data was double that of the other three programs. WebPlotDigitizer was the only program free to use. As a consequence, WebPlotDigitizer turned out to be the best option in terms of usability, time to retrieve the data, and costs, although the usability scores of Ungraph were also strong. © The Author(s) 2016.

  16. 29 CFR 1607.5 - General standards for validity studies.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 29 Labor 4 2010-07-01 2010-07-01 false General standards for validity studies. 1607.5 Section 1607... studies. A. Acceptable types of validity studies. For the purposes of satisfying these guidelines, users may rely upon criterion-related validity studies, content validity studies or construct validity...

  17. [The design and content validity of an infection control health education program for adolescents with cancer].

    PubMed

    Wang, Huey-Yuh; Chen, Yueh-Chih; Lin, Dong-Tsamn; Gau, Bih-Shya

    2005-06-01

    The purpose of this article is to describe the process of designing an Infection Control Health Education Program (ICP) for adolescents with cancer, to describe the content of that program, and to evaluate its validity. The program consisted of an audiovisual "Infection Control Health Education Program in Video Compact Disc (VCD)" and a "Self-Care Daily Checklist (SCDC)". The VCD was developed from systematic literature reviews and consultations with experts in pediatric oncology care. It addresses the main issues of infection control among adolescents. The content of the SCDC was designed to enhance adolescents' self-care capabilities by means of twice-daily self-recording. The response format for content validity of the VCD and SCDC was a 5-point Likert scale. The mean score for content validity was 4.72 for the VCD and 4.82 for the SCDC. The percentage of expert agreement was 99% for the VCD and 98% for the SCDC. In summary, the VCD was effective in improving adolescents' capacity for self-care, and the SCDC, with its extensive reinforcement, was also shown to be useful. In a subsequent pilot study, the authors used this program to increase adolescent cancer patients' self-care knowledge and behavior and to decrease their levels of secondary infection.

  18. Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.

  19. Design and validation of a questionnaire to evaluate the usability of computerized critical care information systems.

    PubMed

    von Dincklage, Falk; Lichtner, Gregor; Suchodolski, Klaudiusz; Ragaller, Maximilian; Friesdorf, Wolfgang; Podtschaske, Beatrice

    2017-08-01

    The implementation of computerized critical care information systems (CCIS) can improve the quality of clinical care and staff satisfaction, but also holds risks of disrupting the workflow with consequent negative impacts. The usability of CCIS is one of the key factors determining their benefits and weaknesses. However, no tailored instrument exists to measure the usability of such systems. Therefore, the aim of this study was to design and validate a questionnaire that measures the usability of CCIS. Following a mixed-method design approach, we developed a questionnaire comprising two evaluation models to assess the usability of CCIS: (1) the task-specific model rates the usability individually for several tasks which CCIS could support and which we derived by analyzing work processes in the ICU; (2) the characteristic-specific model rates the different aspects of the usability, as defined by the international standard "ergonomics of human-system interaction". We tested validity and reliability of the digital version of the questionnaire in a sample population. In the sample population of 535 participants, both usability evaluation models showed a strong correlation with the overall rating of the system (multiple correlation coefficients ≥0.80) as well as a very high internal consistency (Cronbach's alpha ≥0.93). The novel questionnaire is a valid and reliable instrument to measure the usability of CCIS and can be used to study the influence of the usability on their implementation benefits and weaknesses.

  20. Block 2 Solid Rocket Motor (SRM) conceptual design study. Volume 1: Appendices

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The design studies task implements the primary objective of developing a Block II Solid Rocket Motor (SRM) design offering improved flight safety and reliability. The SRM literature was reviewed. The Preliminary Development and Validation Plan is presented.

  1. Design and validation of the eyesafe ladar testbed (ELT) using the LadarSIM system simulator

    NASA Astrophysics Data System (ADS)

    Neilsen, Kevin D.; Budge, Scott E.; Pack, Robert T.; Fullmer, R. Rees; Cook, T. Dean

    2009-05-01

    The development of an experimental full-waveform LADAR system has been enhanced with the assistance of the LadarSIM system simulation software. The Eyesafe LADAR Test-bed (ELT) was designed as a raster scanning, single-beam, energy-detection LADAR with the capability of digitizing and recording the return pulse waveform at up to 2 GHz for 3D off-line image formation research in the laboratory. To assist in the design phase, the full-waveform LADAR simulation in LadarSIM was used to simulate the expected return waveforms for various system design parameters, target characteristics, and target ranges. Once the design was finalized and the ELT constructed, the measured specifications of the system and experimental data captured from the operational sensor were used to validate the behavior of the system as predicted during the design phase. This paper presents the methodology used, and lessons learned from this "design, build, validate" process. Simulated results from the design phase are presented, and these are compared to simulated results using measured system parameters and operational sensor data. The advantages of this simulation-based process are also presented.

  2. Validation study of the Japanese version of the Obsessive-Compulsive Drinking Scale.

    PubMed

    Tatsuzawa, Yasutaka; Yoshimasu, Haruo; Moriyama, Yasushi; Furusawa, Teruyuki; Yoshino, Aihide

    2002-02-01

    The Obsessive-Compulsive Drinking Scale (OCDS) is a self-rating questionnaire that measures cognitive and behavioral aspects of craving for alcohol. The OCDS consists of two subscales: the obsessive thoughts of drinking subscale (OS) and the compulsive drinking subscale (CS). This study aims to validate the Japanese version of the OCDS. First, internal consistency and discriminant validity were evaluated. Second, a prospective longitudinal 3-month outcome study of 67 patients with alcohol dependence who participated in a relapse prevention program was designed to assess the concurrent and predictive validity of the OCDS. The OCDS demonstrated high internal consistency. The OS had high discriminant validity, while the CS did not. Twenty-three patients (34.3%) dropped out of treatment. These patients had significantly higher OS scores than those who completed the program. At 3 months, the relapse group had significantly higher OCDS scores than the no relapse group. Also, the OCDS score was higher in subjects who had early-onset alcohol dependence than late-onset dependence. The OCDS is useful for evaluating cognitive aspect of craving and predicts dropout and relapse.

  3. Experimental validation of a new heterogeneous mechanical test design

    NASA Astrophysics Data System (ADS)

    Aquino, J.; Campos, A. Andrade; Souto, N.; Thuillier, S.

    2018-05-01

    Standard material parameter identification strategies generally use an extensive number of classical tests to collect the required experimental data. However, a great effort has been made recently by the scientific and industrial communities to base this experimental database on heterogeneous tests. These tests can provide richer information on the material behavior, allowing the identification of a more complete set of material parameters. This is a result of the recent development of full-field measurement techniques, like digital image correlation (DIC), that can capture the heterogeneous deformation fields on the specimen surface during the test. Recently, new specimen geometries were designed to enhance the richness of the strain field and capture supplementary strain states. The butterfly specimen is an example of these new geometries, designed through a numerical optimization procedure driven by an indicator that evaluates the heterogeneity and richness of the strain information. However, no experimental validation had yet been performed. The aim of this work is to experimentally validate the heterogeneous butterfly mechanical test in the parameter identification framework. To this end, the DIC technique and a Finite Element Model Updating inverse strategy are used together for the parameter identification of a DC04 steel, as well as for the calculation of the indicator. The experimental tests are carried out in a universal testing machine with the ARAMIS measuring system to provide the strain states on the specimen surface. The identification strategy is applied to the data obtained from the experimental tests, and the results are compared to a reference numerical solution.
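
    Finite Element Model Updating identifies material parameters by minimizing the gap between measured (DIC) fields and simulated fields. The Python sketch below illustrates that inverse loop with a scalar Hollomon hardening law standing in for a real finite-element simulation; the law, the noise level and the parameter values are assumptions for illustration only.

      import numpy as np
      from scipy.optimize import least_squares

      def model(params, strains):
          # Toy forward "model": Hollomon hardening, sigma = K * eps**n.
          # A real FEMU loop would run a finite-element simulation here instead.
          K, n = params
          return K * strains ** n

      strains = np.linspace(0.01, 0.2, 20)
      true_params = (500.0, 0.2)                    # the "unknown" material parameters
      measured = model(true_params, strains) + np.random.default_rng(1).normal(0, 2, 20)

      def residuals(params):
          return model(params, strains) - measured  # simulated minus "measured" response

      fit = least_squares(residuals, x0=[300.0, 0.1], bounds=([0.0, 0.0], [np.inf, 1.0]))
      print("identified K and n:", fit.x.round(3))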

  4. School Climate of Educational Institutions: Design and Validation of a Diagnostic Scale

    ERIC Educational Resources Information Center

    Becerra, Sandra

    2016-01-01

    School climate is recognized as a relevant factor for the improvement of educative processes, favoring the administrative processes and optimum school performance. The present article is the result of a quantitative research model which had the objective of psychometrically designing and validating a scale to diagnose the organizational climate of…

  5. Construction and Validation of a Questionnaire to Study Future Teachers' Beliefs about Cultural Diversity

    ERIC Educational Resources Information Center

    López López, M. Carmen; Hinojosa Pareja, Eva F.

    2016-01-01

    The article presents the construction and validation process of a questionnaire designed to study student teachers' beliefs about cultural diversity. The study, beyond highlighting the complexity involved in the study of beliefs, emphasises their relevance in implementing inclusive educational processes that guarantee the right to a good education…

  6. Prospective study of one million deaths in India: rationale, design, and validation results.

    PubMed

    Jha, Prabhat; Gajalakshmi, Vendhan; Gupta, Prakash C; Kumar, Rajesh; Mony, Prem; Dhingra, Neeraj; Peto, Richard

    2006-02-01

    Over 75% of the annual estimated 9.5 million deaths in India occur in the home, and the large majority of these do not have a certified cause. India and other developing countries urgently need reliable quantification of the causes of death. They also need better epidemiological evidence about the relevance of physical (such as blood pressure and obesity), behavioral (such as smoking, alcohol, HIV-1 risk taking, and immunization history), and biological (such as blood lipids and gene polymorphisms) measurements to the development of disease in individuals or disease rates in populations. We report here on the rationale, design, and implementation of the world's largest prospective study of the causes and correlates of mortality. We will monitor nearly 14 million people in 2.4 million nationally representative Indian households (6.3 million people in 1.1 million households in the 1998-2003 sample frame and 7.6 million people in 1.3 million households in the 2004-2014 sample frame) for vital status and, if dead, the causes of death through a well-validated verbal autopsy (VA) instrument. About 300,000 deaths from 1998-2003 and some 700,000 deaths from 2004-2014 are expected; of these about 850,000 will be coded by two physicians to provide causes of death by gender, age, socioeconomic status, and geographical region. Pilot studies will evaluate the addition of physical and biological measurements, specifically dried blood spots. Preliminary results from over 35,000 deaths suggest that VA can ascertain the leading causes of death, reduce the misclassification of causes, and derive the probable underlying cause of death when it has not been reported. VA yields broad classification of the underlying causes in about 90% of deaths before age 70. In old age, however, the proportion of classifiable deaths is lower. By tracking underlying demographic denominators, the study permits quantification of absolute mortality rates. Household case-control, proportional mortality

  7. Performing a Content Validation Study.

    ERIC Educational Resources Information Center

    Spool, Mark D.

    Content validity is concerned with three components: (1) the job content; (2) the test content; and (3) the strength of the relationship between the two. A content validation study, to be considered adequate and defensible, should include at least the following four procedures: (1) A thorough and accurate job analysis (to define the job content);…

  8. Validation sampling can reduce bias in healthcare database studies: an illustration using influenza vaccination effectiveness

    PubMed Central

    Nelson, Jennifer C.; Marsh, Tracey; Lumley, Thomas; Larson, Eric B.; Jackson, Lisa A.; Jackson, Michael

    2014-01-01

    Objective: Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased due to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. Study Design and Setting: We applied two such methods, imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method's ability to reduce bias using the control time period prior to influenza circulation. Results: Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not utilize the validation sample confounders. Conclusion: Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from healthcare database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which data can be imputed or reweighted using the additional validation sample information. PMID:23849144
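
    The reweighting idea is to model selection into the validation cohort and weight its members by the inverse of that selection probability, so the reweighted cohort resembles the full sample. The Python sketch below uses a single invented confounder (age) and simulated selection; the variable names and numbers are assumptions, not the Group Health data.

      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(42)
      age = rng.normal(72, 8, 5000)                  # confounder known for the full sample
      p_in = 1 / (1 + np.exp(-(-4 + 0.05 * age)))    # older people more likely to be sampled
      in_validation = rng.binomial(1, p_in)          # membership in the validation cohort

      # Model selection into the validation cohort, then weight its members by the
      # inverse of the estimated selection probability.
      sel = LogisticRegression().fit(age.reshape(-1, 1), in_validation)
      p_hat = sel.predict_proba(age.reshape(-1, 1))[:, 1]
      weights = 1.0 / p_hat[in_validation == 1]

      print("full-sample mean age:          ", round(age.mean(), 2))
      print("validation mean age (raw):     ", round(age[in_validation == 1].mean(), 2))
      print("validation mean age (weighted):", round(np.average(age[in_validation == 1], weights=weights), 2))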

  9. Design validation and performance of closed loop gas recirculation system

    NASA Astrophysics Data System (ADS)

    Kalmani, S. D.; Joshi, A. V.; Majumder, G.; Mondal, N. K.; Shinde, R. R.

    2016-11-01

    A pilot experimental set up of the India Based Neutrino Observatory's ICAL detector has been operational for the last 4 years at TIFR, Mumbai. Twelve glass RPC detectors of size 2 × 2 m², with a gas gap of 2 mm are under test in a closed loop gas recirculation system. These RPCs are continuously purged individually, with a gas mixture of R134a (C2H2F4), isobutane (iC4H10) and sulphur hexafluoride (SF6) at a steady rate of 360 ml/h to maintain about one volume change a day. To economize gas mixture consumption and to reduce the effluents from being released into the atmosphere, a closed loop system has been designed, fabricated and installed at TIFR. The pressure and flow rate in the loop is controlled by mass flow controllers and pressure transmitters. The performance and integrity of RPCs in the pilot experimental set up is being monitored to assess the effect of periodic fluctuation and transients in atmospheric pressure and temperature, room pressure variation, flow pulsations, uniformity of gas distribution and power failures. The capability of closed loop gas recirculation system to respond to these changes is also studied. The conclusions from the above experiment are presented. The validations of the first design considerations and subsequent modifications have provided improved guidelines for the future design of the engineering module gas system.

  10. An empirical study of software design practices

    NASA Technical Reports Server (NTRS)

    Card, David N.; Church, Victor E.; Agresti, William W.

    1986-01-01

    Software engineers have developed a large body of software design theory and folklore, much of which was never validated. The results of an empirical study of software design practices in one specific environment are presented. The practices examined affect module size, module strength, data coupling, descendant span, unreferenced variables, and software reuse. Measures characteristic of these practices were extracted from 887 FORTRAN modules developed for five flight dynamics software projects monitored by the Software Engineering Laboratory (SEL). The relationship of these measures to cost and fault rate was analyzed using a contingency table procedure. The results show that some recommended design practices, despite their intuitive appeal, are ineffective in this environment, whereas others are very effective.
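
    The contingency table procedure mentioned here cross-tabulates a design practice against an outcome and tests for association, for example with a chi-squared test. The Python sketch below uses invented counts, not the SEL data.

      from scipy.stats import chi2_contingency

      # Invented counts: rows = module size (small, large), columns = fault rate (low, high)
      table = [[120, 40],
               [60, 80]]

      chi2, p_value, dof, expected = chi2_contingency(table)
      print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p_value:.4f}")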

  11. Assessing the Generalizable Skills of Post-Secondary Vocational Students. A Validation Study.

    ERIC Educational Resources Information Center

    Greenan, James P.; Smith, Brandon B.

    A study examined the feasibility, reliability, and validity of two instruments designed to assess the degree to which postsecondary vocational students possessed those generalizable skills that are believed to be functionally relevant to success in a vocational program. The instruments, a student self-rating and a teacher rating form, contained 81…

  12. Quasi-experimental study designs series-paper 6: risk of bias assessment.

    PubMed

    Waddington, Hugh; Aloe, Ariel M; Becker, Betsy Jane; Djimeu, Eric W; Hombrados, Jorge Garcia; Tugwell, Peter; Wells, George; Reeves, Barney

    2017-09-01

    Rigorous and transparent bias assessment is a core component of high-quality systematic reviews. We assess modifications to existing risk of bias approaches to incorporate rigorous quasi-experimental approaches with selection on unobservables. These are nonrandomized studies using design-based approaches to control for unobservable sources of confounding such as difference studies, instrumental variables, interrupted time series, natural experiments, and regression-discontinuity designs. We review existing risk of bias tools. Drawing on these tools, we present domains of bias and suggest directions for evaluation questions. The review suggests that existing risk of bias tools provide, to different degrees, incomplete transparent criteria to assess the validity of these designs. The paper then presents an approach to evaluating the internal validity of quasi-experiments with selection on unobservables. We conclude that tools for nonrandomized studies of interventions need to be further developed to incorporate evaluation questions for quasi-experiments with selection on unobservables. Copyright © 2017 Elsevier Inc. All rights reserved.

  13. Validation study and routine control monitoring of moist heat sterilization procedures.

    PubMed

    Shintani, Hideharu

    2012-06-01

    The proposed approach to validation of steam sterilization in autoclaves follows the basic life cycle concepts applicable to all validation programs: understand the function of the sterilization process, develop and understand the cycles that carry out the process, and define a suitable test or series of tests to confirm that the function of the process is suitably ensured by the structure provided. Sterilization of product and of components and parts that come in direct contact with sterilized product is the most critical of pharmaceutical processes. Consequently, this process requires a most rigorous and detailed approach to validation. An understanding of the process requires a basic understanding of microbial death, the parameters that facilitate that death, the accepted definition of sterility, and the relationship between the definition and sterilization parameters. Autoclaves and support systems need to be designed, installed, and qualified in a manner that ensures their continued reliability. Lastly, the test program must be complete and definitive. In this paper, in addition to the validation study, the documentation of installation qualification (IQ), operational qualification (OQ), and performance qualification (PQ) is described in concrete terms.
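
    Moist heat cycles are usually judged by accumulated lethality F0, which converts time at any temperature into equivalent minutes at the 121.1 °C reference using a z-value of 10 °C. The Python sketch below applies that standard formula to an invented temperature profile.

      # Accumulated lethality F0 in equivalent minutes at 121.1 degC, with z = 10 degC.
      T_REF = 121.1
      Z_VALUE = 10.0

      def f0(temperatures_c, interval_min=1.0):
          """Sum lethal rates over a temperature profile sampled every interval_min minutes."""
          return sum(interval_min * 10 ** ((t - T_REF) / Z_VALUE) for t in temperatures_c)

      # Invented profile: ramp up, hold near 121 degC, cool down (one reading per minute)
      profile = [100, 110, 118, 121, 121.5, 121.5, 121.5, 121, 115, 105]
      print(f"F0 = {f0(profile):.1f} min")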

  14. Validation study of an electronic method of condensed outcomes tools reporting in orthopaedics.

    PubMed

    Farr, Jack; Verma, Nikhil; Cole, Brian J

    2013-12-01

    Patient-reported outcomes (PRO) instruments are a vital source of data for evaluating the efficacy of medical treatments. Historically, outcomes instruments have been designed, validated, and implemented as paper-based questionnaires. The collection of paper-based outcomes information may result in patients becoming fatigued as they respond to redundant questions. This problem is exacerbated when multiple PRO measures are provided to a single patient. In addition, the management and analysis of data collected in paper format involves labor-intensive processes to score and render the data analyzable. Computer-based outcomes systems have the potential to mitigate these problems by reformatting multiple outcomes tools into a single, user-friendly tool. The study aimed to determine whether the electronic outcomes system presented produces results comparable with the test-retest correlations reported for the corresponding orthopedic paper-based outcomes instruments. The study is designed as a crossover study based on consecutive orthopaedic patients arriving at one of two designated orthopedic knee clinics. Patients were assigned to complete either a paper or a computer-administered questionnaire based on a similar set of questions (Knee injury and Osteoarthritis Outcome Score, International Knee Documentation Committee form, 36-Item Short Form survey, version 1, Lysholm Knee Scoring Scale). Each patient completed the same surveys using the other instrument, so that all patients had completed both paper and electronic versions. Correlations between the results from the two modes were studied and compared with test-retest data from the original validation studies. The original validation studies established test-retest reliability by computing correlation coefficients for two administrations of the paper instrument. Those correlation coefficients were all in the range of 0.7 to 0.9, which was deemed satisfactory. The present study computed correlation coefficients between

  15. [Design and validation of an instrument to assess families at risk for health problems].

    PubMed

    Puschel, Klaus; Repetto, Paula; Solar, María Olga; Soto, Gabriela; González, Karla

    2012-04-01

    There is a paucity of screening instruments with a high clinical predictive value to identify families at risk and therefore develop focused interventions in primary care. To develop an easy-to-apply screening instrument with a high clinical predictive value to identify families with a higher health vulnerability. In the first stage of the study, an instrument with high content validity was designed through a review of existing instruments, qualitative interviews with families, and expert opinions following a three-round Delphi approach. In the second stage, concurrent validity was tested through a comparative analysis between the pilot instrument and a family clinical interview conducted with 300 families randomly selected from a population registered at a primary care clinic in Santiago. The sampling was blocked based on the presence of diabetes, depression, child asthma, behavioral disorders, presence of an older person, or the lack of previous conditions among family members. The third stage was directed at testing the clinical predictive validity of the instrument by comparing the baseline vulnerability obtained with the instrument and the change in clinical status and health-related quality of life perceptions of the family members after nine months of follow-up. The final SALUFAM instrument included 13 items and had a high internal consistency (Cronbach's alpha: 0.821), high test-retest reproducibility (Pearson correlation: 0.84) and a high clinical predictive value for clinical deterioration (odds ratio: 1.826; 95% confidence interval: 1.101-3.029). The SALUFAM instrument is applicable and replicable, and has high content validity, concurrent validity and clinical predictive value.

  16. The HealthNuts population-based study of paediatric food allergy: validity, safety and acceptability.

    PubMed

    Osborne, N J; Koplin, J J; Martin, P E; Gurrin, L C; Thiele, L; Tang, M L; Ponsonby, A-L; Dharmage, S C; Allen, K J

    2010-10-01

    The incidence of hospital admissions for food allergy-related anaphylaxis in Australia has increased, in line with world-wide trends. However, a valid measure of food allergy prevalence and risk factor data from a population-based study is still lacking. To describe the study design and methods used to recruit infants from a population for skin prick testing and oral food challenges, and the use of preliminary data to investigate the extent to which the study sample is representative of the target population. The study sampling frame design comprises 12-month-old infants presenting for routine scheduled vaccination at immunization clinics in Melbourne, Australia. We compared demographic features of participating families to population summary statistics from the Victorian Perinatal census database, and administered a survey to those non-responders who chose not to participate in the study. The study design proved acceptable to the community with good uptake (response rate 73.4%), with 2171 participants recruited. Demographic information on the study population mirrored the Victorian population, with most of the population parameters measured falling within our confidence intervals (CIs). Use of a non-responder questionnaire revealed that a higher proportion of infants who declined to participate (non-responders) were already eating and tolerating peanuts than those agreeing to participate (54.4%; 95% CI 50.8, 58.0 vs. 27.4%; 95% CI 25.5, 29.3 among participants). A high proportion of individuals approached in a community setting participated in a food allergy study. The study population differed from the eligible sample in relation to family history of allergy and prior consumption and peanut tolerance, providing some insights into the internal validity of the sample. The study exhibited external validity on general demographics to all births in Victoria. © 2010 Blackwell Publishing Ltd.

  17. The Study Designed by a Committee

    PubMed Central

    Henry, David B.; Farrell, Albert D.

    2009-01-01

    This article describes the research design of the Multisite Violence Prevention Project (MVPP), organized and funded by the National Center for Injury Prevention and Control (NCIPC) at the Centers for Disease Control and Prevention (CDC). CDC's objectives, refined in the course of collaboration among investigators, were to evaluate the efficacy of universal and targeted interventions designed to produce change at the school level. The project's design was developed collaboratively, and is a 2 × 2 cluster-randomized true experimental design in which schools within four separate sites were assigned randomly to four conditions: (1) no-intervention control group, (2) universal intervention, (3) targeted intervention, and (4) combined universal and targeted interventions. A total of 37 schools are participating in this study with 8–12 schools per site. The impact of the interventions on two successive cohorts of sixth-grade students will be assessed based on multiple waves of data from multiple sources of information, including teachers, students, parents, and archival data. The nesting of students within teachers, families, schools and sites created a number of challenges for designing and implementing the study. The final design represents both resolution and compromise on a number of creative tensions existing in large-scale prevention trials, including tensions between cost and statistical power, and between internal and external validity. Strengths and limitations of the final design are discussed. PMID:14732183

  18. Design and validation of a method for evaluation of interocular interaction.

    PubMed

    Lai, Xin Jie Angela; Alexander, Jack; Ho, Arthur; Yang, Zhikuan; He, Mingguang; Suttle, Catherine

    2012-02-01

    To design a simple viewing system allowing dichoptic masking, and to validate this system in adults and children with normal vision. A Trial Frame Apparatus (TFA) was designed to evaluate interocular interaction. This device consists of a trial frame, a 1 mm pinhole in front of the tested eye and a full or partial occluder in front of the non-tested eye. The difference in visual function in one eye between the full- and partial-occlusion conditions was termed the Interaction Index. In experiment 1, low-contrast acuity was measured in six adults using five types of partial occluder. Interaction Index was compared between these five, and the occluder showing the highest Index was used in experiment 2. In experiment 2, low-contrast acuity, contrast sensitivity, and alignment sensitivity were measured in the non-dominant eye of 45 subjects (15 older adults, 15 young adults, and 15 children), using the TFA and an existing well-validated device (shutter goggles) with full and partial occlusion of the dominant eye. These measurements were repeated on 11 subjects of each group using TFA in the partial-occlusion condition only. Repeatability of visual function measurements using TFA was assessed using the Bland-Altman method and agreement between TFA and goggles in terms of visual functions and interactions was assessed using the Bland-Altman method and t-test. In all three subject groups, the TFA showed a high level of repeatability in all visual function measurements. Contrast sensitivity was significantly poorer when measured using TFA than using goggles (p < 0.05). However, Interaction Index of all three visual functions showed acceptable agreement between TFA and goggles (p > 0.05). The TFA may provide an acceptable method for the study of some forms of dichoptic masking in populations where more complex devices (e.g., shutter goggles) cannot be used.
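
    Bland-Altman agreement is summarized by the mean difference (bias) and the limits of agreement (bias ± 1.96 standard deviations of the differences). The Python sketch below uses invented paired acuity measurements from the two devices, not the study's data.

      import numpy as np

      # Invented paired measurements (e.g., logMAR acuity) from the TFA and the shutter goggles
      tfa     = np.array([0.10, 0.22, 0.05, 0.30, 0.18, 0.12, 0.25, 0.08])
      goggles = np.array([0.12, 0.20, 0.08, 0.28, 0.20, 0.10, 0.27, 0.06])

      diff = tfa - goggles
      bias = diff.mean()
      loa = 1.96 * diff.std(ddof=1)
      print(f"bias = {bias:+.3f}, limits of agreement = [{bias - loa:.3f}, {bias + loa:.3f}]")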

  19. Standing wave design and experimental validation of a tandem simulated moving bed process for insulin purification.

    PubMed

    Xie, Yi; Mun, Sungyong; Kim, Jinhyun; Wang, Nien-Hwa Linda

    2002-01-01

    A tandem simulated moving bed (SMB) process for insulin purification has been proposed and validated experimentally. The mixture to be separated consists of insulin, high molecular weight proteins, and zinc chloride. A systematic approach based on the standing wave design, rate model simulations, and experiments was used to develop this multicomponent separation process. The standing wave design was applied to specify the SMB operating conditions of a lab-scale unit with 10 columns. The design was validated with rate model simulations prior to experiments. The experimental results show 99.9% purity and 99% yield, which closely agree with the model predictions and the standing wave design targets. The agreement proves that the standing wave design can ensure high purity and high yield for the tandem SMB process. Compared to a conventional batch SEC process, the tandem SMB has 10% higher yield, 400% higher throughput, and 72% lower eluant consumption. In contrast, a design that ignores the effects of mass transfer and nonideal flow cannot meet the purity requirement and gives less than 96% yield.

  20. Design and Validation of a Rubric to Assess the Use of American Psychological Association Style in Scientific Articles

    ERIC Educational Resources Information Center

    Merma Molina, Gladys; Peña Alfaro, Hilda; Peña Alfaro González, Silvia Rosa

    2017-01-01

    In this study, the researchers will explore the process of designing and validating a rubric to evaluate the adaptation of scientific articles in the format of the "American Psychological Association" (APA). The rubric will evaluate certain aspects of the APA format that allow authors, editors, and evaluators to decide if the scientific…

  1. Reducing Threats to Validity by Design in a Nonrandomized Experiment of a School-Wide Prevention Model

    ERIC Educational Resources Information Center

    Sørlie, Mari-Anne; Ogden, Terje

    2014-01-01

    This paper reviews literature on the rationale, challenges, and recommendations for choosing a nonequivalent comparison (NEC) group design when evaluating intervention effects. After reviewing frequently addressed threats to validity, the paper describes recommendations for strengthening the research design and how the recommendations were…

  2. IR-drop analysis for validating power grids and standard cell architectures in sub-10nm node designs

    NASA Astrophysics Data System (ADS)

    Ban, Yongchan; Wang, Chenchen; Zeng, Jia; Kye, Jongwook

    2017-03-01

    Since chip performance and power are highly dependent on the operating voltage, a robust power distribution network (PDN) is of utmost importance for providing a reliable supply voltage without excessive voltage (IR) drop. However, the rapid increase of parasitic resistance and capacitance (RC) in interconnects makes IR-drop much worse with technology scaling. This paper presents various IR-drop analyses in sub-10nm designs. The major objectives are to validate standard cell architectures, where different sizes of power/ground and metal tracks are evaluated, and to validate the PDN architecture, where types of power hook-up approaches are evaluated with IR-drop calculation. To estimate IR-drop in 10nm and below technologies, we first prepare physically routed designs given standard cell libraries, where we use open RISC RTL, synthesize the CPU, and apply placement & routing with process design kits (PDKs). Then, static and dynamic IR-drop flows are set up with commercial tools. Using the IR-drop flow, we compare standard cell architectures and analyze their impacts on performance, power, and area (PPA) relative to previous technology-node designs. With this IR-drop flow, we can determine the best PDN structure against IR-drop, as well as the best type of standard cell library.
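
    Static IR-drop analysis is Ohm's law applied to the power grid: build a conductance matrix for the rail resistances, inject the cell currents, and solve G·v = i for the node voltages. The Python sketch below does this for a tiny hypothetical one-dimensional rail with made-up resistance and current values, not a real sub-10nm PDN.

      import numpy as np

      # Tiny 1-D power rail: node 0 is the supply pad at VDD, nodes 1..4 each draw current.
      R_SEG = 0.5       # ohms per rail segment (hypothetical)
      I_CELL = 0.01     # amps drawn at each internal node (hypothetical)
      VDD = 1.0
      n = 5

      g = 1.0 / R_SEG
      G = np.zeros((n, n))
      for a in range(n - 1):            # stamp the conductance of each rail segment
          b = a + 1
          G[a, a] += g
          G[b, b] += g
          G[a, b] -= g
          G[b, a] -= g

      i = np.full(n, -I_CELL)           # current drawn (flowing out) at every node
      G[0, :] = 0.0                     # fix node 0 to VDD (Dirichlet condition)
      G[0, 0] = 1.0
      i[0] = VDD

      v = np.linalg.solve(G, i)
      print("node voltages:", v.round(4))
      print("worst-case IR drop:", round(VDD - v.min(), 4), "V")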

  3. A Design of a Novel Airborne Aerosol Spectrometer for Remote Sensing Validation

    NASA Astrophysics Data System (ADS)

    Adler, G. A.; Brock, C. A.; Dube, W. P.; Erdesz, F.; Gordon, T.; Law, D. C.; Manfred, K.; Mason, B. J.; McLaughlin, R. J.; Richardson, M.; Wagner, N. L.; Washenfelder, R. A.; Murphy, D. M.

    2016-12-01

    Aerosols and their effect on the radiative properties of clouds contribute one of the largest sources of uncertainty to the Earth's energy budget. Many current global assessments of atmospheric aerosol radiative forcing rely heavily on remote sensing observations; therefore, in situ aircraft and ground-based measurements are essential for validation of remote sensing measurements. Cavity ringdown spectrometers (CRD) measure aerosol extinction and are commonly used to validate remote sensing observations. These instruments have been deployed on aircraft-based platforms over the years, providing the opportunity to measure these properties over large areas in various conditions. However, deployment of the CRD on an aircraft platform has drawbacks. Typically, aircraft-based CRDs draw sampled aerosol into a cabin-based instrument through long lengths of tubing. This limits the ability of the instrument to measure 1) coarse-mode aerosols (e.g. dust) and 2) aerosols at high relative humidity (above 90%). Here we describe the design of a novel aircraft-based open-path CRD. The open-path CRD is intended to be mounted external to the cabin and has no sample tubing for aerosol delivery, thus measuring the optical properties of all aerosol at ambient conditions. However, the design of an open-path CRD for operation on a wing-mounted aircraft platform has certain design complexities. The instrument's special design features include 2 CRD channels, 2 airfoils around the open-path CRD, and a configuration that can be easily aligned while remaining rigid. This novel implementation of cavity ringdown spectroscopy will provide a better assessment of the accuracy of remote sensing satellite measurements.

  4. [Design and validation of a questionnaire to assess the level of general knowledge on eating disorders in students of Health Sciences].

    PubMed

    Sánchez Socarrás, Violeida; Aguilar Martínez, Alicia; Vaqué Crusellas, Cristina; Milá Villarroel, Raimon; González Rivas, Fabián

    2016-01-01

    To design and validate a questionnaire to assess the level of knowledge regarding eating disorders in college students. Observational, prospective, and longitudinal study, with the design of the questionnaire based on a conceptual review and validated by a cognitive pre-test and a pilot test-retest, with analysis of the psychometric properties in each application. University Foundation of Bages, Barcelona. Community care setting. A total of 140 students from Health Sciences; 53 women and 87 men with a mean age of 21.87 years; 28 participated in the pre-test and 112 in the test-retest, and 110 students completed the study. Validity and stability were studied using Cronbach α and the Pearson product-moment correlation coefficient; the relationship of knowledge with sex and type of study was examined with the non-parametric Mann-Whitney and Kruskal-Wallis tests; for demographic variables, absolute and percentage frequencies were calculated, as well as the mean as a measure of central tendency and the standard deviation as a measure of dispersion. Statistical significance was set at the 95% confidence level. The final questionnaire had 10 questions divided into four dimensions (classification, demographic characteristics of patients, risk factors, and clinical manifestations of eating disorders). The scale showed good internal consistency in its final version (Cronbach α=0.724) and adequate stability (Pearson correlation 0.749). The designed tool can be accurately used to assess Health Sciences students' knowledge of eating disorders. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.
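    The internal consistency reported above is typically computed as Cronbach's alpha over the item-score matrix. A minimal sketch of the standard formula follows; the simulated 0/1 item scores are made up for illustration and do not reproduce the study's data.

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)        # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)    # variance of total score
        return k / (k - 1) * (1.0 - item_vars.sum() / total_var)

    # Illustrative data only: 110 respondents x 10 dichotomous items.
    rng = np.random.default_rng(0)
    ability = rng.normal(size=(110, 1))
    scores = (ability + rng.normal(size=(110, 10)) > 0).astype(float)
    print(f"Cronbach's alpha = {cronbach_alpha(scores):.3f}")
    ```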

  5. Design, objectives, execution and reporting of published open-label extension studies.

    PubMed

    Megan, Bowers; Pickering, Ruth M; Weatherall, Mark

    2012-04-01

    Open-label extension (OLE) studies following blinded randomized controlled trials (RCTs) of pharmaceuticals are increasingly being carried out but do not conform to regulatory standards, and questions surround the validity of their evidence. OLE studies are usually discussed as a homogenous group, yet studies with substantial differences in design still meet the definition of an OLE. We describe published papers reporting OLE studies, focussing on stated objectives, design, conduct and reporting. A search of Embase and Medline databases for 1996 to July 2008 revealed 268 papers reporting OLE studies that met our eligibility criteria. A random sample of 50 was selected for detailed review. Over 80% of the studies had efficacy stated as an objective. The most common methods of allocation at the start of the OLE were for all RCT participants to switch to one active treatment or for only participants on the new drug to continue, but in three studies all participants were re-randomized at the start of the OLE. Eligibility criteria and other selection factors resulted in an average of 74% of participants in the preceding RCT(s) enrolling in the OLE, and only 57% completed it. Published OLE studies do not form a homogenous group with respect to design or retention of participants, and thus the validity of evidence from an OLE should be judged on an individual basis. The term 'open label' suggests bias through lack of blinding, but slippage in relation to the sample randomized in the preceding RCT may be the more important threat to validity. © 2010 Blackwell Publishing Ltd.

  6. The perils of ignoring design effects in experimental studies: lessons from a mammography screening trial.

    PubMed

    Glenn, Beth A; Bastani, Roshan; Maxwell, Annette E

    2013-01-01

    Threats to external validity, including pretest sensitisation and the interaction of selection and an intervention, are frequently overlooked by researchers despite their potential to significantly influence study outcomes. The purpose of this investigation was to conduct secondary data analyses to assess the presence of external validity threats in the setting of a randomised trial designed to promote mammography use in a high-risk sample of women. During the trial, recruitment and intervention implementation took place in three cohorts (with different ethnic composition), utilising two different designs (pretest-posttest control group design and posttest only control group design). Results reveal that the intervention produced different outcomes across cohorts, dependent upon the research design used and the characteristics of the sample. These results illustrate the importance of weighing the pros and cons of potential research designs before making a selection and attending more closely to issues of external validity.

  7. Design and validation of a questionnaire for measuring perceived risk of skin cancer.

    PubMed

    Morales-Sánchez, M A; Peralta-Pedrero, M L; Domínguez-Gómez, M A

    2014-04-01

    A perceived risk of cancer encourages preventive behavior while the lack of such a perception is a barrier to risk reduction. There are no instruments in Spanish to measure this perceived risk and thus quantify response to interventions for preventing this disease at a population level. The aim of this study was to design and validate a self-administered questionnaire for measuring the perceived risk of skin cancer. A self-administered questionnaire with a visual Likert-type scale was designed based on the results of the analysis of the content of a survey performed in 100 patients in the Dr. Ladislao de la Pascua Skin Clinic, Distrito Federal México, Mexico. Subsequently, the questionnaire was administered to a sample of 359 adult patients who attended the clinic for the first time. As no gold standard exists for measuring the perceived risk of skin cancer, the construct was validated through factor analysis. The final questionnaire had 18 items. The internal consistency measured with Cronbach α was 0.824 overall. In the factor analysis, 4 factors (denoted as affective, behavioral, severity, and susceptibility) and an indicator of risk accounted for 65.133% of the variance. The psychometric properties of the scale were appropriate for measuring the perception of risk in adult patients (aged 18 years or more) who attended the dermatology clinic. Copyright © 2013 Elsevier España, S.L. and AEDV. All rights reserved.

  8. Directed Design of Experiments for Validating Probability of Detection Capability of a Testing System

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R. (Inventor)

    2012-01-01

    A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
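    The record above describes a patented directed-DOE procedure; the sketch below does not reproduce it, but shows the common logistic-regression treatment of hit/miss POD data of the kind the method takes as input, including a rough a90 estimate. The flaw sizes and hit/miss outcomes are invented, and scikit-learn is assumed to be available.

    ```python
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    # Illustrative hit/miss data: 1 = flaw detected, 0 = missed (invented).
    flaw_size = np.array([0.20, 0.30, 0.40, 0.50, 0.60, 0.80, 1.00, 1.20, 1.50, 2.00,
                          0.25, 0.35, 0.45, 0.55, 0.70, 0.90, 1.10, 1.30, 1.60, 2.20])
    hit = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1,
                    0, 0, 1, 0, 1, 1, 1, 1, 1, 1])

    X = np.log(flaw_size).reshape(-1, 1)           # log size is the usual regressor
    model = LogisticRegression(C=1e6).fit(X, hit)  # large C ~= unregularized fit

    # POD curve and the size detected with 90% probability (an "a90" estimate).
    sizes = np.linspace(0.2, 2.5, 200)
    pod = model.predict_proba(np.log(sizes).reshape(-1, 1))[:, 1]
    idx = np.searchsorted(pod, 0.90)
    a90 = sizes[idx] if idx < sizes.size else float("nan")
    print(f"estimated a90 ~ {a90:.2f} (same units as flaw size)")
    ```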

  9. The Perils of Ignoring Design Effects in Experimental Studies: Lessons from a Mammography Screening Trial

    PubMed Central

    Glenn, Beth A.; Bastani, Roshan; Maxwell, Annette E.

    2013-01-01

    Objective Threats to external validity including pretest sensitization and the interaction of selection and an intervention are frequently overlooked by researchers despite their potential to significantly influence study outcomes. The purpose of this investigation was to conduct secondary data analyses to assess the presence of external validity threats in the setting of a randomized trial designed to promote mammography use in a high risk sample of women. Design During the trial, recruitment and intervention implementation took place in three cohorts (with different ethnic composition), utilizing two different designs (pretest-posttest control group design; posttest only control group design). Results Results reveal that the intervention produced different outcomes across cohorts, dependent upon the research design used and the characteristics of the sample. Conclusion These results illustrate the importance of weighing the pros and cons of potential research designs before making a selection and attending more closely to issues of external validity. PMID:23289517

  10. Quality by Design: Multidimensional exploration of the design space in high performance liquid chromatography method development for better robustness before validation.

    PubMed

    Monks, K; Molnár, I; Rieger, H-J; Bogáti, B; Szabó, E

    2012-04-06

    Robust HPLC separations lead to fewer analysis failures and better method transfer as well as providing an assurance of quality. This work presents the systematic development of an optimal, robust, fast UHPLC method for the simultaneous assay of two APIs of an eye drop sample and their impurities, in accordance with Quality by Design principles. Chromatography software is employed to effectively generate design spaces (Method Operable Design Regions), which are subsequently employed to determine the final method conditions and to evaluate robustness prior to validation. Copyright © 2011 Elsevier B.V. All rights reserved.

  11. Validation of time to task performance assessment method in simulation: A comparative design study.

    PubMed

    Shinnick, Mary Ann; Woo, Mary A

    2018-05-01

    There is a lack of objective and valid measures for assessing nursing clinical competence, which could adversely impact patient safety. Therefore, we evaluated an objective assessment of clinical competence, Time to Task (ability to perform specific, critical nursing care activities within 5 min), and compared it to two subjective measures (the Lasater Clinical Judgement Rubric [LCJR] and a common "pass/fail" assessment). Using a prospective, "Known Groups" (Expert vs. Novice nurses) comparative design, Expert nurses (ICU nurses with >5 years of ICU experience) and Novice nurses (senior prelicensure nursing students) participated individually in a simulation of a patient in decompensated heart failure. Fourteen nursing instructors or preceptors, blinded to group assignment, reviewed 28 simulation videos (15 Expert and 13 Novice) and scored them using the LCJR and pass/fail assessments. Time to Task assessment was scored based on time thresholds for specific nursing actions prospectively set by an expert clinical panel. Statistical analysis consisted of Medians Test and sensitivity and specificity analyses. The LCJR total score was significantly different between Experts and Novices (p < 0.01) and revealed adequate sensitivity (ability to correctly identify "Expert" nurses; 0.72) but had a low specificity (ability to correctly identify "Novice" nurses; 0.40). For the subjective measure 'pass/fail', sensitivity was high (0.90) but specificity was low (0.47). The Time to Task measure had statistical significance between Expert and Novice groups (p < 0.01) and sensitivity (0.80) and specificity (0.85) were good. Commonly used subjective measures of clinical nursing competence have difficulties with achieving acceptable specificity. However, an objective measure, Time to Task, had good sensitivity and specificity in differentiating between groups. While more than one assessment instrument should be used to determine nurse competency, an objective measure, such as

  12. Accounting for Test Variability through Sizing Local Domains in Sequential Design Optimization with Concurrent Calibration-Based Model Validation

    DTIC Science & Technology

    2013-08-01

    Drignei, Dorin; Mourelatos, Zissimos; Pandey, Vijitashwa

  13. Validity Studies of the Filial Anxiety Scale.

    ERIC Educational Resources Information Center

    Murray, Paul D.; And Others

    1996-01-01

    Factor analytic and construct validity studies were conducted to explore the validity of Cicirelli's 13-item Filial Anxiety Scale (FAS). The State-Trait Anxiety Inventory and the Marlowe-Crowne Social Desirability Scale were a part of the investigation. Results offer support for the validity of the FAS subscales and the FAS' usefulness as an…

  14. A NASA Perspective and Validation and Testing of Design Hardening for the Natural Space Radiation Environment (GOMAC Tech 03)

    NASA Technical Reports Server (NTRS)

    Day, John H. (Technical Monitor); LaBel, Kenneth A.; Howard, James W.; Carts, Martin A.; Seidleck, Christine

    2003-01-01

    With the dearth of dedicated radiation-hardened foundries, new and novel techniques are being developed for hardening designs using non-dedicated foundry services. In this paper, we discuss the implications of validating these methods for the natural space radiation environment issues: total ionizing dose (TID) and single event effects (SEE). Topics of discussion include the types of tests that are required, design coverage (i.e., design libraries: do they need validating for each application?), and a new task within NASA to compare existing design hardening approaches. This latter task is a new effort in FY03 utilizing an 8051 microcontroller core from multiple design hardening developers as a test vehicle to evaluate each mitigative technique.

  15. Design and Validation of a 150 MHz HFFQCM Sensor for Bio-Sensing Applications

    PubMed Central

    Fernández, Román; García, Pablo; García, María; Jiménez, Yolanda; Arnau, Antonio

    2017-01-01

    Acoustic wave resonators have become suitable devices for a broad range of sensing applications due to their sensitivity, low cost, and integration capability, which are all factors that meet the requirements for the resonators to be used as sensing elements for portable point of care (PoC) platforms. In this work, the design, characterization, and validation of a 150 MHz high fundamental frequency quartz crystal microbalance (HFF-QCM) sensor for bio-sensing applications are introduced. Finite element method (FEM) simulations of the proposed design are in good agreement with the electrical characterization of the manufactured resonators. The sensor is also validated for bio-sensing applications. For this purpose, a specific sensor cell was designed and manufactured that addresses the critical requirements associated with this type of sensor and application. Due to the small sensing area and the sensor's fragility, these requirements include a low-volume flow chamber in the nanoliter range, and a system approach that provides the appropriate pressure control for assuring liquid confinement while maintaining the integrity of the sensor with good baseline stability and easy sensor replacement. The sensor characteristics make it suitable for consideration as the elemental part of a sensor matrix in a multichannel platform for point of care applications. PMID:28885551
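    One way to see why a 150 MHz fundamental is attractive for bio-sensing is the Sauerbrey relation, in which the frequency shift per unit mass loading grows with the square of the fundamental frequency. The relation and the quartz constants below are standard QCM theory rather than figures from the paper, and the mass loading is illustrative.

    ```python
    import math

    def sauerbrey_shift(dm_per_area: float, f0: float) -> float:
        """Frequency shift (Hz) from the Sauerbrey relation.

        dm_per_area: areal mass loading in kg/m^2
        f0: fundamental resonance frequency in Hz
        Uses standard AT-cut quartz constants.
        """
        rho_q = 2648.0     # quartz density, kg/m^3
        mu_q = 2.947e10    # quartz shear modulus, Pa
        return -2.0 * f0 ** 2 * dm_per_area / math.sqrt(rho_q * mu_q)

    dm = 1e-6  # 1 ng/mm^2 of adsorbed mass, expressed in kg/m^2 (illustrative)
    for f0 in (5e6, 150e6):
        print(f"f0 = {f0 / 1e6:5.0f} MHz -> df = {sauerbrey_shift(dm, f0):9.1f} Hz")
    ```

    The (150/5)^2 = 900-fold larger shift at 150 MHz is what makes the HFF-QCM so much more mass-sensitive than a conventional low-frequency QCM.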

  16. Simulators' validation study: Problem solution logic

    NASA Technical Reports Server (NTRS)

    Schoultz, M. B.

    1974-01-01

    A study was conducted to validate the ground based simulators used for aircraft environment in ride-quality research. The logic to the approach for solving this problem is developed. The overall problem solution flow chart is presented. The factors which could influence the human response to the environment on board the aircraft are analyzed. The mathematical models used in the study are explained. The steps which were followed in conducting the validation tests are outlined.

  17. Family Early Literacy Practices Questionnaire: A Validation Study for a Spanish-Speaking Population

    ERIC Educational Resources Information Center

    Lewis, Kandia

    2012-01-01

    The purpose of the current study was to evaluate the psychometric validity of a Spanish translated version of a family involvement questionnaire (the FELP) using a mixed-methods design. Thus, statistical analyses (i.e., factor analysis, reliability analysis, and item analysis) and qualitative analyses (i.e., focus group data) were assessed.…

  18. Applying Case-Based Method in Designing Self-Directed Online Instruction: A Formative Research Study

    ERIC Educational Resources Information Center

    Luo, Heng; Koszalka, Tiffany A.; Arnone, Marilyn P.; Choi, Ikseon

    2018-01-01

    This study investigated the case-based method (CBM) instructional-design theory and its application in designing self-directed online instruction. The purpose of this study was to validate and refine the theory for a self-directed online instruction context. Guided by formative research methodology, this study first developed an online tutorial…

  19. The validation of a computer-adaptive test (CAT) for assessing health-related quality of life in children and adolescents in a clinical sample: study design, methods and first results of the Kids-CAT study.

    PubMed

    Barthel, D; Otto, C; Nolte, S; Meyrose, A-K; Fischer, F; Devine, J; Walter, O; Mierke, A; Fischer, K I; Thyen, U; Klein, M; Ankermann, T; Rose, M; Ravens-Sieberer, U

    2017-05-01

    Recently, we developed a computer-adaptive test (CAT) for assessing health-related quality of life (HRQoL) in children and adolescents: the Kids-CAT. It measures five generic HRQoL dimensions. The aims of this article were (1) to present the study design and (2) to investigate its psychometric properties in a clinical setting. The Kids-CAT study is a longitudinal prospective study with eight measurements over one year at two University Medical Centers in Germany. For validating the Kids-CAT, 270 consecutive 7- to 17-year-old patients with asthma (n = 52), diabetes (n = 182) or juvenile arthritis (n = 36) answered well-established HRQoL instruments (Pediatric Quality of Life Inventory™ (PedsQL), KIDSCREEN-27) and scales measuring related constructs (e.g., social support, self-efficacy). Measurement precision, test-retest reliability, convergent and discriminant validity were investigated. The mean standard error of measurement ranged between .38 and .49 for the five dimensions, which equals a reliability between .86 and .76, respectively. The Kids-CAT measured most reliably in the lower HRQoL range. Convergent validity was supported by moderate to high correlations of the Kids-CAT dimensions with corresponding PedsQL dimensions ranging between .52 and .72. A lower correlation was found between the social dimensions of both instruments. Discriminant validity was confirmed by lower correlations with non-corresponding subscales of the PedsQL. The Kids-CAT measures pediatric HRQoL reliably, particularly in lower areas of HRQoL. Its test-retest reliability should be re-investigated in future studies. The validity of the instrument was demonstrated. Overall, results suggest that the Kids-CAT is a promising candidate for detecting psychosocial needs in chronically ill children.
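    The reliability figures quoted above follow from the standard error of measurement through the usual IRT relation for scores on a standard-normal metric, reliability = 1 - SEM^2 (the unit population variance is an assumption of that metric, not a detail given in the abstract):

    ```python
    # Reproduces the figures quoted in the abstract under the assumption of a
    # standard-normal score metric: reliability = 1 - SEM^2.
    for sem in (0.38, 0.49):
        print(f"SEM = {sem:.2f} -> reliability ~ {1 - sem ** 2:.2f}")
    # SEM = 0.38 -> reliability ~ 0.86
    # SEM = 0.49 -> reliability ~ 0.76
    ```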

  20. Developmental framework to validate future designs of ballistic neck protection.

    PubMed

    Breeze, J; Midwinter, M J; Pope, D; Porter, K; Hepper, A E; Clasper, J

    2013-01-01

    The number of neck injuries has increased during the war in Afghanistan, and they have become an appreciable source of mortality and long-term morbidity for UK servicemen. A three-dimensional numerical model of the neck is necessary to allow simulation of penetrating injury from explosive fragments so that the design of body armour can be optimal, and a framework is required to validate and describe the individual components of this program. An interdisciplinary consensus group consisting of military maxillofacial surgeons, and biomedical, physical, and material scientists was convened to generate the components of the framework, and as a result it incorporates the following components: analysis of deaths and long-term morbidity, assessment of critical cervical structures for incorporation into the model, characterisation of explosive fragments, evaluation of the material of which the body armour is made, and mapping of the entry sites of fragments. The resulting numerical model will simulate the wound tract produced by fragments of differing masses and velocities, and illustrate the effects of temporary cavities on cervical neurovascular structures. Using this framework, a new shirt to be worn under body armour that incorporates ballistic cervical protection has been developed for use in Afghanistan. New designs of the collar validated by human factors and assessment of coverage are currently being incorporated into early versions of the numerical model. The aim of this paper is to describe this developmental framework and provide an update on the current progress of its individual components. Crown Copyright © 2012. Published by Elsevier Ltd. All rights reserved.

  1. Design and validation of an MR-conditional robot for transcranial focused ultrasound surgery in infants

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Price, Karl D., E-mail: karl.price@sickkids.ca

    An MR conditional robot has been designed and manufactured to design specifications. The system has demonstrated its feasibility as a platform for MRgFUS interventions for neonatal patients. The success of the system in experimental trials suggests that it is ready to be used for validation of the transcranial intervention in animal studies.

  2. Validity of the estimates of oral cholera vaccine effectiveness derived from the test-negative design.

    PubMed

    Ali, Mohammad; You, Young Ae; Sur, Dipika; Kanungo, Suman; Kim, Deok Ryun; Deen, Jacqueline; Lopez, Anna Lena; Wierzba, Thomas F; Bhattacharya, Sujit K; Clemens, John D

    2016-01-20

    The test-negative design (TND) has emerged as a simple method for evaluating vaccine effectiveness (VE). Its utility for evaluating oral cholera vaccine (OCV) effectiveness is unknown. We examined this method's validity in assessing OCV effectiveness by comparing the results of TND analyses with those of conventional cohort analyses. Randomized controlled trials of OCV were conducted in Matlab (Bangladesh) and Kolkata (India), and an observational cohort design was used in Zanzibar (Tanzania). For all three studies, VE using the TND was estimated from the odds ratio (OR) relating vaccination status to fecal test status (Vibrio cholerae O1 positive or negative) among diarrheal patients enrolled during surveillance (VE = (1 - OR) × 100%). In cohort analyses of these studies, we employed the Cox proportional hazard model for estimating VE (= (1 - hazard ratio) × 100%). OCV effectiveness estimates obtained using the TND (Matlab: 51%, 95% CI:37-62%; Kolkata: 67%, 95% CI:57-75%) were similar to the cohort analyses of these RCTs (Matlab: 52%, 95% CI:43-60% and Kolkata: 66%, 95% CI:55-74%). The TND VE estimate for the Zanzibar data was 94% (95% CI:84-98%) compared with 82% (95% CI:58-93%) in the cohort analysis. After adjusting for residual confounding in the cohort analysis of the Zanzibar study, using a bias indicator condition, we observed almost no difference in the two estimates. Our findings suggest that the TND is a valid approach for evaluating OCV effectiveness in routine vaccination programs. Copyright © 2015 Elsevier Ltd. All rights reserved.
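    A minimal sketch of the test-negative computation described above follows: among enrolled diarrheal patients, vaccination status is cross-tabulated against the fecal test result, and VE is taken as (1 - OR) × 100%. The counts are invented for illustration and are not the Matlab, Kolkata, or Zanzibar data.

    ```python
    # Test-negative design sketch with invented counts.
    vacc_pos, unvacc_pos = 40, 110     # V. cholerae O1 positive ("cases")
    vacc_neg, unvacc_neg = 300, 320    # test-negative ("controls")

    odds_ratio = (vacc_pos / unvacc_pos) / (vacc_neg / unvacc_neg)
    ve = (1.0 - odds_ratio) * 100.0    # VE = (1 - OR) x 100%
    print(f"OR = {odds_ratio:.2f}, vaccine effectiveness = {ve:.0f}%")
    ```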

  3. Vision Test Validation Study for the Health Examination Survey Among Youths 12-17 years.

    ERIC Educational Resources Information Center

    Roberts, Jean

    A validation study of the vision test battery used in the Health Examination Survey of 1966-1970 was conducted among 210 youths 12-17 years-old who had been part of the larger survey. The study was designed to discover the degree of correspondence between survey test results and clinical examination by an opthalmologist in determining the…

  4. Internal Validity: A Must in Research Designs

    ERIC Educational Resources Information Center

    Cahit, Kaya

    2015-01-01

    In experimental research, internal validity refers to what extent researchers can conclude that changes in dependent variable (i.e. outcome) are caused by manipulations in independent variable. The causal inference permits researchers to meaningfully interpret research results. This article discusses (a) internal validity threats in social and…

  5. The Teenage Nonviolence Test: Concurrent and Discriminant Validity.

    ERIC Educational Resources Information Center

    Konen, Kristopher; Mayton, Daniel M., II; Delva, Zenita; Sonnen, Melinda; Dahl, William; Montgomery, Richard

    This study was designed to document the validity of the Teenage Nonviolence Test (TNT). The study assessed the concurrent validity of the TNT in various ways, the validity of the TNT using known groups, and the discriminant validity of the TNT by evaluating its relationships with other psychological constructs. The results showed that the…

  6. Design, Implementation and Validation of the Three-Wheel Holonomic Motion System of the Assistant Personal Robot (APR).

    PubMed

    Moreno, Javier; Clotet, Eduard; Lupiañez, Ruben; Tresanchez, Marcel; Martínez, Dani; Pallejà, Tomàs; Casanovas, Jordi; Palacín, Jordi

    2016-10-10

    This paper presents the design, implementation and validation of the three-wheel holonomic motion system of a mobile robot designed to operate in homes. The holonomic motion system is described in terms of mechanical design and electronic control. The paper analyzes the kinematics of the motion system and validates the estimation of the trajectory by comparing the displacement estimated with the internal odometry of the motors and the displacement estimated with a SLAM procedure based on LIDAR information. Results obtained in different experiments have shown a difference of less than 30 mm between the positions estimated with SLAM and odometry, and a difference in the angular orientation of the mobile robot of less than 5° for absolute displacements up to 1000 mm.
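    A kinematics sketch for a generic three-wheel holonomic base follows. It assumes the common layout of three omni wheels mounted at 120° around the chassis at a fixed radius; the actual APR geometry, motor odometry, and SLAM processing are not given in the abstract, so the numbers are placeholders.

    ```python
    import numpy as np

    # Inverse/forward kinematics of a generic three-omni-wheel holonomic base.
    R = 0.20                                  # wheel mounting radius, m (assumed)
    theta = np.deg2rad([0.0, 120.0, 240.0])   # wheel placement angles (assumed)

    # Each row maps the body twist (vx, vy, omega) to one wheel's rim speed.
    J = np.column_stack([-np.sin(theta), np.cos(theta), np.full(3, R)])

    def wheel_speeds(vx: float, vy: float, omega: float) -> np.ndarray:
        """Rim speeds (m/s) of the three wheels for a commanded body twist."""
        return J @ np.array([vx, vy, omega])

    def body_twist(w: np.ndarray) -> np.ndarray:
        """Recover (vx, vy, omega) from measured rim speeds (odometry step)."""
        return np.linalg.solve(J, w)

    v = wheel_speeds(0.3, 0.0, 0.0)           # pure 0.3 m/s translation along x
    print("wheel rim speeds:", np.round(v, 3))
    print("recovered twist :", np.round(body_twist(v), 3))
    ```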

  7. Design, Implementation and Validation of the Three-Wheel Holonomic Motion System of the Assistant Personal Robot (APR)

    PubMed Central

    Moreno, Javier; Clotet, Eduard; Lupiañez, Ruben; Tresanchez, Marcel; Martínez, Dani; Pallejà, Tomàs; Casanovas, Jordi; Palacín, Jordi

    2016-01-01

    This paper presents the design, implementation and validation of the three-wheel holonomic motion system of a mobile robot designed to operate in homes. The holonomic motion system is described in terms of mechanical design and electronic control. The paper analyzes the kinematics of the motion system and validates the estimation of the trajectory by comparing the displacement estimated with the internal odometry of the motors and the displacement estimated with a SLAM procedure based on LIDAR information. Results obtained in different experiments have shown a difference of less than 30 mm between the positions estimated with SLAM and odometry, and a difference in the angular orientation of the mobile robot of less than 5° for absolute displacements up to 1000 mm. PMID:27735857

  8. A Case Study for Probabilistic Methods Validation (MSFC Center Director's Discretionary Fund, Project No. 94-26)

    NASA Technical Reports Server (NTRS)

    Price J. M.; Ortega, R.

    1998-01-01

    Probabilistic methods are not a universally accepted approach for the design and analysis of aerospace structures. The validity of this approach must be demonstrated to encourage its acceptance as a viable design and analysis tool to estimate structural reliability. The objective of this study is to develop a well-characterized finite population of similar aerospace structures that can be used to (1) validate probabilistic codes, (2) demonstrate the basic principles behind probabilistic methods, (3) formulate general guidelines for characterization of material drivers (such as elastic modulus) when limited data are available, and (4) investigate how the drivers affect the results of sensitivity analysis at the component/failure mode level.

  9. Enhanced Oceanic Operations Human-In-The-Loop In-Trail Procedure Validation Simulation Study

    NASA Technical Reports Server (NTRS)

    Murdoch, Jennifer L.; Bussink, Frank J. L.; Chamberlain, James P.; Chartrand, Ryan C.; Palmer, Michael T.; Palmer, Susan O.

    2008-01-01

    The Enhanced Oceanic Operations Human-In-The-Loop In-Trail Procedure (ITP) Validation Simulation Study investigated the viability of an ITP designed to enable oceanic flight level changes that would not otherwise be possible. Twelve commercial airline pilots with current oceanic experience flew a series of simulated scenarios involving either standard or ITP flight level change maneuvers and provided subjective workload ratings, assessments of ITP validity and acceptability, and objective performance measures associated with the appropriate selection, request, and execution of ITP flight level change maneuvers. In the majority of scenarios, subject pilots correctly assessed the traffic situation, selected an appropriate response (i.e., either a standard flight level change request, an ITP request, or no request), and executed their selected flight level change procedure, if any, without error. Workload ratings for ITP maneuvers were acceptable and not substantially higher than for standard flight level change maneuvers, and, for the majority of scenarios and subject pilots, subjective acceptability ratings and comments for ITP were generally high and positive. Qualitatively, the ITP was found to be valid and acceptable. However, the error rates for ITP maneuvers were higher than for standard flight level changes, and these errors may have design implications for both the ITP and the study's prototype traffic display. These errors and their implications are discussed.

  10. A consensus-based framework for design, validation, and implementation of simulation-based training curricula in surgery.

    PubMed

    Zevin, Boris; Levy, Jeffrey S; Satava, Richard M; Grantcharov, Teodor P

    2012-10-01

    Simulation-based training can improve technical and nontechnical skills in surgery. To date, there is no consensus on the principles for design, validation, and implementation of a simulation-based surgical training curriculum. The aim of this study was to define such principles and formulate them into an interoperable framework using international expert consensus based on the Delphi method. Literature was reviewed, 4 international experts were queried, and consensus conference of national and international members of surgical societies was held to identify the items for the Delphi survey. Forty-five international experts in surgical education were invited to complete the online survey by ranking each item on a Likert scale from 1 to 5. Consensus was predefined as Cronbach's α ≥0.80. Items that 80% of experts ranked as ≥4 were included in the final framework. Twenty-four international experts with training in general surgery (n = 11), orthopaedic surgery (n = 2), obstetrics and gynecology (n = 3), urology (n = 1), plastic surgery (n = 1), pediatric surgery (n = 1), otolaryngology (n = 1), vascular surgery (n = 1), military (n = 1), and doctorate-level educators (n = 2) completed the iterative online Delphi survey. Consensus among participants was achieved after one round of the survey (Cronbach's α = 0.91). The final framework included predevelopment analysis; cognitive, psychomotor, and team-based training; curriculum validation evaluation and improvement; and maintenance of training. The Delphi methodology allowed for determination of international expert consensus on the principles for design, validation, and implementation of a simulation-based surgical training curriculum. These principles were formulated into a framework that can be used internationally across surgical specialties as a step-by-step guide for the development and validation of future simulation-based training curricula. Copyright © 2012 American College of Surgeons. Published by

  11. Design, Implementation and Validation of a Europe-Wide Pedagogical Framework for E-Learning

    ERIC Educational Resources Information Center

    Granic, Andrina; Mifsud, Charles; Cukusic, Maja

    2009-01-01

    Within the context of a Europe-wide project UNITE, a number of European partners set out to design, implement and validate a pedagogical framework (PF) for e- and m-Learning in secondary schools. The process of formulating and testing the PF was an evolutionary one that reflected the experiences and skills of the various European partners and…

  12. Modification and Validation of Conceptual Design Aerodynamic Prediction Method HASC95 With VTXCHN

    NASA Technical Reports Server (NTRS)

    Albright, Alan E.; Dixon, Charles J.; Hegedus, Martin C.

    1996-01-01

    A conceptual/preliminary design level subsonic aerodynamic prediction code HASC (High Angle of Attack Stability and Control) has been improved in several areas, validated, and documented. The improved code includes improved methodologies for increased accuracy and robustness, and simplified input/output files. An engineering method called VTXCHN (Vortex Chine) for predicting nose vortex shedding from circular and non-circular forebodies with sharp chine edges has been improved and integrated into the HASC code. This report contains a summary of modifications, description of the code, user's guide, and validation of HASC. Appendices include discussion of a new HASC utility code, listings of sample input and output files, and a discussion of the application of HASC to buffet analysis.

  13. Supersonic Retro-Propulsion Experimental Design for Computational Fluid Dynamics Model Validation

    NASA Technical Reports Server (NTRS)

    Berry, Scott A.; Laws, Christopher T.; Kleb, W. L.; Rhode, Matthew N.; Spells, Courtney; McCrea, Andrew C.; Truble, Kerry A.; Schauerhamer, Daniel G.; Oberkampf, William L.

    2011-01-01

    The development of supersonic retro-propulsion, an enabling technology for heavy payload exploration missions to Mars, is the primary focus of the present paper. A new experimental model, intended to provide computational fluid dynamics model validation data, was recently designed for the Langley Research Center Unitary Plan Wind Tunnel Test Section 2. Pre-test computations were instrumental for sizing and refining the model, over the Mach number range of 2.4 to 4.6, such that tunnel blockage and internal flow separation issues would be minimized. A 5-in diameter 70-deg sphere-cone forebody, which accommodates up to four 4:1 area ratio nozzles, followed by a 10-in long cylindrical aftbody was developed for this study based on the computational results. The model was designed to allow for a large number of surface pressure measurements on the forebody and aftbody. Supplemental data included high-speed Schlieren video and internal pressures and temperatures. The run matrix was developed to allow for the quantification of various sources of experimental uncertainty, such as random errors due to run-to-run variations and bias errors due to flow field or model misalignments. Some preliminary results and observations from the test are presented, although detailed analyses of the data and uncertainties are still ongoing.

  14. In-Trail Procedure Air Traffic Control Procedures Validation Simulation Study

    NASA Technical Reports Server (NTRS)

    Chartrand, Ryan C.; Hewitt, Katrin P.; Sweeney, Peter B.; Graff, Thomas J.; Jones, Kenneth M.

    2012-01-01

    In August 2007, Airservices Australia (Airservices) and the United States National Aeronautics and Space Administration (NASA) conducted a validation experiment of the air traffic control (ATC) procedures associated with the Automatic Dependent Surveillance-Broadcast (ADS-B) In-Trail Procedure (ITP). ITP is an Airborne Traffic Situation Awareness (ATSA) application designed for near-term use in procedural airspace in which ADS-B data are used to facilitate climb and descent maneuvers. NASA and Airservices conducted the experiment in Airservices' simulator in Melbourne, Australia. Twelve current operational air traffic controllers participated in the experiment, which identified aspects of the ITP that could be improved (mainly in the communication and controller approval process). Results showed that controllers viewed the ITP as valid and acceptable. This paper describes the experiment design and results.

  15. Making clinical trials more relevant: improving and validating the PRECIS tool for matching trial design decisions to trial purpose.

    PubMed

    Loudon, Kirsty; Zwarenstein, Merrick; Sullivan, Frank; Donnan, Peter; Treweek, Shaun

    2013-04-27

    If you want to know which of two or more healthcare interventions is most effective, the randomised controlled trial is the design of choice. Randomisation, however, does not itself promote the applicability of the results to situations other than the one in which the trial was done. A tool published in 2009, PRECIS (PRagmatic Explanatory Continuum Indicator Summaries) aimed to help trialists design trials that produced results matched to the aim of the trial, be that supporting clinical decision-making, or increasing knowledge of how an intervention works. Though generally positive, groups evaluating the tool have also found weaknesses, mainly that its inter-rater reliability is not clear, that it needs a scoring system and that some new domains might be needed. The aim of the study is to: Produce an improved and validated version of the PRECIS tool. Use this tool to compare the internal validity of, and effect estimates from, a set of explanatory and pragmatic trials matched by intervention. The study has four phases. Phase 1 involves brainstorming and a two-round Delphi survey of authors who cited PRECIS. In Phase 2, the Delphi results will then be discussed and alternative versions of PRECIS-2 developed and user-tested by experienced trialists. Phase 3 will evaluate the validity and reliability of the most promising PRECIS-2 candidate using a sample of 15 to 20 trials rated by 15 international trialists. We will assess inter-rater reliability, and raters' subjective global ratings of pragmatism compared to PRECIS-2 to assess convergent and face validity. Phase 4, to determine if pragmatic trials sacrifice internal validity in order to achieve applicability, will compare the internal validity and effect estimates of matched explanatory and pragmatic trials of the same intervention, condition and participants. Effect sizes for the trials will then be compared in a meta-regression. The Cochrane Risk of Bias scores will be compared with the PRECIS-2 scores of

  16. Making clinical trials more relevant: improving and validating the PRECIS tool for matching trial design decisions to trial purpose

    PubMed Central

    2013-01-01

    Background If you want to know which of two or more healthcare interventions is most effective, the randomised controlled trial is the design of choice. Randomisation, however, does not itself promote the applicability of the results to situations other than the one in which the trial was done. A tool published in 2009, PRECIS (PRagmatic Explanatory Continuum Indicator Summaries) aimed to help trialists design trials that produced results matched to the aim of the trial, be that supporting clinical decision-making, or increasing knowledge of how an intervention works. Though generally positive, groups evaluating the tool have also found weaknesses, mainly that its inter-rater reliability is not clear, that it needs a scoring system and that some new domains might be needed. The aim of the study is to: Produce an improved and validated version of the PRECIS tool. Use this tool to compare the internal validity of, and effect estimates from, a set of explanatory and pragmatic trials matched by intervention. Methods The study has four phases. Phase 1 involves brainstorming and a two-round Delphi survey of authors who cited PRECIS. In Phase 2, the Delphi results will then be discussed and alternative versions of PRECIS-2 developed and user-tested by experienced trialists. Phase 3 will evaluate the validity and reliability of the most promising PRECIS-2 candidate using a sample of 15 to 20 trials rated by 15 international trialists. We will assess inter-rater reliability, and raters’ subjective global ratings of pragmatism compared to PRECIS-2 to assess convergent and face validity. Phase 4, to determine if pragmatic trials sacrifice internal validity in order to achieve applicability, will compare the internal validity and effect estimates of matched explanatory and pragmatic trials of the same intervention, condition and participants. Effect sizes for the trials will then be compared in a meta-regression. The Cochrane Risk of Bias scores will be compared with the

  17. Structural exploration for the refinement of anticancer matrix metalloproteinase-2 inhibitor designing approaches through robust validated multi-QSARs

    NASA Astrophysics Data System (ADS)

    Adhikari, Nilanjan; Amin, Sk. Abdul; Saha, Achintya; Jha, Tarun

    2018-03-01

    Matrix metalloproteinase-2 (MMP-2) is a promising pharmacological target for designing potential anticancer drugs. MMP-2 plays critical functions in apoptosis by cleaving the DNA repair enzyme poly (ADP-ribose) polymerase (PARP). Moreover, MMP-2 expression triggers the vascular endothelial growth factor (VEGF), having a positive influence on tumor size, invasion, and angiogenesis. Therefore, there is an urgent need to develop potential MMP-2 inhibitors without any toxicity but with better pharmacokinetic properties. In this article, robust validated multi-quantitative structure-activity relationship (QSAR) modeling approaches were applied to a dataset of 222 MMP-2 inhibitors to explore the important structural and pharmacophoric requirements for higher MMP-2 inhibition. Different validated regression and classification-based QSARs, pharmacophore mapping, and 3D-QSAR techniques were performed. These results were then challenged and subjected to further validation by using them to explain 24 in-house MMP-2 inhibitors, to judge the reliability of these models further. All these models were individually validated internally as well as externally and were supported and validated by each other. These results were further justified by molecular docking analysis. The modeling techniques adopted here not only help to explore the necessary structural and pharmacophoric requirements but also provide an overall validation and refinement strategy for designing potential MMP-2 inhibitors.
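    As a toy illustration of the kind of regression-QSAR validation statistics mentioned above, the sketch below fits a linear model to synthetic descriptors and reports the fitted r^2 together with a leave-one-out cross-validated q^2. The descriptors, activities, and model form are all invented; the published multi-QSAR, pharmacophore, and 3D-QSAR models are not reproduced, and scikit-learn is assumed to be available.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import LeaveOneOut

    # Synthetic "descriptor matrix" and "pIC50" activities (illustrative only).
    rng = np.random.default_rng(1)
    n, p = 60, 3
    X = rng.normal(size=(n, p))
    y = X @ np.array([0.9, -0.5, 0.3]) + rng.normal(scale=0.4, size=n)

    model = LinearRegression().fit(X, y)
    r2 = model.score(X, y)                       # fitted r^2

    press = 0.0                                  # predictive residual sum of squares
    for train, test in LeaveOneOut().split(X):
        m = LinearRegression().fit(X[train], y[train])
        press += (y[test][0] - m.predict(X[test])[0]) ** 2
    q2 = 1.0 - press / ((y - y.mean()) ** 2).sum()

    print(f"r^2 = {r2:.3f}, LOO q^2 = {q2:.3f}")  # q^2 > 0.5 is the usual bar
    ```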

  18. Design, testing and validation of an innovative web-based instrument to evaluate school meal quality.

    PubMed

    Patterson, Emma; Quetel, Anna-Karin; Lilja, Karin; Simma, Marit; Olsson, Linnea; Elinder, Liselotte Schäfer

    2013-06-01

    To develop a feasible, valid, reliable web-based instrument to objectively evaluate school meal quality in Swedish primary schools. The construct 'school meal quality' was operationalized by an expert panel into six domains, one of which was nutritional quality. An instrument was drafted and pilot-tested. Face validity was evaluated by the panel. Feasibility was established via a large national study. Food-based criteria to predict the nutritional adequacy of school meals in terms of fat quality, iron, vitamin D and fibre content were developed. Predictive validity was evaluated by comparing the nutritional adequacy of school menus based on these criteria with the results from a nutritional analysis. Inter-rater reliability was also assessed. The instrument was developed between 2010 and 2012. It is designed for use in all primary schools by school catering and/or management representatives. A pilot-test of eighty schools in Stockholm (autumn 2010) and a further test of feasibility in 191 schools nationally (spring 2011). The four nutrient-specific food-based criteria predicted nutritional adequacy with sensitivity ranging from 0.85 to 1.0, specificity from 0.45 to 1.0 and accuracy from 0.67 to 1.0. The sample in the national study was statistically representative and the majority of users rated the questionnaire positively, suggesting the instrument is feasible. The inter-rater reliability was fair to almost perfect for continuous variables and agreement was ≥ 67 % for categorical variables. An innovative web-based system to comprehensively monitor school meal quality across several domains, with validated questions in the nutritional domain, is available in Sweden for the first time.

  19. Clinical Validation Trial of a Diagnostic for Ebola Zaire Antigen Detection: Design Rationale and Challenges to Implementation

    PubMed Central

    Schieffelin, John; Moses, Lina M; Shaffer, Jeffrey; Goba, Augustine; Grant, Donald S

    2015-01-01

    The current Ebola outbreak in West Africa has affected more people than all previous outbreaks combined. The current diagnostic method of choice, quantitative polymerase chain reaction, requires specialized conditions as well as specially trained technicians. Insufficient testing capacity has extended the time from sample collection to results. These delays have led to further delays in the transfer and treatment to Ebola Treatment Units. A sensitive and specific point-of-care device that could be used reliably in low resource settings by healthcare workers with minimal training would increase the efficiency of triage and appropriate transfer of care. This article describes a study designed to validate the sensitivity and specificity of the ReEBOV™ RDT using venous whole blood and capillary blood obtained via fingerprick. We present the scientific and clinical rationale for the decisions made in the design of a diagnostic validation study to be conducted in an outbreak setting. The multi-site strategy greatly complicated implementation. In addition, a decrease in cases in one geographic area along with a concomitant increase in other areas made site selection challenging. Initiation of clinical trials during rapidly evolving outbreaks requires significant cooperation on a national level between research teams implementing studies and clinical care providers. Coordination and streamlining of the approval process is essential if trials are to be implemented in a timely fashion. PMID:26768566

  20. Do placebo based validation standards mimic real batch products behaviour? Case studies.

    PubMed

    Bouabidi, A; Talbi, M; Bouklouze, A; El Karbane, M; Bourichi, H; El Guezzar, M; Ziemons, E; Hubert, Ph; Rozet, E

    2011-06-01

    Analytical methods validation is a mandatory step to evaluate the ability of developed methods to provide accurate results for their routine application. Validation usually involves validation standards or quality control samples that are prepared in placebo or reconstituted matrix made of a mixture of all the ingredients composing the drug product except the active substance or the analyte under investigation. However, one of the main concerns that can be raised with this approach is that it may lack an important source of variability that comes from the manufacturing process. The question that remains at the end of the validation step is about the transferability of the quantitative performance from validation standards to real authentic drug product samples. In this work, this topic is investigated through three case studies. Three analytical methods were validated using the commonly used spiked placebo validation standards at several concentration levels as well as using samples coming from authentic batches (tablets and syrups). The results showed that, depending on the type of response function used as calibration curve, there were various degrees of difference in the accuracy of the results obtained with the two types of samples. Nonetheless, the use of spiked placebo validation standards was shown to mimic relatively well the quantitative behaviour of the analytical methods with authentic batch samples. Adding these authentic batch samples into the validation design may help the analyst to select and confirm the most fit-for-purpose calibration curve and thus increase the accuracy and reliability of the results generated by the method in routine application. Copyright © 2011 Elsevier B.V. All rights reserved.

  1. Design and validation of a questionnaire to assess organizational culture in French hospital wards.

    PubMed

    Saillour-Glénisson, F; Domecq, S; Kret, M; Sibe, M; Dumond, J P; Michel, P

    2016-09-17

    Although many organizational culture questionnaires have been developed, there is a lack of any validated multidimensional questionnaire assessing organizational culture at hospital ward level and adapted to the health care context. Facing the lack of an appropriate tool, a multidisciplinary team designed and validated a dimensional organizational culture questionnaire for healthcare settings to be administered at ward level. A database of organizational culture items and themes was created after extensive literature review. Items were regrouped into dimensions and subdimensions (classification validated by experts). Pre-test and face validation was conducted with 15 health care professionals. In a stratified cluster random sample of hospitals, the psychometric validation was conducted in three phases on a sample of 859 healthcare professionals from 36 multidisciplinary medicine services: 1) the exploratory phase included a description of responses' saturation levels, factor and correlation analyses, and an internal consistency analysis (Cronbach's alpha coefficient); 2) the confirmatory phase used Structural Equation Modeling (SEM); 3) reproducibility was studied by a test-retest. The overall response rate was 80%; the completion average was 97%. The metrological results were: a global Cronbach's alpha coefficient of 0.93, higher than 0.70 for 12 sub-dimensions; all Dillon-Goldstein's rho coefficients higher than 0.70; and an excellent quality of the external model with a Goodness of Fit (GoF) criterion of 0.99. Seventy percent of the items had a reproducibility ranging from moderate (intra-class correlation coefficient [ICC] between 50 and 70% for 25 items) to good (ICC higher than 70% for 33 items). The COMEt (Contexte Organisationnel et Managérial en Etablissement de Santé) questionnaire is a validated multidimensional organizational culture questionnaire made of 6 dimensions, 21 sub-dimensions and 83 items. It is the first dimensional organizational culture questionnaire

  2. Experimental studies of characteristic combustion-driven flows for CFD validation

    NASA Technical Reports Server (NTRS)

    Santoro, R. J.; Moser, M.; Anderson, W.; Pal, S.; Ryan, H.; Merkle, C. L.

    1992-01-01

    A series of rocket-related studies intended to develop a suitable data base for validation of Computational Fluid Dynamics (CFD) models of characteristic combustion-driven flows was undertaken at the Propulsion Engineering Research Center at Penn State. Included are studies of coaxial and impinging jet injectors as well as chamber wall heat transfer effects. The objective of these studies is to provide fundamental understanding and benchmark quality data for phenomena important to rocket combustion under well-characterized conditions. Diagnostic techniques utilized in these studies emphasize determinations of velocity, temperature, spray and droplet characteristics, and combustion zone distribution. Since laser diagnostic approaches are favored, the development of an optically accessible rocket chamber has been a high priority in the initial phase of the project. During the design phase for this chamber, the advice and input of the CFD modeling community were actively sought through presentations and written surveys. Based on this procedure, a suitable uni-element rocket chamber was fabricated and is presently under preliminary testing. Results of these tests, as well as the survey findings leading to the chamber design, were presented.

  3. The study designed by a committee: design of the Multisite Violence Prevention Project.

    PubMed

    Henry, David B; Farrell, Albert D

    2004-01-01

    This article describes the research design of the Multisite Violence Prevention Project (MVPP), organized and funded by the National Center for Injury Prevention and Control (NCIPC) at the Centers for Disease Control and Prevention (CDC). CDC's objectives, refined in the course of collaboration among investigators, were to evaluate the efficacy of universal and targeted interventions designed to produce change at the school level. The project's design was developed collaboratively, and is a 2 x 2 cluster-randomized true experimental design in which schools within four separate sites were assigned randomly to four conditions: (1) no-intervention control group, (2) universal intervention, (3) targeted intervention, and (4) combined universal and targeted interventions. A total of 37 schools are participating in this study with 8-12 schools per site. The impact of the interventions on two successive cohorts of sixth-grade students will be assessed based on multiple waves of data from multiple sources of information, including teachers, students, parents, and archival data. The nesting of students within teachers, families, schools and sites created a number of challenges for designing and implementing the study. The final design represents both resolution and compromise on a number of creative tensions existing in large-scale prevention trials, including tensions between cost and statistical power, and between internal and external validity. Strengths and limitations of the final design are discussed.

  4. The reliability and validity of three questionnaires: The Student Satisfaction and Self-Confidence in Learning Scale, Simulation Design Scale, and Educational Practices Questionnaire.

    PubMed

    Unver, Vesile; Basak, Tulay; Watts, Penni; Gaioso, Vanessa; Moss, Jacqueline; Tastan, Sevinc; Iyigun, Emine; Tosun, Nuran

    2017-02-01

    The purpose of this study was to adapt the "Student Satisfaction and Self-Confidence in Learning Scale" (SCLS), "Simulation Design Scale" (SDS), and "Educational Practices Questionnaire" (EPQ) developed by Jeffries and Rizzolo into Turkish and establish the reliability and the validity of these translated scales. A sample of 87 nursing students participated in this study. These scales were cross-culturally adapted through a process including translation, comparison with original version, back translation, and pretesting. Construct validity was evaluated by factor analysis, and criterion validity was evaluated using the Perceived Learning Scale, Patient Intervention Self-confidence/Competency Scale, and Educational Belief Scale. Cronbach's alpha values were found as 0.77-0.85 for SCLS, 0.73-0.86 for SDS, and 0.61-0.86 for EPQ. The results of this study show that the Turkish versions of all scales are validated and reliable measurement tools.

  5. 29 CFR 1607.7 - Use of other validity studies.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection...

  6. Screening for cognitive impairment in older individuals. Validation study of a computer-based test.

    PubMed

    Green, R C; Green, J; Harrison, J M; Kutner, M H

    1994-08-01

    This study examined the validity of a computer-based cognitive test that was recently designed to screen the elderly for cognitive impairment. Criterion-related validity was examined by comparing test scores of impaired patients and normal control subjects. Construct-related validity was computed through correlations between computer-based subtests and related conventional neuropsychological subtests. University center for memory disorders. Fifty-two patients with mild cognitive impairment by strict clinical criteria and 50 unimpaired, age- and education-matched control subjects. Control subjects were rigorously screened by neurological, neuropsychological, imaging, and electrophysiological criteria to identify and exclude individuals with occult abnormalities. Using a cut-off total score of 126, this computer-based instrument had a sensitivity of 0.83 and a specificity of 0.96. Using a prevalence estimate of 10%, predictive values, positive and negative, were 0.70 and 0.96, respectively. Computer-based subtests correlated significantly with conventional neuropsychological tests measuring similar cognitive domains. Thirteen (17.8%) of 73 volunteers with normal medical histories were excluded from the control group because of unsuspected abnormalities on standard neuropsychological tests, electroencephalograms, or magnetic resonance imaging scans. Computer-based testing is a valid screening methodology for the detection of mild cognitive impairment in the elderly, although this particular test has important limitations. Broader applications of computer-based testing will require extensive population-based validation. Future studies should recognize that normal control subjects without a history of disease who are typically used in validation studies may have a high incidence of unsuspected abnormalities on neurodiagnostic studies.
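    The positive predictive value quoted above follows directly from the stated sensitivity, specificity, and assumed prevalence via Bayes' rule; a one-line check:

    ```python
    # PPV = sens*prev / (sens*prev + (1 - spec)*(1 - prev)), using the
    # sensitivity, specificity, and 10% prevalence quoted in the abstract.
    sens, spec, prev = 0.83, 0.96, 0.10
    ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
    print(f"PPV at 10% prevalence = {ppv:.2f}")   # ~0.70, as reported
    ```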

  7. Development and validation of LC-MS/MS method for the quantification of oxcarbazepine in human plasma using an experimental design.

    PubMed

    Srinubabu, Gedela; Ratnam, Bandaru Veera Venkata; Rao, Allam Appa; Rao, Medicherla Narasimha

    2008-01-01

    A rapid tandem mass spectrometric (MS-MS) method for the quantification of oxcarbazepine (OXB) in human plasma using imipramine as an internal standard (IS) has been developed and validated. Chromatographic separation was achieved isocratically on a C18 reversed-phase column within 3.0 min, using a mobile phase of acetonitrile-10 mM ammonium formate (90:10, v/v) at a flow rate of 0.3 ml/min. Quantitation was achieved using multiple reaction monitoring (MRM) at the transitions m/z 253>208 and m/z 281>86 for OXB and the IS, respectively. Calibration curves were linear over the concentration range of 0.2-16 µg/ml (r>0.999) with a limit of quantification of 0.2 µg/ml. Analytical recoveries of OXB from spiked human plasma were in the range of 74.9 to 76.3%. A Plackett-Burman design was applied for screening of chromatographic and mass spectrometric factors; a factorial design was applied for optimization of the essential factors in the robustness study. A linear model was postulated and a 2^3 full factorial design was employed to estimate the model coefficients for intermediate precision. More specifically, experimental design helps the researcher to verify whether changes in factor values produce a statistically significant variation of the observed response. The strategy is most effective if statistical design is used in most or all stages of the screening and optimizing process for future method validation of pharmacokinetic and bioequivalence studies.
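
    A 2^3 full factorial design such as the one used here for intermediate precision enumerates all eight combinations of three factors at two levels and fits a first-order model to estimate each factor's effect. The Python sketch below is generic: the factor names, levels, and responses are hypothetical, not the values used by the authors.

        import itertools
        import numpy as np

        # Hypothetical chromatographic factors, each varied between a low and a high level
        # and expressed in coded units (-1 / +1).
        factor_names = ["flow_ml_min", "formate_mM", "acetonitrile_pct"]

        # 2^3 full factorial design matrix: all 8 combinations of the coded levels.
        coded = np.array(list(itertools.product((-1, 1), repeat=3)), dtype=float)

        # Illustrative responses (e.g., analytical recovery, %) for the 8 runs.
        rng = np.random.default_rng(1)
        y = 75.5 + 0.4 * coded[:, 0] - 0.2 * coded[:, 1] + rng.normal(scale=0.3, size=8)

        # Fit y = b0 + b1*x1 + b2*x2 + b3*x3 by least squares to estimate the coefficients.
        X = np.column_stack([np.ones(8), coded])
        coef, *_ = np.linalg.lstsq(X, y, rcond=None)
        print(dict(zip(["intercept"] + factor_names, np.round(coef, 3))))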

  8. Flutter suppression for the Active Flexible Wing - Control system design and experimental validation

    NASA Technical Reports Server (NTRS)

    Waszak, M. R.; Srinathkumar, S.

    1992-01-01

    The synthesis and experimental validation of a control law for an active flutter suppression system for the Active Flexible Wing wind-tunnel model is presented. The design was accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and with extensive use of simulation-based analysis. The design approach relied on a fundamental understanding of the flutter mechanism to formulate a simple control law structure. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite errors in the design model. The flutter suppression controller was also successfully operated in combination with a rolling maneuver controller to perform flutter suppression during rapid rolling maneuvers.

  9. Ares I-X Flight Test Validation of Control Design Tools in the Frequency-Domain

    NASA Technical Reports Server (NTRS)

    Johnson, Matthew; Hannan, Mike; Brandon, Jay; Derry, Stephen

    2011-01-01

    A major motivation of the Ares I-X flight test program was to "Design for Data," in order to maximize the usefulness of the data recorded in support of Ares I modeling and validation of design and analysis tools. The Design for Data effort was intended to enable good post-flight characterization of the flight control system, the vehicle structural dynamics, and the aerodynamic characteristics of the vehicle. To extract the necessary data from the system during flight, a set of small predetermined Programmed Test Inputs (PTIs) was injected directly into the TVC signal. These PTIs were designed to excite the necessary vehicle dynamics while exhibiting a minimal impact on loads. The method is similar to common approaches in aircraft flight test programs, but with unique launch vehicle challenges due to rapidly changing states, short duration of flight, a tight flight envelope, and an inability to repeat any test. This paper documents the validation of the stability analysis tools against the flight data, performed by comparing the post-flight calculated frequency response of the vehicle with the frequency response calculated by the stability analysis tools used to design and analyze the preflight models during the control design effort. The comparison between the flight-day frequency response and the stability tool analysis of the simulated vehicle shows good agreement and provides a high level of confidence in the stability analysis tools for use in any future program. This is true for a nominal model as well as for the dispersed analysis, which shows that the flight-day frequency response is enveloped by the vehicle's preflight uncertainty models.
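
    As a rough illustration of comparing a design model's frequency response against flight-derived dynamics, the Python sketch below contrasts a nominal second-order transfer function with a slightly perturbed one standing in for the flight data; the models and numbers are invented and are not Ares I-X values.

        import numpy as np
        from scipy import signal

        # Nominal design model: a lightly damped second-order mode at 0.5 Hz (hypothetical).
        wn, zeta = 2.0 * np.pi * 0.5, 0.05
        nominal = signal.TransferFunction([wn**2], [1.0, 2 * zeta * wn, wn**2])

        # "Flight-derived" stand-in: natural frequency and damping shifted by modeling error.
        wf, zf = 1.05 * wn, 0.07
        flightlike = signal.TransferFunction([wf**2], [1.0, 2 * zf * wf, wf**2])

        w = np.logspace(-1, 1.5, 400)                  # frequency grid, rad/s
        _, mag_nom, ph_nom = signal.bode(nominal, w)
        _, mag_flt, ph_flt = signal.bode(flightlike, w)

        # Worst-case gain and phase mismatch across the band, a crude agreement metric.
        print(f"max gain error : {np.max(np.abs(mag_nom - mag_flt)):.1f} dB")
        print(f"max phase error: {np.max(np.abs(ph_nom - ph_flt)):.1f} deg")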

  10. Design and validation of a Cannabis Use Intention Questionnaire (CUIQ) for adolescents.

    PubMed

    Lloret Irles, Daniel; Morell-Gomis, Ramón; Laguía, Ana; Moriano, Juan A

    2018-01-01

    In Spain, one in four 14- to 18-year-old adolescents has used cannabis during the last twelve months. Demand for treatment has increased in European countries. These facts have prompted the development of preventive interventions that require screening tools in order to identify the vulnerable population and to properly assess the efficacy of such interventions. The Theory of Planned Behaviour (TPB), widely used to forecast behavioural intention, has also demonstrated good predictive capacity in addictions. The aim of this study is to design and validate a Cannabis Use Intention Questionnaire (CUIQ) based on the TPB. 1,011 teenagers answered a set of tests to assess attitude towards use, subjective norms, self-efficacy towards non-use, and intention to use cannabis. The CUIQ had good psychometric properties. Structural equation modelling results confirm the predictive model of intention to use cannabis in the Spanish adolescent sample, classified as users and non-users, explaining 40% of the variance of intention to consume. The CUIQ is aimed at providing a better understanding of the psychological processes that lead to cannabis use and at allowing the evaluation of programmes. This can be particularly useful for improving the design and implementation of selective prevention programmes.

  11. Validation of holistic nursing competencies: role-delineation study, 2012.

    PubMed

    Erickson, Helen Lorraine; Erickson, Margaret Elizabeth; Campbell, Joan A; Brekke, Mary E; Sandor, M Kay

    2013-12-01

    The American Holistic Nurses Credentialing Corporation (AHNCC), certifying body for nurses practicing within the precepts of holistic nursing, uses a systematic process to guide program development. A previous publication described their early work that distinguished basic and advanced holistic nursing and development of related examinations. A more recent publication described the work of AHNCC from 2004 to 2012, including a role-delineation study (RDS) that was undertaken to identify and validate competencies currently used by holistic nurses. A final report describes the RDS design, methods, and raw data information. This article discusses AHNCC's goals for undertaking the 2012 Holistic Nursing RDS and the implications for the certification programs.

  12. Design and analysis issues in quantitative proteomics studies.

    PubMed

    Karp, Natasha A; Lilley, Kathryn S

    2007-09-01

    Quantitative proteomics is the comparison of distinct proteomes, which enables the identification of protein species that exhibit changes in expression or post-translational state in response to a given stimulus. Many different quantitative techniques are being utilized and generate large datasets. Independent of the technique used, these large datasets need robust data analysis to ensure that valid conclusions are drawn from such studies. Approaches to address the problems that arise with large datasets are discussed to give insight into the types of statistical analyses appropriate for the various experimental strategies that can be employed in quantitative proteomic studies. This review also highlights the importance of a robust experimental design and discusses various issues surrounding the design of experiments. The concepts and examples discussed within show how robust design and analysis lead to confident results and ensure that quantitative proteomics delivers valid conclusions.

  13. Design of a Competency Evaluation Model for Clinical Nursing Practicum, Based on Standardized Language Systems: Psychometric Validation Study.

    PubMed

    Iglesias-Parra, Maria Rosa; García-Guerrero, Alfonso; García-Mayor, Silvia; Kaknani-Uttumchandani, Shakira; León-Campos, Álvaro; Morales-Asencio, José Miguel

    2015-07-01

    To develop an evaluation system of clinical competencies for the practicum of nursing students based on the Nursing Interventions Classification (NIC). Psychometric validation study: the first two phases addressed definition and content validation, and the third phase consisted of a cross-sectional study for analyzing reliability. The study population comprised undergraduate nursing students and clinical tutors. Through the Delphi technique, 26 competencies and 91 interventions were isolated. Cronbach's α was 0.96. Factor analysis yielded 18 factors that explained 68.82% of the variance. The overall inter-item correlation was 0.26, and item-total correlations ranged between 0.19 and 0.66. A competency system for the nursing practicum, structured on the NIC, is a reliable method for assessing and evaluating clinical competencies. Further evaluations in other contexts are needed. The availability of standardized language systems in the nursing discipline provides an ideal framework for developing nursing curricula. © 2015 Sigma Theta Tau International.

  14. Characterizing problematic hypoglycaemia: iterative design and preliminary psychometric validation of the Hypoglycaemia Awareness Questionnaire (HypoA-Q).

    PubMed

    Speight, J; Barendse, S M; Singh, H; Little, S A; Inkster, B; Frier, B M; Heller, S R; Rutter, M K; Shaw, J A M

    2016-03-01

    To design and conduct preliminary validation of a measure of hypoglycaemia awareness and problematic hypoglycaemia, the Hypoglycaemia Awareness Questionnaire. Exploratory and cognitive debriefing interviews were conducted with 17 adults (nine of whom were women) with Type 1 diabetes (mean ± sd age 48 ± 10 years). Questionnaire items were modified in consultation with diabetologists/psychologists. Psychometric validation was undertaken using data from 120 adults (53 women) with Type 1 diabetes (mean ± sd age 44 ± 16 years; 50% with clinically diagnosed impaired awareness of hypoglycaemia), who completed the following questionnaires: the Hypoglycaemia Awareness Questionnaire, the Gold score, the Clarke questionnaire and the Problem Areas in Diabetes questionnaire. Iterative design resulted in 33 items eliciting responses about awareness of hypoglycaemia when awake/asleep and hypoglycaemia frequency, severity and impact (healthcare utilization). Psychometric analysis identified three subscales reflecting 'impaired awareness', 'symptom level' and 'symptom frequency'. Convergent validity was indicated by strong correlations between the 'impaired awareness' subscale and existing measures of awareness: (Gold: rs =0.75, P < 0.01; Clarke: rs =0.76, P < 0.01). Divergent validity was indicated by weaker correlations with diabetes-related distress (Problem Areas in Diabetes: rs =0.25, P < 0.01) and HbA1c (rs =-0.05, non-significant). The 'impaired awareness' subscale and other items discriminated between those with impaired and intact awareness (Gold score). The 'impaired awareness' subscale and other items contributed significantly to models explaining the occurrence of severe hypoglycaemia and hypoglycaemia when asleep. This preliminary validation shows the Hypoglycaemia Awareness Questionnaire has robust face and content validity; satisfactory structure; internal reliability; convergent, divergent and known groups validity. The impaired awareness subscale and other

  15. Compulsive sexual behavior inventory: a preliminary study of reliability and validity.

    PubMed

    Coleman, E; Miner, M; Ohlerking, F; Raymond, N

    2001-01-01

    This preliminary study was designed to develop empirically a scale of compulsive sexual behavior (CSB) and to test its reliability and validity in a sample of individuals with nonparaphilic CSB (N = 15), in a sample of pedophiles (N = 35) in treatment for sexual offending, and in a sample of normal controls (N = 42). Following a factor analysis and a varimax rotation, those items with factor loadings on the rotated factors of greater than .60 were retained. Three factors were identified, which appeared to measure control, abuse, and violence. Cronbach's alphas indicated that the subscales have good reliability. The 28-item scale was then tested for validity by a linear discriminant function analysis. The scale successfully discriminated the nonparaphilic CSB sample and the pedophiles from controls. Further analysis indicated that this scale is a valid measure of CSB in that there were significant differences between the three groups on the control subscale. Pedophiles scored significantly lower than the other two groups on the abuse subscale, with the other two groups not scoring significantly differently from one another. This indicated that pedophiles were more abusive than the nonparaphilic CSB individuals or the controls. Pedophiles scored significantly lower than controls on the violence subscale. Nonparaphilic individuals with compulsive sexual behavior also scored slightly lower on the violence subscale, although the difference was not significant. As a preliminary investigation, this study has several limitations, which should be addressed in further studies with larger sample sizes.
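
    The item-retention step described above (factor extraction with varimax rotation, keeping items whose rotated loadings exceed .60) can be sketched in Python as follows; the simulated responses, item counts, and loading pattern are invented and are not the inventory's data.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        rng = np.random.default_rng(3)
        n, n_items = 92, 12
        latent = rng.normal(size=(n, 3))               # three underlying factors

        # True loading pattern: items 0-9 load strongly on one factor each; items 10-11 only weakly.
        true_loadings = np.zeros((n_items, 3))
        for j in range(10):
            true_loadings[j, j % 3] = 0.8
        true_loadings[10, 0] = true_loadings[11, 1] = 0.3
        X = latent @ true_loadings.T + rng.normal(scale=0.5, size=(n, n_items))

        # Extract three factors with varimax rotation and retain the high-loading items.
        fa = FactorAnalysis(n_components=3, rotation="varimax").fit(X)
        loadings = fa.components_.T                    # shape: (items, factors)
        retained = np.flatnonzero(np.abs(loadings).max(axis=1) > 0.60)
        print("retained items:", retained)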

  16. A study for active control research and validation using the Total In-Flight Simulator (TIFS) aircraft

    NASA Technical Reports Server (NTRS)

    Chen, R. T. N.; Daughaday, H.; Andrisani, D., II; Till, R. D.; Weingarten, N. C.

    1975-01-01

    The results of a feasibility study and preliminary design for active control research and validation using the Total In-Flight Simulator (TIFS) aircraft are documented. Active control functions which can be demonstrated on the TIFS aircraft and the cost of preparing, equipping, and operating the TIFS aircraft for active control technology development are determined. It is shown that the TIFS aircraft is a suitable test bed for in-flight research and validation of many ACT concepts.

  17. The Chinese version of the Outcome Expectations for Exercise scale: validation study.

    PubMed

    Lee, Ling-Ling; Chiu, Yu-Yun; Ho, Chin-Chih; Wu, Shu-Chen; Watson, Roger

    2011-06-01

    Estimates of the reliability and validity of the English nine-item Outcome Expectations for Exercise (OEE) scale have been tested and found to be valid for use in various settings, particularly among older people, with good internal consistency and validity. Data on the use of the OEE scale among older Chinese people living in the community and how cultural differences might affect the administration of the OEE scale are limited. To test the validity and reliability of the Chinese version of the Outcome Expectations for Exercise scale among older people. A cross-sectional validation study was designed to test the Chinese version of the OEE scale (OEE-C). Reliability was examined by testing both the internal consistency for the overall scale and the squared multiple correlation coefficient for the single item measure. The validity of the scale was tested on the basis of both a traditional psychometric test and a confirmatory factor analysis using structural equation modelling. The Mokken Scaling Procedure (MSP) was used to investigate if there were any hierarchical, cumulative sets of items in the measure. The OEE-C scale was tested in a group of older people in Taiwan (n=108, mean age=77.1). There was acceptable internal consistency (alpha=.85) and model fit in the scale. Evidence of the validity of the measure was demonstrated by the tests for criterion-related validity and construct validity. There was a statistically significant correlation between exercise outcome expectations and exercise self-efficacy (r=.34, p<.01). An analysis of the Mokken Scaling Procedure found that nine items of the scale were all retained in the analysis and the resulting scale was reliable and statistically significant (p=.0008). The results obtained in the present study provided acceptable levels of reliability and validity evidence for the Chinese Outcome Expectations for Exercise scale when used with older people in Taiwan. Future testing of the OEE-C scale needs to be carried out

  18. Copenhagen Psychosocial Questionnaire - A validation study using the Job Demand-Resources model.

    PubMed

    Berthelsen, Hanne; Hakanen, Jari J; Westerlund, Hugo

    2018-01-01

    This study aims at investigating the nomological validity of the Copenhagen Psychosocial Questionnaire (COPSOQ II) by using an extension of the Job Demands-Resources (JD-R) model with aspects of work ability as the outcome. The study design is cross-sectional. All staff working at public dental organizations in four regions of Sweden were invited to complete an electronic questionnaire (75% response rate, n = 1345). The questionnaire was based on COPSOQ II scales, the Utrecht Work Engagement scale, and the one-item Work Ability Score in combination with a proprietary item. The data were analysed by structural equation modelling. This study contributed to the literature by showing that: A) the scale characteristics were satisfactory and the construct validity of the COPSOQ instrument could be integrated in the JD-R framework; B) job resources arising from leadership may be a driver of the two processes included in the JD-R model; and C) both the health impairment and motivational processes were associated with work ability (WA), and the results suggested that leadership may impact WA, in particular by securing task resources. In conclusion, the nomological validity of the COPSOQ was supported, as the JD-R model can be operationalized by the instrument. This may be helpful for the transferral of complex survey results and work-life theories to practitioners in the field.

  19. Further Validation of the Coach Identity Prominence Scale

    ERIC Educational Resources Information Center

    Pope, J. Paige; Hall, Craig R.

    2014-01-01

    This study was designed to examine select psychometric properties of the Coach Identity Prominence Scale (CIPS), including the reliability, factorial validity, convergent validity, discriminant validity, and predictive validity. Coaches (N = 338) who averaged 37 (SD = 12.27) years of age, had a mean of 13 (SD = 9.90) years of coaching experience,…

  20. Paired split-plot designs of multireader multicase studies.

    PubMed

    Chen, Weijie; Gong, Qi; Gallas, Brandon D

    2018-07-01

    The widely used multireader multicase ROC study design for comparing imaging modalities is the fully crossed (FC) design: every reader reads every case of both modalities. We investigate paired split-plot (PSP) designs that may allow for reduced cost and increased flexibility compared with the FC design. In the PSP design, case images from two modalities are read by the same readers, so the readings are paired across modalities. However, within each modality, not every reader reads every case. Instead, both the readers and the cases are partitioned into a fixed number of groups and each group of readers reads its own group of cases, i.e., a split-plot design. Using a U-statistic-based variance analysis for AUC (i.e., the area under the ROC curve), we show analytically that precision can be gained by the PSP design compared with the FC design with the same number of readers and readings. Equivalently, we show that the PSP design can achieve the same statistical power as the FC design with a reduced number of readings. The trade-off for the increased precision in the PSP design is the cost of collecting a larger number of truth-verified patient cases than the FC design. This means that one can trade off between different sources of cost and choose the least burdensome design. We provide a validation study to show that the iMRMC software can be reliably used for analyzing data from both FC and PSP designs. Finally, we demonstrate the advantages of the PSP design with a reader study comparing full-field digital mammography with screen-film mammography.
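
    A minimal sketch of the bookkeeping that distinguishes the two designs, using hypothetical reader and case counts rather than the study's, shows how the number of readings drops when readers and cases are partitioned into groups:

        import numpy as np

        n_readers, n_cases, n_groups = 12, 200, 4      # hypothetical study dimensions

        # Fully crossed (FC): every reader reads every case in both modalities.
        fc_readings = n_readers * n_cases * 2

        # Paired split-plot (PSP): readers and cases are split into groups; each reader
        # group reads only its own case group, still paired across the two modalities.
        reader_groups = np.array_split(np.arange(n_readers), n_groups)
        case_groups = np.array_split(np.arange(n_cases), n_groups)
        psp_readings = sum(len(r) * len(c) * 2 for r, c in zip(reader_groups, case_groups))

        print("FC readings: ", fc_readings)            # 4800
        print("PSP readings:", psp_readings)           # 1200 with 4 equal groups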

  1. Verification and Validation of Requirements on the CEV Parachute Assembly System Using Design of Experiments

    NASA Technical Reports Server (NTRS)

    Schulte, Peter Z.; Moore, James W.

    2011-01-01

    The Crew Exploration Vehicle Parachute Assembly System (CPAS) project conducts computer simulations to verify that flight performance requirements on parachute loads and terminal rate of descent are met. Design of Experiments (DoE) provides a systematic method for variation of simulation input parameters. When implemented and interpreted correctly, a DoE study of parachute simulation tools indicates values and combinations of parameters that may cause requirement limits to be violated. This paper describes one implementation of DoE that is currently being developed by CPAS, explains how DoE results can be interpreted, and presents the results of several preliminary studies. The potential uses of DoE to validate parachute simulation models and verify requirements are also explored.

  2. Propeller aircraft interior noise model utilization study and validation

    NASA Technical Reports Server (NTRS)

    Pope, L. D.

    1984-01-01

    Utilization and validation of a computer program designed for aircraft interior noise prediction is considered. The program, entitled PAIN (an acronym for Propeller Aircraft Interior Noise), permits (in theory) predictions of sound levels inside propeller driven aircraft arising from sidewall transmission. The objective of the work reported was to determine the practicality of making predictions for various airplanes and the extent of the program's capabilities. The ultimate purpose was to discern the quality of predictions for tonal levels inside an aircraft occurring at the propeller blade passage frequency and its harmonics. The effort involved three tasks: (1) program validation through comparisons of predictions with scale-model test results; (2) development of utilization schemes for large (full scale) fuselages; and (3) validation through comparisons of predictions with measurements taken in flight tests on a turboprop aircraft. Findings should enable future users of the program to efficiently undertake and correctly interpret predictions.

  3. Design, Development and Validation of a Model of Problem Solving for Egyptian Science Classes

    ERIC Educational Resources Information Center

    Shahat, Mohamed A.; Ohle, Annika; Treagust, David F.; Fischer, Hans E.

    2013-01-01

    Educators and policymakers envision the future of education in Egypt as enabling learners to acquire scientific inquiry and problem-solving skills. In this article, we describe the validation of a model for problem solving and the design of instruments for evaluating new teaching methods in Egyptian science classes. The instruments were based on…

  4. Explanatory Versus Pragmatic Trials: An Essential Concept in Study Design and Interpretation.

    PubMed

    Merali, Zamir; Wilson, Jefferson R

    2017-11-01

    Randomized clinical trials often represent the highest level of clinical evidence available to evaluate the efficacy of an intervention in clinical medicine. Although the process of randomization serves to maximize internal validity, the external validity, or generalizability, of such studies depends on several factors determined at the design phase of the trial, including eligibility criteria, study setting, and outcomes of interest. In general, explanatory trials are optimized to demonstrate the efficacy of an intervention in a highly selected patient group; however, findings from these studies may not be generalizable to the larger clinical problem. In contrast, pragmatic trials attempt to understand the real-world benefit of an intervention by incorporating design elements that allow for greater generalizability and clinical applicability of study results. In this article we describe the explanatory-pragmatic continuum for clinical trials in greater detail. Further, a well-accepted tool for grading trials on this continuum is described and applied to 2 recently published trials pertaining to the surgical management of lumbar degenerative spondylolisthesis.

  5. An exploration into study design for biomarker identification: issues and recommendations.

    PubMed

    Hall, Jacqueline A; Brown, Robert; Paul, Jim

    2007-01-01

    Genomic profiling produces large amounts of data and a challenge remains in identifying relevant biological processes associated with clinical outcome. Many candidate biomarkers have been identified but few have been successfully validated and make an impact clinically. This review focuses on some of the study design issues encountered in data mining for biomarker identification with illustrations of how study design may influence the final results. This includes issues of clinical endpoint use and selection, power, statistical, biological and clinical significance. We give particular attention to study design for the application of supervised clustering methods for identification of gene networks associated with clinical outcome and provide recommendations for future work to increase the success of identification of clinically relevant biomarkers.

  6. Preliminary Axial Flow Turbine Design and Off-Design Performance Analysis Methods for Rotary Wing Aircraft Engines. Part 1; Validation

    NASA Technical Reports Server (NTRS)

    Chen, Shu-cheng, S.

    2009-01-01

    For the preliminary design and the off-design performance analysis of axial flow turbines, a pair of intermediate level-of-fidelity computer codes, TD2-2 (design; reference 1) and AXOD (off-design; reference 2), are being evaluated for use in turbine design and performance prediction of modern high-performance aircraft engines. TD2-2 employs a streamline curvature method for design, while AXOD approaches the flow analysis with an equal radius-height domain decomposition strategy. Both methods resolve only the flows in the annulus region while modeling the impact introduced by the blade rows. The mathematical formulations and derivations involved in both methods are documented in references 3 and 4 (for TD2-2) and in reference 5 (for AXOD). The focus of this paper is to discuss the fundamental issues of applicability and compatibility of the two codes as a pair of companion pieces, to perform preliminary design and off-design analysis for modern aircraft engine turbines. Two validation cases for design and off-design prediction using TD2-2 and AXOD, conducted on two existing high-efficiency turbines developed and tested in the NASA/GE Energy Efficient Engine (GE-E3) Program, the High Pressure Turbine (HPT; two stages, air cooled) and the Low Pressure Turbine (LPT; five stages, uncooled), are provided in support of the analysis and discussion presented in this paper.

  7. Parent Reports of Young Spanish-English Bilingual Children's Productive Vocabulary: A Development and Validation Study.

    PubMed

    Mancilla-Martinez, Jeannette; Gámez, Perla B; Vagh, Shaher Banu; Lesaux, Nonie K

    2016-01-01

    This 2-phase study aims to extend research on parent report measures of children's productive vocabulary by investigating the development (n = 38) of the Spanish Vocabulary Extension and validity (n = 194) of the 100-item Spanish and English MacArthur-Bates Communicative Development Inventories Toddler Short Forms and Upward Extension (Fenson et al., 2000, 2007; Jackson-Maldonado, Marchman, & Fernald, 2013) and the Spanish Vocabulary Extension for use with parents from low-income homes and their 24- to 48-month-old Spanish-English bilingual children. Study participants were drawn from Early Head Start and Head Start collaborative programs in the Northeastern United States in which English was the primary language used in the classroom. All families reported Spanish or Spanish-English as their home language(s). The MacArthur Communicative Development Inventories as well as the researcher-designed Spanish Vocabulary Extension were used as measures of children's English and Spanish productive vocabularies. Findings revealed the forms' concurrent and discriminant validity, on the basis of standardized measures of vocabulary, as measures of productive vocabulary for this growing bilingual population. These findings suggest that parent reports, including our researcher-designed form, represent a valid, cost-effective mechanism for vocabulary monitoring purposes in early childhood education settings.

  8. A critical analysis of test-retest reliability in instrument validation studies of cancer patients under palliative care: a systematic review

    PubMed Central

    2014-01-01

    Background Patient-reported outcome validation needs to achieve validity and reliability standards. Among reliability analysis parameters, test-retest reliability is an important psychometric property. Retested patients must be in a clinically stable condition. This is particularly problematic in palliative care (PC) settings because advanced cancer patients are prone to a faster rate of clinical deterioration. The aim of this study was to evaluate the methods by which multi-symptom and health-related quality of life (HRQoL) patient-reported outcomes (PROs) have been validated in oncological PC settings with regard to test-retest reliability. Methods A systematic search of PubMed (1966 to June 2013), EMBASE (1980 to June 2013), PsychInfo (1806 to June 2013), CINAHL (1980 to June 2013), and SCIELO (1998 to June 2013), and specific PRO databases was performed. Studies were included if they described a set of validation studies for an instrument developed to measure multi-symptom or multidimensional HRQoL in advanced cancer patients under PC. The COSMIN checklist was used to rate the methodological quality of the study designs. Results We identified 89 validation studies from 746 potentially relevant articles. From those 89 articles, 31 measured test-retest reliability and were included in this review. Upon critical analysis of the overall quality of the criteria used to determine the test-retest reliability, 6 (19.4%), 17 (54.8%), and 8 (25.8%) of these articles were rated as good, fair, or poor, respectively, and no article was classified as excellent. Multi-symptom instruments were retested over a shortened interval when compared to the HRQoL instruments (median values 24 hours and 168 hours, respectively; p = 0.001). Validation studies that included objective confirmation of clinical stability in their design yielded better results for the test-retest analysis with regard to both

  9. Development of a Valid and Reliable Knee Articular Cartilage Condition-Specific Study Methodological Quality Score.

    PubMed

    Harris, Joshua D; Erickson, Brandon J; Cvetanovich, Gregory L; Abrams, Geoffrey D; McCormick, Frank M; Gupta, Anil K; Verma, Nikhil N; Bach, Bernard R; Cole, Brian J

    2014-02-01

    Condition-specific questionnaires are important components in evaluation of outcomes of surgical interventions. No condition-specific study methodological quality questionnaire exists for evaluation of outcomes of articular cartilage surgery in the knee. To develop a reliable and valid knee articular cartilage-specific study methodological quality questionnaire. Cross-sectional study. A stepwise, a priori-designed framework was created for development of a novel questionnaire. Items relevant to the topic were identified and extracted from a recent systematic review of 194 investigations of knee articular cartilage surgery. In addition, relevant items from existing generic study methodological quality questionnaires were identified. Items for a preliminary questionnaire were generated. Redundant and irrelevant items were eliminated, and acceptable items modified. The instrument was pretested and the items were weighted. The instrument, the MARK score (Methodological quality of ARticular cartilage studies of the Knee), was tested for validity (criterion validity) and reliability (inter- and intraobserver). A 19-item, 3-domain MARK score was developed. The 100-point scale score demonstrated face validity (focus group of 8 orthopaedic surgeons) and criterion validity (strong correlation to the Cochrane Quality Assessment score and the Modified Coleman Methodology Score). Interobserver reliability for the overall score was good (intraclass correlation coefficient [ICC], 0.842), and for all individual items of the MARK score, acceptable to perfect (ICC, 0.70-1.000). Intraobserver reliability ICC assessed over a 3-week interval was strong for 2 reviewers (≥0.90). The MARK score is a valid and reliable knee articular cartilage condition-specific study methodological quality instrument. This condition-specific questionnaire may be used to evaluate the quality of studies reporting outcomes of articular cartilage surgery in the knee.

  10. Critical validation studies of neurofeedback.

    PubMed

    Gruzelier, John; Egner, Tobias

    2005-01-01

    The field of neurofeedback training has proceeded largely without validation. In this article the authors review studies directed at validating sensory motor rhythm, beta and alpha-theta protocols for improving attention, memory, and music performance in healthy participants. Importantly, benefits were demonstrable with cognitive and neurophysiologic measures that were predicted on the basis of regression models of learning to enhance sensory motor rhythm and beta activity. The first evidence of operant control over the alpha-theta ratio is provided, together with remarkable improvements in artistic aspects of music performance equivalent to two class grades in conservatory students. These are initial steps in providing a much needed scientific basis to neurofeedback.

  11. [Design and validation of the CSR-Hospital-SP scale to measure corporate social responsibility].

    PubMed

    Mira, José Joaquín; Lorenzo, Susana; Navarro, Isabel; Pérez-Jover, Virtudes; Vitaller, Julián

    2013-01-01

    To design and validate a scale (CSR-Hospital-SP) to determine health professionals' views on the approach of management to corporate social responsibility (CSR) in their hospital. The literature was reviewed to identify the main CSR scales and select the dimensions to be evaluated. The initial version of the scale consisted of 25 items. A convenience sample of at least 224 health professionals working in five public hospitals in five autonomous regions was invited to respond. Floor and ceiling effects, internal consistency, reliability, and construct validity were analyzed. A total of 233 health professionals responded. The CSR-Hospital-SP scale had 20 items grouped into four factors. The item-total correlation was higher than 0.30; all factor loadings were greater than 0.50; 59.57% of the variance was explained; Cronbach's alpha was 0.90; Spearman-Brown's coefficient was 0.82. The CSR-Hospital-SP scale is a tool designed for hospitals that implement accountability mechanisms and promote socially responsible management approaches. Copyright © 2012 SESPAS. Published by Elsevier Espana. All rights reserved.

  12. Towards practical application of sensors for monitoring animal health; design and validation of a model to detect ketosis.

    PubMed

    Steensels, Machteld; Maltz, Ephraim; Bahr, Claudia; Berckmans, Daniel; Antler, Aharon; Halachmi, Ilan

    2017-05-01

    The objective of this study was to design and validate a mathematical model to detect post-calving ketosis. The validation was conducted in four commercial dairy farms in Israel, on a total of 706 multiparous Holstein dairy cows: 203 cows clinically diagnosed with ketosis and 503 healthy cows. A logistic binary regression model was developed, where the dependent variable is categorical (healthy/diseased) and a set of explanatory variables were measured with existing commercial sensors: rumination duration, activity and milk yield of each individual cow. In a first validation step (within-farm), the model was calibrated on the database of each farm separately. Two thirds of the sick cows and an equal number of healthy cows were randomly selected for model validation. The remaining one third of the cows, which did not participate in the model validation, were used for model calibration. In order to overcome the random selection effect, this procedure was repeated 100 times. In a second (between-farms) validation step, the model was calibrated on one farm and validated on another farm. Within-farm accuracy, ranging from 74 to 79%, was higher than between-farm accuracy, ranging from 49 to 72%, in all farms. The within-farm sensitivities ranged from 78 to 90%, and specificities ranged from 71 to 74%. The between-farms sensitivities ranged from 65 to 95%. The developed model can be improved in future research, by employing other variables that can be added; or by exploring other models to achieve greater sensitivity and specificity.
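
    The modelling step described above, a binary logistic regression on sensor variables with separate within-farm and between-farm validation, can be sketched in Python as follows; the data are synthetic and the variable ranges are invented rather than taken from the study's sensors.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import accuracy_score

        rng = np.random.default_rng(0)

        def make_farm(n_per_class, shift=0.0):
            """Synthetic cows: columns are rumination (min/d), activity, and milk yield (kg/d)."""
            healthy = rng.normal([480.0, 300.0, 38.0], [60.0, 40.0, 5.0], size=(n_per_class, 3))
            ketotic = rng.normal([380.0, 240.0, 30.0], [60.0, 40.0, 5.0], size=(n_per_class, 3))
            X = np.vstack([healthy, ketotic]) + shift
            y = np.r_[np.zeros(n_per_class), np.ones(n_per_class)]
            return X, y

        X_a, y_a = make_farm(250)                # "farm A"
        X_b, y_b = make_farm(250, shift=15.0)    # "farm B": slightly shifted baseline behaviour

        # Within-farm validation: calibrate and validate on disjoint subsets of farm A.
        model = LogisticRegression(max_iter=1000).fit(X_a[::2], y_a[::2])
        within = accuracy_score(y_a[1::2], model.predict(X_a[1::2]))

        # Between-farm validation: calibrate on farm A, validate on farm B.
        between = accuracy_score(y_b, model.predict(X_b))
        print(f"within-farm accuracy:  {within:.2f}")
        print(f"between-farm accuracy: {between:.2f}")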

  13. Design and Experimental Validation of a USBL Underwater Acoustic Positioning System.

    PubMed

    Reis, Joel; Morgado, Marco; Batista, Pedro; Oliveira, Paulo; Silvestre, Carlos

    2016-09-14

    This paper presents the steps for developing a low-cost POrtable Navigation Tool for Underwater Scenarios (PONTUS) to be used as a localization device for subsea targets. PONTUS consists of an integrated ultra-short baseline acoustic positioning system aided by an inertial navigation system. Built on a practical design, it can be mounted on an underwater robotic vehicle or be operated by a scuba diver. It also features a graphical user interface that provides information on the tracking of the designated target, in addition to some details on the physical properties inside PONTUS. A full disclosure of the architecture of the tool is first presented, followed by thorough technical descriptions of the hardware components ensemble and the software development process. A series of experiments was carried out to validate the developed prototype, and the results are presented herein, which allow assessing its overall performance.

  14. Design and Experimental Validation of a USBL Underwater Acoustic Positioning System

    PubMed Central

    Reis, Joel; Morgado, Marco; Batista, Pedro; Oliveira, Paulo; Silvestre, Carlos

    2016-01-01

    This paper presents the steps for developing a low-cost POrtable Navigation Tool for Underwater Scenarios (PONTUS) to be used as a localization device for subsea targets. PONTUS consists of an integrated ultra-short baseline acoustic positioning system aided by an inertial navigation system. Built on a practical design, it can be mounted on an underwater robotic vehicle or be operated by a scuba diver. It also features a graphical user interface that provides information on the tracking of the designated target, in addition to some details on the physical properties inside PONTUS. A full disclosure of the architecture of the tool is first presented, followed by thorough technical descriptions of the hardware components ensemble and the software development process. A series of experiments was carried out to validate the developed prototype, and the results are presented herein, which allow assessing its overall performance. PMID:27649181

  15. Design and experimental validation of a flutter suppression controller for the active flexible wing

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Srinathkumar, S.

    1992-01-01

    The synthesis and experimental validation of an active flutter suppression controller for the Active Flexible Wing wind tunnel model is presented. The design is accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and extensive simulation based analysis. The design approach uses a fundamental understanding of the flutter mechanism to formulate a simple controller structure to meet stringent design specifications. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite modeling errors in predicted flutter dynamic pressure and flutter frequency. The flutter suppression controller was also successfully operated in combination with another controller to perform flutter suppression during rapid rolling maneuvers.

  16. Assessing reliability and validity measures in managed care studies.

    PubMed

    Montoya, Isaac D

    2003-01-01

    To review the reliability and validity literature and develop an understanding of these concepts as applied to managed care studies. Reliability is a test of how well an instrument measures the same input at varying times and under varying conditions. Validity is a test of how accurately an instrument measures what one believes is being measured. A review of reliability and validity instructional material was conducted. Studies of managed care practices and programs abound. However, many of these studies utilize measurement instruments that were developed for other purposes or for a population other than the one being sampled. In other cases, instruments have been developed without any testing of the instrument's performance. The lack of reliability and validity information may limit the value of these studies. This is particularly true when data are collected for one purpose and used for another. The usefulness of certain studies without reliability and validity measures is questionable, especially in cases where the literature contradicts itself

  17. 40 CFR 761.392 - Preparing validation study samples.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... establish a surface concentration to be included in the standard operating procedure. The surface levels of... Under § 761.79(d)(4) § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to... surfaces must be ≥20 µg/100 cm2. (2) To validate a procedure to decontaminate a specified surface...

  18. 40 CFR 761.392 - Preparing validation study samples.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... establish a surface concentration to be included in the standard operating procedure. The surface levels of... Under § 761.79(d)(4) § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to... surfaces must be ≥20 µg/100 cm2. (2) To validate a procedure to decontaminate a specified surface...

  19. 40 CFR 761.392 - Preparing validation study samples.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... establish a surface concentration to be included in the standard operating procedure. The surface levels of... Under § 761.79(d)(4) § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to... surfaces must be ≥20 µg/100 cm2. (2) To validate a procedure to decontaminate a specified surface...

  20. Design and validation of a slender guideway for Maglev vehicle by simulation and experiment

    NASA Astrophysics Data System (ADS)

    Han, Jong-Boo; Han, Hyung-Suk; Kim, Sung-Soo; Yang, Seok-Jo; Kim, Ki-Jung

    2016-03-01

    Normally, Maglev (magnetic levitation) vehicles run on elevated guideways. The elevated guideway must satisfy various load conditions of the vehicle, and has to be designed to ensure ride quality, while ensuring that the levitation stability of the vehicle is not affected by the deflection of the guideway. However, because the elevated guideways of Maglev vehicles fabricated so far in South Korea and other countries have been based on over-conservative design criteria, the size of the structures has increased. Further, from the cost perspective, they are unfavourable when compared with other light rail transits such as monorail, rubber wheel, and steel wheel automatic guided transit. Therefore, a slender guideway that does not have an adverse effect on the levitation stability of the vehicle is required, through optimisation of the design criteria. In this study, to predict the effect of various design parameters of the guideway on the dynamic behaviour of the vehicle, simulations were carried out using a dynamics model similar to the actual vehicle and guideway, and a limiting value of the deflection ratio of the slender guideway to ensure levitation control is proposed. A guideway that meets the requirement as per the proposed limit for deflection ratio was designed and fabricated, and through a driving test of the vehicle, the validity of the slender guideway was verified. From the results, it was confirmed that although some increase in airgap and cabin acceleration was observed with the proposed slender guideway when compared with the conventional guideway, there was no notable adverse effect on the levitation stability and ride quality of the vehicle. Therefore, it can be inferred that the results of this study will become the basis for establishing design criteria for slender guideways of Maglev vehicles in future.

  1. 41 CFR 60-3.7 - Use of other validity studies.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... studies. 60-3.7 Section 60-3.7 Public Contracts and Property Management Other Provisions Relating to... of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection procedures by validity studies conducted by other users or conducted...

  2. 41 CFR 60-3.7 - Use of other validity studies.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... studies. 60-3.7 Section 60-3.7 Public Contracts and Property Management Other Provisions Relating to... of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection procedures by validity studies conducted by other users or conducted...

  3. External Validation Study of First Trimester Obstetric Prediction Models (Expect Study I): Research Protocol and Population Characteristics.

    PubMed

    Meertens, Linda Jacqueline Elisabeth; Scheepers, Hubertina Cj; De Vries, Raymond G; Dirksen, Carmen D; Korstjens, Irene; Mulder, Antonius Lm; Nieuwenhuijze, Marianne J; Nijhuis, Jan G; Spaanderman, Marc Ea; Smits, Luc Jm

    2017-10-26

    A number of first-trimester prediction models addressing important obstetric outcomes have been published. However, most models have not been externally validated. External validation is essential before implementing a prediction model in clinical practice. The objective of this paper is to describe the design of a study to externally validate existing first trimester obstetric prediction models, based upon maternal characteristics and standard measurements (eg, blood pressure), for the risk of pre-eclampsia (PE), gestational diabetes mellitus (GDM), spontaneous preterm birth (PTB), small-for-gestational-age (SGA) infants, and large-for-gestational-age (LGA) infants among Dutch pregnant women (Expect Study I). The results of a pilot study on the feasibility and acceptability of the recruitment process and the comprehensibility of the Pregnancy Questionnaire 1 are also reported. A multicenter prospective cohort study was performed in The Netherlands between July 1, 2013 and December 31, 2015. First trimester obstetric prediction models were systematically selected from the literature. Predictor variables were measured by the Web-based Pregnancy Questionnaire 1 and pregnancy outcomes were established using the Postpartum Questionnaire 1 and medical records. Information about maternal health-related quality of life, costs, and satisfaction with Dutch obstetric care was collected from a subsample of women. A pilot study was carried out before the official start of inclusion. External validity of the models will be evaluated by assessing discrimination and calibration. Based on the pilot study, minor improvements were made to the recruitment process and online Pregnancy Questionnaire 1. The validation cohort consists of 2614 women. Data analysis of the external validation study is in progress. This study will offer insight into the generalizability of existing, non-invasive first trimester prediction models for various obstetric outcomes in a Dutch obstetric population
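
    External validity of such prediction models is typically summarised by discrimination (the c-statistic) and calibration (the intercept and slope obtained by refitting the outcome on the model's linear predictor). The Python sketch below illustrates both on synthetic data; the coefficients and cohort are invented, not the models or cohort under study.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(42)
        n = 2614                                    # same order of magnitude as the validation cohort

        # Linear predictor produced by a hypothetical existing first-trimester model.
        lp = rng.normal(-2.0, 1.0, size=n)
        # Simulated outcomes whose true risks differ somewhat from the predicted risks.
        y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(0.3 + 0.8 * lp))))

        # Discrimination: c-statistic (area under the ROC curve) of the predicted risks.
        pred_risk = 1.0 / (1.0 + np.exp(-lp))
        print("c-statistic:", round(roc_auc_score(y, pred_risk), 3))

        # Calibration: refit the outcome on the linear predictor; a slope below 1 suggests the
        # original model's effects are too extreme for the new population.
        recal = LogisticRegression(C=1e6).fit(lp.reshape(-1, 1), y)
        print("calibration slope:", round(recal.coef_[0, 0], 2),
              "intercept:", round(recal.intercept_[0], 2))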

  4. Empirical Assessment of Effect of Publication Bias on a Meta-Analysis of Validity Studies on University Matriculation Examinations in Nigeria

    ERIC Educational Resources Information Center

    Adeyemo, Emily Oluseyi

    2012-01-01

    This study examined the impact of publication bias on a meta-analysis of empirical studies on validity of University Matriculation Examinations in Nigeria with a view to determine the level of difference between published and unpublished articles. Specifically, the design was an ex-post facto, a causal comparative design. The sample size consisted…

  5. The impact of underreporting and overreporting on the validity of the Personality Inventory for DSM-5 (PID-5): A simulation analog design investigation.

    PubMed

    Dhillon, Sonya; Bagby, R Michael; Kushner, Shauna C; Burchett, Danielle

    2017-04-01

    The Personality Inventory for DSM-5 (PID-5) is a 220-item self-report instrument that assesses the alternative model of personality psychopathology in Section III (Emerging Measures and Models) of DSM-5 . Despite its relatively recent introduction, the PID-5 has generated an impressive accumulation of studies examining its psychometric properties, and the instrument is also already widely and frequently used in research studies. Although the PID-5 is psychometrically sound overall, reviews of this instrument express concern that this scale does not possess validity scales to detect invalidating levels of response bias, such as underreporting and overreporting. McGee Ng et al. (2016), using a "known-groups" (partial) criterion design, demonstrated that both underreporting and overreporting grossly affect mean scores on PID-5 scales. In the current investigation, we replicate these findings using an analog simulation design. An important extension to this replication study was the finding that the construct validity of the PID-5 was also significantly compromised by response bias, with statistically significant attenuation noted in validity coefficients of the PID-5 domain scales with scales from other instruments measuring congruent constructs. This attenuation was found for underreporting and overreporting bias. We believe there is a need to develop validity scales to screen for data-distorting response bias in research contexts and in clinical assessments where response bias is likely or otherwise suspected. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  6. Design, development and method validation of a novel multi-resonance microwave sensor for moisture measurement.

    PubMed

    Peters, Johanna; Taute, Wolfgang; Bartscher, Kathrin; Döscher, Claas; Höft, Michael; Knöchel, Reinhard; Breitkreutz, Jörg

    2017-04-08

    Microwave sensor systems using resonance technology at a single resonance in the range of 2-3 GHz have been shown to be a rapid and reliable tool for moisture determination in solid materials, including pharmaceutical granules. So far, their application has been limited to lower moisture ranges, or limitations above certain moisture contents have had to be accepted. The aim of the present study was to develop a novel multi-resonance sensor system in order to expand the measurement range. Therefore, a novel sensor using additional resonances over a wide frequency band was designed and used to investigate the inherent limitations of first-generation sensor systems and material-related limits. Using granule samples with different moisture contents, an experimental protocol for calibration and validation of the method was established. Pursuant to this protocol, a multiple linear regression (MLR) prediction model, built by correlating microwave moisture values with the moisture determined by Karl Fischer titration, was chosen and rated using conventional criteria such as the coefficient of determination (R2) and the root mean square error of calibration (RMSEC). Using different operators, analysis dates, and ambient conditions, the method was fully validated following the guidance of ICH Q2(R1). The study clearly identified explanations for the measurement uncertainties of first-generation sensor systems, which confirmed the approach of overcoming them by using additional resonances. The established prediction model could be validated in the range of 7.6-19.6% moisture, demonstrating its fitness for its intended purpose, moisture content determination during wet granulation. Copyright © 2017 Elsevier B.V. All rights reserved.
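
    The calibration step described above, a multiple linear regression relating sensor readings to the Karl Fischer reference moisture and rated by R2 and RMSEC, can be sketched in Python as follows; the features and their relationship to moisture are illustrative assumptions, not the study's measurements.

        import numpy as np
        from sklearn.linear_model import LinearRegression
        from sklearn.metrics import mean_squared_error, r2_score

        rng = np.random.default_rng(7)
        n = 60
        moisture_kf = rng.uniform(7.6, 19.6, size=n)           # Karl Fischer reference moisture (%)

        # Hypothetical sensor features, e.g. frequency shifts at three different resonances.
        X = np.column_stack([c * moisture_kf + rng.normal(scale=0.3, size=n)
                             for c in (0.9, 0.5, -0.4)])

        mlr = LinearRegression().fit(X, moisture_kf)
        pred = mlr.predict(X)
        print("R2   :", round(r2_score(moisture_kf, pred), 3))
        print("RMSEC:", round(float(np.sqrt(mean_squared_error(moisture_kf, pred))), 3), "% moisture")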

  7. A Validation Study of the Japanese Version of the Addenbrooke's Cognitive Examination-Revised

    PubMed Central

    dos Santos Kawata, Kelssy Hitomi; Hashimoto, Ryusaku; Nishio, Yoshiyuki; Hayashi, Atsuko; Ogawa, Nanayo; Kanno, Shigenori; Hiraoka, Kotaro; Yokoi, Kayoko; Iizuka, Osamu; Mori, Etsuro

    2012-01-01

    The aim of this study was to validate the Japanese version of the Addenbrooke's Cognitive Examination-Revised (ACE-R) [Mori: Japanese Edition of Hodges JR's Cognitive Assessment for Clinicians, 2010] designed to detect dementia, and to compare its diagnostic accuracy with that of the Mini-Mental State Examination. The ACE-R was administered to 85 healthy individuals and 126 patients with dementia. The reliability assessment revealed a strong correlation in both groups. The internal consistency was excellent (α-coefficient = 0.88). Correlation with the Clinical Dementia Rating sum of boxes score was significant (rs = −0.61, p < 0.001). The area under the curve was 0.98 for the ACE-R and 0.96 for the Mini-Mental State Examination. The cut-off score of 80 showed a sensitivity of 94% and a specificity of 94%. Like the original ACE-R and the versions designed for other languages, the Japanese version of the ACE-R is a reliable and valid test for the detection of dementia. PMID:22619659

  8. Designing research: ex post facto designs.

    PubMed

    Giuffre, M

    1997-06-01

    The research design is the overall plan or structure of the study. The goal of a good research design is to ensure internal validity and answer the question being asked. The only clear rule in selecting a design is that the question dictates the design. Over the next few issues, this column will cover types of research designs and their inherent strengths and weaknesses. This article discusses ex post facto research.

  9. Copenhagen Psychosocial Questionnaire - A validation study using the Job Demand-Resources model

    PubMed Central

    Hakanen, Jari J.; Westerlund, Hugo

    2018-01-01

    Aim This study aims at investigating the nomological validity of the Copenhagen Psychosocial Questionnaire (COPSOQ II) by using an extension of the Job Demands-Resources (JD-R) model with aspects of work ability as the outcome. Material and methods The study design is cross-sectional. All staff working at public dental organizations in four regions of Sweden were invited to complete an electronic questionnaire (75% response rate, n = 1345). The questionnaire was based on COPSOQ II scales, the Utrecht Work Engagement scale, and the one-item Work Ability Score in combination with a proprietary item. The data were analysed by structural equation modelling. Results This study contributed to the literature by showing that: A) the scale characteristics were satisfactory and the construct validity of the COPSOQ instrument could be integrated in the JD-R framework; B) job resources arising from leadership may be a driver of the two processes included in the JD-R model; and C) both the health impairment and motivational processes were associated with work ability (WA), and the results suggested that leadership may impact WA, in particular by securing task resources. Conclusion In conclusion, the nomological validity of the COPSOQ was supported, as the JD-R model can be operationalized by the instrument. This may be helpful for the transferral of complex survey results and work-life theories to practitioners in the field. PMID:29708998

  10. Effects of borehole design on complex electrical resistivity measurements: laboratory validation and numerical experiments

    NASA Astrophysics Data System (ADS)

    Treichel, A.; Huisman, J. A.; Zhao, Y.; Zimmermann, E.; Esser, O.; Kemna, A.; Vereecken, H.

    2012-12-01

    Geophysical measurements within a borehole are typically affected by the presence of the borehole. The focus of the current study is to quantify the effect of borehole design on broadband electrical impedance tomography (EIT) measurements within boreholes. Previous studies have shown that effects on the real part of the electrical resistivity are largest for boreholes with large diameters and for materials with a large formation factor. However, these studies did not consider the effect of the well casing and the filter gravel on the measurement of the real part of the electrical resistivity. In addition, the effect of borehole design on the imaginary part of the electrical resistivity has not yet been investigated. Therefore, the aim of this study is to investigate the effect of borehole design on the complex electrical resistivity using laboratory measurements and numerical simulations. To do so, we developed a high-resolution, two-dimensional axisymmetric finite element (FE) model that enables us to simulate the effects of several key borehole design parameters (e.g. borehole diameter, thickness of the PVC well casing) on the measurement process. For the material surrounding the borehole, realistic values for complex resistivity were obtained from a database of laboratory measurements of complex resistivity from the Krauthausen test site (Germany). The slotted PVC well casing is represented by an effective resistivity calculated from the water-filled slot volume and the PVC volume. Measurements with and without PVC well casing were made with a four-electrode EIT logging tool in a water-filled rain barrel. The initial comparison for the case in which the logging tool was inserted into the PVC well casing showed a considerable mismatch between measured and modeled values. It was necessary to consider a complete electrode model instead of point electrodes to remove this mismatch. This validated model was used to investigate in detail how complex resistivity

  11. Disturbance Reduction Control Design for the ST7 Flight Validation Experiment

    NASA Technical Reports Server (NTRS)

    Maghami, P. G.; Hsu, O. C.; Markley, F. L.; Houghton, M. B.

    2003-01-01

    The Space Technology 7 experiment will perform an on-orbit system-level validation of two specific Disturbance Reduction System technologies: a gravitational reference sensor employing a free-floating test mass, and a set of micro-Newton colloidal thrusters. The ST7 Disturbance Reduction System is designed to maintain the spacecraft's position with respect to a free-floating test mass to less than 10 nm/√Hz over the frequency range of 1 to 30 mHz. This paper presents the design and analysis of the coupled drag-free and attitude control systems that close the loop between the gravitational reference sensor and the micro-Newton thrusters, while incorporating star tracker data at low frequencies. A full 18-degree-of-freedom model, which incorporates rigid-body models of the spacecraft and two test masses, is used to evaluate the effects of actuation and measurement noise and disturbances on the performance of the drag-free system.

  12. Validation test of 125 Ah advanced design IPV nickel-hydrogen flight cells

    NASA Technical Reports Server (NTRS)

    Smithrick, John J.; Hall, Stephen W.

    1993-01-01

    An update of validation test results confirming the advanced design nickel-hydrogen cell is presented. An advanced 125 Ah individual pressure vessel (IPV) nickel-hydrogen cell was designed. The primary function of the advanced cell is to store and deliver energy for long-term LEO spacecraft missions. The new features of this design are: (1) use of 26 percent rather than 31 percent KOH electrolyte; (2) use of a patented catalyzed wall wick; (3) use of serrated-edge separators to facilitate gaseous oxygen and hydrogen flow within the cell, while maintaining physical contact with the wall wick for electrolyte management; and (4) use of a floating rather than a fixed stack to accommodate nickel electrode expansion due to charge/discharge cycling. The significant improvements resulting from these innovations are extended cycle life; enhanced thermal, electrolyte, and oxygen management; and accommodation of nickel electrode expansion. Six 125 Ah flight cells based on this design were fabricated; the catalyzed wall wick cells have been cycled for over 19,000 cycles with no cell failures in the continuing test. Two of the noncatalyzed wall wick cells failed (at cycles 9588 and 13,900).

  13. Experimental validation of structural optimization methods

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.

    1992-01-01

    The topic of validating structural optimization methods by use of experimental results is addressed. The need for validating the methods, as a way of effecting a greater and accelerated acceptance of formal optimization methods by practicing engineering designers, is described. The range of validation strategies is defined, which includes comparison of optimization results with more traditional design approaches, establishing the accuracy of the analyses used, and finally experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described. The examples include experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum weight design of a beam with frequency constraints; minimization of the vibration response of a helicopter rotor blade; minimum weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low-vibration helicopter rotor.

  14. Development and validation of Dutch version of Lasater Clinical Judgment Rubric in hospital practice: An instrument design study.

    PubMed

    Vreugdenhil, Jettie; Spek, Bea

    2018-03-01

    Clinical reasoning in patient care is a skill that cannot be observed directly. So far, no reliable, valid instrument exists for the assessment of nursing students' clinical reasoning skills in hospital practice. Lasater's Clinical Judgment Rubric (LCJR), based on Tanner's model "Thinking like a nurse", has been tested mainly in academic simulation settings. The aim is to develop a Dutch version of the LCJR (D-LCJR) and to test its psychometric properties when used in a hospital traineeship context. A mixed-methods approach was used to develop and validate the instrument. Ten dedicated educational units in a university hospital. A well-mixed group of 52 nursing students, nurse coaches and nurse educators. A Delphi panel developed the D-LCJR. Students' clinical reasoning skills were assessed "live" by nurse coaches, nurse educators and students who rated themselves. The psychometric properties tested during the assessment process were reliability, reproducibility, content validity and construct validity, the latter by testing two hypotheses: 1) a positive correlation between assessed and self-reported sum scores (convergent validity) and 2) a linear relation between experience and sum score (clinical validity). The resulting D-LCJR was found to be internally consistent, with a Cronbach's alpha of 0.93. The rubric is also reproducible, with intraclass correlations between 0.69 and 0.78. Experts judged it to be content valid. Both hypotheses were supported, providing evidence for construct validity. The translated and modified LCJR is a promising tool for the evaluation of nursing students' development in clinical reasoning in hospital traineeships, by students, nurse coaches and nurse educators. More evidence on construct validity is needed, in particular for students at the end of their hospital traineeship. Based on our research, the D-LCJR applied in hospital traineeships is a usable and reliable tool. Copyright © 2017 Elsevier Ltd. All rights reserved.

  15. AMOVA ["Accumulative Manifold Validation Analysis"]: An Advanced Statistical Methodology Designed to Measure and Test the Validity, Reliability, and Overall Efficacy of Inquiry-Based Psychometric Instruments

    ERIC Educational Resources Information Center

    Osler, James Edward, II

    2015-01-01

    This monograph provides an epistemological rationale for the Accumulative Manifold Validation Analysis [also referred to by the acronym "AMOVA"], a statistical methodology designed to test psychometric instruments. This form of inquiry is a form of mathematical optimization in the discipline of linear stochastic modelling. AMOVA is an in-depth…

  16. Design and validation of a self-administered test to assess bullying (bull-M) in high school Mexicans: a pilot study.

    PubMed

    Ramos-Jimenez, Arnulfo; Wall-Medrano, Abraham; Villar, Oscar Esparza-Del; Hernández-Torres, Rosa P

    2013-04-11

    Bullying (Bull) is a public health problem worldwide, and Mexico is not exempt. However, its epidemiology and early detection in our country are limited, in part, by the lack of validated tests that ensure the respondents' anonymity. The aim of this study was to validate a self-administered test (Bull-M) for assessing Bull among high-school Mexicans. Experts and school teachers from highly violent areas of Ciudad Juarez (Chihuahua, México) reported common Bull behaviors. A 10-item test was then developed based on twelve of these behaviors, covering the students' and peers' participation in Bull acts and some somatic consequences in Bull victims, with a 5-point Likert frequency scale. Validation criteria were: content (CV, judges); reliability [Cronbach's alpha (CA), test-retest (Spearman correlation, rs)]; construct [principal component (PCA), confirmatory factor (CFA), and goodness-of-fit (GF) analyses]; and convergent (Bull-M vs. Bull-S test) validity. Bull-M showed good reliability (CA = 0.75, rs = 0.91; p < 0.001). Two factors were identified (PCA) and confirmed (CFA): "bullying me (victim)" and "bullying others (aggressor)". GF indices were: root mean square error of approximation (0.031), GF index (0.97), and normalized fit index (0.92). Bull-M was as good as Bull-S for measuring Bull prevalence. Bull-M has good reliability and convergent validity and a bi-modal factor structure for detecting Bull victims and aggressors; however, its external validity and sensitivity should be analyzed in a wider and different population.
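
    As a brief illustration of the reliability criteria used here, the sketch below computes Cronbach's alpha and a test-retest Spearman correlation on simulated Likert responses; all data and numbers are hypothetical and are not the Bull-M data.

      import numpy as np
      from scipy.stats import spearmanr

      rng = np.random.default_rng(2)

      # Simulated 10-item questionnaire on a 1-5 Likert scale for 120 students.
      n_students, n_items = 120, 10
      trait = rng.normal(0, 1, n_students)
      items = np.clip(np.rint(3 + trait[:, None] + rng.normal(0, 1, (n_students, n_items))), 1, 5)

      # Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)
      k = n_items
      item_var = items.var(axis=0, ddof=1).sum()
      total_var = items.sum(axis=1).var(ddof=1)
      alpha = k / (k - 1) * (1 - item_var / total_var)

      # Test-retest reliability: Spearman correlation of total scores from two administrations.
      retest_total = items.sum(axis=1) + rng.normal(0, 2, n_students)
      rs, p = spearmanr(items.sum(axis=1), retest_total)
      print(f"Cronbach's alpha = {alpha:.2f}, test-retest rs = {rs:.2f} (p = {p:.3g})")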

  17. Design and validation of a self-administered test to assess bullying (bull-M) in high school Mexicans: a pilot study

    PubMed Central

    2013-01-01

    Background Bullying (Bull) is a public health problem worldwide, and Mexico is not exempt. However, its epidemiology and early detection in our country are limited, in part, by the lack of validated tests that ensure the respondents’ anonymity. The aim of this study was to validate a self-administered test (Bull-M) for assessing Bull among high-school Mexicans. Methods Experts and school teachers from highly violent areas of Ciudad Juarez (Chihuahua, México) reported common Bull behaviors. A 10-item test was then developed based on twelve of these behaviors, covering the students’ and peers’ participation in Bull acts and some somatic consequences in Bull victims, with a 5-point Likert frequency scale. Validation criteria were: content (CV, judges); reliability [Cronbach’s alpha (CA), test-retest (Spearman correlation, rs)]; construct [principal component (PCA), confirmatory factor (CFA), and goodness-of-fit (GF) analyses]; and convergent (Bull-M vs. Bull-S test) validity. Results Bull-M showed good reliability (CA = 0.75, rs = 0.91; p < 0.001). Two factors were identified (PCA) and confirmed (CFA): “bullying me (victim)” and “bullying others (aggressor)”. GF indices were: root mean square error of approximation (0.031), GF index (0.97), and normalized fit index (0.92). Bull-M was as good as Bull-S for measuring Bull prevalence. Conclusions Bull-M has good reliability and convergent validity and a bi-modal factor structure for detecting Bull victims and aggressors; however, its external validity and sensitivity should be analyzed in a wider and different population. PMID:23577755

  18. Validation of the Juhnke-Balkin Life Balance Inventory

    ERIC Educational Resources Information Center

    Davis, R. J.; Balkin, Richard S.; Juhnke, Gerald A.

    2014-01-01

    Life balance is an important construct within the counseling profession. A validation study utilizing exploratory factor analysis and multiple regression was conducted on the Juhnke-Balkin Life Balance Inventory. Results from the study serve as evidence of validity for an assessment instrument designed to measure life balance.

  19. A design of experiments approach to validation sampling for logistic regression modeling with error-prone medical records.

    PubMed

    Ouyang, Liwen; Apley, Daniel W; Mehrotra, Sanjay

    2016-04-01

    Electronic medical record (EMR) databases offer significant potential for developing clinical hypotheses and identifying disease risk associations by fitting statistical models that capture the relationship between a binary response variable and a set of predictor variables that represent clinical, phenotypical, and demographic data for the patient. However, EMR response data may be error prone for a variety of reasons. Performing a manual chart review to validate data accuracy is time consuming, which limits the number of chart reviews in a large database. The authors' objective is to develop a new design-of-experiments-based systematic chart validation and review (DSCVR) approach that is more powerful than the random validation sampling used in existing approaches. The DSCVR approach judiciously and efficiently selects the cases to validate (i.e., validate whether the response values are correct for those cases) for maximum information content, based only on their predictor variable values. The final predictive model will be fit using only the validation sample, ignoring the remainder of the unvalidated and unreliable error-prone data. A Fisher information based D-optimality criterion is used, and an algorithm for optimizing it is developed. The authors' method is tested in a simulation comparison that is based on a sudden cardiac arrest case study with 23 041 patients' records. This DSCVR approach, using the Fisher information based D-optimality criterion, results in a fitted model with much better predictive performance, as measured by the receiver operating characteristic curve and the accuracy in predicting whether a patient will experience the event, than a model fitted using a random validation sample. The simulation comparisons demonstrate that this DSCVR approach can produce predictive models that are significantly better than those produced from random validation sampling, especially when the event rate is low. © The Author 2015. Published by Oxford
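
    The following is a minimal sketch of the general idea behind D-optimal validation sampling, assuming a greedy search and a preliminary coefficient estimate obtained from the error-prone data; it is not the authors' DSCVR implementation, and all names and values are hypothetical.

      import numpy as np

      def fisher_information(X, beta):
          """Fisher information matrix for logistic regression at coefficients beta."""
          p = 1.0 / (1.0 + np.exp(-X @ beta))
          w = p * (1.0 - p)
          return (X * w[:, None]).T @ X

      def d_optimal_subset(X, beta, n_validate, n_seed=None):
          """Greedy D-optimal selection of rows of X to send for chart validation."""
          n, d = X.shape
          if n_seed is None:
              n_seed = d + 1
          rng = np.random.default_rng(0)
          chosen = list(rng.choice(n, size=n_seed, replace=False))  # random seed set
          remaining = set(range(n)) - set(chosen)
          while len(chosen) < n_validate:
              best_i, best_det = None, -np.inf
              for i in remaining:
                  det = np.linalg.det(fisher_information(X[chosen + [i]], beta))
                  if det > best_det:
                      best_i, best_det = i, det
              chosen.append(best_i)
              remaining.remove(best_i)
          return np.array(chosen)

      # Hypothetical use: 2000 records, intercept plus 4 predictors, validate 100 charts.
      rng = np.random.default_rng(3)
      X = np.column_stack([np.ones(2000), rng.normal(size=(2000, 4))])
      beta_prelim = np.array([-2.0, 0.5, 0.3, -0.4, 0.2])  # e.g. fitted on error-prone responses
      validate_idx = d_optimal_subset(X, beta_prelim, n_validate=100)
      print(validate_idx[:10])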

  20. Validation of physical activity instruments: Black Women's Health Study.

    PubMed

    Carter-Nolan, Pamela L; Adams-Campbell, Lucile L; Makambi, Kepher; Lewis, Shantell; Palmer, Julie R; Rosenberg, Lynn

    2006-01-01

    Few studies have reported on the validity of physical activity measures in African Americans. The present study was designed to determine the validity of a self-administered physical activity questionnaire (PAQ) that was used in a large prospective study of African American women in the United States against an accelerometer (actigraph), an objective assessment of movement, and a seven-day activity diary. The study was conducted among 101 women enrolled in the Black Women's Health Study (BWHS) cohort who resided in the Washington, DC, metropolitan area, representing 11.2% (101/900) of this sample. Physical activity levels were obtained from the parent BWHS PAQ (eg, 1997 and 1999) and repeated in the present study. This information entailed hours per week of participation in walking for exercise, hours per week of moderate activity (eg, housework, gardening, and bowling), and hours per week of strenuous activity (eg, basketball, swimming, running, and aerobics) during the previous year. The participants were required to wear actigraphs for seven days and then record their physical activities in their diaries (seven-day physical activity diary) during this time. The diaries were used to record the amount and pattern of daily energy expenditure. Significant positive correlations were seen between the BWHS PAQ and the actigraph for total activity, r=.28; walking, r=.26; and vigorous activity, r=.40, P<.001. For the seven-day physical activity diary, the BWHS PAQ also demonstrated significant correlations for total (r=0.42, P<.01); moderate (r=.26, P<.05); and vigorous activities (r=.41, P<.01). The BWHS PAQ is a useful measure of physical activity in the BWHS cohort and thus has utility in prospective epidemiologic research.

  1. Two-Tiered Violence Risk Estimates: a validation study of an integrated-actuarial risk assessment instrument.

    PubMed

    Mills, Jeremy F; Gray, Andrew L

    2013-11-01

    This study is an initial validation study of the Two-Tiered Violence Risk Estimates instrument (TTV), a violence risk appraisal instrument designed to support an integrated-actuarial approach to violence risk assessment. The TTV was scored retrospectively from file information on a sample of violent offenders. Construct validity was examined by comparing the TTV with instruments that have shown utility to predict violence that were prospectively scored: The Historical-Clinical-Risk Management-20 (HCR-20) and Lifestyle Criminality Screening Form (LCSF). Predictive validity was examined through a long-term follow-up of 12.4 years with a sample of 78 incarcerated offenders. Results show the TTV to be highly correlated with the HCR-20 and LCSF. The base rate for violence over the follow-up period was 47.4%, and the TTV was equally predictive of violent recidivism relative to the HCR-20 and LCSF. Discussion centers on the advantages of an integrated-actuarial approach to the assessment of violence risk.

  2. The Space Technology-7 Disturbance Reduction System Precision Control Flight Validation Experiment Control System Design

    NASA Technical Reports Server (NTRS)

    O'Donnell, James R.; Hsu, Oscar C.; Maghami, Peirman G.; Markley, F. Landis

    2006-01-01

    As originally proposed, the Space Technology-7 Disturbance Reduction System (DRS) project, managed out of the Jet Propulsion Laboratory, was designed to validate technologies required for future missions such as the Laser Interferometer Space Antenna (LISA). The two technologies to be demonstrated by DRS were Gravitational Reference Sensors (GRSs) and Colloidal MicroNewton Thrusters (CMNTs). Control algorithms being designed by the Dynamic Control System (DCS) team at the Goddard Space Flight Center would control the spacecraft so that it flew about a freely-floating GRS test mass, keeping it centered within its housing. For programmatic reasons, the GRSs were descoped from DRS. The primary goals of the new mission are to validate the performance of the CMNTs and to demonstrate precise spacecraft position control. DRS will fly as part of the European Space Agency (ESA) LISA Pathfinder (LPF) spacecraft along with a similar ESA experiment, the LISA Technology Package (LTP). With no GRS, the DCS attitude and drag-free control systems make use of the sensor being developed by ESA as part of the LTP. The control system is designed to maintain the spacecraft's position with respect to the test mass to within 10 nm/√Hz over the DRS science frequency band of 1 to 30 mHz.

  3. Directed Design of Experiments for Validating Probability of Detection Capability of NDE Systems (DOEPOD)

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R.

    2015-01-01

    Directed Design of Experiments for Validating Probability of Detection Capability of NDE Systems (DOEPOD), Manual v.1.2. The capability of an inspection system is established by applying various methodologies to determine the probability of detection (POD). One accepted metric of an adequate inspection system is that there is 95% confidence that the POD is greater than 90% (90/95 POD). Design of experiments for validating probability of detection capability of nondestructive evaluation (NDE) systems (DOEPOD) is a methodology implemented via software to serve as a diagnostic tool providing detailed analysis of POD test data, guidance on establishing data distribution requirements, and resolution of test issues. DOEPOD relies on direct observation of detection occurrences. The DOEPOD capability has been developed to provide an efficient and accurate methodology that yields observed POD and confidence bounds for both hit-miss and signal-amplitude testing. DOEPOD does not assume prescribed POD logarithmic or similar functions with assumed adequacy over a wide range of flaw sizes and inspection system technologies, so multi-parameter curve fitting or model optimization approaches to generate a POD curve are not required. DOEPOD applications for supporting inspector qualifications are included.
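
    As a simple illustration of the 90/95 POD criterion on hit/miss data, the sketch below uses a standard one-sided Clopper-Pearson lower confidence bound; this is generic binomial statistics, not the DOEPOD software itself, and the counts are illustrative.

      from scipy.stats import beta

      def pod_lower_bound(hits, trials, confidence=0.95):
          """One-sided (Clopper-Pearson) lower confidence bound on probability of detection."""
          if hits == 0:
              return 0.0
          return beta.ppf(1.0 - confidence, hits, trials - hits + 1)

      hits, trials = 29, 29          # the classic 29-of-29 demonstration point
      lcb = pod_lower_bound(hits, trials)
      print(f"95% lower bound on POD = {lcb:.3f}; meets 90/95: {lcb >= 0.90}")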

  4. Final Design and Experimental Validation of the Thermal Performance of the LHC Lattice Cryostats

    NASA Astrophysics Data System (ADS)

    Bourcey, N.; Capatina, O.; Parma, V.; Poncet, A.; Rohmig, P.; Serio, L.; Skoczen, B.; Tock, J.-P.; Williams, L. R.

    2004-06-01

    The recent commissioning and operation of the LHC String 2 have given a first experimental validation of the global thermal performance of the LHC lattice cryostat at nominal cryogenic conditions. The cryostat designed to minimize the heat inleak from ambient temperature, houses under vacuum and thermally protects the cold mass, which contains the LHC twin-aperture superconducting magnets operating at 1.9 K in superfluid helium. Mechanical components linking the cold mass to the vacuum vessel, such as support posts and insulation vacuum barriers are designed with efficient thermalisations for heat interception to minimise heat conduction. Heat inleak by radiation is reduced by employing multilayer insulation (MLI) wrapped around the cold mass and around an aluminium thermal shield cooled to about 60 K. Measurements of the total helium vaporization rate in String 2 gives, after substraction of supplementary heat loads and end effects, an estimate of the total thermal load to a standard LHC cell (107 m) including two Short Straight Sections and six dipole cryomagnets. Temperature sensors installed at critical locations provide a temperature mapping which allows validation of the calculated and estimated thermal performance of the cryostat components, including efficiency of the heat interceptions.

  5. Design and validation of a GNC system for missions to asteroids: the AIM scenario

    NASA Astrophysics Data System (ADS)

    Pellacani, A.; Kicman, P.; Suatoni, M.; Casasco, M.; Gil, J.; Carnelli, I.

    2017-12-01

    Deep space missions, and in particular missions to asteroids, impose a certain level of autonomy that depends on the mission objectives. If the mission requires the spacecraft to perform close approaches to the target body (the extreme case being a landing scenario), the autonomy level must be increased to guarantee the fast and reactive response which is required in both nominal and contingency operations. The GNC system must be designed in accordance with the required level of autonomy. The GNC system designed and tested in the frame of ESA's Asteroid Impact Mission (AIM) system studies (Phase A/B1 and Consolidation Phase) is an example of an autonomous GNC system that meets the challenging objectives of AIM. The paper reports the design of such GNC system and its validation through a DDVV plan that includes Model-in-the-Loop and Hardware-in-the-Loop testing. Main focus is the translational navigation, which is able to provide online the relative state estimation with respect to the target body using exclusively cameras as relative navigation sensors. The relative navigation outputs are meant to be used for nominal spacecraft trajectory corrections as well as to estimate the collision risk with the asteroid and, if needed, to command the execution of a collision avoidance manoeuvre to guarantee spacecraft safety

  6. Experimental design and reporting standards for improving the internal validity of pre-clinical studies in the field of pain: Consensus of the IMI-Europain consortium.

    PubMed

    Knopp, K L; Stenfors, C; Baastrup, C; Bannon, A W; Calvo, M; Caspani, O; Currie, G; Finnerup, N B; Huang, W; Kennedy, J D; Lefevre, I; Machin, I; Macleod, M; Rees, H; Rice, A S C; Rutten, K; Segerdahl, M; Serra, J; Wodarski, R; Berge, O-G; Treede, R-D

    2017-12-29

    Background and aims Pain is a subjective experience, and as such, pre-clinical models of human pain are highly simplified representations of clinical features. These models are nevertheless critical for the delivery of novel analgesics for human pain, providing pharmacodynamic measurements of activity and, where possible, on-target confirmation of that activity. It has, however, been suggested that at least 50% of all pre-clinical data, independent of discipline, cannot be replicated. Additionally, the paucity of "negative" data in the public domain indicates a publication bias, and significantly impacts the interpretation of failed attempts to replicate published findings. Evidence suggests that systematic biases in experimental design and conduct and insufficiencies in reporting play significant roles in poor reproducibility across pre-clinical studies. It then follows that recommendations on how to improve these factors are warranted. Methods Members of Europain, a pain research consortium funded by the European Innovative Medicines Initiative (IMI), developed internal recommendations on how to improve the reliability of pre-clinical studies between laboratories. This guidance is focused on two aspects: experimental design and conduct, and study reporting. Results Minimum requirements for experimental design and conduct were agreed upon across the dimensions of animal characteristics, sample size calculations, inclusion and exclusion criteria, random allocation to groups, allocation concealment, and blinded assessment of outcome. Building upon the Animal Research: Reporting of In Vivo Experiments (ARRIVE) guidelines, reporting standards were developed for pre-clinical studies of pain. These include specific recommendations for reporting on ethical issues, experimental design and conduct, and data analysis and interpretation. Key principles such as sample size calculation, a priori definition of a primary efficacy measure, randomization, allocation concealments
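
    One of the recommended principles, a priori sample size calculation, can be illustrated with a short power-analysis sketch; the effect size, alpha and power used below are assumed values chosen for illustration and are not taken from the consortium guidance.

      from statsmodels.stats.power import TTestIndPower

      analysis = TTestIndPower()
      n_per_group = analysis.solve_power(effect_size=1.0,  # expected Cohen's d between groups (assumed)
                                         alpha=0.05,       # two-sided significance level
                                         power=0.8)        # desired statistical power
      print(f"Approximately {n_per_group:.1f} animals per group")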

  7. 41 CFR 60-3.5 - General standards for validity studies.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... should avoid making employment decisions on the basis of measures of knowledges, skills, or abilities... General standards for validity studies. A. Acceptable types of validity studies. For the purposes of... of these guidelines, section 14 of this part. New strategies for showing the validity of selection...

  8. Using Patient Feedback to Optimize the Design of a Certolizumab Pegol Electromechanical Self-Injection Device: Insights from Human Factors Studies.

    PubMed

    Domańska, Barbara; Stumpp, Oliver; Poon, Steven; Oray, Serkan; Mountian, Irina; Pichon, Clovis

    2018-01-01

    We incorporated patient feedback from human factors studies (HFS) in the patient-centric design and validation of ava®, an electromechanical device (e-Device) for self-injecting the anti-tumor necrosis factor certolizumab pegol (CZP). Healthcare professionals, caregivers, healthy volunteers, and patients with rheumatoid arthritis, psoriatic arthritis, ankylosing spondylitis, or Crohn's disease participated in 11 formative HFS to optimize the e-Device design through intended user feedback; nine studies involved simulated injections. Formative participant questionnaire feedback was collected following e-Device prototype handling. Validation HFS (one EU study and one US study) assessed the safe and effective setup and use of the e-Device using 22 predefined critical tasks. Task outcomes were categorized as "failures" if participants did not succeed within three attempts. Two hundred eighty-three participants entered formative (163) and validation (120) HFS; 260 participants performed one or more simulated e-Device self-injections. Design changes following formative HFS included alterations to buttons and the graphical user interface screen. All validation HFS participants completed critical tasks necessary for CZP dose delivery, with minimal critical task failures (12 of 572 critical tasks, 2.1%, in the EU study, and 2 of 5310 critical tasks, less than 0.1%, in the US study). CZP e-Device development was guided by intended user feedback through HFS, ensuring the final design addressed patients' needs. In both validation studies, participants successfully performed all critical tasks, demonstrating safe and effective e-Device self-injections. UCB Pharma. Plain language summary available on the journal website.

  9. The validity of the 4-Skills Scan: A double validation study.

    PubMed

    van Kernebeek, W G; de Kroon, M L A; Savelsbergh, G J P; Toussaint, H M

    2018-06-01

    Adequate gross motor skills are an essential aspect of a child's healthy development. Where physical education (PE) is part of the primary school curriculum, a strong curriculum-based emphasis on evaluation and support of motor skill development in PE is apparent. Monitoring motor development is then a task for the PE teacher. In order to fulfil this task, teachers need adequate tools. The 4-Skills Scan is a quick and easily manageable gross motor skill instrument; however, its validity has never been assessed. Therefore, the purpose of this study is to assess the construct and concurrent validity of both 4-Skills Scans (version 2007 and version 2015). A total of 212 primary school children (6-12 years old) was requested to participate in both versions of the 4-Skills Scan. For assessing construct validity, children covered an obstacle course with video recordings for observation by an expert panel. For concurrent validity, a comparison was made with the MABC-2 by calculating Pearson correlations. Multivariable linear regression analyses were performed to determine the contribution of each subscale to the construct of gross motor skills, according to the MABC-2 and the expert panel. Correlations between the 4-Skills Scans and expert valuations were moderate, with coefficients of .47 (version 2007) and .46 (version 2015). Correlations between the 4-Skills Scans and the MABC-2 (gross) were moderate (.56) for version 2007 and high (.64) for version 2015. It is concluded that both versions of the 4-Skills Scan are satisfactorily valid instruments for assessing gross motor skills during PE lessons. This article is protected by copyright. All rights reserved.

  10. Utility of the MMPI-2-RF (Restructured Form) Validity Scales in Detecting Malingering in a Criminal Forensic Setting: A Known-Groups Design

    ERIC Educational Resources Information Center

    Sellbom, Martin; Toomey, Joseph A.; Wygant, Dustin B.; Kucharski, L. Thomas; Duncan, Scott

    2010-01-01

    The current study examined the utility of the recently released Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008) validity scales to detect feigned psychopathology in a criminal forensic setting. We used a known-groups design with the Structured Interview of Reported Symptoms (SIRS;…

  11. Measurement of predictive validity in violence risk assessment studies: a second-order systematic review.

    PubMed

    Singh, Jay P; Desmarais, Sarah L; Van Dorn, Richard A

    2013-01-01

    The objective of the present review was to examine how predictive validity is analyzed and reported in studies of instruments used to assess violence risk. We reviewed 47 predictive validity studies published between 1990 and 2011 of 25 instruments that were included in two recent systematic reviews. Although all studies reported receiver operating characteristic curve analyses and the area under the curve (AUC) performance indicator, this methodology was defined inconsistently and findings often were misinterpreted. In addition, there was between-study variation in benchmarks used to determine whether AUCs were small, moderate, or large in magnitude. Though virtually all of the included instruments were designed to produce categorical estimates of risk, through the use of either actuarial risk bins or structured professional judgments, only a minority of studies calculated performance indicators for these categorical estimates. In addition to AUCs, other performance indicators, such as correlation coefficients, were reported in 60% of studies, but were infrequently defined or interpreted. An investigation of sources of heterogeneity did not reveal significant variation in reporting practices as a function of risk assessment approach (actuarial vs. structured professional judgment), study authorship, geographic location, type of journal (general vs. specialized audience), sample size, or year of publication. Findings suggest a need for standardization of predictive validity reporting to improve comparison across studies and instruments. Copyright © 2013 John Wiley & Sons, Ltd.

  12. Persistent threats to validity in single-group interrupted time series analysis with a cross over design.

    PubMed

    Linden, Ariel

    2017-04-01

    The basic single-group interrupted time series analysis (ITSA) design has been shown to be susceptible to the most common threat to validity, history: the possibility that some other event caused the observed effect in the time series. A single-group ITSA with a crossover design (in which the intervention is introduced and withdrawn one or more times) should be more robust. In this paper, we describe and empirically assess the susceptibility of this design to bias from history. Time series data from two natural experiments (the effect of multiple repeals and reinstatements of Louisiana's motorcycle helmet law on motorcycle fatalities, and the association of the implementation and withdrawal of Gorbachev's antialcohol campaign with Russia's mortality crisis) are used to illustrate that history remains a threat to ITSA validity, even in a crossover design. Both empirical examples reveal that the single-group ITSA with a crossover design may be biased because of history. In the case of motorcycle fatalities, helmet laws appeared effective in reducing mortality (while repealing the law increased mortality), but when a control group was added, it was shown that this trend was similar in both groups. In the case of Gorbachev's antialcohol campaign, only when contrasting the results against those of a control group was the withdrawal of the campaign found to be a more likely culprit in explaining the Russian mortality crisis than the collapse of the Soviet Union. Even with a robust crossover design, single-group ITSA models remain susceptible to bias from history. Therefore, a comparable control group design should be included whenever possible. © 2016 John Wiley & Sons, Ltd.
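
    A minimal sketch of the single-group ITSA crossover model discussed above is given below as a segmented OLS regression with level and trend terms; the series is simulated and the variable names are hypothetical, not the paper's data.

      import numpy as np
      import pandas as pd
      import statsmodels.formula.api as smf

      rng = np.random.default_rng(4)
      t = np.arange(120)                         # monthly observations
      on = ((t >= 40) & (t < 80)).astype(int)    # intervention in force during months 40-79

      # Hypothetical outcome: slight secular trend, level drop while the policy is active.
      y = 100 + 0.1 * t - 8 * on + rng.normal(0, 2, t.size)

      df = pd.DataFrame({
          "y": y,
          "t": t,
          "on": on,
          "t_since_on": np.where(on == 1, t - 40, 0),   # slope change while in force
      })
      model = smf.ols("y ~ t + on + t_since_on", data=df).fit()
      print(model.summary().tables[1])
      # A history effect (a co-occurring event) would bias these estimates, which is why
      # the paper recommends adding a comparable control group whenever possible.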

  13. Braden scale (ALB) for assessing pressure ulcer risk in hospital patients: A validity and reliability study.

    PubMed

    Chen, Hong-Lin; Cao, Ying-Juan; Zhang, Wei; Wang, Jing; Huai, Bao-Sha

    2017-02-01

    The inter-rater reliability of the Braden Scale is not satisfactory. We modified the Braden (ALB) scale by defining the nutrition subscale based on serum albumin, and then assessed its validity and reliability in hospital patients. We designed a retrospective study for the validity analysis and a prospective study for the reliability analysis. Receiver operating characteristic (ROC) curves and the area under the curve (AUC) were used to evaluate predictive validity. The intra-class correlation coefficient (ICC) was used to investigate inter-rater reliability. Two thousand five hundred twenty-five patients were included in the validity analysis; 76 patients (3.0%) developed a pressure ulcer. A positive correlation was found between serum albumin and the nutrition score of the Braden scale (Spearman's coefficient 0.2203, P<0.0001). The AUCs for the Braden scale and the Braden(ALB) scale in predicting pressure ulcer risk were 0.813 (95% CI 0.797-0.828; P<0.0001) and 0.859 (95% CI 0.845-0.872; P<0.0001), respectively. The Braden(ALB) scale tended to be more valid than the Braden scale, although the difference did not reach statistical significance (z=1.860, P=0.0628). In different age subgroups, the Braden(ALB) scale also appeared more valid than the original Braden scale, but no statistically significant differences were found (P>0.05). The inter-rater reliability study showed that the ICC value for nutrition increased by 45.9%, and the ICC for the total score increased by 4.3%. The Braden(ALB) scale has similar validity to the original Braden scale for hospital patients; however, its inter-rater reliability was significantly increased. Copyright © 2016 Elsevier Inc. All rights reserved.

  14. The reliability and validity of a designed setup for the assessment of static back extensor force and endurance in older women with and without hyperkyphosis.

    PubMed

    Roghani, Taybeh; Khalkhali Zavieh, Minoo; Rahimi, Abbas; Talebian, Saeed; Manshadi, Farideh Dehghan; Akbarzadeh Baghban, Alireza; King, Nicole; Katzman, Wendy

    2018-01-25

    The purpose of this study was to investigate the intra-rater reliability and validity of a designed load cell setup for the measurement of back extensor muscle force and endurance. The study sample included 19 older women with hyperkyphosis, mean age 67.0 ± 5.0 years, and 14 older women without hyperkyphosis, mean age 63.0 ± 6.0 years. Maximum back extensor force and endurance were measured in a sitting position with the designed load cell setup. Tests were performed by the same examiner on two separate days within a 72-hour interval. The intra-rater reliability of the measurements was analyzed using the intraclass correlation coefficient (ICC), standard error of measurement (SEM), and minimal detectable change (MDC). The validity of the setup was determined using Pearson correlation analysis and independent t-tests. Using our designed load cell, the ICC values indicated very high reliability of the force measurement (hyperkyphosis group: 0.96, normal group: 0.97) and high reliability of the endurance measurement (hyperkyphosis group: 0.82, normal group: 0.89). For all tests, the values of SEM and MDC were low in both groups. A significant correlation between the two documented forces (load cell force and target force) and significant differences in muscle force and endurance between the two groups were found. The measurements of static back muscle force and endurance with our designed setup are reliable and valid in older women with and without hyperkyphosis.
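
    The reliability statistics used in this study (ICC, SEM and MDC) can be computed as in the sketch below, which uses simulated force values and a standard two-way random-effects ICC(2,1); it is not the authors' analysis code, and all values are hypothetical.

      import numpy as np

      def icc_2_1(data):
          """Two-way random-effects, absolute-agreement, single-measure ICC(2,1).

          data: (n_subjects, k_sessions) array of scores."""
          n, k = data.shape
          grand = data.mean()
          row_means = data.mean(axis=1)
          col_means = data.mean(axis=0)
          msr = k * ((row_means - grand) ** 2).sum() / (n - 1)   # between-subjects mean square
          msc = n * ((col_means - grand) ** 2).sum() / (k - 1)   # between-sessions mean square
          sse = ((data - grand) ** 2).sum() - (n - 1) * msr - (k - 1) * msc
          mse = sse / ((n - 1) * (k - 1))                        # residual mean square
          return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

      rng = np.random.default_rng(5)
      true_force = rng.normal(200, 40, 19)            # hypothetical: 19 participants, Newtons
      day1 = true_force + rng.normal(0, 8, 19)
      day2 = true_force + rng.normal(0, 8, 19)
      scores = np.column_stack([day1, day2])

      icc = icc_2_1(scores)
      sem = scores.std(ddof=1) * np.sqrt(1 - icc)     # standard error of measurement
      mdc95 = 1.96 * np.sqrt(2) * sem                 # minimal detectable change
      print(f"ICC(2,1) = {icc:.2f}, SEM = {sem:.1f} N, MDC95 = {mdc95:.1f} N")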

  15. A portal to validated websites on cosmetic surgery: the design of an archetype.

    PubMed

    Parikh, A R; Kok, K; Redfern, B; Clarke, A; Withey, S; Butler, P E M

    2006-09-01

    There has recently been an increase in the usage of the Internet as a source of patient information. It is very difficult for laypersons to establish the accuracy and validity of these medical websites. Although many website assessment tools exist, most of these are not practical. A combination of consumer- and clinician-based website assessment tools was applied to 200 websites on cosmetic surgery. The top-scoring websites were used as links from a portal website that was designed using Microsoft Macromedia Suite. Seventy-one (35.5%) websites were excluded. One hundred fifteen websites (89%) failed to reach an acceptable standard. The provision of new websites has proceeded without quality controls. Patients need to be better educated on the limitations of the Internet. This paper suggests an archetypal model, which makes efficient use of existing resources, validates them, and is easily transferable to different health settings.

  16. Optimization and Validation of a Sensitive Method for HPLC-PDA Simultaneous Determination of Torasemide and Spironolactone in Human Plasma using Central Composite Design.

    PubMed

    Subramanian, Venkatesan; Nagappan, Kannappan; Sandeep Mannemala, Sai

    2015-01-01

    A sensitive, accurate, precise and rapid HPLC-PDA method was developed and validated for the simultaneous determination of torasemide and spironolactone in human plasma using design of experiments. A central composite design was used to optimize the method, with the content of acetonitrile, the concentration of buffer and the pH of the mobile phase as independent variables, while the retention factor of spironolactone, the resolution between torasemide and phenobarbitone, and the retention time of phenobarbitone were chosen as dependent variables. The chromatographic separation was achieved on a Phenomenex C18 column with a mobile phase comprising 20 mM potassium dihydrogen orthophosphate buffer (pH 3.2) and acetonitrile (82.5:17.5 v/v) pumped at a flow rate of 1.0 mL min⁻¹. The method was validated according to USFDA guidelines in terms of selectivity, linearity, accuracy, precision, recovery and stability. The limits of quantitation were 80 and 50 ng mL⁻¹ for torasemide and spironolactone, respectively. Furthermore, the sensitivity and simplicity of the method support its suitability for routine clinical studies.
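
    For illustration, the sketch below generates a generic three-factor central composite design in coded units and maps it to hypothetical chromatographic factor ranges; the ranges shown are assumptions for the example and are not the published method conditions.

      import itertools
      import numpy as np

      def central_composite(n_factors=3, n_center=6):
          """Rotatable CCD in coded units: factorial corners, axial points, centre runs."""
          corners = np.array(list(itertools.product([-1.0, 1.0], repeat=n_factors)))
          alpha = len(corners) ** 0.25   # rotatable axial distance = (2^k)^(1/4)
          axial = np.vstack([a * np.eye(n_factors)[i]
                             for i in range(n_factors) for a in (-alpha, alpha)])
          center = np.zeros((n_center, n_factors))
          return np.vstack([corners, axial, center])

      design = central_composite()
      # Map coded levels to hypothetical real settings:
      # %ACN 15-20, buffer 10-30 mM, pH 2.8-3.6 (assumed ranges).
      low = np.array([15.0, 10.0, 2.8])
      high = np.array([20.0, 30.0, 3.6])
      real = (low + high) / 2 + design * (high - low) / 2
      print(f"{design.shape[0]} runs; first run (coded, real): {design[0]}, {real[0]}")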

  17. Development of Modal Test Techniques for Validation of a Solar Sail Design

    NASA Technical Reports Server (NTRS)

    Gaspar, James L.; Mann, Troy; Behun, Vaughn; Wilkie, W. Keats; Pappa, Richard

    2004-01-01

    This paper focuses on the development of modal test techniques for validation of a solar sail gossamer space structure design. The major focus is on validating and comparing the capabilities of various excitation techniques for modal testing solar sail components. One triangular shaped quadrant of a solar sail membrane was tested in a 1 Torr vacuum environment using various excitation techniques including, magnetic excitation, and surface-bonded piezoelectric patch actuators. Results from modal tests performed on the sail using piezoelectric patches at different positions are discussed. The excitation methods were evaluated for their applicability to in-vacuum ground testing and to the development of on orbit flight test techniques. The solar sail membrane was tested in the horizontal configuration at various tension levels to assess the variation in frequency with tension in a vacuum environment. A segment of a solar sail mast prototype was also tested in ambient atmospheric conditions using various excitation techniques, and these methods are also assessed for their ground test capabilities and on-orbit flight testing.

  18. Strategies to design clinical studies to identify predictive biomarkers in cancer research.

    PubMed

    Perez-Gracia, Jose Luis; Sanmamed, Miguel F; Bosch, Ana; Patiño-Garcia, Ana; Schalper, Kurt A; Segura, Victor; Bellmunt, Joaquim; Tabernero, Josep; Sweeney, Christopher J; Choueiri, Toni K; Martín, Miguel; Fusco, Juan Pablo; Rodriguez-Ruiz, Maria Esperanza; Calvo, Alfonso; Prior, Celia; Paz-Ares, Luis; Pio, Ruben; Gonzalez-Billalabeitia, Enrique; Gonzalez Hernandez, Alvaro; Páez, David; Piulats, Jose María; Gurpide, Alfonso; Andueza, Mapi; de Velasco, Guillermo; Pazo, Roberto; Grande, Enrique; Nicolas, Pilar; Abad-Santos, Francisco; Garcia-Donas, Jesus; Castellano, Daniel; Pajares, María J; Suarez, Cristina; Colomer, Ramon; Montuenga, Luis M; Melero, Ignacio

    2017-02-01

    The discovery of reliable biomarkers to predict efficacy and toxicity of anticancer drugs remains one of the key challenges in cancer research. Despite its relevance, no efficient study designs to identify promising candidate biomarkers have been established. This has led to the proliferation of a myriad of exploratory studies using dissimilar strategies, most of which fail to identify any promising targets and are seldom validated. The lack of a proper methodology also determines that many anti-cancer drugs are developed below their potential, due to failure to identify predictive biomarkers. While some drugs will be systematically administered to many patients who will not benefit from them, leading to unnecessary toxicities and costs, others will never reach registration due to our inability to identify the specific patient population in which they are active. Despite these drawbacks, a limited number of outstanding predictive biomarkers have been successfully identified and validated, and have changed the standard practice of oncology. In this manuscript, a multidisciplinary panel reviews how those key biomarkers were identified and, based on those experiences, proposes a methodological framework, the DESIGN guidelines, to standardize the clinical design of biomarker identification studies and to develop future research in this pivotal field. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  19. Using standardised patients to measure physicians' practice: validation study using audio recordings

    PubMed Central

    Luck, Jeff; Peabody, John W

    2002-01-01

    Objective To assess the validity of standardised patients to measure the quality of physicians' practice. Design Validation study of standardised patients' assessments. Physicians saw unannounced standardised patients presenting with common outpatient conditions. The standardised patients covertly tape recorded their visit and completed a checklist of quality criteria immediately afterwards. Their assessments were compared against independent assessments of the recordings by a trained medical records abstractor. Setting Four general internal medicine primary care clinics in California. Participants 144 randomly selected consenting physicians. Main outcome measures Rates of agreement between the patients' assessments and the independent assessment. Results 40 visits, one per standardised patient, were recorded. The overall rate of agreement between the standardised patients' checklists and the independent assessment of the audio transcripts was 91% (κ=0.81). Disaggregating the data by medical condition, site, level of physicians' training, and domain (stage of the consultation) gave similar rates of agreement. Sensitivity of the standardised patients' assessments was 95%, and specificity was 85%. The area under the receiver operating characteristic curve was 90%. Conclusions Standardised patients' assessments seem to be a valid measure of the quality of physicians' care for a variety of common medical conditions in actual outpatient settings. Properly trained standardised patients compare well with independent assessment of recordings of the consultations and may justify their use as a "gold standard" in comparing the quality of care across sites or evaluating data obtained from other sources, such as medical records and clinical vignettes. What is already known on this topic: Standardised patients are valid and reliable reporters of physicians' practice in the medical education setting. However, validating standardised patients' measurements of quality of care in
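
    The agreement statistics reported here (percentage agreement, kappa, sensitivity and specificity) can be computed as in the following sketch on simulated item-level data, taking the abstractor's assessment as the reference; the numbers are illustrative only, not the study data.

      import numpy as np
      from sklearn.metrics import cohen_kappa_score, confusion_matrix

      rng = np.random.default_rng(6)

      # Hypothetical item-level data: 1 = quality criterion met, 0 = not met.
      n_items = 400
      abstractor = rng.integers(0, 2, n_items)
      # The standardised patient mostly agrees with the abstractor.
      flip = rng.random(n_items) < 0.09
      sp_checklist = np.where(flip, 1 - abstractor, abstractor)

      agreement = (sp_checklist == abstractor).mean()
      kappa = cohen_kappa_score(abstractor, sp_checklist)
      tn, fp, fn, tp = confusion_matrix(abstractor, sp_checklist).ravel()
      sensitivity = tp / (tp + fn)
      specificity = tn / (tn + fp)
      print(f"agreement = {agreement:.0%}, kappa = {kappa:.2f}, "
            f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")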

  20. Rational design and validation of a vanilloid-sensitive TRPV2 ion channel.

    PubMed

    Yang, Fan; Vu, Simon; Yarov-Yarovoy, Vladimir; Zheng, Jie

    2016-06-28

    Vanilloid activation of TRPV1 represents an excellent model system of ligand-gated ion channels. Recent studies using cryo-electron microscopy (cryo-EM), computational analysis, and functional quantification revealed the location of the capsaicin-binding site and critical residues mediating ligand binding and channel activation. Based on these new findings, here we have successfully introduced high-affinity binding of capsaicin and resiniferatoxin to the vanilloid-insensitive TRPV2 channel, using a rationally designed minimal set of four point mutations (F467S-S498F-L505T-Q525E, termed TRPV2_Quad). We found that binding of resiniferatoxin activates TRPV2_Quad but the ligand-induced open state is relatively unstable, whereas binding of capsaicin to TRPV2_Quad antagonizes resiniferatoxin-induced activation, likely through competition for the same binding sites. Using Rosetta-based molecular docking, we observed a common structural mechanism underlying vanilloid activation of TRPV1 and TRPV2_Quad, in which the ligand serves as molecular "glue" that bridges the S4-S5 linker to the S1-S4 domain to open these channels. Our analysis revealed that capsaicin failed to activate TRPV2_Quad, likely due to structural constraints preventing such bridge formation. These results not only validate our current working model for capsaicin activation of TRPV1 but should also help guide the design of drug candidate compounds for this important pain sensor.

  1. Design and Experimental Validation for Direct-Drive Fault-Tolerant Permanent-Magnet Vernier Machines

    PubMed Central

    Liu, Guohai; Yang, Junqin; Chen, Ming; Chen, Qian

    2014-01-01

    A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines have the ability of high torque density by introducing the flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristic of PMV machines and provides a design method, which is able to not only meet the fault-tolerant requirements but also keep the ability of high torque density. The operation principle of the proposed machine has been analyzed. The design process and optimization are presented specifically, such as the combination of slots and poles, the winding distribution, and the dimensions of PMs and teeth. By using the time-stepping finite element method (TS-FEM), the machine performances are evaluated. Finally, the FT-PMV machine is manufactured, and the experimental results are presented to validate the theoretical analysis. PMID:25045729

  2. Design and experimental validation for direct-drive fault-tolerant permanent-magnet vernier machines.

    PubMed

    Liu, Guohai; Yang, Junqin; Chen, Ming; Chen, Qian

    2014-01-01

    A fault-tolerant permanent-magnet vernier (FT-PMV) machine is designed for direct-drive applications, incorporating the merits of high torque density and high reliability. Based on the so-called magnetic gearing effect, PMV machines have the ability of high torque density by introducing the flux-modulation poles (FMPs). This paper investigates the fault-tolerant characteristic of PMV machines and provides a design method, which is able to not only meet the fault-tolerant requirements but also keep the ability of high torque density. The operation principle of the proposed machine has been analyzed. The design process and optimization are presented specifically, such as the combination of slots and poles, the winding distribution, and the dimensions of PMs and teeth. By using the time-stepping finite element method (TS-FEM), the machine performances are evaluated. Finally, the FT-PMV machine is manufactured, and the experimental results are presented to validate the theoretical analysis.

  3. Beware of external validation! - A Comparative Study of Several Validation Techniques used in QSAR Modelling.

    PubMed

    Majumdar, Subhabrata; Basak, Subhash C

    2018-04-26

    Proper validation is an important aspect of QSAR modelling. External validation is one of the widely used validation methods in QSAR, in which the model is built on a subset of the data and validated on the rest of the samples. However, its effectiveness for datasets with a small number of samples but a large number of predictors remains suspect. Calculating hundreds or thousands of molecular descriptors using currently available software has become the norm in QSAR research, owing to computational advances in the past few decades. Thus, for n chemical compounds and p descriptors calculated for each molecule, the typical chemometric dataset today has a high value of p but small n (i.e. n < p). Motivated by recent evidence of the inadequacies of external validation in estimating the true predictive capability of a statistical model, this paper performs an extensive comparative study of this method against several other validation techniques. We compared four validation methods: leave-one-out, K-fold, external and multi-split validation, using statistical models built with LASSO regression, which simultaneously performs variable selection and modelling. We used 300 simulated datasets and one real dataset of 95 congeneric amine mutagens for this evaluation. External validation metrics showed high variation among different random splits of the data and are hence not recommended for predictive QSAR models. LOO had the overall best performance among all validation methods applied in our scenario. Results from external validation were too unstable for the datasets we analyzed. Based on our findings, we recommend using the LOO procedure for validating QSAR predictive models built on high-dimensional small-sample data. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.
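
    The instability of external validation on small-n, large-p data can be demonstrated with a short sketch comparing leave-one-out cross-validation against repeated external splits for a LASSO model; the data below are simulated, not the amine mutagen set, and the alpha value is an arbitrary assumption.

      import numpy as np
      from sklearn.linear_model import Lasso
      from sklearn.model_selection import LeaveOneOut, train_test_split, cross_val_predict
      from sklearn.metrics import r2_score

      rng = np.random.default_rng(7)
      n, p = 95, 300                               # small n, large p (n < p)
      X = rng.normal(size=(n, p))
      y = X[:, :5] @ np.array([2.0, -1.5, 1.0, 0.8, -0.5]) + rng.normal(0, 1, n)

      model = Lasso(alpha=0.1, max_iter=10000)

      # Leave-one-out: each sample predicted by a model trained on the remaining n-1.
      loo_pred = cross_val_predict(model, X, y, cv=LeaveOneOut())
      q2_loo = r2_score(y, loo_pred)

      # External validation: repeated random 75/25 splits show how much R^2 varies.
      external_r2 = []
      for seed in range(20):
          X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=seed)
          external_r2.append(r2_score(y_te, model.fit(X_tr, y_tr).predict(X_te)))

      print(f"LOO Q^2 = {q2_loo:.2f}")
      print(f"external R^2 over 20 splits: mean = {np.mean(external_r2):.2f}, "
            f"sd = {np.std(external_r2):.2f}")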

  4. Validating the Vocabulary Levels Test with Fourth and Fifth Graders to Identify Students At-Risk in Vocabulary Development Using a Quasiexperimental Single Group Design

    ERIC Educational Resources Information Center

    Dunn, Suzanna

    2012-01-01

    This quasiexperimental single group design study investigated the validity of the Vocabulary Levels Test (VLT) to identify fourth and fifth grade students who are at-risk in vocabulary development. The subjects of the study were 88 fourth and fifth grade students at one elementary school in Washington State. The Group Reading Assessment and…

  5. On various metrics used for validation of predictive QSAR models with applications in virtual screening and focused library design.

    PubMed

    Roy, Kunal; Mitra, Indrani

    2011-07-01

    Quantitative structure-activity relationships (QSARs) have important applications in drug discovery research, environmental fate modeling, property prediction, etc. Validation has been recognized as a very important step for QSAR model development. As one of the important objectives of QSAR modeling is to predict activity/property/toxicity of new chemicals falling within the domain of applicability of the developed models and QSARs are being used for regulatory decisions, checking reliability of the models and confidence of their predictions is a very important aspect, which can be judged during the validation process. One prime application of a statistically significant QSAR model is virtual screening for molecules with improved potency based on the pharmacophoric features and the descriptors appearing in the QSAR model. Validated QSAR models may also be utilized for design of focused libraries which may be subsequently screened for the selection of hits. The present review focuses on various metrics used for validation of predictive QSAR models together with an overview of the application of QSAR models in the fields of virtual screening and focused library design for diverse series of compounds with citation of some recent examples.

  6. 29 CFR 1607.7 - Use of other validity studies.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection... described in test manuals. While publishers of selection procedures have a professional obligation to...

  7. 29 CFR 1607.7 - Use of other validity studies.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection... described in test manuals. While publishers of selection procedures have a professional obligation to...

  8. 29 CFR 1607.7 - Use of other validity studies.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... EMPLOYEE SELECTION PROCEDURES (1978) General Principles § 1607.7 Use of other validity studies. A. Validity studies not conducted by the user. Users may, under certain circumstances, support the use of selection... described in test manuals. While publishers of selection procedures have a professional obligation to...

  9. Design and content validation of a set of SMS to promote seeking of specialized mental health care within the Allillanchu Project.

    PubMed

    Toyama, M; Diez-Canseco, F; Busse, P; Del Mastro, I; Miranda, J J

    2018-01-01

    The aim of this study was to design and develop a set of short message service (SMS) messages to promote specialized mental health care seeking within the framework of the Allillanchu Project. The design phase consisted of 39 interviews with potential recipients of the SMS about their use of cellphones and their perceptions of and motivations towards seeking mental health care. After data collection, the research team developed a set of seven SMS for validation. The content validation phase consisted of 24 interviews, in which participants answered questions about their understanding of the SMS contents and rated their appeal. The seven SMS subjected to content validation were tailored to the recipient using their name. The reminder message included the working hours of the psychology service at the patient's health center, and the motivational messages addressed perceived barriers to and benefits of seeking mental health services. The average appeal score of the seven SMS was 9.0 (SD ± 0.4) out of 10 points. Participants did not suggest significant changes to the wording of the messages, and five SMS were chosen for use. This approach is likely to be applicable to other, similar low-resource settings, and the methodology can be adapted to develop SMS for other chronic conditions.

  10. Measuring Long-Distance Romantic Relationships: A Validity Study

    ERIC Educational Resources Information Center

    Pistole, M. Carole; Roberts, Amber

    2011-01-01

    This study investigated aspects of construct validity for the scores of a new long-distance romantic relationship measure. A single-factor structure of the long-distance romantic relationship index emerged, with convergent and discriminant evidence of external validity, high internal consistency reliability, and applied utility of the scores.…

  11. The Self-Consciousness Scale: A Discriminant Validity Study

    ERIC Educational Resources Information Center

    Carver, Charles S.; Glass, David C.

    1976-01-01

    A validity study of the Self-Consciousness Scale components was conducted with male undergraduates. The components (Private Self-Consciousness, Public Self-Consciousness, and Social Anxiety) did not correlate with any of the other measures used to establish their validity and thus appear to be independent of the other measures tested. (Author/DEP)

  12. Interactive design for self-study and developing students’ critical thinking skills in electromagnetic radiation topic

    NASA Astrophysics Data System (ADS)

    Ambarwati, D.; Suyatna, A.

    2018-01-01

    The purpose of this research is to create interactive electronic school books (ESB) on the topic of electromagnetic radiation that can be used for self-study and for developing students' critical thinking skills. The research method was based on the ADDIE research and development (R&D) model, and the procedure was limited to the design stage, ending once the product design had been validated. The data sources for the requirements-analysis phase of the interactive ESB were grade XII students and high school teachers in Lampung Province. The design of the interactive ESB was validated by experts in science education. Data on the need for an interactive ESB were collected using questionnaires and analyzed using quantitative descriptive statistics. The questionnaire results showed that 97% of the books commonly used are printed books from schools that are neither interactive nor supportive of students' critical thinking, and 55% of students stated that the physics books in use do not meet their expectations; students expect teachers to use interactive electronic books in physics learning. The expert validation indicated that the resulting ESB design is interactive, can be used for self-study, and develops students' critical thinking skills; it contains instruction manuals, learning objectives, learning materials, sample questions with discussion, video illustrations, animations, summaries, and interactive quizzes with feedback, as well as exam practice and preparation for college entrance.

  13. Optimization of critical quality attributes in continuous twin-screw wet granulation via design space validated with pilot scale experimental data.

    PubMed

    Liu, Huolong; Galbraith, S C; Ricart, Brendon; Stanton, Courtney; Smith-Goettler, Brandye; Verdi, Luke; O'Connor, Thomas; Lee, Sau; Yoon, Seongkyu

    2017-06-15

    In this study, the influence of key process variables (screw speed, throughput, and liquid-to-solid (L/S) ratio) of a continuous twin-screw wet granulation (TSWG) process was investigated using a central composite face-centered (CCF) experimental design. Regression models were developed to predict the process responses (motor torque, granule residence time), granule properties (size distribution, volume-average diameter, yield, relative width, flowability), and tablet properties (tensile strength). The effects of the three key process variables were analyzed via contour and interaction plots. The experimental results demonstrated that all the process responses, granule properties, and tablet properties are influenced by changes in screw speed, throughput, and L/S ratio. The TSWG process was optimized, based on the developed regression models, to produce granules with a target volume-average diameter of 150 μm and a yield of 95%. A design space (DS) was built for a volume-average granule diameter between 90 and 200 μm and a granule yield larger than 75%, with a failure-probability analysis using Monte Carlo simulations. Validation experiments confirmed the robustness and accuracy of the DS generated using the CCF experimental design for optimizing a continuous TSWG process. Copyright © 2017 Elsevier B.V. All rights reserved.
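
    A minimal sketch of the failure-probability idea, assuming hypothetical regression coefficients and error variances (the paper's fitted models and settings are not reproduced here):

    # Monte Carlo estimate of the probability that a process setting fails the
    # acceptance criteria (diameter 90-200 um, yield > 75%); all coefficients and
    # noise levels below are placeholders, not the study's fitted values.
    import numpy as np

    rng = np.random.default_rng(1)

    def granule_diameter(screw_speed, throughput, ls_ratio):
        """Hypothetical regression model for volume-average diameter (um)."""
        return 40 + 0.05 * screw_speed + 4.0 * throughput + 200.0 * ls_ratio

    def granule_yield(screw_speed, throughput, ls_ratio):
        """Hypothetical regression model for granule yield (%)."""
        return 60 + 0.01 * screw_speed + 1.0 * throughput + 40.0 * ls_ratio

    def failure_probability(screw_speed, throughput, ls_ratio, n_sim=10_000):
        d = granule_diameter(screw_speed, throughput, ls_ratio) + rng.normal(0, 15, n_sim)
        y = granule_yield(screw_speed, throughput, ls_ratio) + rng.normal(0, 5, n_sim)
        return float(((d < 90) | (d > 200) | (y < 75)).mean())

    # Settings with a low estimated failure probability lie inside the design space.
    print(failure_probability(screw_speed=500, throughput=10, ls_ratio=0.25))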

  14. Study design and statistical analysis of data in human population studies with the micronucleus assay.

    PubMed

    Ceppi, Marcello; Gallo, Fabio; Bonassi, Stefano

    2011-01-01

    The most common study design in population studies based on the micronucleus (MN) assay is the cross-sectional study, which is largely performed to evaluate the DNA-damaging effects of exposure to genotoxic agents in the workplace and the environment, as well as from diet or lifestyle factors. Sample size is still a critical issue in the design of MN studies, since most recent studies considering gene-environment interaction often require a sample size of several hundred subjects, which is in many cases difficult to achieve. The control of confounding is another major threat to the validity of causal inference; the most popular confounders considered in population studies using MN are age, gender and smoking habit. Extensive attention is given to the assessment of effect modification, given the increasing inclusion of biomarkers of genetic susceptibility in the study design. Selected issues concerning the statistical treatment of data are addressed in this mini-review, starting from data description, which is a critical step of statistical analysis since it allows possible errors in the dataset to be detected and the validity of assumptions required for more complex analyses to be checked. Basic issues in the statistical analysis of biomarkers are extensively evaluated, including methods to explore the dose-response relationship between two continuous variables and inferential analysis. A critical approach to the use of parametric and non-parametric methods is presented before addressing which multivariate models are most suitable for fitting MN data. In the last decade the quality of statistical analysis of MN data has certainly evolved, although even nowadays only a small number of studies apply the Poisson model, which is the most suitable method for the analysis of MN data.
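
    A minimal sketch (simulated data, not taken from the review) of the Poisson regression the authors identify as the most suitable model for MN counts, using age, gender and smoking as covariates:

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    rng = np.random.default_rng(0)
    n = 200
    df = pd.DataFrame({
        "age": rng.integers(20, 65, n),
        "male": rng.integers(0, 2, n),
        "smoker": rng.integers(0, 2, n),
    })
    # Simulated MN counts per 1000 binucleated cells.
    rate = np.exp(0.5 + 0.02 * df["age"] + 0.1 * df["male"] + 0.3 * df["smoker"])
    df["mn_count"] = rng.poisson(rate)

    # Poisson GLM with a log link; coefficients are interpretable as log rate ratios.
    X = sm.add_constant(df[["age", "male", "smoker"]])
    poisson_fit = sm.GLM(df["mn_count"], X, family=sm.families.Poisson()).fit()
    print(poisson_fit.summary())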

  15. Examining the Internal Validity and Statistical Precision of the Comparative Interrupted Time Series Design by Comparison with a Randomized Experiment

    ERIC Educational Resources Information Center

    St.Clair, Travis; Cook, Thomas D.; Hallberg, Kelly

    2014-01-01

    Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effects to those from a comparative ITS (CITS) design that uses the same treatment group as the experiment…

  16. Pinaverium Bromide: Development and Validation of Spectrophotometric Methods for Assay and Dissolution Studies.

    PubMed

    Martins, Danielly da Fonte Carvalho; Florindo, Lorena Coimbra; Machado, Anna Karolina Mouzer da Silva; Todeschini, Vítor; Sangoi, Maximiliano da Silva

    2017-11-01

    This study presents the development and validation of UV spectrophotometric methods for the determination of pinaverium bromide (PB) in tablet assay and dissolution studies. The methods were satisfactorily validated according to International Conference on Harmonization guidelines. The response was linear (r2 > 0.99) in the concentration ranges of 2-14 μg/mL at 213 nm and 10-70 μg/mL at 243 nm. The LOD and LOQ were 0.39 and 1.31 μg/mL, respectively, at 213 nm. For the 243 nm method, the LOD and LOQ were 2.93 and 9.77 μg/mL, respectively. Precision was evaluated by RSD, and the obtained results were lower than 2%. Adequate accuracy was also obtained. The methods proved to be robust using a full factorial design evaluation. For PB dissolution studies, the best conditions were achieved using a United States Pharmacopeia Dissolution Apparatus 2 (paddle) at 50 rpm and with 900 mL 0.1 M hydrochloric acid as the dissolution medium, presenting satisfactory results during the validation tests. In addition, the kinetic parameters of drug release were investigated using model-dependent methods, and the dissolution profiles were best described by the first-order model. Therefore, the proposed methods were successfully applied for the assay and dissolution analysis of PB in commercial tablets.
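
    For orientation, a commonly used ICH-consistent way to estimate these limits from the calibration curve (the abstract does not state which approach was applied here) is

    \[ \mathrm{LOD} = \frac{3.3\,\sigma}{S}, \qquad \mathrm{LOQ} = \frac{10\,\sigma}{S} \]

    where \(\sigma\) is the standard deviation of the response (e.g., of the intercept or of blank measurements) and \(S\) is the slope of the calibration curve.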

  17. Design and Validation of a Breathing Detection System for Scuba Divers.

    PubMed

    Altepe, Corentin; Egi, S Murat; Ozyigit, Tamer; Sinoplu, D Ruzgar; Marroni, Alessandro; Pierleoni, Paola

    2017-06-09

    Drowning is the major cause of death in self-contained underwater breathing apparatus (SCUBA) diving. This study proposes an embedded system with a live, light-weight algorithm that detects the breathing of divers through analysis of the intermediate pressure (IP) signal of the SCUBA regulator. A system composed mainly of two pressure sensors and a low-power microcontroller was designed and programmed to record the pressure sensor signals and provide alarms in the absence of breathing. An algorithm was developed to analyze the signals and identify the diver's inhalation events. A waterproof case was built to accommodate the system and was tested to a depth of 25 m in a pressure chamber. To validate the system in the real environment, a series of dives with two types of workload, requiring different ranges of breathing frequency, was planned. Eight professional SCUBA divers volunteered to dive with the system so that their IP data could be collected for the validation trials. The subjects underwent two dives, each of 52 min on average and with a maximum depth of 7 m. The algorithm was optimized for the collected dataset and achieved an inhalation-detection sensitivity of 97.5% with a total of 275 false positives (FP) over 13.9 h of recording. The detection algorithm presents a maximum delay of 5.2 s and requires only 800 bytes of random-access memory (RAM). The results were compared against the analysis of video records of the dives by two blinded observers and showed a sensitivity of 97.6% on that data set. The design includes a buzzer to provide audible alarms to accompanying dive buddies, triggered in case of degraded health conditions such as near drowning (absence of breathing), hyperventilation (breathing frequency too high) and skip-breathing (breathing frequency too low), as measured by the improper breathing frequency. The system also measures the IP at rest before the dive and indicates with flashing light
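
    The published algorithm is not reproduced here; as a purely hypothetical illustration of the general idea, inhalations can be flagged when the IP trace drops below a moving baseline, with an alarm raised if no inhalation occurs within a timeout. All thresholds below are placeholders.

    import numpy as np

    def detect_inhalations(ip_signal, fs, drop_threshold=0.05, min_gap_s=1.0):
        """Return sample indices of detected inhalations.

        ip_signal      : IP samples (1-D array, bar)
        fs             : sampling rate (Hz)
        drop_threshold : drop below the moving baseline counted as an inhalation onset
        min_gap_s      : refractory period so one breath is not counted twice
        """
        win = max(int(5 * fs), 1)
        baseline = np.convolve(ip_signal, np.ones(win) / win, mode="same")
        below = ip_signal < (baseline - drop_threshold)
        onsets = np.flatnonzero(below[1:] & ~below[:-1]) + 1
        kept, last = [], -np.inf
        for idx in onsets:
            if idx - last >= min_gap_s * fs:
                kept.append(idx)
                last = idx
        return np.array(kept, dtype=int)

    def no_breathing_alarm(inhalation_idx, n_samples, fs, timeout_s=30.0):
        """True if the gap since the last detected inhalation exceeds timeout_s."""
        last = inhalation_idx[-1] if len(inhalation_idx) else 0
        return (n_samples - last) / fs > timeout_s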

  18. Designing Awe in Virtual Reality: An Experimental Study.

    PubMed

    Chirico, Alice; Ferrise, Francesco; Cordella, Lorenzo; Gaggioli, Andrea

    2017-01-01

    Awe is a little-studied emotion with great transformative potential, and interest in studying its underlying mechanisms has therefore increased. Specifically, researchers have been interested in how to reproduce intense feelings of awe under laboratory conditions. It has been proposed that virtual reality (VR) could be an effective way to induce awe in controlled experimental settings, thanks to its ability to provide participants with a sense of "presence," that is, the subjective feeling of being displaced in another physical or imaginary place. However, the potential of VR as an awe-inducing medium has not been fully tested. In the present study, we provide an evidence-based design and a validation of four immersive virtual environments (VEs) involving 36 participants in a within-subject design. Three of these VEs were designed to induce awe, whereas the fourth was targeted as an emotionally neutral stimulus. Participants self-reported the extent to which they felt awe, general affect and sense of presence for each environment. As expected, results showed that the awe VEs induced significantly higher levels of awe and presence than the neutral VE, and significantly more positive than negative affect. These findings support the potential of immersive VR for inducing awe and provide useful indications for the design of awe-inspiring virtual environments.

  19. Validation of design procedure and performance modeling of a heat and fluid transport field experiment in the unsaturated zone

    NASA Astrophysics Data System (ADS)

    Nir, A.; Doughty, C.; Tsang, C. F.

    Validation methods that were developed in the context of the deterministic concepts of past generations often cannot be applied directly to environmental problems, which may be characterized by limited reproducibility of results and highly complex models. Instead, validation is interpreted here as a series of activities, including both theoretical and experimental tests, designed to enhance our confidence in the capability of a proposed model to describe some aspect of reality. We examine the validation process applied to a project concerned with heat and fluid transport in porous media, in which mathematical modeling, simulation, and the results of field experiments are evaluated in order to determine the feasibility of a system for seasonal thermal energy storage in shallow unsaturated soils. Technical details of the field experiments are not included here, but appear in previous publications. Validation activities are divided into three stages. The first stage, carried out prior to the field experiments, is concerned with modeling the relevant physical processes, optimization of the heat-exchanger configuration and the shape of the storage volume, and multi-year simulation; subjects requiring further theoretical and experimental study are identified at this stage. The second stage encompasses the planning and evaluation of the initial field experiment: simulations are made to determine the experimental time scale and optimal sensor locations, soil thermal parameters and temperature boundary conditions are estimated using an inverse method, and the results of the experiment are then compared with model predictions using different parameter values and modeling approximations. In the third stage, the results of an experiment performed under different boundary conditions are compared to predictions made by the models developed in the second stage. Various aspects of this theoretical and experimental field study are described as examples of the verification and validation procedure. There is no

  20. Experimental validation of an integrated controls-structures design methodology for a class of flexible space structures

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliott, Kenny B.; Joshi, Suresh M.; Walz, Joseph E.

    1994-01-01

    This paper describes the first experimental validation of an optimization-based integrated controls-structures design methodology for a class of flexible space structures. The Controls-Structures-Interaction (CSI) Evolutionary Model, a laboratory test bed at Langley, is redesigned based on the integrated design methodology with two different dissipative control strategies. The redesigned structure is fabricated, assembled in the laboratory, and experimentally compared with the original test structure. Design guides are proposed and used in the integrated design process to ensure that the resulting structure can be fabricated. Experimental results indicate that the integrated design requires greater than 60 percent less average control power (by thruster actuators) than the conventional control-optimized design while maintaining the required line-of-sight performance, thereby confirming the analytical findings about the superiority of the integrated design methodology. Amenability of the integrated design structure to other control strategies is considered and evaluated analytically and experimentally. This work also demonstrates the capabilities of the Langley-developed design tool CSI DESIGN which provides a unified environment for structural and control design.

  1. Model Development and Experimental Validation of the Fusible Heat Sink Design for Exploration Vehicles

    NASA Technical Reports Server (NTRS)

    Cognata, Thomas J.; Leimkuehler, Thomas O.; Sheth, Rubik B.; Le, Hung

    2012-01-01

    The Fusible Heat Sink is a novel vehicle heat rejection technology which combines a flow through radiator with a phase change material. The combined technologies create a multi-function device able to shield crew members against Solar Particle Events (SPE), reduce radiator extent by permitting sizing to the average vehicle heat load rather than to the peak vehicle heat load, and to substantially absorb heat load excursions from the average while constantly maintaining thermal control system setpoints. This multi-function technology provides great flexibility for mission planning, making it possible to operate a vehicle in hot or cold environments and under high or low heat load conditions for extended periods of time. This paper describes the model development and experimental validation of the Fusible Heat Sink technology. The model developed was intended to meet the radiation and heat rejection requirements of a nominal MMSEV mission. Development parameters and results, including sizing and model performance will be discussed. From this flight-sized model, a scaled test-article design was modeled, designed, and fabricated for experimental validation of the technology at Johnson Space Center thermal vacuum chamber facilities. Testing showed performance comparable to the model at nominal loads and the capability to maintain heat loads substantially greater than nominal for extended periods of time.

  2. Model Development and Experimental Validation of the Fusible Heat Sink Design for Exploration Vehicles

    NASA Technical Reports Server (NTRS)

    Cognata, Thomas J.; Leimkuehler, Thomas; Sheth, Rubik; Le, Hung

    2013-01-01

    The Fusible Heat Sink is a novel vehicle heat rejection technology which combines a flow through radiator with a phase change material. The combined technologies create a multi-function device able to shield crew members against Solar Particle Events (SPE), reduce radiator extent by permitting sizing to the average vehicle heat load rather than to the peak vehicle heat load, and to substantially absorb heat load excursions from the average while constantly maintaining thermal control system setpoints. This multi-function technology provides great flexibility for mission planning, making it possible to operate a vehicle in hot or cold environments and under high or low heat load conditions for extended periods of time. This paper describes the modeling and experimental validation of the Fusible Heat Sink technology. The model developed was intended to meet the radiation and heat rejection requirements of a nominal MMSEV mission. Development parameters and results, including sizing and model performance will be discussed. From this flight-sized model, a scaled test-article design was modeled, designed, and fabricated for experimental validation of the technology at Johnson Space Center thermal vacuum chamber facilities. Testing showed performance comparable to the model at nominal loads and the capability to maintain heat loads substantially greater than nominal for extended periods of time.

  3. Validation of new psychosocial factors questionnaires: a Colombian national study.

    PubMed

    Villalobos, Gloria H; Vargas, Angélica M; Rondón, Martin A; Felknor, Sarah A

    2013-01-01

    The study of workers' health problems possibly associated with stressful conditions requires valid and reliable tools for monitoring risk factors. The present study validates two questionnaires to assess psychosocial risk factors for stress-related illnesses within a sample of Colombian workers. The validation process was based on a representative sample survey of 2,360 Colombian employees, aged 18-70 years. Worker response rate was 90%; 46% of the responders were women. Internal consistency was calculated, construct validity was tested with factor analysis and concurrent validity was tested with Spearman correlations. The questionnaires demonstrated adequate reliability (0.88-0.95). Factor analysis confirmed the dimensions proposed in the measurement model. Concurrent validity resulted in significant correlations with stress and health symptoms. "Work and Non-work Psychosocial Factors Questionnaires" were found to be valid and reliable for the assessment of workers' psychosocial factors, and they provide information for research and intervention. Copyright © 2012 Wiley Periodicals, Inc.
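
    A minimal sketch of the internal-consistency calculation reported above (Cronbach's alpha), applied here to simulated questionnaire responses rather than the study's data:

    import numpy as np

    def cronbach_alpha(items):
        """items: array of shape (n_respondents, n_items)."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    rng = np.random.default_rng(0)
    latent = rng.normal(size=(500, 1))                           # one underlying construct
    responses = latent + rng.normal(scale=0.8, size=(500, 10))   # 10 correlated items
    print(round(cronbach_alpha(responses), 2))                   # ~0.94 for this simulation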

  4. Updated Design Standards and Guidance from the What Works Clearinghouse: Regression Discontinuity Designs and Cluster Designs

    ERIC Educational Resources Information Center

    Cole, Russell; Deke, John; Seftor, Neil

    2016-01-01

    The What Works Clearinghouse (WWC) maintains design standards to identify rigorous, internally valid education research. As education researchers advance new methodologies, the WWC must revise its standards to include an assessment of the new designs. Recently, the WWC has revised standards for two emerging study designs: regression discontinuity…

  5. Design, assembly, and validation of a nose-only inhalation exposure system for studies of aerosolized viable influenza H5N1 virus in ferrets

    PubMed Central

    2010-01-01

    Background The routes by which humans acquire influenza H5N1 infections have not been fully elucidated. Based on the known biology of influenza viruses, four modes of transmission are most likely in humans: aerosol transmission, ingestion of undercooked contaminated infected poultry, transmission by large droplets and self-inoculation of the nasal mucosa by contaminated hands. In preparation of a study to resolve whether H5N1 viruses are transmissible by aerosol in an animal model that is a surrogate for humans, an inhalation exposure system for studies of aerosolized H5N1 viruses in ferrets was designed, assembled, and validated. Particular attention was paid towards system safety, efficacy of dissemination, the viability of aerosolized virus, and sampling methodology. Results An aerosol generation and delivery system, referred to as a Nose-Only Bioaerosol Exposure System (NBIES), was assembled and function tested. The NBIES passed all safety tests, met expected engineering parameters, required relatively small quantities of material to obtain the desired aerosol concentrations of influenza virus, and delivered doses with high-efficacy. Ferrets withstood a mock exposure trial without signs of stress. Conclusions The NBIES delivers doses of aerosolized influenza viruses with high efficacy, and uses less starting material than other similar designs. Influenza H5N1 and H3N2 viruses remain stable under the conditions used for aerosol generation and sample collection. The NBIES is qualified for studies of aerosolized H5N1 virus. PMID:20573226

  6. The development and evaluation of content validity of the Zambia Spina Bifida Functional Measure: Preliminary studies

    PubMed Central

    Amosun, Seyi L.; Shilalukey-Ngoma, Mary P.; Kafaar, Zuhayr

    2017-01-01

    Background Very little is known on outcome measures for children with spina bifida (SB) in Zambia. If rehabilitation professionals managing children with SB in Zambia and other parts of sub-Saharan Africa are to instigate measuring outcomes routinely, a tool has to be made available. The main objective of this study was to develop an appropriate and culturally sensitive instrument for evaluating the impact of the interventions on children with SB in Zambia. Methods A mixed design method was used for the study. Domains were identified retrospectively and confirmation was done through a systematic review study. Items were generated through semi-structured interviews and focus group discussions. Qualitative data were downloaded, translated into English, transcribed verbatim and presented. These were then placed into categories of the main domains of care deductively through the process of manifest content analysis. Descriptive statistics, alpha coefficient and index of content validity were calculated using SPSS. Results Self-care, mobility and social function were identified as main domains, while participation and communication were sub-domains. A total of 100 statements were generated and 78 items were selected deductively. An alpha coefficient of 0.98 was computed and experts judged the items. Conclusions The new functional measure with an acceptable level of content validity titled Zambia Spina Bifida Functional Measure (ZSBFM) was developed. It was designed to evaluate effectiveness of interventions given to children with SB from the age of 6 months to 5 years. Psychometric properties of reliability and construct validity were tested and are reported in another study. PMID:28951850

  7. Self-administered structured food record for measuring individual energy and nutrient intake in large cohorts: Design and validation.

    PubMed

    García, Silvia M; González, Claudio; Rucci, Enzo; Ambrosino, Cintia; Vidal, Julia; Fantuzzi, Gabriel; Prestes, Mariana; Kronsbein, Peter

    2018-06-05

    Several instruments developed to assess the dietary intake of groups or populations have strengths and weaknesses that affect their specific application. No self-administered, closed-ended dietary survey has previously been used in Argentina to assess current food and nutrient intake on a daily basis. The aim was to design and validate a self-administered, structured food record (NutriQuid, NQ), representative of the adult Argentine population's food consumption pattern, to measure individual energy and nutrient intake. Records were loaded onto a database using software that checks a regional nutrition information system (SARA program), automatically quantifying energy and nutrient intake. NQ validation included two phases: (1) NQ construct validity, comparing records kept simultaneously by healthy volunteers (45-75 years) and a nutritionist who provided the meals (reference); and (2) verification of whether the NQ reflected target population consumption (calories and nutrients), differences in consumption across the week, respondent acceptability, and ease of data entry/analysis. Data analysis included descriptive statistics, repeated-measures ANOVA, the intraclass correlation coefficient, nonparametric regression, and cross-classification into quintiles. The first validation (study group vs. reference) showed an underestimation (10%) of carbohydrate, fat, and energy intake. In the second validation, 109 volunteers (91% response) completed the NQ for seven consecutive days. Record completion took about 9 min/day, and data entry 3-6 min. Mean calorie intake was 2240 ± 119 kcal/day (42% carbohydrates, 17% protein, and 41% fat), and intake increased significantly at the weekend. The NQ is a simple and efficient tool for assessing dietary intake in large samples. Copyright © 2018 SEEN y SED. Publicado por Elsevier España, S.L.U. All rights reserved.

  8. LOX/hydrocarbon rocket engine analytical design methodology development and validation. Volume 2: Appendices

    NASA Technical Reports Server (NTRS)

    Niiya, Karen E.; Walker, Richard E.; Pieper, Jerry L.; Nguyen, Thong V.

    1993-01-01

    This final report includes a discussion of the work accomplished during the period from Dec. 1988 through Nov. 1991. The objective of the program was to assemble existing performance and combustion stability models into a usable design methodology capable of designing and analyzing high-performance, stable LOX/hydrocarbon booster engines. The methodology was then used to design a validation engine, and its capabilities and validity were demonstrated with this engine in an extensive hot-fire test program. The engine used LOX/RP-1 propellants and was tested over a range of mixture ratios, chamber pressures, and acoustic damping device configurations. This volume contains time-domain and frequency-domain stability plots which indicate the pressure perturbation amplitudes and frequencies from approximately 30 tests of a 50K-thrust rocket engine using LOX/RP-1 propellants over a range of chamber pressures from 240 to 1750 psia with mixture ratios from 1.2 to 7.5. The data are from test configurations that used both bitune and monotune acoustic cavities and from tests with no acoustic cavities. The engine had a length of 14 inches and a contraction ratio of 2.0, using a 7.68-inch diameter injector. The data were taken from both stable and unstable tests. All combustion instabilities were spontaneous in the first tangential mode; although stability bombs were used and generated overpressures of approximately 20 percent, no tests were driven unstable by the bombs. The stability instrumentation included six high-frequency Kistler transducers in the combustion chamber, a high-frequency Kistler transducer in each propellant manifold, and tri-axial accelerometers. Performance data are presented, both characteristic velocity efficiencies and energy release efficiencies, for those tests of sufficient duration to record steady-state values.

  9. Reporting to Improve Reproducibility and Facilitate Validity Assessment for Healthcare Database Studies V1.0.

    PubMed

    Wang, Shirley V; Schneeweiss, Sebastian; Berger, Marc L; Brown, Jeffrey; de Vries, Frank; Douglas, Ian; Gagne, Joshua J; Gini, Rosa; Klungel, Olaf; Mullins, C Daniel; Nguyen, Michael D; Rassen, Jeremy A; Smeeth, Liam; Sturkenboom, Miriam

    2017-09-01

    Defining a study population and creating an analytic dataset from longitudinal healthcare databases involves many decisions. Our objective was to catalogue scientific decisions underpinning study execution that should be reported to facilitate replication and enable assessment of validity of studies conducted in large healthcare databases. We reviewed key investigator decisions required to operate a sample of macros and software tools designed to create and analyze analytic cohorts from longitudinal streams of healthcare data. A panel of academic, regulatory, and industry experts in healthcare database analytics discussed and added to this list. Evidence generated from large healthcare encounter and reimbursement databases is increasingly being sought by decision-makers. Varied terminology is used around the world for the same concepts. Agreeing on terminology and which parameters from a large catalogue are the most essential to report for replicable research would improve transparency and facilitate assessment of validity. At a minimum, reporting for a database study should provide clarity regarding operational definitions for key temporal anchors and their relation to each other when creating the analytic dataset, accompanied by an attrition table and a design diagram. A substantial improvement in reproducibility, rigor and confidence in real world evidence generated from healthcare databases could be achieved with greater transparency about operational study parameters used to create analytic datasets from longitudinal healthcare databases. © 2017 The Authors. Pharmacoepidemiology & Drug Safety Published by John Wiley & Sons Ltd.

  10. Development and Validation of a Rubric for Diagnosing Students’ Experimental Design Knowledge and Difficulties

    PubMed Central

    Dasgupta, Annwesa P.; Anderson, Trevor R.

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students’ responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 yr ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students’ experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design. PMID:26086658

  11. Helping Students Evaluate the Validity of a Research Study.

    ERIC Educational Resources Information Center

    Morgan, George A.; Gliner, Jeffrey A.

    Students often have difficulty in evaluating the validity of a study. A conceptually and linguistically meaningful framework for evaluating research studies is proposed that is based on the discussion of internal and external validity of T. D. Cook and D. T. Campbell (1979). The proposal includes six key dimensions, three related to internal…

  12. Toward Supersonic Retropropulsion CFD Validation

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Schauerhamer, D. Guy; Trumble, Kerry; Sozer, Emre; Barnhardt, Michael; Carlson, Jan-Renee; Edquist, Karl

    2011-01-01

    This paper begins the process of verifying and validating computational fluid dynamics (CFD) codes for supersonic retropropulsive flows. Four CFD codes (DPLR, FUN3D, OVERFLOW, and US3D) are used to perform various numerical and physical modeling studies toward the goal of comparing predictions with a wind tunnel experiment specifically designed to support CFD validation. Numerical studies run the gamut in rigor from code-to-code comparisons to observed order-of-accuracy tests. Results indicate that for this complex flowfield, which involves time-dependent shocks and vortex shedding, the design order of accuracy is not clearly evident. Also explored is the extent of physical modeling necessary to predict the salient flowfield features found in high-speed Schlieren images and surface pressure measurements taken during the validation experiment. Physical modeling studies include geometric items such as wind tunnel wall and sting mount interference, as well as turbulence modeling that ranges from a two-equation RANS (Reynolds-Averaged Navier-Stokes) model to DES (Detached Eddy Simulation) models. These studies indicate that tunnel wall interference is minimal for the cases investigated; model mounting hardware effects are confined to the aft end of the model; and sparse grid resolution and turbulence modeling can damp or entirely dissipate the unsteadiness of this self-excited flow.
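
    For reference, the observed order of accuracy mentioned above is typically estimated from solutions on three systematically refined grids; with fine, medium and coarse results f1, f2, f3 and a constant refinement ratio r, a standard estimate (stated here for orientation, not quoted from the paper) is

    \[ p_{\mathrm{obs}} = \frac{\ln\!\bigl((f_3 - f_2)/(f_2 - f_1)\bigr)}{\ln r} \]

    which is then compared with the design (formal) order of the scheme.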

  13. Medical student quality-of-life in the clerkships: a scale validation study.

    PubMed

    Brannick, Michael T; Horn, Gregory T; Schnaus, Michael J; Wahi, Monika M; Goldin, Steven B

    2015-04-01

    Many aspects of medical school are stressful for students. To empirically assess student reactions to clerkship programs, or to assess efforts to improve such programs, educators must measure the overall well-being of the students reliably and validly. The purpose of the study was to develop and validate a measure designed to achieve these goals. The authors developed a measure of quality of life for medical students by sampling (public domain) items tapping general happiness, fatigue, and anxiety. A quality-of-life scale was developed by factor analyzing responses to the items from students in two different clerkships from 2005 to 2008. Reliability was assessed using Cronbach's alpha. Validity was assessed by factor analysis, convergence with additional theoretically relevant scales, and sensitivity to change over time. The refined nine-item measure is a Likert scaled survey of quality-of-life items comprised of two domains: exhaustion and general happiness. The resulting scale demonstrated good reliability and factorial validity at two time points for each of the two samples. The quality-of-life measure also correlated with measures of depression and the amount of sleep reported during the clerkships. The quality-of-life measure appeared more sensitive to changes over time than did the depression measure. The measure is short and can be easily administered in a survey. The scale appears useful for program evaluation and more generally as an outcome variable in medical educational research.

  14. Development, validation and utilisation of food-frequency questionnaires - a review.

    PubMed

    Cade, Janet; Thompson, Rachel; Burley, Victoria; Warm, Daniel

    2002-08-01

    The purpose of this review is to provide guidance on the development, validation and use of food-frequency questionnaires (FFQs) for different study designs. It does not include any recommendations about the most appropriate method for dietary assessment (e.g. food-frequency questionnaire versus weighed record). A comprehensive search of electronic databases was carried out for publications from 1980 to 1999. Findings from the review were then commented upon and added to by a group of international experts. Recommendations have been developed to aid in the design, validation and use of FFQs. Specific details of each of these areas are discussed in the text. FFQs are being used in a variety of ways and different study designs. There is no gold standard for directly assessing the validity of FFQs. Nevertheless, the outcome of this review should help those wishing to develop or adapt an FFQ to validate it for its intended use.

  15. The Design and Evaluation of Class Exercises as Active Learning Tools in Software Verification and Validation

    ERIC Educational Resources Information Center

    Wu, Peter Y.; Manohar, Priyadarshan A.; Acharya, Sushil

    2016-01-01

    It is well known that interesting questions can stimulate thinking and invite participation. Class exercises are designed to make use of questions to engage students in active learning. In a project toward building a community skilled in software verification and validation (SV&V), we critically review and further develop course materials in…

  16. Design and validation of an open-source library of dynamic reference frames for research and education in optical tracking.

    PubMed

    Brown, Alisa; Uneri, Ali; Silva, Tharindu De; Manbachi, Amir; Siewerdsen, Jeffrey H

    2018-04-01

    Dynamic reference frames (DRFs) are a common component of modern surgical tracking systems; however, the limited number of commercially available DRFs poses a constraint in developing systems, especially for research and education. This work presents the design and validation of a large, open-source library of DRFs compatible with passive, single-face tracking systems, such as Polaris stereoscopic infrared trackers (NDI, Waterloo, Ontario). An algorithm was developed to create new DRF designs consistent with intra- and intertool design constraints and convert to computer-aided design (CAD) files suitable for three-dimensional printing. A library of 10 such groups, each with 6 to 10 DRFs, was produced and tracking performance was validated in comparison to a standard commercially available reference, including pivot calibration, fiducial registration error (FRE), and target registration error (TRE). Pivot tests showed calibration error [Formula: see text], indistinguishable from the reference. FRE was [Formula: see text], and TRE in a CT head phantom was [Formula: see text], both equivalent to the reference. The library of DRFs offers a useful resource for surgical navigation research and could be extended to other tracking systems and alternative design constraints.
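
    As a sketch of how a fiducial registration error of the kind reported above can be computed (this is a generic SVD-based rigid registration, not the authors' software), assuming measured marker positions and their CAD design positions:

    import numpy as np

    def rigid_register(src, dst):
        """Least-squares rotation R and translation t mapping src -> dst (Nx3 arrays)."""
        src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
        U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # guard against reflections
        R = Vt.T @ D @ U.T
        t = dst.mean(0) - R @ src.mean(0)
        return R, t

    def fre(src, dst):
        """Root-mean-square fiducial registration error after rigid alignment."""
        R, t = rigid_register(src, dst)
        residuals = dst - (src @ R.T + t)
        return float(np.sqrt((residuals ** 2).sum(axis=1).mean()))

    design = np.array([[0, 0, 0], [50, 0, 0], [0, 60, 0], [25, 30, 10]], dtype=float)  # hypothetical DRF geometry (mm)
    rng = np.random.default_rng(0)
    measured = design + np.array([5.0, -2.0, 1.0]) + rng.normal(0, 0.2, design.shape)  # offset plus localization noise
    print(f"FRE = {fre(measured, design):.3f} mm")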

  17. Neuroimaging Study Designs, Computational Analyses and Data Provenance Using the LONI Pipeline

    PubMed Central

    Dinov, Ivo; Lozev, Kamen; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Pierce, Jonathan; Zamanyan, Alen; Chakrapani, Shruthi; Van Horn, John; Parker, D. Stott; Magsipoc, Rico; Leung, Kelvin; Gutman, Boris; Woods, Roger; Toga, Arthur

    2010-01-01

    Modern computational neuroscience employs diverse software tools and multidisciplinary expertise to analyze heterogeneous brain data. The classical problems of gathering meaningful data, fitting specific models, and discovering appropriate analysis and visualization tools give way to a new class of computational challenges—management of large and incongruous data, integration and interoperability of computational resources, and data provenance. We designed, implemented and validated a new paradigm for addressing these challenges in the neuroimaging field. Our solution is based on the LONI Pipeline environment [3], [4], a graphical workflow environment for constructing and executing complex data processing protocols. We developed study-design, database and visual language programming functionalities within the LONI Pipeline that enable the construction of complete, elaborate and robust graphical workflows for analyzing neuroimaging and other data. These workflows facilitate open sharing and communication of data and metadata, concrete processing protocols, result validation, and study replication among different investigators and research groups. The LONI Pipeline features include distributed grid-enabled infrastructure, virtualized execution environment, efficient integration, data provenance, validation and distribution of new computational tools, automated data format conversion, and an intuitive graphical user interface. We demonstrate the new LONI Pipeline features using large scale neuroimaging studies based on data from the International Consortium for Brain Mapping [5] and the Alzheimer's Disease Neuroimaging Initiative [6]. User guides, forums, instructions and downloads of the LONI Pipeline environment are available at http://pipeline.loni.ucla.edu. PMID:20927408

  18. JaCVAM-organized international validation study of the in vivo rodent alkaline comet assay for the detection of genotoxic carcinogens: I. Summary of pre-validation study results.

    PubMed

    Uno, Yoshifumi; Kojima, Hajime; Omori, Takashi; Corvi, Raffaella; Honma, Masamistu; Schechtman, Leonard M; Tice, Raymond R; Burlinson, Brian; Escobar, Patricia A; Kraynak, Andrew R; Nakagawa, Yuzuki; Nakajima, Madoka; Pant, Kamala; Asano, Norihide; Lovell, David; Morita, Takeshi; Ohno, Yasuo; Hayashi, Makoto

    2015-07-01

    The in vivo rodent alkaline comet assay (comet assay) is used internationally to investigate the in vivo genotoxic potential of test chemicals. This assay, however, has not previously been formally validated. The Japanese Center for the Validation of Alternative Methods (JaCVAM), with the cooperation of the U.S. NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM)/the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), the European Centre for the Validation of Alternative Methods (ECVAM), and the Japanese Environmental Mutagen Society/Mammalian Mutagenesis Study Group (JEMS/MMS), organized an international validation study to evaluate the reliability and relevance of the assay for identifying genotoxic carcinogens, using liver and stomach as target organs. The ultimate goal of this validation effort was to establish an Organisation for Economic Co-operation and Development (OECD) test guideline. The purpose of the pre-validation studies (i.e., Phase 1 through 3), conducted in four or five laboratories with extensive comet assay experience, was to optimize the protocol to be used during the definitive validation study. Copyright © 2015 Elsevier B.V. All rights reserved.

  19. The design organization test: further demonstration of reliability and validity as a brief measure of visuospatial ability.

    PubMed

    Killgore, William D S; Gogel, Hannah

    2014-01-01

    Neuropsychological assessments are frequently time-consuming and fatiguing for patients. Brief screening evaluations may reduce test duration and allow more efficient use of time by permitting greater attention toward neuropsychological domains showing probable deficits. The Design Organization Test (DOT) was initially developed as a 2-min paper-and-pencil alternative for the Block Design (BD) subtest of the Wechsler scales. Although initially validated for clinical neurologic patients, we sought to further establish the reliability and validity of this test in a healthy, more diverse population. Two alternate versions of the DOT and the Wechsler Abbreviated Scale of Intelligence (WASI) were administered to 61 healthy adult participants. The DOT showed high alternate forms reliability (r = .90-.92), and the two versions yielded equivalent levels of performance. The DOT was highly correlated with BD (r = .76-.79) and was significantly correlated with all subscales of the WASI. The DOT proved useful when used in lieu of BD in the calculation of WASI IQ scores. Findings support the reliability and validity of the DOT as a measure of visuospatial ability and suggest its potential worth as an efficient estimate of intellectual functioning in situations where lengthier tests may be inappropriate or unfeasible.

  20. Design, validation, and use of an evaluation instrument for monitoring systemic reform

    NASA Astrophysics Data System (ADS)

    Scantlebury, Kathryn; Boone, William; Butler Kahle, Jane; Fraser, Barry J.

    2001-08-01

    Over the past decade, state and national policymakers have promoted systemic reform as a way to achieve high-quality science education for all students. However, few instruments are available to measure changes in key dimensions relevant to systemic reform such as teaching practices, student attitudes, or home and peer support. Furthermore, Rasch methods of analysis are needed to permit valid comparison of different cohorts of students during different years of a reform effort. This article describes the design, development, validation, and use of an instrument that measures student attitudes and several environment dimensions (standards-based teaching, home support, and peer support) using a three-step process that incorporated expert opinion, factor analysis, and item response theory. The instrument was validated with over 8,000 science and mathematics students, taught by more than 1,000 teachers in over 200 schools as part of a comprehensive assessment of the effectiveness of Ohio's systemic reform initiative. When the new four-factor, 20-item questionnaire was used to explore the relative influence of the class, home, and peer environment on student achievement and attitudes, findings were remarkably consistent across 3 years and different units and methods of analysis. All three environments accounted for unique variance in student attitudes, but only the environment of the class accounted for unique variance in student achievement. However, the class environment (standards-based teaching practices) was the strongest independent predictor of both achievement and attitude, and appreciable amounts of the total variance in attitudes were common to the three environments.
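
    For orientation, the item response theory model most commonly used in such Rasch analyses is the one-parameter logistic model, in which the probability that person p with attitude/ability theta endorses item i with difficulty b is

    \[ P(X_{pi} = 1 \mid \theta_p, b_i) = \frac{e^{\theta_p - b_i}}{1 + e^{\theta_p - b_i}} \]

    placing persons and items on a common logit scale, which is what permits valid comparison of different cohorts across years of the reform.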

  1. Validation of the Ottawa Knee Rule in Iran: a prospective study.

    PubMed

    Jalili, Mohammad; Gharebaghi, Hadi

    2010-11-01

    This study was designed to determine the accuracy of the Ottawa Knee Rule (OKR) when applied to patients with acute knee injury presenting to the Emergency Department (ED) of Imam Hospital in Iran. This prospective cohort validation study included a convenience sample of all patients with a blunt knee injury sustained in the preceding 7 days who presented to the ED of this tertiary care teaching hospital during the study period. Patients were assessed for the five variables comprising the OKR, and a standardised data form was completed for each patient. Standard knee radiographs were ordered for all patients irrespective of the determination of the rule. The rule was interpreted by the primary investigator on the basis of the data sheet and the final orthopaedist radiograph reading. The outcome measures of this study were the sensitivity, specificity, positive predictive value and negative predictive value of the OKR. A total of 283 patients were enrolled in the study, and 22 fractures (7.77%) were detected. The decision rule had a sensitivity of 0.95 (95% CI 0.77 to 0.99) and a specificity of 0.44 (95% CI 0.37 to 0.50). The potential reduction in the use of radiography was estimated to be 41%, and the OKR missed only one fracture. Prospective validation has shown that the OKR is a highly sensitive tool for detecting knee fractures and has the potential to reduce the number of radiographs in patients with acute knee injuries.
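
    A minimal sketch of the accuracy measures reported above, using hypothetical 2x2 counts chosen only to be roughly consistent with the published sensitivity and specificity (the study's raw table is not reproduced here):

    def diagnostic_measures(tp, fp, fn, tn):
        """Sensitivity, specificity, PPV and NPV from a 2x2 diagnostic table."""
        sensitivity = tp / (tp + fn)   # rule positive among patients with fracture
        specificity = tn / (tn + fp)   # rule negative among patients without fracture
        ppv = tp / (tp + fp)
        npv = tn / (tn + fn)
        return sensitivity, specificity, ppv, npv

    # Hypothetical counts: 21 of 22 fractures flagged, 115 of 261 non-fractures ruled out.
    print(diagnostic_measures(tp=21, fp=146, fn=1, tn=115))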

  2. Rational design and validation of a vanilloid-sensitive TRPV2 ion channel

    PubMed Central

    Yang, Fan; Vu, Simon; Yarov-Yarovoy, Vladimir; Zheng, Jie

    2016-01-01

    Vanilloid activation of TRPV1 represents an excellent model system of ligand-gated ion channels. Recent studies using cryo-electron microscopy (cryo-EM), computational analysis, and functional quantification revealed the location of the capsaicin-binding site and the critical residues mediating ligand binding and channel activation. Based on these new findings, here we have successfully introduced high-affinity binding of capsaicin and resiniferatoxin to the vanilloid-insensitive TRPV2 channel, using a rationally designed minimal set of four point mutations (F467S–S498F–L505T–Q525E, termed TRPV2_Quad). We found that binding of resiniferatoxin activates TRPV2_Quad but the ligand-induced open state is relatively unstable, whereas binding of capsaicin to TRPV2_Quad antagonizes resiniferatoxin-induced activation, likely through competition for the same binding sites. Using Rosetta-based molecular docking, we observed a common structural mechanism underlying vanilloid activation of TRPV1 and TRPV2_Quad, where the ligand serves as molecular "glue" that bridges the S4–S5 linker to the S1–S4 domain to open these channels. Our analysis revealed that capsaicin failed to activate TRPV2_Quad likely due to structural constraints preventing such bridge formation. These results not only validate our current working model for capsaicin activation of TRPV1 but also should help guide the design of drug candidate compounds for this important pain sensor. PMID:27298359

  3. Verification, Validation and Sensitivity Studies in Computational Biomechanics

    PubMed Central

    Anderson, Andrew E.; Ellis, Benjamin J.; Weiss, Jeffrey A.

    2012-01-01

    Computational techniques and software for the analysis of problems in mechanics have naturally moved from their origins in the traditional engineering disciplines to the study of cell, tissue and organ biomechanics. Increasingly complex models have been developed to describe and predict the mechanical behavior of such biological systems. While the availability of advanced computational tools has led to exciting research advances in the field, the utility of these models is often the subject of criticism due to inadequate model verification and validation. The objective of this review is to present the concepts of verification, validation and sensitivity studies with regard to the construction, analysis and interpretation of models in computational biomechanics. Specific examples from the field are discussed. It is hoped that this review will serve as a guide to the use of verification and validation principles in the field of computational biomechanics, thereby improving the peer acceptance of studies that use computational modeling techniques. PMID:17558646

  4. Implementation and Initial Validation of the APS English Test [and] The APS English-Writing Test at Golden West College: Evidence for Predictive Validity.

    ERIC Educational Resources Information Center

    Isonio, Steven

    In May 1991, Golden West College (California) conducted a validation study of the English portion of the Assessment and Placement Services for Community Colleges (APS), followed by a predictive validity study in July 1991. The initial study was designed to aid in the implementation of the new test at GWC by comparing data on APS use at other…

  5. An assessment of the validity and discrimination of the intensive time-series design by monitoring learning differences between students with different cognitive tendencies

    NASA Astrophysics Data System (ADS)

    Farnsworth, Carolyn H.; Mayer, Victor J.

    Intensive time-series designs for classroom investigations have been under development since 1975. Studies have been conducted to determine their feasibility (Mayer & Lewis, 1979), their potential for monitoring knowledge acquisition (Mayer & Kozlow, 1980), and the potential threat to validity of the frequency of testing inherent in the design (Mayer & Rojas, 1982). This study, an extension of those previous studies, is an attempt to determine the degree of discrimination the design allows in collecting data on achievement. It also serves as a replication of the Mayer and Kozlow study, an attempt to determine design validity for collecting achievement data. The investigator used her eighth-grade earth science students, from a suburban Columbus (Ohio) junior high school. A multiple-group single intervention time-series design (Glass, Willson, & Gottman, 1975) was adapted to the collection of daily data on achievement in the topic of the intervention, a unit on plate tectonics. Single multiple-choice items were randomly assigned to each of three groups of students, identified on the basis of their ranking on a written test of cognitive level (Lawson, 1978). The top third, or those with formal cognitive tendencies, were compared on the basis of knowledge achievement and understanding achievement with the lowest third of the students, or those with concrete cognitive tendencies, to determine if the data collected in the design would discriminate between the two groups. Several studies (Goodstein & Howe, 1978; Lawson & Renner, 1975) indicated that students with formal cognitive tendencies should learn a formal concept such as plate tectonics with greater understanding than should students with concrete cognitive tendencies. Analyses used were a comparison of regression lines in each of the three study stages: baseline, intervention, and follow-up; t-tests of means of days summed across each stage; and a time-series analysis program. Statistically significant differences

  6. Validation of the Center for Epidemiological Studies Depression Scale among Korean Adolescents.

    PubMed

    Heo, Eun-Hye; Choi, Kyeong-Sook; Yu, Je-Chun; Nam, Ji-Ae

    2018-02-01

    The Center for Epidemiological Studies Depression Scale (CES-D) is designed to measure the current level of depressive symptomatology in the general population. However, no review has examined whether the scale is reliable and valid among children and adolescents in Korea. The purpose of this study was to test whether the Korean form of the CES-D is valid in adolescents. Data were obtained from 1,884 adolescents attending grades 1-3 in Korean middle schools. Reliability was evaluated by internal consistency (Cronbach's alpha). Concurrent validity was evaluated by a correlation analysis between the CES-D and other scales. Construct validity was evaluated by exploratory factor and confirmatory factor analyses. The internal consistency coefficient for the entire group was 0.88. The CES-D was positively correlated with scales that measure negative psychological constructs, such as the State Anxiety Inventory for Children, the Korean Social Anxiety Scale for Children and Adolescents, and the Reynold Suicidal Ideation Questionnaire, but it was negatively correlated with scales that measure positive psychological constructs, such as the Korean version of the Rosenberg Self-Esteem Scale and the Connor-Davidson Resilience Scale-2. The CES-D was examined by three-dimensional exploratory factor analysis, and the three-factor structure of the scale explained 53.165% of the total variance. The variance explained by factor I was 24.836%, that explained by factor II was 15.988%, and that explained by factor III was 12.341%. The construct validity of the CES-D was tested by confirmatory factor analysis, and we applied the entire group's data using a three-factor hierarchical model. The fit index showed a level similar to those of other countries' adolescent samples. The CES-D has high internal consistency and addresses psychological constructs similar to those addressed by other scales. The CES-D showed a three-factor structure in an exploratory factor analysis. The present
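
    A minimal sketch of the internal-consistency step reported above: Cronbach's alpha computed directly from an items-by-respondents matrix. The responses are simulated; only the 20-item length of the CES-D is borrowed from the scale itself.

      # Cronbach's alpha for a respondents-by-items matrix (synthetic data).
      import numpy as np

      def cronbach_alpha(items):
          """items: 2-D array, rows = respondents, columns = scale items."""
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_variances = items.var(axis=0, ddof=1).sum()
          total_variance = items.sum(axis=1).var(ddof=1)
          return (k / (k - 1)) * (1.0 - item_variances / total_variance)

      rng = np.random.default_rng(1)
      latent = rng.normal(size=(500, 1))                       # shared underlying trait
      responses = latent + rng.normal(scale=1.0, size=(500, 20))
      print(f"alpha = {cronbach_alpha(responses):.2f}")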

  7. The Math Essential Skills Screener--Upper Elementary Version (MESS-U): Studies of Reliability and Validity

    ERIC Educational Resources Information Center

    Erford, Bradley T.; Biddison, Amanda R.

    2006-01-01

    The Math Essential Skills Screener--Upper Elementary Version (MESS-U) is part of a series of screening tests designed to help identify students ages 9-11 who are at risk for mathematics failure. Internal consistency, test-retest reliability, item analysis, decision efficiency, convergent validity and factorial validity of the MESS-U were studied…

  8. California Diploma Project Technical Report III: Validity Study--Validity Study of the Health Sciences and Medical Technology Standards

    ERIC Educational Resources Information Center

    McGaughy, Charis; Bryck, Rick; de Gonzalez, Alicia

    2012-01-01

    This study is a validity study of the recently revised version of the Health Science Standards. The purpose of this study is to understand how the Health Science Standards relate to college and career readiness, as represented by survey ratings submitted by entry-level college instructors of health science courses and industry representatives. For…

  9. Design and Validation of High Data Rate Ka-Band Software Defined Radio for Small Satellite

    NASA Technical Reports Server (NTRS)

    Xia, Tian

    2016-01-01

    The Design and Validation of High Data Rate Ka-Band Software Defined Radio for Small Satellite project will develop a novel Ka-band software defined radio (SDR) that is capable of establishing high data rate inter-satellite links with a throughput of 500 megabits per second (Mb/s) and providing millimeter ranging precision. The system will be designed to operate with high performance and reliability that is robust against various interference effects and network anomalies. The Ka-band radio resulting from this work will improve upon state-of-the-art Ka-band radios in terms of dimensional size, mass and power dissipation, which limit their use in small satellites.

  10. [Gender-determinant factors in contraception: design and validation of a questionnaire].

    PubMed

    Yago Simón, Teresa; Tomás Aznar, Concepción

    2013-10-01

    To design and validate a questionnaire for young women on gender-determinant factors in contraception. A questionnaire was developed from conversations with young women attending the contraception clinic in the Health Promotion Municipal Centre, Zaragoza. A total of 200 young women between the ages of 13 and 24 self-completed the questionnaire, with only one non-response. Several properties were analysed: reliability, using Cronbach's alpha coefficient, and construct validity, by analysis of the principal components with eigenvalues above 1 and Quartimax rotation with Kaiser normalisation. The questionnaire contained 36 items and took 10 minutes to self-complete. There was good internal consistency, with a Cronbach's alpha of 0.853. Twelve factors were established, explaining 61.42% of the variance, grouped into three descriptive dimensions: the relationship dimension («submissive attitude», «blind attitude», «let go due to affection», «dominant partner»), gender identity («maternity as identity», «non-idealised maternity», «traditional role», «insecurity», «shame») and caring. This questionnaire enabled the gender-determinant factors that play a part in contraception to be identified, and will be useful for finding out how the different ways of relating between the sexes influence sexual and reproductive health problems in young women in our environment. Copyright © 2013 Elsevier España, S.L. All rights reserved.
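
    The construct-validity step described above (principal components retained when their eigenvalues exceed 1) can be sketched as follows; the response matrix is synthetic, and a rotation such as Quartimax would normally be applied to the retained loadings afterwards.

      # Kaiser criterion (retain components with eigenvalue > 1) applied to the
      # correlation matrix of synthetic questionnaire responses.
      import numpy as np

      rng = np.random.default_rng(2)
      latent = rng.normal(size=(200, 3))                 # three underlying constructs
      loadings = rng.normal(size=(3, 36))                # 36 items
      responses = latent @ loadings + rng.normal(size=(200, 36))

      corr = np.corrcoef(responses, rowvar=False)
      eigenvalues = np.linalg.eigvalsh(corr)[::-1]       # descending order
      retained = eigenvalues[eigenvalues > 1.0]

      print(f"components retained: {len(retained)}")
      print(f"variance explained by retained components: "
            f"{retained.sum() / eigenvalues.sum():.1%}")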

  11. Reliability and Validity Study of the Chamorro Assisted Gait Scale for People with Sprained Ankles, Walking with Forearm Crutches

    PubMed Central

    Ridao-Fernández, Carmen; Ojeda, Joaquín; Benítez-Lugo, Marisa; Sevillano, José Luis

    2016-01-01

    Objective The aim of this study was to design and validate a functional assessment scale for assisted gait with forearm crutches (Chamorro Assisted Gait Scale—CHAGS) and to assess its reliability in people with sprained ankles. Design Thirty subjects who suffered a sprained ankle (anterior talofibular ligament, first and second degree) were included in the study. A modified Delphi technique was used to establish content validity. The selected items were: pelvic and scapular girdle dissociation (1), deviation of the center of gravity (2), crutch inclination (3), step rhythm (4), symmetry of step length (5), cross support (6), simultaneous support of foot and crutch (7), forearm off (8), facing forward (9) and fluency (10). Two raters each viewed the recorded gait of the sample subjects twice. Criterion-related validity was determined by the correlation between the CHAGS and the Coding of eight criteria of qualitative gait analysis (Viel Coding). Internal consistency and inter- and intra-rater reliability were also tested. Results The CHAGS showed a high, negative correlation with the Viel Coding. We obtained good internal consistency, the intra-class correlation coefficients ranged between 0.97 and 0.99, and the minimal detectable changes were acceptable. Conclusion The CHAGS scale is a valid and reliable tool for assessing gait assisted with crutches for partial weight relief of the lower limbs in people with sprained ankles. PMID:27168236
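
    A minimal sketch of the reliability figures reported above: an intra-class correlation coefficient for two rating sessions and the minimal detectable change derived from it. It assumes the pingouin package and hypothetical column names; the simulated scores are not CHAGS data.

      # Intra-rater reliability (ICC) and minimal detectable change (MDC).
      import numpy as np
      import pandas as pd
      import pingouin as pg

      rng = np.random.default_rng(3)
      true_score = rng.normal(20, 4, size=30)            # 30 subjects
      scores_t1 = true_score + rng.normal(0, 0.8, 30)
      scores_t2 = true_score + rng.normal(0, 0.8, 30)

      # Long format expected by pingouin.intraclass_corr.
      long = pd.DataFrame({
          "subject": np.repeat(np.arange(30), 2),
          "session": np.tile(["t1", "t2"], 30),
          "score": np.column_stack([scores_t1, scores_t2]).ravel(),
      })

      icc_table = pg.intraclass_corr(data=long, targets="subject",
                                     raters="session", ratings="score")
      icc = icc_table.loc[icc_table["Type"] == "ICC2", "ICC"].item()

      sem = long["score"].std(ddof=1) * np.sqrt(1 - icc)   # standard error of measurement
      mdc95 = 1.96 * np.sqrt(2) * sem                      # minimal detectable change
      print(f"ICC(2,1) = {icc:.2f}, MDC95 = {mdc95:.2f} points")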

  12. [Design and validation of the scale for the detection of violence in courtship in young people in the Sevilla University (Spain)].

    PubMed

    García-Carpintero, María Ángeles; Rodríguez-Santero, Javier; Porcel-Gálvez, Ana María

    To design and validate a specific instrument to detect violence exercised and suffered in the relationships of young couples. Descriptive clinimetric validation study. The sample was stratified by sex and area of knowledge, with having or having had a dating relationship adopted as the inclusion criterion; it consisted of 447 subjects. We obtained the Multidimensional Dating Violence Scale (EMVN), 32 items with three dimensions: physical and sexual assault, behavior control (cyberbullying, surveillance and harassment) and psycho-emotional abuse (disparagement and domination), each assessed as victim and as aggressor. No statistically significant differences were found between the violence exerted and the violence suffered, but differences were found by sex. The EMVN is a valid and reliable scale that measures the different elements of violence in young couples and can serve as a resource for the comprehensive detection of violent behaviors in the dating relationships established among young people. Copyright © 2017 SESPAS. Publicado por Elsevier España, S.L.U. All rights reserved.

  13. Bilateral, Misalignment-Compensating, Full-DOF Hip Exoskeleton: Design and Kinematic Validation

    PubMed Central

    Degelaen, Marc; Lefeber, Nina; Swinnen, Eva; Vanderborght, Bram; Lefeber, Dirk

    2017-01-01

    A shared design goal for most robotic lower limb exoskeletons is to reduce the metabolic cost of locomotion for the user. Despite this, only a limited number of devices have actually reduced user metabolic consumption. Preservation of the natural motion kinematics was defined as an important requirement for a device to be metabolically beneficial. This requires the inclusion of all human degrees of freedom (DOF) in a design, as well as perfect alignment of the rotation axes. As perfect alignment is impossible, compensation for misalignment effects should be provided. A misalignment compensation mechanism for a 3-DOF system is presented in this paper. It is validated by implementation in a bilateral hip exoskeleton, resulting in a compact and lightweight device that can be donned quickly and autonomously, with a minimum of required adaptations. Extensive testing of the prototype has shown that the hip range of motion of the user is maintained while wearing the device, and this for all three hip DOFs. This allowed the users to maintain their natural motion patterns when walking with the novel hip exoskeleton. PMID:28790799

  14. Addressing Participant Validity in a Small Internet Health Survey (The Restore Study): Protocol and Recommendations for Survey Response Validation

    PubMed Central

    Dewitt, James; Capistrant, Benjamin; Kohli, Nidhi; Mitteldorf, Darryl; Merengwa, Enyinnaya; West, William

    2018-01-01

    Background While deduplication and cross-validation protocols have been recommended for large Web-based studies, protocols for survey response validation of smaller studies have not been published. Objective This paper reports the challenges of survey validation inherent in a small Web-based health survey research. Methods The subject population was North American, gay and bisexual, prostate cancer survivors, who represent an under-researched, hidden, difficult-to-recruit, minority-within-a-minority population. In 2015-2016, advertising on a large Web-based cancer survivor support network, using email and social media, yielded 478 completed surveys. Results Our manual deduplication and cross-validation protocol identified 289 survey submissions (289/478, 60.4%) as likely spam, most stemming from advertising on social media. The basic components of this deduplication and validation protocol are detailed. An unexpected challenge encountered was invalid survey responses evolving across the study period. This necessitated the static detection protocol be augmented with a dynamic one. Conclusions Five recommendations for validation of Web-based samples, especially with smaller difficult-to-recruit populations, are detailed. PMID:29691203
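
    The deduplication and cross-validation protocol summarized above is not reproduced in the abstract, but the general idea can be sketched as below; the field names (ip_address, email, completion_seconds) and the thresholds are hypothetical, not the Restore study's actual criteria.

      # Sketch of a simple deduplication pass over survey submissions.
      from collections import Counter

      submissions = [
          {"id": 1, "ip_address": "10.0.0.5", "email": "a@example.org", "completion_seconds": 1260},
          {"id": 2, "ip_address": "10.0.0.5", "email": "b@example.org", "completion_seconds": 95},
          {"id": 3, "ip_address": "10.0.0.9", "email": "a@example.org", "completion_seconds": 1410},
      ]

      ip_counts = Counter(s["ip_address"] for s in submissions)
      email_counts = Counter(s["email"] for s in submissions)

      for s in submissions:
          flags = []
          if ip_counts[s["ip_address"]] > 1:
              flags.append("duplicate IP")
          if email_counts[s["email"]] > 1:
              flags.append("duplicate email")
          if s["completion_seconds"] < 300:          # implausibly fast completion
              flags.append("speeder")
          print(s["id"], flags or "ok")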

  15. An Engineering Method of Civil Jet Requirements Validation Based on Requirements Project Principle

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Gao, Dan; Mao, Xuming

    2018-03-01

    A method of requirements validation is developed and defined to meet the needs of civil jet requirements validation in product development. Based on the requirements project principle, this method does not affect the conventional design elements and can effectively connect the requirements with the design. It realizes the modern civil jet development concept that “requirement is the origin, design is the basis”. So far, the method has been successfully applied in civil jet aircraft development in China. Taking takeoff field length as an example, the validation process and the validation method for the requirements are introduced in detail, with the hope of providing experience for other civil jet product designs.

  16. Measuring Nutrition Literacy in Spanish-Speaking Latinos: An Exploratory Validation Study.

    PubMed

    Gibbs, Heather D; Camargo, Juliana M T B; Owens, Sarah; Gajewski, Byron; Cupertino, Ana Paula

    2017-11-21

    Nutrition is important for preventing and treating chronic diseases highly prevalent among Latinos, yet no tool exists for measuring nutrition literacy among Spanish speakers. This study aimed to adapt the validated Nutrition Literacy Assessment Instrument for Spanish-speaking Latinos. This study was developed in two phases: adaptation and validity testing. Adaptation included translation, expert item content review, and interviews with Spanish speakers. For validity testing, 51 participants completed the Short Assessment of Health Literacy-Spanish (SAHL-S), the Nutrition Literacy Assessment Instrument in Spanish (NLit-S), and socio-demographic questionnaire. Validity and reliability statistics were analyzed. Content validity was confirmed with a Scale Content Validity Index of 0.96. Validity testing demonstrated NLit-S scores were strongly correlated with SAHL-S scores (r = 0.52, p < 0.001). Entire reliability was substantial at 0.994 (CI 0.992-0.996) and internal consistency was excellent (Cronbach's α = 0.92). The NLit-S demonstrates validity and reliability for measuring nutrition literacy among Spanish-speakers.

  17. Validation, Edits, and Application Processing System Report: Phase I.

    ERIC Educational Resources Information Center

    Gray, Susan; And Others

    Findings of phase 1 of a study of the 1979-1980 Basic Educational Opportunity Grants validation, edits, and application processing system are presented. The study was designed to: assess the impact of the validation effort and processing system edits on the correct award of Basic Grants; and assess the characteristics of students most likely to…

  18. Design and experimental validation of Unilateral Linear Halbach magnet arrays for single-sided magnetic resonance.

    PubMed

    Bashyam, Ashvin; Li, Matthew; Cima, Michael J

    2018-07-01

    Single-sided NMR has the potential for broad utility and has found applications in healthcare, materials analysis, food quality assurance, and the oil and gas industry. These sensors require a remote, strong, uniform magnetic field to perform high sensitivity measurements. We demonstrate a new permanent magnet geometry, the Unilateral Linear Halbach, that combines design principles from "sweet-spot" and linear Halbach magnets to achieve this goal through more efficient use of magnetic flux. We perform sensitivity analysis using numerical simulations to produce a framework for Unilateral Linear Halbach design and assess tradeoffs between design parameters. Additionally, the use of hundreds of small, discrete magnets within the assembly allows for a tunable design, improved robustness to variability in magnetization strength, and increased safety during construction. Experimental validation using a prototype magnet shows close agreement with the simulated magnetic field. The Unilateral Linear Halbach magnet increases the sensitivity, portability, and versatility of single-sided NMR. Copyright © 2018 Elsevier Inc. All rights reserved.
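
    A rough sketch of the kind of field superposition behind the numerical simulations mentioned above: the assembly's many small magnets are approximated as point dipoles and their fields summed at a probe point. The geometry, magnet moments, and probe location are illustrative, not the published design.

      # Superposition of point-dipole fields from a row of small discrete magnets.
      import numpy as np

      MU0 = 4e-7 * np.pi

      def dipole_field(moment, r):
          """Field of a point dipole with moment m (A*m^2) at displacement r (m)."""
          r = np.asarray(r, dtype=float)
          norm = np.linalg.norm(r)
          r_hat = r / norm
          return MU0 / (4 * np.pi * norm**3) * (3 * r_hat * np.dot(moment, r_hat) - moment)

      # A row of 11 magnets along x, all magnetized along z.
      positions = [np.array([x, 0.0, 0.0]) for x in np.linspace(-0.05, 0.05, 11)]
      moment = np.array([0.0, 0.0, 0.1])

      # Field at a "sweet spot" 2 cm above the array centre.
      probe = np.array([0.0, 0.0, 0.02])
      b_total = sum(dipole_field(moment, probe - p) for p in positions)
      print(f"Bz at probe = {b_total[2] * 1e3:.2f} mT")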

  19. Design and experimental validation of Unilateral Linear Halbach magnet arrays for single-sided magnetic resonance

    NASA Astrophysics Data System (ADS)

    Bashyam, Ashvin; Li, Matthew; Cima, Michael J.

    2018-07-01

    Single-sided NMR has the potential for broad utility and has found applications in healthcare, materials analysis, food quality assurance, and the oil and gas industry. These sensors require a remote, strong, uniform magnetic field to perform high sensitivity measurements. We demonstrate a new permanent magnet geometry, the Unilateral Linear Halbach, that combines design principles from "sweet-spot" and linear Halbach magnets to achieve this goal through more efficient use of magnetic flux. We perform sensitivity analysis using numerical simulations to produce a framework for Unilateral Linear Halbach design and assess tradeoffs between design parameters. Additionally, the use of hundreds of small, discrete magnets within the assembly allows for a tunable design, improved robustness to variability in magnetization strength, and increased safety during construction. Experimental validation using a prototype magnet shows close agreement with the simulated magnetic field. The Unilateral Linear Halbach magnet increases the sensitivity, portability, and versatility of single-sided NMR.

  20. Psychometric instrumentation: reliability and validity of instruments used for clinical practice, evidence-based practice projects and research studies.

    PubMed

    Mayo, Ann M

    2015-01-01

    It is important for CNSs and other APNs to consider the reliability and validity of instruments chosen for clinical practice, evidence-based practice projects, or research studies. Psychometric testing uses specific research methods to evaluate the amount of error associated with any particular instrument. Reliability estimates explain more about how well the instrument is designed, whereas validity estimates explain more about scores that are produced by the instrument. An instrument may be architecturally sound overall (reliable), but the same instrument may not be valid. For example, if a specific group does not understand certain well-constructed items, then the instrument does not produce valid scores when used with that group. Many instrument developers may conduct reliability testing only once, yet continue validity testing in different populations over many years. All CNSs should be advocating for the use of reliable instruments that produce valid results. Clinical nurse specialists may find themselves in situations where reliability and validity estimates for some instruments that are being utilized are unknown. In such cases, CNSs should engage key stakeholders to sponsor nursing researchers to pursue this most important work.

  1. A Content Validity Study of AIMIT (Assessing Interpersonal Motivation in Transcripts).

    PubMed

    Fassone, Giovanni; Lo Reto, Floriana; Foggetti, Paola; Santomassimo, Chiara; D'Onofrio, Maria Rita; Ivaldi, Antonella; Liotti, Giovanni; Trincia, Valeria; Picardi, Angelo

    2016-07-01

    Multi-motivational theories of human relatedness state that different motivational systems with an evolutionary basis modulate interpersonal relationships. The reliable assessment of their dynamics may usefully inform the understanding of the therapeutic relationship. The coding system of the Assessing Interpersonal Motivation in Transcripts (AIMIT) allows the activity of five main interpersonal motivational systems (IMSs) to be identified in the clinical dialogue: attachment (care-seeking), caregiving, ranking, sexuality and peer cooperation. To assess whether the criteria currently used to score the AIMIT are consistently correlated with the conceptual formulation of the interpersonal multi-motivational theory, two different studies were designed. Study 1: content validity as assessed by highly qualified independent raters. Study 2: content validity as assessed by unqualified raters. The results of study 1 show that, of the total 60 AIMIT verbal criteria, 52 (86.7%) met the required minimum degree of correspondence. The average semantic correspondence scores between these items and the related IMSs were quite good (overall mean: 3.74, standard deviation: 0.61). In study 2, a group of 20 naïve raters had to identify the prevalent motivation (IMS) in each of a random sequence of 1000 utterances drawn from therapy sessions. Cohen's Kappa coefficient was calculated for each rater with reference to each IMS, and the average Kappa across raters was then calculated for each IMS. All average Kappa values were satisfactory (>0.60) and ranged between 0.63 (ranking system) and 0.83 (sexuality system). The data confirmed the overall soundness of AIMIT's theoretical-applicative approach. The results are discussed, corroborating the hypothesis that the AIMIT possesses the required criteria for content validity. Copyright © 2015 John Wiley & Sons, Ltd. Assessing interpersonal motivations in psychotherapy transcripts is a useful tool to better understand links between motivational systems and intersubjectivity.
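
    The inter-rater agreement statistic used in study 2 can be sketched with scikit-learn's Cohen's kappa; the five IMS category names come from the abstract, while the labels and the roughly 80% agreement level are simulated.

      # Cohen's kappa between one rater's codings and reference codings.
      import numpy as np
      from sklearn.metrics import cohen_kappa_score

      ims_labels = ["attachment", "caregiving", "ranking", "sexuality", "cooperation"]
      rng = np.random.default_rng(4)

      reference = rng.choice(ims_labels, size=1000)
      rater = np.where(rng.random(1000) < 0.8, reference,     # ~80% raw agreement
                       rng.choice(ims_labels, size=1000))

      print(f"kappa = {cohen_kappa_score(reference, rater):.2f}")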

  2. What to Do With "Moderate" Reliability and Validity Coefficients?

    PubMed

    Post, Marcel W

    2016-07-01

    Clinimetric studies may use criteria for test-retest reliability and convergent validity such that correlation coefficients as low as .40 are supportive of reliability and validity. It can be argued that moderate (.40-.60) correlations should not be interpreted in this way and that reliability coefficients <.70 should be considered as indicative of unreliability. Convergent validity coefficients in the .40 to .60 or .40 to .70 range should be considered as indications of validity problems, or as inconclusive at best. Studies on reliability and convergent validity should be designed in such a way that it is realistic to expect high reliability and validity coefficients. Multitrait-multimethod approaches are preferred to study construct (convergent-divergent) validity. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  3. Trends in study design and the statistical methods employed in a leading general medicine journal.

    PubMed

    Gosho, M; Sato, Y; Nagashima, K; Takahashi, S

    2018-02-01

    Study design and statistical methods have become core components of medical research, and the methodology has become more multifaceted and complicated over time. The study of the comprehensive details and current trends of study design and statistical methods is required to support the future implementation of well-planned clinical studies providing information about evidence-based medicine. Our purpose was to illustrate study design and statistical methods employed in recent medical literature. This was an extension study of Sato et al. (N Engl J Med 2017; 376: 1086-1087), which reviewed 238 articles published in 2015 in the New England Journal of Medicine (NEJM) and briefly summarized the statistical methods employed in NEJM. Using the same database, we performed a new investigation of the detailed trends in study design and individual statistical methods that were not reported in the Sato study. Due to the CONSORT statement, prespecification and justification of sample size are obligatory in planning intervention studies. Although standard survival methods (eg Kaplan-Meier estimator and Cox regression model) were most frequently applied, the Gray test and Fine-Gray proportional hazard model for considering competing risks were sometimes used for a more valid statistical inference. With respect to handling missing data, model-based methods, which are valid for missing-at-random data, were more frequently used than single imputation methods. These methods are not recommended as a primary analysis, but they have been applied in many clinical trials. Group sequential design with interim analyses was one of the standard designs, and novel design, such as adaptive dose selection and sample size re-estimation, was sometimes employed in NEJM. Model-based approaches for handling missing data should replace single imputation methods for primary analysis in the light of the information found in some publications. Use of adaptive design with interim analyses is increasing
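
    A minimal sketch of the standard survival methods named above (Kaplan-Meier estimator and Cox regression), using the lifelines package on a simulated two-arm trial; the variable names, effect size, and censoring fraction are illustrative and not drawn from the reviewed articles.

      # Kaplan-Meier estimate and Cox proportional hazards regression (lifelines).
      import numpy as np
      import pandas as pd
      from lifelines import KaplanMeierFitter, CoxPHFitter

      rng = np.random.default_rng(5)
      n = 200
      treatment = rng.integers(0, 2, n)
      time = rng.exponential(scale=np.where(treatment == 1, 24, 16))   # months
      event = rng.random(n) < 0.7                                      # ~30% censored

      df = pd.DataFrame({"time": time, "event": event.astype(int), "treatment": treatment})

      km = KaplanMeierFitter().fit(df["time"], event_observed=df["event"])
      print(f"median survival time: {km.median_survival_time_:.1f} months")

      cox = CoxPHFitter().fit(df, duration_col="time", event_col="event")
      cox.print_summary()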

  4. Guidelines for experimental design protocol and validation procedure for the measurement of heat resistance of microorganisms in milk.

    PubMed

    Condron, Robin; Farrokh, Choreh; Jordan, Kieran; McClure, Peter; Ross, Tom; Cerf, Olivier

    2015-01-02

    Studies on the heat resistance of dairy pathogens are a vital part of assessing the safety of dairy products. However, harmonized methodology for the study of heat resistance of food pathogens is lacking, even though there is a need for such harmonized experimental design protocols and for harmonized validation procedures for heat treatment studies. Such an approach is of particular importance to allow international agreement on appropriate risk management of emerging potential hazards for human and animal health. This paper is working toward establishment of a harmonized protocol for the study of the heat resistance of pathogens, identifying critical issues for establishment of internationally agreed protocols, including a harmonized framework for reporting and interpretation of heat inactivation studies of potentially pathogenic microorganisms. Copyright © 2014 Elsevier B.V. All rights reserved.

  5. Classification Accuracy of MMPI-2 Validity Scales in the Detection of Pain-Related Malingering: A Known-Groups Study

    ERIC Educational Resources Information Center

    Bianchini, Kevin J.; Etherton, Joseph L.; Greve, Kevin W.; Heinly, Matthew T.; Meyers, John E.

    2008-01-01

    The purpose of this study was to determine the accuracy of "Minnesota Multiphasic Personality Inventory" 2nd edition (MMPI-2; Butcher, Dahlstrom, Graham, Tellegen, & Kaemmer, 1989) validity indicators in the detection of malingering in clinical patients with chronic pain using a hybrid clinical-known groups/simulator design. The…

  6. A mobile sensing system for structural health monitoring: design and validation

    NASA Astrophysics Data System (ADS)

    Zhu, Dapeng; Yi, Xiaohua; Wang, Yang; Lee, Kok-Meng; Guo, Jiajie

    2010-05-01

    This paper describes a new approach using mobile sensor networks for structural health monitoring. Compared with static sensors, mobile sensor networks offer flexible system architectures with adaptive spatial resolutions. The paper first describes the design of a mobile sensing node that is capable of maneuvering on structures built with ferromagnetic materials. The mobile sensing node can also attach/detach an accelerometer onto/from the structural surface. The performance of the prototype mobile sensor network has been validated through laboratory experiments. Two mobile sensing nodes are adopted for navigating on a steel portal frame and providing dense acceleration measurements. Transmissibility function analysis is conducted to identify structural damage using data collected by the mobile sensing nodes. This preliminary work is expected to spawn transformative changes in the use of mobile sensors for future structural health monitoring.
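
    The transmissibility function analysis mentioned above can be sketched as the ratio of the cross-spectrum between two acceleration records to the auto-spectrum of the reference record; the signals below are synthetic (white noise passed through a lightly damped resonance), not measurements from the mobile sensing nodes.

      # Transmissibility function estimate between two acceleration records.
      import numpy as np
      from scipy import signal

      fs = 256.0
      t = np.arange(0, 60, 1 / fs)
      rng = np.random.default_rng(6)
      excitation = rng.normal(size=t.size)

      # Response of a single lightly damped mode near 12 Hz (IIR peak filter as a toy structure).
      b, a = signal.iirpeak(w0=12.0, Q=30.0, fs=fs)
      acc_ref = excitation                            # reference sensor location
      acc_meas = signal.lfilter(b, a, excitation)     # measurement location

      f, p_xy = signal.csd(acc_ref, acc_meas, fs=fs, nperseg=1024)
      _, p_xx = signal.welch(acc_ref, fs=fs, nperseg=1024)
      transmissibility = np.abs(p_xy / p_xx)
      print(f"peak transmissibility at {f[np.argmax(transmissibility)]:.1f} Hz")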

  7. The intelligent OR: design and validation of a context-aware surgical working environment.

    PubMed

    Franke, Stefan; Rockstroh, Max; Hofer, Mathias; Neumuth, Thomas

    2018-05-24

    Interoperability of medical devices based on standards is starting to become established in the operating room (OR). Devices share their data and control functionalities. Yet OR technology rarely implements cooperative, intelligent behavior, especially in terms of active cooperation with the OR team. Technical context-awareness will be an essential feature of the next generation of medical devices to address the increasing demands on clinicians in information seeking, decision making, and human-machine interaction in complex surgical working environments. The paper describes the technical validation of an intelligent surgical working environment for endoscopic ear-nose-throat surgery. We briefly summarize the design of our framework for context-aware system behavior in the integrated OR and present example realizations of novel assistance functionalities. In a study on patient phantoms, twenty-four procedures were carried out in the proposed intelligent surgical working environment based on recordings of real interventions. Subsequently, the whole processing pipeline for context-awareness, from workflow recognition to the final system behavior, is analyzed. Rule-based behavior that considers multiple perspectives on the procedure can partially compensate for recognition errors. Considerable robustness could be achieved with a reasonable quality of recognition. Overall, reliable reactive as well as proactive behavior of the surgical working environment can be implemented in the proposed environment. The obtained validation results indicate the suitability of the overall approach. The setup is a reliable starting point for a subsequent evaluation of the proposed context-aware assistance. The major challenge for future work will be to implement the complex approach in a cross-vendor setting.

  8. Design and experimental validation of linear and nonlinear vehicle steering control strategies

    NASA Astrophysics Data System (ADS)

    Menhour, Lghani; Lechner, Daniel; Charara, Ali

    2012-06-01

    This paper proposes the design of three control laws dedicated to vehicle steering control, two based on robust linear control strategies and one based on nonlinear control strategies, and presents a comparison between them. The two robust linear control laws (indirect and direct methods) are built around M linear bicycle models, each of these control laws is composed of two M proportional integral derivative (PID) controllers: one M PID controller to control the lateral deviation and the other M PID controller to control the vehicle yaw angle. The indirect control law method is designed using an oscillation method and a nonlinear optimisation subject to H ∞ constraint. The direct control law method is designed using a linear matrix inequality optimisation in order to achieve H ∞ performances. The nonlinear control method used for the correction of the lateral deviation is based on a continuous first-order sliding-mode controller. The different methods are designed using a linear bicycle vehicle model with variant parameters, but the aim is to simulate the nonlinear vehicle behaviour under high dynamic demands with a four-wheel vehicle model. These steering vehicle controls are validated experimentally using the data acquired using a laboratory vehicle, Peugeot 307, developed by National Institute for Transport and Safety Research - Department of Accident Mechanism Analysis Laboratory's (INRETS-MA) and their performance results are compared. Moreover, an unknown input sliding-mode observer is introduced to estimate the road bank angle.
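
    A minimal discrete PID controller of the general kind used for the lateral-deviation and yaw-angle loops described above, closed around a first-order toy plant; the gains and plant dynamics are illustrative and are not the controllers identified in the paper.

      # Minimal discrete PID controller driving a toy "lateral deviation" plant to zero.
      class PID:
          def __init__(self, kp, ki, kd, dt):
              self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
              self.integral = 0.0
              self.prev_error = 0.0

          def update(self, error):
              self.integral += error * self.dt
              derivative = (error - self.prev_error) / self.dt
              self.prev_error = error
              return self.kp * error + self.ki * self.integral + self.kd * derivative

      pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
      deviation = 1.0                       # metres from the lane centre
      for _ in range(500):
          steering = pid.update(0.0 - deviation)
          # First-order toy lateral dynamics driven by the steering command.
          deviation += (-0.5 * deviation + 0.8 * steering) * 0.01
      print(f"final deviation = {deviation:.4f} m")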

  9. Preparation of the implementation plan of AASHTO Mechanistic-Empirical Pavement Design Guide (M-EPDG) in Connecticut : Phase II : expanded sensitivity analysis and validation with pavement management data.

    DOT National Transportation Integrated Search

    2017-02-08

    The study re-evaluates distress prediction models using the Mechanistic-Empirical Pavement Design Guide (MEPDG) and expands the sensitivity analysis to a wide range of pavement structures and soils. In addition, an extensive validation analysis of th...

  10. Design and validation of an intelligent wheelchair towards a clinically-functional outcome.

    PubMed

    Boucher, Patrice; Atrash, Amin; Kelouwani, Sousso; Honoré, Wormser; Nguyen, Hai; Villemure, Julien; Routhier, François; Cohen, Paul; Demers, Louise; Forget, Robert; Pineau, Joelle

    2013-06-17

    Many people with mobility impairments, who require the use of powered wheelchairs, have difficulty completing basic maneuvering tasks during their activities of daily living (ADL). In order to provide assistance to this population, robotic and intelligent system technologies have been used to design an intelligent powered wheelchair (IPW). This paper provides a comprehensive overview of the design and validation of the IPW. The main contributions of this work are three-fold. First, we present a software architecture for robot navigation and control in constrained spaces. Second, we describe a decision-theoretic approach for achieving robust speech-based control of the intelligent wheelchair. Third, we present an evaluation protocol motivated by a meaningful clinical outcome, in the form of the Robotic Wheelchair Skills Test (RWST). This allows us to perform a thorough characterization of the performance and safety of the system, involving 17 test subjects (8 non-PW users, 9 regular PW users), 32 complete RWST sessions, 25 total hours of testing, and 9 kilometers of total running distance. User tests with the RWST show that the navigation architecture reduced collisions by more than 60% compared to other recent intelligent wheelchair platforms. On the tasks of the RWST, we measured an average decrease of 4% in performance score and 3% in safety score (not statistically significant), compared to the scores obtained with conventional driving model. This analysis was performed with regular users that had over 6 years of wheelchair driving experience, compared to approximately one half-hour of training with the autonomous mode. The platform tested in these experiments is among the most experimentally validated robotic wheelchairs in realistic contexts. The results establish that proficient powered wheelchair users can achieve the same level of performance with the intelligent command mode, as with the conventional command mode.

  11. Design and validation of an intelligent wheelchair towards a clinically-functional outcome

    PubMed Central

    2013-01-01

    Background Many people with mobility impairments, who require the use of powered wheelchairs, have difficulty completing basic maneuvering tasks during their activities of daily living (ADL). In order to provide assistance to this population, robotic and intelligent system technologies have been used to design an intelligent powered wheelchair (IPW). This paper provides a comprehensive overview of the design and validation of the IPW. Methods The main contributions of this work are three-fold. First, we present a software architecture for robot navigation and control in constrained spaces. Second, we describe a decision-theoretic approach for achieving robust speech-based control of the intelligent wheelchair. Third, we present an evaluation protocol motivated by a meaningful clinical outcome, in the form of the Robotic Wheelchair Skills Test (RWST). This allows us to perform a thorough characterization of the performance and safety of the system, involving 17 test subjects (8 non-PW users, 9 regular PW users), 32 complete RWST sessions, 25 total hours of testing, and 9 kilometers of total running distance. Results User tests with the RWST show that the navigation architecture reduced collisions by more than 60% compared to other recent intelligent wheelchair platforms. On the tasks of the RWST, we measured an average decrease of 4% in performance score and 3% in safety score (not statistically significant), compared to the scores obtained with conventional driving model. This analysis was performed with regular users that had over 6 years of wheelchair driving experience, compared to approximately one half-hour of training with the autonomous mode. Conclusions The platform tested in these experiments is among the most experimentally validated robotic wheelchairs in realistic contexts. The results establish that proficient powered wheelchair users can achieve the same level of performance with the intelligent command mode, as with the conventional command mode

  12. Cyber Victim and Bullying Scale: A Study of Validity and Reliability

    ERIC Educational Resources Information Center

    Cetin, Bayram; Yaman, Erkan; Peker, Adem

    2011-01-01

    The purpose of this study is to develop a reliable and valid scale which determines the cyber victimization and bullying behaviors of high school students. The research group consisted of 404 students (250 male, 154 female) in Sakarya, in the 2009-2010 academic year. In the study sample, the mean age was 16.68. Content validity and face validity of the scale was…

  13. F-18 High Alpha Research Vehicle (HARV) parameter identification flight test maneuvers for optimal input design validation and lateral control effectiveness

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1995-01-01

    Flight test maneuvers are specified for the F-18 High Alpha Research Vehicle (HARV). The maneuvers were designed for open loop parameter identification purposes, specifically for optimal input design validation at 5 degrees angle of attack, identification of individual strake effectiveness at 40 and 50 degrees angle of attack, and study of lateral dynamics and lateral control effectiveness at 40 and 50 degrees angle of attack. Each maneuver is to be realized by applying square wave inputs to specific control effectors using the On-Board Excitation System (OBES). Maneuver descriptions and complete specifications of the time/amplitude points defining each input are included, along with plots of the input time histories.

  14. Life Satisfaction Questionnaire (Lisat-9): Reliability and Validity for Patients with Acquired Brain Injury

    ERIC Educational Resources Information Center

    Boonstra, Anne M.; Reneman, Michiel F.; Stewart, Roy E.; Balk, Gerlof A.

    2012-01-01

    The aim of this study was to determine the reliability and discriminant validity of the Dutch version of the life satisfaction questionnaire (Lisat-9 DV) to assess patients with an acquired brain injury. The reliability study used a test-retest design, and the validity study used a cross-sectional design. The setting was the general rehabilitation…

  15. Computational identification of structural factors affecting the mutagenic potential of aromatic amines: study design and experimental validation.

    PubMed

    Slavov, Svetoslav H; Stoyanova-Slavova, Iva; Mattes, William; Beger, Richard D; Brüschweiler, Beat J

    2018-07-01

    A grid-based, alignment-independent 3D-SDAR (three-dimensional spectral data-activity relationship) approach based on simulated 13C and 15N NMR chemical shifts augmented with through-space interatomic distances was used to model the mutagenicity of 554 primary and 419 secondary aromatic amines. A robust modeling strategy supported by extensive validation including randomized training/hold-out test set pairs, validation sets, "blind" external test sets as well as experimental validation was applied to avoid over-parameterization and build Organization for Economic Cooperation and Development (OECD 2004) compliant models. Based on an experimental validation set of 23 chemicals tested in a two-strain Salmonella typhimurium Ames assay, 3D-SDAR was able to achieve performance comparable to 5-strain (Ames) predictions by Lhasa Limited's Derek and Sarah Nexus for the same set. Furthermore, mapping of the most frequently occurring bins on the primary and secondary aromatic amine structures allowed the identification of molecular features that were associated either positively or negatively with mutagenicity. Prominent structural features found to enhance the mutagenic potential included: nitrobenzene moieties, conjugated π-systems, nitrothiophene groups, and aromatic hydroxylamine moieties. 3D-SDAR was also able to capture "true" negative contributions that are particularly difficult to detect through alternative methods. These include sulphonamide, acetamide, and other functional groups, which not only lack contributions to the overall mutagenic potential, but are known to actively lower it, if present in the chemical structures of what otherwise would be potential mutagens.

  16. Optimum study designs.

    PubMed

    Gu, C; Rao, D C

    2001-01-01

    Because simplistic designs will lead to prohibitively large sample sizes, the optimization of genetic study designs is critical for successfully mapping genes for complex diseases. Creative designs are necessary for detecting and amplifying the usually weak signals for complex traits. Two important outcomes of a study design--power and resolution--are implicitly tied together by the principle of uncertainty. Overemphasis on either one may lead to suboptimal designs. To achieve optimality for a particular study, therefore, practical measures such as cost-effectiveness must be used to strike a balance between power and resolution. In this light, the myriad of factors involved in study design can be checked for their effects on the ultimate outcomes, and the popular existing designs can be sorted into building blocks that may be useful for particular situations. It is hoped that imaginative construction of novel designs using such building blocks will lead to enhanced efficiency in finding genes for complex human traits.

  17. Addressing Participant Validity in a Small Internet Health Survey (The Restore Study): Protocol and Recommendations for Survey Response Validation.

    PubMed

    Dewitt, James; Capistrant, Benjamin; Kohli, Nidhi; Rosser, B R Simon; Mitteldorf, Darryl; Merengwa, Enyinnaya; West, William

    2018-04-24

    While deduplication and cross-validation protocols have been recommended for large Web-based studies, protocols for survey response validation of smaller studies have not been published. This paper reports the challenges of survey validation inherent in a small Web-based health survey research. The subject population was North American, gay and bisexual, prostate cancer survivors, who represent an under-researched, hidden, difficult-to-recruit, minority-within-a-minority population. In 2015-2016, advertising on a large Web-based cancer survivor support network, using email and social media, yielded 478 completed surveys. Our manual deduplication and cross-validation protocol identified 289 survey submissions (289/478, 60.4%) as likely spam, most stemming from advertising on social media. The basic components of this deduplication and validation protocol are detailed. An unexpected challenge encountered was invalid survey responses evolving across the study period. This necessitated the static detection protocol be augmented with a dynamic one. Five recommendations for validation of Web-based samples, especially with smaller difficult-to-recruit populations, are detailed. ©James Dewitt, Benjamin Capistrant, Nidhi Kohli, B R Simon Rosser, Darryl Mitteldorf, Enyinnaya Merengwa, William West. Originally published in JMIR Research Protocols (http://www.researchprotocols.org), 24.04.2018.

  18. Design and validation of standardized clinical and functional remission criteria in schizophrenia

    PubMed Central

    Mosolov, Sergey N; Potapov, Andrey V; Ushakov, Uriy V; Shafarenko, Aleksey A; Kostyukova, Anastasiya B

    2014-01-01

    Background International Remission Criteria (IRC) for schizophrenia were developed recently by a group of internationally known experts. The IRC detect only 10%–30% of cases and do not cover the diversity of forms and social functioning. Our aim was to design a more applicable tool and validate its use – the Standardized Clinical and Functional Remission Criteria (SCFRC). Methods We used a 6-month follow-up study of 203 outpatients from two Moscow centers and another further sample of stable patients from a 1-year controlled trial of atypical versus typical medication. Diagnosis was confirmed by International Classification of Diseases Version 10 (ICD10) criteria and the Mini-International Neuropsychiatric Interview (MINI). Patients were assessed by the Positive and Negative Syndrome Scale, including intensity threshold, and further classified using the Russian domestic remission criteria and the level of social and personal functioning, according to the Personal and Social Performance Scale (PSP). The SCFRC were formulated and were validated by a data reanalysis on the first population sample and on a second independent sample (104 patients) and in an open-label prospective randomized 12-month comparative study of risperidone long-acting injectable (RLAI) versus olanzapine. Results Only 64 of the 203 outpatients (31.5%) initially met the IRC, and 53 patients (26.1%) met the IRC after 6 months, without a change in treatment. Patients who were in remission had episodic and progressive deficit (39.6%), or remittent (15%) paranoid schizophrenia, or schizoaffective disorder (17%). In addition, 105 patients of 139 (51.7%), who did not meet symptomatic IRC, remained stable within the period. Reanalysis of data revealed that 65.5% of the patients met the SCFRC. In the controlled trial, 70% of patients in the RLAI group met the SCFRC and only 19% the IRC. In the routine treatment group, 55.9% met the SCFRC and only 5.7% the IRC. Results of the further independent

  19. Development and community-based validation of the IDEA study Instrumental Activities of Daily Living (IDEA-IADL) questionnaire

    PubMed Central

    Collingwood, Cecilia; Paddick, Stella-Maria; Kisoli, Aloyce; Dotchin, Catherine L.; Gray, William K.; Mbowe, Godfrey; Mkenda, Sarah; Urasa, Sarah; Mushi, Declare; Chaote, Paul; Walker, Richard W.

    2014-01-01

    Background The dementia diagnosis gap in sub-Saharan Africa (SSA) is large, partly due to difficulties in assessing function, an essential step in diagnosis. Objectives As part of the Identification and Intervention for Dementia in Elderly Africans (IDEA) study, to develop, pilot, and validate an Instrumental Activities of Daily Living (IADL) questionnaire for use in a rural Tanzanian population to assist in the identification of people with dementia alongside cognitive screening. Design The questionnaire was developed at a workshop for rural primary healthcare workers, based on culturally appropriate roles and usual activities of elderly people in this community. It was piloted in 52 individuals under follow-up from a dementia prevalence study. Validation subsequently took place during a community dementia-screening programme. Construct validation against gold standard clinical dementia diagnosis using DSM-IV criteria was carried out on a stratified sample of the cohort and validity assessed using area under the receiver operating characteristic (AUROC) curve analysis. Results An 11-item questionnaire (IDEA-IADL) was developed after pilot testing. During formal validation on 130 community-dwelling elderly people who presented for screening, the AUROC curve was 0.896 for DSM-IV dementia when used in isolation and 0.937 when used in conjunction with the IDEA cognitive screen, previously validated in Tanzania. The internal consistency was 0.959. Performance on the IDEA-IADL was not biased with regard to age, gender or education level. Conclusions The IDEA-IADL questionnaire appears to be a useful aid to dementia screening in this setting. Further validation in other healthcare settings in SSA is required. PMID:25537940
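
    The AUROC validation step reported above can be sketched with scikit-learn as follows; the diagnosis labels and questionnaire scores are simulated, so the 0.896 and 0.937 figures quoted in the abstract are not reproduced here.

      # Area under the ROC curve for a screening score against a binary diagnosis.
      import numpy as np
      from sklearn.metrics import roc_auc_score

      rng = np.random.default_rng(7)
      dementia = rng.integers(0, 2, size=130)              # gold-standard diagnosis (0/1)
      score = dementia * 2.0 + rng.normal(size=130)        # questionnaire score

      print(f"AUROC = {roc_auc_score(dementia, score):.3f}")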

  20. Validation test of 125 Ah advanced design IPV nickel-hydrogen flight cells

    NASA Technical Reports Server (NTRS)

    Smithrick, John J.; Hall, Stephen W.

    1993-01-01

    An update of validation test results confirming the advanced design nickel-hydrogen cell is presented. An advanced 125 Ah individual pressure vessel (IPV) nickel-hydrogen cell was designed. The primary function of the advanced cell is to store and deliver energy for long-term, Low-Earth-Orbit (LEO) spacecraft missions. The new features of this design, which are not incorporated in state-of-the-art design cells, are: (1) use of 26 percent rather than 31 percent potassium hydroxide (KOH) electrolyte; (2) use of a patented catalyzed wall wick; (3) use of serrated-edge separators to facilitate gaseous oxygen and hydrogen flow within the cell, while still maintaining physical contact with the wall wick for electrolyte management; and (4) use of a floating rather than a fixed stack (state-of-the-art) to accommodate nickel electrode expansion due to charge/discharge cycling. The significant improvements resulting from these innovations are extended cycle life; enhanced thermal, electrolyte, and oxygen management; and accommodation of nickel electrode expansion. Six 125 Ah flight cells based on this design were fabricated by Eagle-Picher. Three of the cells contain all of the advanced features (test cells) and three are the same as the test cells except they do not have catalyst on the wall wick (control cells). All six cells are in the process of being evaluated in a LEO cycle life test at the Naval Weapons Support Center, Crane, IN, under a NASA Lewis Research Center contract. The catalyzed wall wick cells have been cycled for over 19000 cycles with no cell failures in the continuing test. Two of the noncatalyzed wall wick cells failed (cycles 9588 and 13,900).

  1. JaCVAM-organized international validation study of the in vivo rodent alkaline comet assay for detection of genotoxic carcinogens: II. Summary of definitive validation study results.

    PubMed

    Uno, Yoshifumi; Kojima, Hajime; Omori, Takashi; Corvi, Raffaella; Honma, Masamistu; Schechtman, Leonard M; Tice, Raymond R; Beevers, Carol; De Boeck, Marlies; Burlinson, Brian; Hobbs, Cheryl A; Kitamoto, Sachiko; Kraynak, Andrew R; McNamee, James; Nakagawa, Yuzuki; Pant, Kamala; Plappert-Helbig, Ulla; Priestley, Catherine; Takasawa, Hironao; Wada, Kunio; Wirnitzer, Uta; Asano, Norihide; Escobar, Patricia A; Lovell, David; Morita, Takeshi; Nakajima, Madoka; Ohno, Yasuo; Hayashi, Makoto

    2015-07-01

    The in vivo rodent alkaline comet assay (comet assay) is used internationally to investigate the in vivo genotoxic potential of test chemicals. This assay, however, has not previously been formally validated. The Japanese Center for the Validation of Alternative Methods (JaCVAM), with the cooperation of the U.S. NTP Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM)/the Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), the European Centre for the Validation of Alternative Methods (ECVAM), and the Japanese Environmental Mutagen Society/Mammalian Mutagenesis Study Group (JEMS/MMS), organized an international validation study to evaluate the reliability and relevance of the assay for identifying genotoxic carcinogens, using liver and stomach as target organs. The ultimate goal of this exercise was to establish an Organisation for Economic Co-operation and Development (OECD) test guideline. The study protocol was optimized in the pre-validation studies, and then the definitive (4th phase) validation study was conducted in two steps. In the 1st step, assay reproducibility was confirmed among laboratories using four coded reference chemicals and the positive control ethyl methanesulfonate. In the 2nd step, the predictive capability was investigated using 40 coded chemicals with known genotoxic and carcinogenic activity (i.e., genotoxic carcinogens, genotoxic non-carcinogens, non-genotoxic carcinogens, and non-genotoxic non-carcinogens). Based on the results obtained, the in vivo comet assay is concluded to be highly capable of identifying genotoxic chemicals and therefore can serve as a reliable predictor of rodent carcinogenicity. Copyright © 2015 Elsevier B.V. All rights reserved.

  2. Construct Validation Theory Applied to the Study of Personality Dysfunction

    PubMed Central

    Zapolski, Tamika C. B.; Guller, Leila; Smith, Gregory T.

    2013-01-01

    The authors review theory validation and construct validation principles as related to the study of personality dysfunction. Historically, personality disorders have been understood to be syndromes of heterogeneous symptoms. The authors argue that the syndrome approach to description results in diagnoses of unclear meaning and constrained validity. The alternative approach of describing personality dysfunction in terms of homogeneous dimensions of functioning avoids the problems of the syndromal approach and has been shown to provide more valid description and diagnosis. The authors further argue that description based on homogeneous dimensions of personality function/dysfunction is more useful, because it provides direct connections to validated treatments. PMID:22321263

  3. Reliable Digit Span: A Systematic Review and Cross-Validation Study

    ERIC Educational Resources Information Center

    Schroeder, Ryan W.; Twumasi-Ankrah, Philip; Baade, Lyle E.; Marshall, Paul S.

    2012-01-01

    Reliable Digit Span (RDS) is a heavily researched symptom validity test with a recent literature review yielding more than 20 studies ranging in dates from 1994 to 2011. Unfortunately, limitations within some of the research minimize clinical generalizability. This systematic review and cross-validation study was conducted to address these…

  4. Design and validation of a questionnaire to measure the attitudes of hospital staff concerning pandemic influenza.

    PubMed

    Naghavi, Seyed Hamid Reza; Shabestari, Omid; Roudsari, Abdul V; Harrison, John

    2012-03-01

    When pandemics lead to a higher workload in the healthcare sector, the attitude of healthcare staff and, more importantly, the ability to predict the rate of absence due to sickness are crucial factors in emergency preparedness and resource allocation. The aim of this study was to design and validate a questionnaire to measure the attitude of hospital staff toward work attendance during an influenza pandemic. An online questionnaire was designed and electronically distributed to the staff of a teaching medical institution in the United Kingdom. The questionnaire was designed de novo following discussions with colleagues at Imperial College and with reference to the literature on the severe acute respiratory syndrome (SARS) epidemic. The questionnaire included 15 independent fact variables and 33 dependent measure variables. A total of 367 responses were received in this survey. The data from the measurement variables were not normally distributed. Three different methods (standardized residuals, Mahalanobis distance and Cook's distance) were used to identify the outliers. In all, 19 respondents (5.17%) were identified as outliers and were excluded. The responses to this questionnaire had a wide range of missing data, from 1 to 74 cases in the measured variables. To improve the quality of the data, missing value analysis, using Expectation Maximization Algorithm (EMA) with a non-normal distribution model, was applied to the responses. The collected data were checked for homoscedasticity and multicollinearity of the variables. These tests suggested that some of the questions should be merged. In the last step, the reliability of the questionnaire was evaluated. This process showed that three questions reduced the reliability of the questionnaire. Removing those questions helped to achieve the desired level of reliability. With the changes proposed in this article, the questionnaire for measuring staff attitudes concerning pandemic influenza can be converted to a
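
    One of the three outlier screens mentioned above, the Mahalanobis distance with a chi-square cutoff, can be sketched as follows; the response matrix and the cutoff are illustrative and do not reproduce the study's 5.17% exclusion rate.

      # Multivariate outlier screening by Mahalanobis distance.
      import numpy as np
      from scipy import stats

      rng = np.random.default_rng(8)
      x = rng.normal(size=(360, 5))                  # 360 respondents, 5 summary scores
      x[:10] += 6.0                                  # plant a few gross outliers

      centered = x - x.mean(axis=0)
      cov_inv = np.linalg.inv(np.cov(x, rowvar=False))
      d2 = np.einsum("ij,jk,ik->i", centered, cov_inv, centered)   # squared distances

      cutoff = stats.chi2.ppf(0.999, df=x.shape[1])
      print(f"flagged as outliers: {(d2 > cutoff).sum()} of {len(x)}")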

  5. [Design and validation of a questionnaire to assess dietary behavior in Mexican students in the area of health].

    PubMed

    Márquez-Sandoval, Yolanda Fabiola; Salazar-Ruiz, Erika Nohemi; Macedo-Ojeda, Gabriela; Altamirano-Martínez, Macedo-Ojeda; Bernal-Orozco, María Fernanda; Salas-Salvadó, Jordi; Vizmanos-Lamotte, Barbara

    2014-07-01

    Dietary behavior (DB) establishes the relationship between human beings and food and influences nutrient intake; it therefore contributes to the health or disease status of a population, including college students. There exist some validated instruments to assess food and nutrient intake, but very few assess DB. To design and validate a questionnaire to assess DB in Mexican college students. Based on the literature and the theory of reasoned action, a questionnaire assessing DB was designed. Its logical and content validity was determined by expert assessment. It was applied on two occasions, with a 4-week interval, to 333 students from the University of Guadalajara attending the sixth semester of Medicine or Nutrition. Reproducibility was assessed by means of the intraclass correlation coefficient. Construct validity and internal consistency were calculated by Rasch analysis, for both the difficulty of the items and the subjects' capability. The questionnaire finally included 31 questions with multiple choice answers. The intraclass correlation coefficient of the instrument was 0.76. Cronbach's alpha was 0.50 for the subjects' capability and 0.98 for the internal consistency of the items. 87.1% of the subjects and 89.8% of the items had INFIT and OUTFIT values within acceptable limits. The present questionnaire has the potential to measure, at low cost and in a practical way, aspects related to DB in college students, with the aim of establishing or following up corrective or preventive actions. Copyright AULA MEDICA EDICIONES 2014. Published by AULA MEDICA. All rights reserved.

  6. Sulfonamide-containing PTP 1B inhibitors: Docking studies, synthesis and model validation

    NASA Astrophysics Data System (ADS)

    Niu, Enli; Gan, Qiang; Chen, Xi; Feng, Changgen

    2017-01-01

    PTP 1B plays an important role in regulating the insulin signaling pathway and is regarded as a valid target for treating diabetes and obesity. In this paper, two novel sulfonamide-containing PTP 1B inhibitors were designed, synthesized under mild conditions, and characterized by FT-IR, 1H NMR, 13C NMR and elemental analysis. Single crystals of compounds 7 and 8 were obtained, and their structures were determined by X-ray single-crystal diffraction analysis. In addition, their inhibitory activities were predicted by a genetic algorithm and then evaluated in an in vitro enzyme activity test. Compound 8 showed good inhibitory activity, consistent with the docking studies.

  7. External validity of post-stroke interventional gait rehabilitation studies.

    PubMed

    Kafri, Michal; Dickstein, Ruth

    2017-01-01

    Gait rehabilitation is a major component of stroke rehabilitation, and is supported by extensive research. The objective of this review was to examine the external validity of intervention studies aimed at improving gait in individuals post-stroke. To that end, two aspects of these studies were assessed: subjects' exclusion criteria and the ecological validity of the intervention, as manifested by the intervention's technological complexity and delivery setting. Additionally, we examined whether the target population as inferred from the titles/abstracts is broader than the population actually represented by the reported samples. We systematically searched PubMed for intervention studies to improve gait post-stroke, working backwards from the beginning of 2014. Exclusion criteria, the technological complexity of the intervention (defined as either elaborate or simple), setting, and description of the target population in the titles/abstracts were recorded. Fifty-two studies were reviewed. The samples were exclusive, with recurrent stroke, co-morbidities, cognitive status, walking level, and residency being major reasons for exclusion. In one half of the studies, the intervention was elaborate. Descriptions of participants in the title/abstract in almost one half of the studies included only the diagnosis (stroke or comparable terms) and its stage (acute, subacute, and chronic). The external validity of a substantial number of intervention studies about rehabilitation of gait post-stroke appears to be limited by exclusivity of the samples as well as by deficiencies in ecological validity of the interventions. These limitations are not accurately reflected in the titles or abstracts of the studies.

  8. Foundation Heat Exchanger Final Report: Demonstration, Measured Performance, and Validated Model and Design Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hughes, Patrick; Im, Piljae

    2012-04-01

    The term foundation heat exchanger (FHX) has been coined to refer exclusively to ground heat exchangers installed in the overcut around the basement walls. The primary technical challenge undertaken by this project was the development and validation of energy performance models and design tools for FHX. In terms of performance modeling and design, ground heat exchangers in other construction excavations (e.g., utility trenches) are no different from conventional HGHX, and models and design tools for HGHX already exist. This project successfully developed and validated energy performance models and design tools so that FHX or hybrid FHX/HGHX systems can be engineered with confidence, enabling this technology to be applied in residential and light commercial buildings. The validated energy performance model also addresses and solves another problem, the longstanding inadequacy in the way ground-building thermal interaction is represented in building energy models, whether or not there is a ground heat exchanger nearby. Two side-by-side, three-level, unoccupied research houses with walkout basements, identical 3,700 ft² floor plans, and hybrid FHX/HGHX systems were constructed to provide validation data sets for the energy performance model and design tool. The envelopes of both houses are very energy efficient and airtight, and the HERS ratings of the homes are 44 and 45, respectively. Both houses are mechanically ventilated with energy recovery ventilators, with space conditioning provided by water-to-air heat pumps with 2 ton nominal capacities. Separate water-to-water heat pumps with 1.5 ton nominal capacities were used for water heating. In these unoccupied research houses, human impact on energy use (hot water draw, etc.) is simulated to match the national average. At House 1 the hybrid FHX/HGHX system was installed in 300 linear feet of excavation, and 60% of that was construction excavation (needed to construct the home). At House 2 the hybrid FHX/HGHX system was installed in 360 linear feet of excavation.

  9. Aircraft Wake Vortex Spacing System (AVOSS) Performance Update and Validation Study

    NASA Technical Reports Server (NTRS)

    Rutishauser, David K.; O'Connor, Cornelius J.

    2001-01-01

    An analysis has been performed on data generated from the two most recent field deployments of the Aircraft Wake VOrtex Spacing System (AVOSS). The AVOSS provides reduced aircraft spacing criteria for wake vortex avoidance as compared to the FAA spacing applied under Instrument Flight Rules (IFR). Several field deployments culminating in a system demonstration at Dallas Fort Worth (DFW) International Airport in the summer of 2000 were successful in showing a sound operational concept and the system's potential to provide a significant benefit to airport operations. For DFW, a predicted average throughput increase of 6% was observed. This increase implies 6 or 7 more aircraft on the ground in a one-hour period for DFW operations. Several studies of performance correlations to system configuration options, design options, and system inputs are also reported. The studies focus on the validation performance of the system.

  10. Virtual screening studies on HIV-1 reverse transcriptase inhibitors to design potent leads.

    PubMed

    Vadivelan, S; Deeksha, T N; Arun, S; Machiraju, Pavan Kumar; Gundla, Rambabu; Sinha, Barij Nayan; Jagarlapudi, Sarma A R P

    2011-03-01

    The purpose of this study is to identify novel and potent inhibitors against HIV-1 reverse transcriptase (RT). The crystal structure of the most active ligand was converted into a feature-shaped query. This query was used to align molecules to generate a statistically valid 3D-QSAR model (r(2) = 0.873) and pharmacophore models (HypoGen). The best HypoGen model consists of three pharmacophore features (one hydrogen bond acceptor, one hydrophobic aliphatic and one ring aromatic) and was further validated using known RT inhibitors. The designed novel inhibitors were further subjected to docking studies to reduce the number of false positives. We have identified and proposed some novel and potential lead molecules as reverse transcriptase inhibitors using analog- and structure-based studies. Copyright © 2011 Elsevier Masson SAS. All rights reserved.

  11. Model-based verification and validation of the SMAP uplink processes

    NASA Astrophysics Data System (ADS)

    Khan, M. O.; Dubos, G. F.; Tirona, J.; Standley, S.

    Model-Based Systems Engineering (MBSE) is being used increasingly within the spacecraft design community because of its benefits when compared to document-based approaches. As the complexity of projects expands dramatically with continually increasing computational power and technology infusion, the time and effort needed for verification and validation (V&V) increases geometrically. Using simulation to perform design validation with system-level models earlier in the life cycle stands to bridge the gap between design of the system (based on system-level requirements) and verifying those requirements/validating the system as a whole. This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V&V processes by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process. Also discussed are the advantages and disadvantages of the methods employed and the lessons learned, which are intended to benefit future model-based and simulation-based development efforts.

  12. X-33 Base Region Thermal Protection System Design Study

    NASA Technical Reports Server (NTRS)

    Lycans, Randal W.

    1998-01-01

    The X-33 is an advanced technology demonstrator for validating critical technologies and systems required for an operational Single-Stage-to-Orbit (SSTO) Reusable Launch Vehicle (RLV). Currently under development by a unique contractor/government team led by Lockheed Martin Skunk Works (LMSW), and managed by Marshall Space Flight Center (MSFC), the X-33 will be the prototype of the first new launch system developed by the United States since the advent of the space shuttle. This paper documents a design trade study of the X-33 base region thermal protection system (TPS). Two candidate designs were evaluated for thermal performance and weight. The first candidate was a fully reusable metallic TPS using Inconel honeycomb panels insulated with high temperature fibrous insulation, while the second was an ablator/insulator sprayed on the metallic skin of the vehicle. The TPS configurations and insulation thickness requirements were determined for the predicted main engine plume heating environments and base region entry aerothermal environments. In addition to thermal analysis of the design concepts, sensitivity studies were performed to investigate the effect of variations in key parameters of the base TPS analysis.

  13. A new adaptive videogame for training attention and executive functions: design principles and initial validation.

    PubMed

    Montani, Veronica; De Filippo De Grazia, Michele; Zorzi, Marco

    2014-01-01

    A growing body of evidence suggests that action videogames could enhance a variety of cognitive skills and more specifically attention skills. The aim of this study was to develop a novel adaptive videogame to support the rehabilitation of the most common consequences of traumatic brain injury (TBI), that is, the impairment of attention and executive functions. TBI patients can be affected by psychomotor slowness and by difficulties in dealing with distraction, maintaining a cognitive set for a long time, processing different simultaneously presented stimuli, and planning purposeful behavior. Accordingly, we designed a videogame that was specifically conceived to activate those functions. Playing involves visuospatial planning and selective attention, active maintenance of the cognitive set representing the goal, and error monitoring. Moreover, different game trials require the player to alternate between two tasks (i.e., task switching) or to perform the two tasks simultaneously (i.e., divided attention/dual-tasking). The videogame is controlled by a multidimensional adaptive algorithm that calibrates task difficulty on-line based on a model of user performance that is updated on a trial-by-trial basis. We report simulations of user performance designed to test the adaptive game as well as a validation study with healthy participants engaged in a training protocol. The results confirmed the involvement of the cognitive abilities that the game is supposed to enhance and suggested that training improved attentional control during play.
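    The abstract describes a multidimensional adaptive algorithm that recalibrates difficulty trial by trial from a model of user performance. As a much simpler illustration of the same idea, the sketch below implements a hypothetical one-dimensional 1-up/1-down staircase in Python; it is not the authors' algorithm.

        def update_difficulty(level, correct, step_up=1, step_down=1,
                              min_level=1, max_level=20):
            """1-up/1-down staircase: raise difficulty after a correct trial,
            lower it after an error, clamped to a fixed range."""
            level = level + step_up if correct else level - step_down
            return max(min_level, min(max_level, level))

        # Hypothetical run: difficulty tracks a sequence of trial outcomes.
        level = 5
        for correct in [True, True, False, True, False, False, True]:
            level = update_difficulty(level, correct)
            print(level)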

  14. A new adaptive videogame for training attention and executive functions: design principles and initial validation

    PubMed Central

    Montani, Veronica; De Filippo De Grazia, Michele; Zorzi, Marco

    2014-01-01

    A growing body of evidence suggests that action videogames could enhance a variety of cognitive skills and more specifically attention skills. The aim of this study was to develop a novel adaptive videogame to support the rehabilitation of the most common consequences of traumatic brain injury (TBI), that is, the impairment of attention and executive functions. TBI patients can be affected by psychomotor slowness and by difficulties in dealing with distraction, maintaining a cognitive set for a long time, processing different simultaneously presented stimuli, and planning purposeful behavior. Accordingly, we designed a videogame that was specifically conceived to activate those functions. Playing involves visuospatial planning and selective attention, active maintenance of the cognitive set representing the goal, and error monitoring. Moreover, different game trials require the player to alternate between two tasks (i.e., task switching) or to perform the two tasks simultaneously (i.e., divided attention/dual-tasking). The videogame is controlled by a multidimensional adaptive algorithm that calibrates task difficulty on-line based on a model of user performance that is updated on a trial-by-trial basis. We report simulations of user performance designed to test the adaptive game as well as a validation study with healthy participants engaged in a training protocol. The results confirmed the involvement of the cognitive abilities that the game is supposed to enhance and suggested that training improved attentional control during play. PMID:24860529

  15. [Turkish validity and reliability study of fear of pain questionnaire-III].

    PubMed

    Ünver, Seher; Turan, Fatma Nesrin

    2018-01-01

    This study aimed to develop a Turkish version of the Fear of Pain Questionnaire-III developed by McNeil and Rainwater (1998) and to examine its validity and reliability indicators. The study was conducted with 459 university students studying in the nursing department. The Turkish translation of the scale was carried out by language experts and the author of the original scale. Expert opinions were taken for language validity, and Lawshe's content validity ratio formula was used to calculate the content validity. Exploratory factor analysis was used to assess the construct validity. The factors were rotated using the Varimax (orthogonal) rotation method. For the reliability indicators of the questionnaire, the internal consistency coefficient and test-retest reliability were utilized. Exploratory factor analysis using the three-factor model (explaining 50.5% of the total variance) revealed that the item factor loadings were above the limit value of 0.30, which indicated that the questionnaire had good construct validity. The Cronbach's alpha value for the total questionnaire was 0.938, and the test-retest value was 0.846 for the total scale. The Turkish version of the Fear of Pain Questionnaire-III had sufficiently high reliability and validity to be used as a tool in evaluating the fear of pain among the young Turkish population.
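    A brief sketch, in Python, of the internal-consistency statistic reported above (Cronbach's alpha) computed from an item-response matrix. The 459 x 30 item matrix is simulated around a common latent trait so that the items are positively correlated, as in a real questionnaire; the printed value will not match the published 0.938.

        import numpy as np

        def cronbach_alpha(items):
            """Cronbach's alpha for an (n_respondents x n_items) score matrix."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        # Simulated respondents whose item scores share a common latent trait.
        rng = np.random.default_rng(1)
        trait = rng.normal(size=(459, 1))
        scores = trait + rng.normal(scale=1.0, size=(459, 30))
        print(round(cronbach_alpha(scores), 3))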

  16. Virtual reality as a human factors design analysis tool: Macro-ergonomic application validation and assessment of the Space Station Freedom payload control area

    NASA Technical Reports Server (NTRS)

    Hale, Joseph P.

    1994-01-01

    A virtual reality (VR) Applications Program has been under development at MSFC since 1989. Its objectives are to develop, assess, validate, and utilize VR in hardware development, operations development and support, mission operations training, and science training. A variety of activities are under way within many of these areas. One ongoing macro-ergonomic application of VR relates to the design of the Space Station Freedom Payload Control Area (PCA), the control room from which onboard payload operations are managed. Several preliminary conceptual PCA layouts have been developed and modeled in VR. Various managers and potential end users have virtually 'entered' these rooms and provided valuable feedback. Before VR can be used with confidence in a particular application, it must be validated, or calibrated, for that class of applications. Two associated validation studies for macro-ergonomic applications are under way to help characterize possible distortions or filtering of relevant perceptions in a virtual world. In both studies, existing control rooms and their virtual counterparts will be empirically compared using distance and heading estimations to objects and subjective assessments. Approaches and findings of the PCA activities and details of the studies are presented.

  17. Methodological convergence of program evaluation designs.

    PubMed

    Chacón-Moscoso, Salvador; Anguera, M Teresa; Sanduvete-Chaves, Susana; Sánchez-Martín, Milagrosa

    2014-01-01

    Nowadays, the dichotomous view that sets experimental/quasi-experimental studies against non-experimental/ethnographic studies still exists but, despite the extensive use of non-experimental/ethnographic studies, the most systematic work on methodological quality has been developed on the basis of experimental and quasi-experimental studies. This hinders evaluators' and planners' practice of empirical program evaluation, a sphere in which the distinction between types of study is changing continually and is less clear. Based on the classical validity framework of experimental/quasi-experimental studies, we review the literature in order to analyze the convergence of design elements bearing on methodological quality in primary studies included in systematic reviews and in ethnographic research. We specify the relevant design elements that should be taken into account in order to improve validity and generalization in program evaluation practice across different methodologies, from a practical and complementary methodological point of view. We recommend ways to improve design elements so as to enhance validity and generalization in program evaluation practice.

  18. Game Coaching System Design and Development: A Retrospective Case Study of FPS Trainer

    ERIC Educational Resources Information Center

    Tan, Wee Hoe

    2013-01-01

    This paper is a retrospective case study of a game-based learning (GBL) researcher who cooperated with a professional gamer and a team of game developers to design and develop a coaching system for First-Person Shooter (FPS) players. The GBL researcher intended to verify the ecological validity of a model of cooperation; the developers wanted to…

  19. Methodology for the nuclear design validation of an Alternate Emergency Management Centre (CAGE)

    NASA Astrophysics Data System (ADS)

    Hueso, César; Fabbri, Marco; de la Fuente, Cristina; Janés, Albert; Massuet, Joan; Zamora, Imanol; Gasca, Cristina; Hernández, Héctor; Vega, J. Ángel

    2017-09-01

    The methodology is devised by coupling different codes. The study of weather conditions, as part of the site data, determines the relative concentrations of radionuclides in the air using ARCON96. The activity in the air is characterized, depending on the source and release sequence specified in NUREG-1465, by the RADTRAD code, which provides the results for the inner-cloud source term contribution. Once the activities are known, energy spectra are inferred using ORIGEN-S and used as input for the models of the outer cloud, filters and containment generated with MCNP5. The sum of the different contributions must meet the habitability conditions specified by the CSN (Spanish Nuclear Regulatory Body) (TEDE < 50 mSv and equivalent dose to the thyroid < 500 mSv within 30 days following the accident), so the dose is optimized by varying parameters such as CAGE location, filtering flow, need for recirculation, and the thicknesses and compositions of the walls. The results for the most penalizing area meet the established criteria, and therefore the CAGE building design based on the presented methodology is radiologically validated.

  20. Ocean power technology design optimization

    DOE PAGES

    van Rij, Jennifer; Yu, Yi-Hsiang; Edwards, Kathleen; ...

    2017-07-18

    For this study, the National Renewable Energy Laboratory and Ocean Power Technologies (OPT) conducted a collaborative code validation and design optimization study for OPT's PowerBuoy wave energy converter (WEC). NREL utilized WEC-Sim, an open-source WEC simulator, to compare four design variations of OPT's PowerBuoy. As an input to the WEC-Sim models, viscous drag coefficients for the PowerBuoy floats were first evaluated using computational fluid dynamics. The resulting WEC-Sim PowerBuoy models were then validated with experimental power output and fatigue load data provided by OPT. The validated WEC-Sim models were then used to simulate the power performance and loads for operational conditions, extreme conditions, and directional waves, for each of the four PowerBuoy design variations, assuming the wave environment of Humboldt Bay, California. And finally, ratios of power-to-weight, power-to-fatigue-load, power-to-maximum-extreme-load, power-to-water-plane-area, and power-to-wetted-surface-area were used to make a final comparison of the potential PowerBuoy WEC designs. Lastly, the design comparison methodologies developed and presented in this study are applicable to other WEC devices and may be useful as a framework for future WEC design development projects.

  1. Ocean power technology design optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Rij, Jennifer; Yu, Yi-Hsiang; Edwards, Kathleen

    For this study, the National Renewable Energy Laboratory and Ocean Power Technologies (OPT) conducted a collaborative code validation and design optimization study for OPT's PowerBuoy wave energy converter (WEC). NREL utilized WEC-Sim, an open-source WEC simulator, to compare four design variations of OPT's PowerBuoy. As an input to the WEC-Sim models, viscous drag coefficients for the PowerBuoy floats were first evaluated using computational fluid dynamics. The resulting WEC-Sim PowerBuoy models were then validated with experimental power output and fatigue load data provided by OPT. The validated WEC-Sim models were then used to simulate the power performance and loads for operational conditions, extreme conditions, and directional waves, for each of the four PowerBuoy design variations, assuming the wave environment of Humboldt Bay, California. And finally, ratios of power-to-weight, power-to-fatigue-load, power-to-maximum-extreme-load, power-to-water-plane-area, and power-to-wetted-surface-area were used to make a final comparison of the potential PowerBuoy WEC designs. Lastly, the design comparison methodologies developed and presented in this study are applicable to other WEC devices and may be useful as a framework for future WEC design development projects.

  2. Application of computational methods for the design of BACE-1 inhibitors: validation of in silico modelling.

    PubMed

    Bajda, Marek; Jończyk, Jakub; Malawska, Barbara; Filipek, Sławomir

    2014-03-24

    β-Secretase (BACE-1) constitutes an important target for search of anti-Alzheimer's drugs. The first inhibitors of this enzyme were peptidic compounds with high molecular weight and low bioavailability. Therefore, the search for new efficient non-peptidic inhibitors has been undertaken by many scientific groups. We started our work from the development of in silico methodology for the design of novel BACE-1 ligands. It was validated on the basis of crystal structures of complexes with inhibitors, redocking, cross-docking and training/test sets of reference ligands. The presented procedure of assessment of the novel compounds as β-secretase inhibitors could be widely used in the design process.

  3. Does a Multi-Media Program Enhance Job Matching for a Population with Intellectual Disabilities? A Social Validity Study

    ERIC Educational Resources Information Center

    Michaud, Kim M.

    2017-01-01

    This dissertation describes a mixed method design study on the social validity of a multi-media job search tool, the YES tool, at a four-year Comprehensive Transition Program at an East Coast University. The participants included twelve students, randomly selected from those who, with their parents' assent, agreed to volunteer for this study…

  4. Statistical Design for Biospecimen Cohort Size in Proteomics-based Biomarker Discovery and Verification Studies

    PubMed Central

    Skates, Steven J.; Gillette, Michael A.; LaBaer, Joshua; Carr, Steven A.; Anderson, N. Leigh; Liebler, Daniel C.; Ransohoff, David; Rifai, Nader; Kondratovich, Marina; Težak, Živana; Mansfield, Elizabeth; Oberg, Ann L.; Wright, Ian; Barnes, Grady; Gail, Mitchell; Mesri, Mehdi; Kinsinger, Christopher R.; Rodriguez, Henry; Boja, Emily S.

    2014-01-01

    Protein biomarkers are needed to deepen our understanding of cancer biology and to improve our ability to diagnose, monitor and treat cancers. Important analytical and clinical hurdles must be overcome to allow the most promising protein biomarker candidates to advance into clinical validation studies. Although contemporary proteomics technologies support the measurement of large numbers of proteins in individual clinical specimens, sample throughput remains comparatively low. This problem is amplified in typical clinical proteomics research studies, which routinely suffer from a lack of proper experimental design, resulting in analysis of too few biospecimens to achieve adequate statistical power at each stage of a biomarker pipeline. To address this critical shortcoming, a joint workshop was held by the National Cancer Institute (NCI), National Heart, Lung and Blood Institute (NHLBI), and American Association for Clinical Chemistry (AACC), with participation from the U.S. Food and Drug Administration (FDA). An important output from the workshop was a statistical framework for the design of biomarker discovery and verification studies. Herein, we describe the use of quantitative clinical judgments to set statistical criteria for clinical relevance, and the development of an approach to calculate biospecimen sample size for proteomic studies in discovery and verification stages prior to clinical validation stage. This represents a first step towards building a consensus on quantitative criteria for statistical design of proteomics biomarker discovery and verification research. PMID:24063748
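    A minimal sketch of the kind of calculation such a framework builds on: the standard normal-approximation sample size per group for detecting a standardized mean difference in a candidate biomarker. The effect size, alpha, and power below are illustrative assumptions, not the workshop's recommendations.

        from math import ceil
        from scipy.stats import norm

        def n_per_group(effect_size, alpha=0.05, power=0.80):
            """Two-sample sample size (per group) for a standardized mean
            difference, using the usual normal-approximation formula."""
            z_alpha = norm.ppf(1 - alpha / 2)
            z_beta = norm.ppf(power)
            return ceil(2 * (z_alpha + z_beta) ** 2 / effect_size ** 2)

        # Hypothetical candidate biomarker with a one-standard-deviation shift
        # between cases and controls, 80% power, two-sided alpha = 0.05.
        print(n_per_group(effect_size=1.0))  # about 16 specimens per group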

  5. Statistical design for biospecimen cohort size in proteomics-based biomarker discovery and verification studies.

    PubMed

    Skates, Steven J; Gillette, Michael A; LaBaer, Joshua; Carr, Steven A; Anderson, Leigh; Liebler, Daniel C; Ransohoff, David; Rifai, Nader; Kondratovich, Marina; Težak, Živana; Mansfield, Elizabeth; Oberg, Ann L; Wright, Ian; Barnes, Grady; Gail, Mitchell; Mesri, Mehdi; Kinsinger, Christopher R; Rodriguez, Henry; Boja, Emily S

    2013-12-06

    Protein biomarkers are needed to deepen our understanding of cancer biology and to improve our ability to diagnose, monitor, and treat cancers. Important analytical and clinical hurdles must be overcome to allow the most promising protein biomarker candidates to advance into clinical validation studies. Although contemporary proteomics technologies support the measurement of large numbers of proteins in individual clinical specimens, sample throughput remains comparatively low. This problem is amplified in typical clinical proteomics research studies, which routinely suffer from a lack of proper experimental design, resulting in analysis of too few biospecimens to achieve adequate statistical power at each stage of a biomarker pipeline. To address this critical shortcoming, a joint workshop was held by the National Cancer Institute (NCI), National Heart, Lung, and Blood Institute (NHLBI), and American Association for Clinical Chemistry (AACC) with participation from the U.S. Food and Drug Administration (FDA). An important output from the workshop was a statistical framework for the design of biomarker discovery and verification studies. Herein, we describe the use of quantitative clinical judgments to set statistical criteria for clinical relevance and the development of an approach to calculate biospecimen sample size for proteomic studies in discovery and verification stages prior to clinical validation stage. This represents a first step toward building a consensus on quantitative criteria for statistical design of proteomics biomarker discovery and verification research.

  6. An empirical assessment of validation practices for molecular classifiers

    PubMed Central

    Castaldi, Peter J.; Dahabreh, Issa J.

    2011-01-01

    Proposed molecular classifiers may be overfit to idiosyncrasies of noisy genomic and proteomic data. Cross-validation methods are often used to obtain estimates of classification accuracy, but both simulations and case studies suggest that, when inappropriate methods are used, bias may ensue. Bias can be bypassed and generalizability can be tested by external (independent) validation. We evaluated 35 studies that have reported on external validation of a molecular classifier. We extracted information on study design and methodological features, and compared the performance of molecular classifiers in internal cross-validation versus external validation for 28 studies where both had been performed. We demonstrate that the majority of studies pursued cross-validation practices that are likely to overestimate classifier performance. Most studies were markedly underpowered to detect a 20% decrease in sensitivity or specificity between internal cross-validation and external validation [median power was 36% (IQR, 21–61%) and 29% (IQR, 15–65%), respectively]. The median reported classification performance for sensitivity and specificity was 94% and 98%, respectively, in cross-validation and 88% and 81% for independent validation. The relative diagnostic odds ratio was 3.26 (95% CI 2.04–5.21) for cross-validation versus independent validation. Finally, we reviewed all studies (n = 758) which cited those in our study sample, and identified only one instance of additional subsequent independent validation of these classifiers. In conclusion, these results document that many cross-validation practices employed in the literature are potentially biased and genuine progress in this field will require adoption of routine external validation of molecular classifiers, preferably in much larger studies than in current practice. PMID:21300697
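    To make the power statement concrete, the sketch below approximates the power of a two-sided two-proportion z-test to detect a 20-percentage-point drop in sensitivity. The 94% internal sensitivity comes from the abstract; the 50 diseased cases per data set is a hypothetical sample size, not a figure from the paper.

        from scipy.stats import norm

        def power_two_proportions(p1, p2, n1, n2, alpha=0.05):
            """Approximate power of a two-sided z-test comparing two independent
            proportions (e.g., sensitivity in cross-validation vs external validation)."""
            pbar = (p1 * n1 + p2 * n2) / (n1 + n2)
            se0 = (pbar * (1 - pbar) * (1 / n1 + 1 / n2)) ** 0.5   # SE under H0
            se1 = (p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2) ** 0.5  # SE under H1
            z_alpha = norm.ppf(1 - alpha / 2)
            z = (abs(p1 - p2) - z_alpha * se0) / se1
            return norm.cdf(z)

        # 94% internal sensitivity vs a 20-point drop, 50 diseased cases per set.
        print(round(power_two_proportions(0.94, 0.74, 50, 50), 2))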

  7. Computer-aided design of liposomal drugs: In silico prediction and experimental validation of drug candidates for liposomal remote loading.

    PubMed

    Cern, Ahuva; Barenholz, Yechezkel; Tropsha, Alexander; Goldblum, Amiram

    2014-01-10

    Previously we have developed and statistically validated Quantitative Structure Property Relationship (QSPR) models that correlate drugs' structural, physical and chemical properties as well as experimental conditions with the relative efficiency of remote loading of drugs into liposomes (Cern et al., J. Control. Release 160 (2012) 147-157). Herein, these models have been used to virtually screen a large drug database to identify novel candidate molecules for liposomal drug delivery. Computational hits were considered for experimental validation based on their predicted remote loading efficiency as well as additional considerations such as availability, recommended dose and relevance to the disease. Three compounds were selected for experimental testing which were confirmed to be correctly classified by our previously reported QSPR models developed with Iterative Stochastic Elimination (ISE) and k-Nearest Neighbors (kNN) approaches. In addition, 10 new molecules with known liposome remote loading efficiency that were not used by us in QSPR model development were identified in the published literature and employed as an additional model validation set. The external accuracy of the models was found to be as high as 82% or 92%, depending on the model. This study presents the first successful application of QSPR models for the computer-model-driven design of liposomal drugs. © 2013.
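    A generic k-nearest-neighbors classification workflow in Python (scikit-learn), of the kind referred to above; the descriptor matrix and labels are simulated, and this is not the authors' ISE or kNN model.

        import numpy as np
        from sklearn.neighbors import KNeighborsClassifier
        from sklearn.model_selection import cross_val_score

        # Hypothetical descriptor matrix (e.g., logP, pKa, molecular weight, ...)
        # and binary labels for "high" vs "low" remote-loading efficiency.
        rng = np.random.default_rng(42)
        X = rng.normal(size=(60, 5))
        y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=60) > 0).astype(int)

        knn = KNeighborsClassifier(n_neighbors=5)
        print("5-fold CV accuracy:", cross_val_score(knn, X, y, cv=5).mean().round(2))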

  8. Computer-aided design of liposomal drugs: in silico prediction and experimental validation of drug candidates for liposomal remote loading

    PubMed Central

    Cern, Ahuva; Barenholz, Yechezkel; Tropsha, Alexander; Goldblum, Amiram

    2014-01-01

    Previously we have developed and statistically validated Quantitative Structure Property Relationship (QSPR) models that correlate drugs’ structural, physical and chemical properties as well as experimental conditions with the relative efficiency of remote loading of drugs into liposomes (Cern et al., Journal of Controlled Release, 160 (2012) 147–157). Herein, these models have been used to virtually screen a large drug database to identify novel candidate molecules for liposomal drug delivery. Computational hits were considered for experimental validation based on their predicted remote loading efficiency as well as additional considerations such as availability, recommended dose and relevance to the disease. Three compounds were selected for experimental testing which were confirmed to be correctly classified by our previously reported QSPR models developed with Iterative Stochastic Elimination (ISE) and k-nearest neighbors (kNN) approaches. In addition, 10 new molecules with known liposome remote loading efficiency that were not used in QSPR model development were identified in the published literature and employed as an additional model validation set. The external accuracy of the models was found to be as high as 82% or 92%, depending on the model. This study presents the first successful application of QSPR models for the computer-model-driven design of liposomal drugs. PMID:24184343

  9. Quasi-experimental study designs series-paper 7: assessing the assumptions.

    PubMed

    Bärnighausen, Till; Oldenburg, Catherine; Tugwell, Peter; Bommer, Christian; Ebert, Cara; Barreto, Mauricio; Djimeu, Eric; Haber, Noah; Waddington, Hugh; Rockers, Peter; Sianesi, Barbara; Bor, Jacob; Fink, Günther; Valentine, Jeffrey; Tanner, Jeffrey; Stanley, Tom; Sierra, Eduardo; Tchetgen, Eric Tchetgen; Atun, Rifat; Vollmer, Sebastian

    2017-09-01

    Quasi-experimental designs are gaining popularity in epidemiology and health systems research, in particular for the evaluation of health care practice, programs, and policy, because they allow strong causal inferences without randomized controlled experiments. We describe the concepts underlying five important quasi-experimental designs: Instrumental Variables, Regression Discontinuity, Interrupted Time Series, Fixed Effects, and Difference-in-Differences designs. We illustrate each of the designs with an example from health research. We then describe the assumptions required for each of the designs to ensure valid causal inference and discuss the tests available to examine the assumptions. Copyright © 2017 Elsevier Inc. All rights reserved.
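    Of the five designs listed, Difference-in-Differences is the most direct to sketch: the treatment effect is the coefficient on the interaction of a treatment indicator with a post-period indicator. The Python example below uses simulated data with a known effect of 2.0; it is an illustration of the estimator, not of any example from the paper.

        import numpy as np
        import pandas as pd
        import statsmodels.formula.api as smf

        # Simulated two-group, two-period data with a true treatment effect of 2.0.
        rng = np.random.default_rng(7)
        n = 500
        treated = rng.integers(0, 2, n)
        post = rng.integers(0, 2, n)
        y = 1.0 + 0.5 * treated + 0.8 * post + 2.0 * treated * post + rng.normal(size=n)
        df = pd.DataFrame({"y": y, "treated": treated, "post": post})

        # The coefficient on treated:post is the difference-in-differences estimate.
        model = smf.ols("y ~ treated * post", data=df).fit()
        print(model.params["treated:post"])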

  10. External model validation of binary clinical risk prediction models in cardiovascular and thoracic surgery.

    PubMed

    Hickey, Graeme L; Blackstone, Eugene H

    2016-08-01

    Clinical risk-prediction models serve an important role in healthcare. They are used for clinical decision-making and measuring the performance of healthcare providers. To establish confidence in a model, external model validation is imperative. When designing such an external model validation study, thought must be given to patient selection, risk factor and outcome definitions, missing data, and the transparent reporting of the analysis. In addition, there are a number of statistical methods available for external model validation. Execution of a rigorous external validation study rests in proper study design, application of suitable statistical methods, and transparent reporting. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.
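    A small sketch of two statistical checks commonly used in external validation of a binary risk model: discrimination (ROC AUC) and calibration (logistic recalibration slope and intercept). The cohort, predicted risks, and outcomes below are simulated, and these particular methods are illustrative choices rather than ones prescribed by the paper.

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score

        rng = np.random.default_rng(3)

        # Hypothetical external cohort: predicted risks from an existing model
        # and the observed binary outcomes (the model slightly over-predicts).
        predicted_risk = rng.uniform(0.01, 0.99, size=1000)
        outcome = rng.binomial(1, predicted_risk * 0.8)

        # Discrimination: area under the ROC curve.
        print("AUC:", round(roc_auc_score(outcome, predicted_risk), 3))

        # Calibration: regress the outcome on the logit of predicted risk;
        # a slope near 1 and intercept near 0 indicate good calibration.
        logit = np.log(predicted_risk / (1 - predicted_risk)).reshape(-1, 1)
        cal = LogisticRegression().fit(logit, outcome)
        print("calibration slope:", round(cal.coef_[0][0], 2),
              "intercept:", round(cal.intercept_[0], 2))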

  11. Design, construction and validation of a portable care system for the daily telerehabiliatation of gait.

    PubMed

    Giansanti, Daniele; Morelli, Sandra; Maccioni, Giovanni; Brocco, Monica

    2013-10-01

    When designing a complete system of daily telerehabilitation, it should be borne in mind that properly designed methodologies should be provided for patients to execute specific motion tasks and for caregivers to assess the relevant parameters. Whether in hospital or at home, the system should feature two basic elements: (a) instrumented walkways and walking aids or supports, (b) equipment for the assessment of parameters. With gait as the focus, the idea was to design, construct and validate - as an alternative to the complex and expensive instruments currently used - a simple, portable kit that may be easily interfaced/integrated with the most common mechanical tools used in motion rehabilitation (instrumented walkways, aids, supports), with feedback to both the patient for self-monitoring and the trainer/therapist (present or remote) for clinical reporting. The proposed system consists of: one step-counter, three pairs of photo-emitter detectors, one central unit for collecting and processing the telemetrically transmitted data, a software interface on a dedicated PC, and a network adapter. The system has been successfully validated in a clinical application on two groups of 16 subjects at the 1st and 2nd level of the Tinetti test. The degree of acceptance by subjects and care-givers was high. The system was also successfully compared with an Inertial Measurement Unit, a de facto standard. The portable kit can be used with different rehabilitation tools and different ground roughness. The advantages are: (a) very low costs when compared with optoelectronic solutions and other portable solutions; (b) very high accuracy, also for subjects with balance problems; (c) good compatibility with any rehabilitative tool. Copyright © 2013 Elsevier Ireland Ltd. All rights reserved.

  12. Instrument development and validation of a quality scale for historical research papers (QSHRP): a pilot study.

    PubMed

    Kelly, Jacinta; Watson, Roger

    2014-12-01

    To report a pilot study for the development and validation of an instrument to measure quality in historical research papers. There are no set criteria to assess historical papers published in nursing journals. A three-phase mixed-method sequential confirmatory design. In 2012, we used a three-phase approach to item generation and content evaluation. In phase 1, we consulted nursing historians using an online survey comprising three open-ended questions and revised the items. In phase 2, we evaluated the revised items for relevance with expert historians using a 4-point Likert scale and Content Validity Index calculation. In phase 3, we conducted reliability testing of the instrument using a 3-point Likert scale. In phase 1, 121 responses were generated via the online survey and revised to 40 interrogatively phrased items. In phase 2, five items with an Item Content Validity Index score of ≥ 0.7 remained. In phase 3, responses from historians resulted in 100% agreement to questions 1, 2 and 4 and 89% and 78%, respectively, to questions 3 and 5. Items for the QSHRP have been identified, content validated and reliability tested. This scale improves on previous scales, which over-emphasized source criticism. However, a full-scale study is needed with nursing historians to increase its robustness. © 2014 John Wiley & Sons Ltd.
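    The item-level Content Validity Index used in phase 2 is simple to compute: the proportion of experts rating an item as relevant (3 or 4 on the 4-point scale). A hypothetical worked example in Python, with an invented panel of nine experts:

        def item_cvi(ratings, relevant=(3, 4)):
            """Item-level content validity index: share of experts rating the
            item as relevant (3 or 4 on a 4-point scale)."""
            return sum(r in relevant for r in ratings) / len(ratings)

        # Hypothetical panel of 9 experts rating one item.
        print(round(item_cvi([4, 3, 4, 2, 4, 3, 4, 4, 3]), 2))  # 8/9, about 0.89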

  13. Preliminary Validation of the Perceived Locus of Causality Scale for Academic Motivation in the Context of University Studies (PLOC-U)

    ERIC Educational Resources Information Center

    Sánchez de Miguel, Manuel; Lizaso, Izarne; Hermosilla, Daniel; Alcover, Carlos-Maria; Goudas, Marios; Arranz-Freijó, Enrique

    2017-01-01

    Background: Research has shown that self-determination theory can be useful in the study of motivation in sport and other forms of physical activity. The Perceived Locus of Causality (PLOC) scale was originally designed to study both. Aim: The current research presents and validates the new PLOC-U scale to measure academic motivation in the…

  14. 40 CFR 152.93 - Citation of a previously submitted valid study.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... Data Submitters' Rights § 152.93 Citation of a previously submitted valid study. An applicant may demonstrate compliance for a data requirement by citing a valid study previously submitted to the Agency. The... the original data submitter, the applicant may cite the study only in accordance with paragraphs (b...

  15. Turkish Version of Kolcaba's Immobilization Comfort Questionnaire: A Validity and Reliability Study.

    PubMed

    Tosun, Betül; Aslan, Özlem; Tunay, Servet; Akyüz, Aygül; Özkan, Hüseyin; Bek, Doğan; Açıksöz, Semra

    2015-12-01

    The purpose of this study was to determine the validity and reliability of the Turkish version of the Immobilization Comfort Questionnaire (ICQ). The sample used in this methodological study consisted of 121 patients undergoing lower extremity arthroscopy in a training and research hospital. The validity study of the questionnaire assessed language validity, structural validity and criterion validity. Structural validity was evaluated via exploratory factor analysis. Criterion validity was evaluated by assessing the correlation between the visual analog scale (VAS) scores (i.e., the comfort and pain VAS scores) and the ICQ scores using Spearman's correlation test. The Kaiser-Meyer-Olkin coefficient and Bartlett's test of sphericity were used to determine the suitability of the data for factor analysis. Internal consistency was evaluated to determine reliability. The data were analyzed with SPSS version 15.00 for Windows. Descriptive statistics were presented as frequencies, percentages, means and standard deviations. A p value ≤ .05 was considered statistically significant. A moderate positive correlation was found between the ICQ scores and the VAS comfort scores; a moderate negative correlation was found between the ICQ and the VAS pain measures in the criterion validity analysis. Cronbach α values of .75 and .82 were found for the first and second measurements, respectively. The findings of this study reveal that the ICQ is a valid and reliable tool for assessing the comfort of patients in Turkey who are immobilized because of lower extremity orthopedic problems. Copyright © 2015. Published by Elsevier B.V.

  16. Preliminary validation study of the Russian Birmingham Cognitive Screen.

    PubMed

    Kuzmina, E; Humphreys, G W; Riddoch, M J; Skvortsov, A A; Weekes, B S

    2018-02-01

    The Birmingham Cognitive Screen (BCoS) is designed for use with individuals who have acquired language impairment following stroke. Our goal was to develop a Russian version of the BCoS (Rus-BCoS) by translating the battery following cultural and linguistic adaptations and establishing preliminary data on its psychometric properties. Fifty patients with left-hemisphere stroke were recruited, of whom 98% were diagnosed with mild to moderate aphasia. To check whether the Rus-BCoS provides stable and consistent scores, internal consistency, test-retest, and interrater types of reliability were determined. Eight participants with stroke and 20 neurologically intact participants were assessed twice. To inspect the discriminative power of the battery, 63 participants without brain impairment were tested with the Rus-BCoS. Additionally, the Russian version of the Montreal Cognitive Assessment (MoCA), Quantitative Assessment of Speech in Aphasia, and Luria's Neuropsychological Assessment Battery were used to examine convergent validity, sensitivity, and specificity of the Rus-BCoS. The internal consistency as well as test-retest and interrater reliability of the Rus-BCoS satisfied criteria for research use. Performance on a majority of tasks in the battery correlated significantly with independently validated tests that putatively measure similar cognitive processes. Critically, all patients with aphasia returned nonzero scores in at least one task in all the Rus-BCoS sections, with the exception of the Controlled Attention section where two patients with severe executive control deficits could not perform. The Rus-BCoS shows promise as a comprehensive cognitive screening tool that can be used by clinicians working with Russian-speaking persons experiencing poststroke aphasia after much further validation and development of reliable normative standards. Given a lack of quantitative neuropsychological assessment tools in Russia, however, we contend the Rus-BCoS offers

  17. Development and Validation of a Rubric for Diagnosing Students' Experimental Design Knowledge and Difficulties.

    PubMed

    Dasgupta, Annwesa P; Anderson, Trevor R; Pelaez, Nancy

    2014-01-01

    It is essential to teach students about experimental design, as this facilitates their deeper understanding of how most biological knowledge was generated and gives them tools to perform their own investigations. Despite the importance of this area, surprisingly little is known about what students actually learn from designing biological experiments. In this paper, we describe a rubric for experimental design (RED) that can be used to measure knowledge of and diagnose difficulties with experimental design. The development and validation of the RED was informed by a literature review and empirical analysis of undergraduate biology students' responses to three published assessments. Five areas of difficulty with experimental design were identified: the variable properties of an experimental subject; the manipulated variables; measurement of outcomes; accounting for variability; and the scope of inference appropriate for experimental findings. Our findings revealed that some difficulties, documented some 50 yr ago, still exist among our undergraduate students, while others remain poorly investigated. The RED shows great promise for diagnosing students' experimental design knowledge in lecture settings, laboratory courses, research internships, and course-based undergraduate research experiences. It also shows potential for guiding the development and selection of assessment and instructional activities that foster experimental design. © 2014 A. P. Dasgupta et al. CBE—Life Sciences Education © 2014 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  18. World Workshop on Oral Medicine VI: an international validation study of clinical competencies for advanced training in oral medicine.

    PubMed

    Steele, John C; Clark, Hadleigh J; Hong, Catherine H L; Jurge, Sabine; Muthukrishnan, Arvind; Kerr, A Ross; Wray, David; Prescott-Clements, Linda; Felix, David H; Sollecito, Thomas P

    2015-08-01

    To explore international consensus for the validation of clinical competencies for advanced training in Oral Medicine. An electronic survey of clinical competencies was designed. The survey was sent to and completed by identified international stakeholders during a 10-week period. To be validated, an individual competency had to achieve 90% or greater consensus to keep it in its current format. Stakeholders from 31 countries responded. High consensus agreement was achieved, with 93 of 101 (92%) competencies exceeding the benchmark for agreement. Only 8 warranted further attention and were reviewed by a focus group. No additional competencies were suggested. This is the first international validation study of clinical competencies for advanced training in Oral Medicine. These validated clinical competencies could provide a model for countries developing an advanced training curriculum for Oral Medicine and also inform review of existing curricula. Copyright © 2015 Elsevier Inc. All rights reserved.

  19. Validity of a Newly-Designed Rectilinear Stepping Ergometer Submaximal Exercise Test to Assess Cardiorespiratory Fitness

    PubMed Central

    Zhang, Rubin; Zhan, Likui; Sun, Shaoming; Peng, Wei; Sun, Yining

    2017-01-01

    The maximum oxygen uptake (V̇O2 max), determined from graded maximal or submaximal exercise tests, is used to classify the cardiorespiratory fitness level of individuals. The purpose of this study was to examine the validity and reliability of the YMCA submaximal exercise test protocol performed on a newly-designed rectilinear stepping ergometer (RSE), which uses up-and-down reciprocating vertical motion in place of conventional circular motion and gives a precise measurement of workload, to determine V̇O2 max in young healthy male adults. Thirty-two young healthy male adults (age range: 20-35 years; height: 1.75 ± 0.05 m; weight: 67.5 ± 8.6 kg) first participated in a maximal-effort graded exercise test using a cycle ergometer (CE) to directly obtain measured V̇O2 max. Subjects then completed the progressive multistage test on the RSE beginning at 50 W and including additional stages of 70, 90, 110, 130, and 150 W, and the RSE YMCA submaximal test consisting of a workload increase every 3 minutes until the termination criterion was reached. A metabolic equation was derived from the RSE multistage exercise test to predict oxygen consumption (V̇O2) from power output (W) during the submaximal exercise test: V̇O2 (mL·min-1) = 12.4 × W (watts) + 3.5 mL·kg-1·min-1 × M + 160 mL·min-1 (R2 = 0.91, standard error of the estimate (SEE) = 134.8 mL·min-1). A high correlation was observed between the RSE YMCA estimated V̇O2 max and the CE measured V̇O2 max (r = 0.87). The mean difference between estimated and measured V̇O2 max was 2.5 mL·kg-1·min-1, with an SEE of 3.55 mL·kg-1·min-1. The data suggest that the RSE YMCA submaximal exercise test is valid for predicting V̇O2 max in young healthy male adults. The findings show that the rectilinear stepping exercise is an effective submaximal exercise for predicting V̇O2 max. The newly-designed RSE may be potentially further developed as an alternative ergometer for assessing cardiorespiratory fitness and the
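    The metabolic equation quoted above can be applied directly; the short Python sketch below implements it for a hypothetical 67.5 kg participant (the sample's mean body mass) stepping at 110 W. M is taken to be body mass in kg, as implied by the units.

        def predicted_vo2(power_w, body_mass_kg):
            """V̇O2 (mL·min-1) from the regression reported in the abstract:
            12.4 x power (W) + 3.5 mL·kg-1·min-1 x body mass + 160 mL·min-1."""
            return 12.4 * power_w + 3.5 * body_mass_kg + 160

        # Worked example: 67.5 kg participant stepping at 110 W.
        vo2 = predicted_vo2(110, 67.5)
        print(round(vo2), "mL·min-1 =", round(vo2 / 67.5, 1), "mL·kg-1·min-1")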

  20. Design study of dedicated brain PET with polyhedron geometry.

    PubMed

    Shi, Han; Du, Dong; Xu, JianFeng; Su, Zhihong; Peng, Qiyu

    2015-01-01

    Despite being the conventional choice, whole body PET cameras with a 76 cm diameter ring are not the optimal means of human brain imaging. In fact, a dedicated brain PET with a better geometrical structure has the potential to achieve a higher sensitivity, a higher signal-to-noise ratio, and a better imaging performance. In this study, a polyhedron geometrical dedicated brain PET (a dodecahedron design) is compared to three other candidates via their geometrical efficiencies by calculating the Solid Angle Fractions (SAF); the three other candidates include a spherical cap design, a cylindrical design, and the conventional whole body PET. The spherical cap and the dodecahedron have an identical SAF that is 58.4% higher than that of a 30 cm diameter cylinder and 5.44 times higher than that of a 76 cm diameter cylinder. Conceptual polygon-shaped detectors (including pentagon and hexagon detectors based on a PMT light-sharing scheme instead of the conventional square-shaped block detector module) are presented for the polyhedron PET design. Monte Carlo simulations are performed in order to validate the detector decoding. The results show that crystals in a pentagon-shaped detector can be successfully decoded by Anger logic. The new detector designs support the polyhedron PET investigation.
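    A short Monte Carlo sketch, in Python, of a solid-angle-fraction calculation for a point source at the centre of a cylindrical detector ring. The 22 cm axial length and the two ring diameters below are illustrative assumptions, so the printed values will not reproduce the paper's reported ratios.

        import numpy as np

        def cylinder_saf(radius_cm, axial_len_cm, n=1_000_000, seed=0):
            """Monte Carlo solid-angle fraction seen from a point source at the
            centre of a cylindrical detector ring of given radius and axial length."""
            rng = np.random.default_rng(seed)
            cos_t = rng.uniform(-1.0, 1.0, n)          # isotropic directions
            sin_t = np.sqrt(1.0 - cos_t ** 2)
            # A ray hits the ring if its axial excursion at the ring radius
            # stays within half the axial length: |cos| * R <= (L/2) * sin.
            hits = np.abs(cos_t) * radius_cm <= (axial_len_cm / 2.0) * sin_t
            return hits.mean()

        # Hypothetical geometries: a 76 cm diameter whole-body ring vs a 30 cm
        # diameter brain-sized ring, both with an assumed 22 cm axial field of view.
        print(round(cylinder_saf(38.0, 22.0), 3), round(cylinder_saf(15.0, 22.0), 3))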

  1. A validation study of public health knowledge, skills, social responsibility and applied learning.

    PubMed

    Vackova, Dana; Chen, Coco K; Lui, Juliana N M; Johnston, Janice M

    2018-06-22

    To design and validate a questionnaire to measure medical students' Public Health (PH) knowledge, skills, social responsibility and applied learning as indicated in the four domains recommended by the Association of Schools & Programmes of Public Health (ASPPH). A cross-sectional study was conducted to develop an evaluation tool for PH undergraduate education through item generation, reduction, refinement and validation. The 74 preliminary items derived from the existing literature were reduced to 55 items based on expert panel review which included those with expertise in PH, psychometrics and medical education, as well as medical students. Psychometric properties of the preliminary questionnaire were assessed as follows: frequency of endorsement for item variance; principal component analysis (PCA) with varimax rotation for item reduction and factor estimation; Cronbach's Alpha, item-total correlation and test-retest validity for internal consistency and reliability. PCA yielded five factors: PH Learning Experience (6 items); PH Risk Assessment and Communication (5 items); Future Use of Evidence in Practice (6 items); Recognition of PH as a Scientific Discipline (4 items); and PH Skills Development (3 items), explaining 72.05% variance. Internal consistency and reliability tests were satisfactory (Cronbach's Alpha ranged from 0.87 to 0.90; item-total correlation > 0.59). Lower paired test-retest correlations reflected instability in a social science environment. An evaluation tool for community-centred PH education has been developed and validated. The tool measures PH knowledge, skills, social responsibilities and applied learning as recommended by the internationally recognised Association of Schools & Programmes of Public Health (ASPPH).
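    The item-reduction step above used principal component analysis with varimax rotation. scikit-learn's PCA has no varimax option, so the sketch below uses the closely related FactorAnalysis estimator with rotation="varimax" as a stand-in, on simulated Likert data; because the simulated items are random, most loadings will fall below the threshold and the printed lists may be empty.

        import numpy as np
        from sklearn.decomposition import FactorAnalysis

        # Hypothetical 55-item response matrix from 300 students (Likert 1-5).
        rng = np.random.default_rng(5)
        X = rng.integers(1, 6, size=(300, 55)).astype(float)

        # Extract five factors with an orthogonal (varimax) rotation and list
        # which items load above a chosen threshold on each factor.
        fa = FactorAnalysis(n_components=5, rotation="varimax").fit(X)
        loadings = fa.components_.T  # items x factors
        for f in range(5):
            items = np.where(np.abs(loadings[:, f]) > 0.4)[0]
            print(f"factor {f + 1}: items {items.tolist()}")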

  2. Quantitative impurity analysis of monoclonal antibody size heterogeneity by CE-LIF: example of development and validation through a quality-by-design framework.

    PubMed

    Michels, David A; Parker, Monica; Salas-Solano, Oscar

    2012-03-01

    This paper describes the framework of quality by design applied to the development, optimization and validation of a sensitive capillary electrophoresis-sodium dodecyl sulfate (CE-SDS) assay for monitoring impurities, produced in the manufacture of therapeutic MAb products, that potentially impact drug efficacy or patient safety. Drug substance or drug product samples are derivatized with fluorogenic 3-(2-furoyl)quinoline-2-carboxaldehyde and nucleophilic cyanide before separation by CE-SDS coupled to LIF detection. Three design-of-experiments studies enabled critical labeling parameters to meet method requirements for detecting minor impurities while building precision and robustness into the assay during development. The screening design predicted optimal conditions to control labeling artifacts, while two full factorial designs demonstrated method robustness through control of the temperature and cyanide parameters within the normal operating range. Subsequent validation according to the guidelines of the International Conference on Harmonisation showed that the CE-SDS/LIF assay was specific, accurate, and precise (RSD ≤ 0.8%) for relative peak distribution and linear (R > 0.997) over the range of 0.5-1.5 mg/mL, with LOD and LOQ of 10 ng/mL and 35 ng/mL, respectively. Validation confirmed the system suitability criteria used as a level of control to ensure reliable method performance. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
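    A two-level full factorial design like the ones mentioned can be enumerated in a few lines of Python; the three labeling factors and their levels below are hypothetical placeholders, not the published operating ranges.

        from itertools import product

        # Hypothetical two-level full factorial over three labeling parameters.
        factors = {
            "temperature_C": [25, 40],
            "cyanide_mM": [5, 20],
            "dye_excess_fold": [10, 30],
        }
        runs = [dict(zip(factors, levels)) for levels in product(*factors.values())]
        for i, run in enumerate(runs, 1):
            print(i, run)  # 2^3 = 8 runs covering every combination of levels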

  3. Gathering Validity Evidence for Surgical Simulation: A Systematic Review.

    PubMed

    Borgersen, Nanna Jo; Naur, Therese M H; Sørensen, Stine M D; Bjerrum, Flemming; Konge, Lars; Subhi, Yousif; Thomsen, Ann Sofia S

    2018-06-01

    To identify current trends in the use of validity frameworks in surgical simulation, to provide an overview of the evidence behind the assessment of technical skills in all surgical specialties, and to present recommendations and guidelines for future validity studies. Validity evidence for assessment tools used in the evaluation of surgical performance is of paramount importance to ensure valid and reliable assessment of skills. We systematically reviewed the literature by searching 5 databases (PubMed, EMBASE, Web of Science, PsycINFO, and the Cochrane Library) for studies published from January 1, 2008, to July 10, 2017. We included original studies evaluating simulation-based assessments of health professionals in surgical specialties and extracted data on surgical specialty, simulator modality, participant characteristics, and the validity framework used. Data were synthesized qualitatively. We identified 498 studies with a total of 18,312 participants. Publications involving validity assessments in surgical simulation more than doubled from 2008 to 2010 (∼30 studies/year) to 2014 to 2016 (∼70 to 90 studies/year). Only 6.6% of the studies used the recommended contemporary validity framework (Messick). The majority of studies used outdated frameworks such as face validity. Significant differences were identified across surgical specialties. The evaluated assessment tools were mostly inanimate or virtual reality simulation models. An increasing number of studies have gathered validity evidence for simulation-based assessments in surgical specialties, but the use of outdated frameworks remains common. To address the current practice, this paper presents guidelines on how to use the contemporary validity framework when designing validity studies.

  4. The Cambridge Otology Quality of Life Questionnaire: an otology-specific patient-recorded outcome measure. A paper describing the instrument design and a report of preliminary reliability and validity.

    PubMed

    Martin, T P C; Moualed, D; Paul, A; Ronan, N; Tysome, J R; Donnelly, N P; Cook, R; Axon, P R

    2015-04-01

    The Cambridge Otology Quality of Life Questionnaire (COQOL) is a patient-recorded outcome measurement (PROM) designed to quantify the quality of life of patients attending otology clinics. Item-reduction model. A systematically designed long-form version (74 items) was tested with patient focus groups before being presented to adult otology patients (n = 137). Preliminary item analysis tested reliability, reducing the COQOL to 24 questions. This was then presented in conjunction with the SF-36 (V1) questionnaire to a total of 203 patients. Subsequently, the questionnaires were re-presented at T + 3 months, and patients recorded whether they felt their condition had improved, deteriorated or remained the same. Non-responders were contacted by post. The correlation between COQOL scores and patient perception of change was examined to analyse content validity. Teaching hospital and university psychology department. Adult patients attending otology clinics with a wide range of otological conditions. Item reliability was measured by item–total correlation, internal consistency and test–retest reliability. Validity was measured by correlation between COQOL scores and patient-reported symptom change. Reliability: the COQOL showed excellent internal consistency at both initial presentation (α = 0.90) and 3 months later (α = 0.93). Validity: one-way analysis of variance showed a significant difference between groups reporting change and those reporting no change in quality of life (F(2, 80) = 5.866, P < 0.01). The COQOL is the first otology-specific PROM. Initial studies demonstrate excellent reliability and encouraging preliminary criterion validity; further studies will allow a deeper validation of the instrument.

  5. PLCO Ovarian Phase III Validation Study — EDRN Public Portal

    Cancer.gov

    Our preliminary data indicate that the performance of CA 125 as a screening test for ovarian cancer can be improved upon by additional biomarkers. With completion of one additional validation step, we will be ready to test the performance of a consensus marker panel in a phase III validation study. Given the original aims of the PLCO trial, we believe that the PLCO represents an ideal longitudinal cohort offering specimens for phase III validation of ovarian cancer biomarkers.

  6. Validating a Fidelity Scale to Understand Intervention Effects in Classroom-Based Studies

    ERIC Educational Resources Information Center

    Buckley, Pamela; Moore, Brooke; Boardman, Alison G.; Arya, Diana J.; Maul, Andrew

    2017-01-01

    K-12 intervention studies often include fidelity of implementation (FOI) as a mediating variable, though most do not report the validity of fidelity measures. This article discusses the critical need for validated FOI scales. To illustrate our point, we describe the development and validation of the Implementation Validity Checklist (IVC-R), an…

  7. Validation study of human figure drawing test in a Colombian school children population.

    PubMed

    Vélez van Meerbeke, Alberto; Sandoval-Garcia, Carolina; Ibáñez, Milciades; Talero-Gutiérrez, Claudia; Fiallo, Dolly; Halliday, Karen

    2011-05-01

    The aim of this article was to assess the validity of the emotional and developmental components of the Koppitz human figure drawing test. We evaluated 2420 children's drawings available in a database from a previous cross-sectional study designed to determine the prevalence of neurological diseases in children between 0 and 12 years old in Bogota schools. The drawings were scored using the criteria proposed by Koppitz and classified into 16 groups according to age, gender, and presence/absence of learning or attention problems. The overall results were then compared with the normative study to assess whether the descriptive parameters of the two populations differed significantly. There were no significant differences associated with presence/absence of learning and attention disorders or school attended within the overall sample. An interrater reliability test was performed to ensure homogeneity of scoring across the evaluator team. There were significant differences between this population and that of the original study. New scoring tables contextualized for our population, based on the frequency of appearance in this sample, are presented. We conclude that various ethnic, social, and cultural factors can influence the way children draw the human figure. It is thus important to establish local reference values to adequately distinguish between normality and abnormality. The new scoring tables proposed here should be followed up with a clinical study to corroborate their validity.

  8. Dimensions of Intuition: First-Round Validation Studies

    ERIC Educational Resources Information Center

    Vrugtman, Rosanne

    2009-01-01

    This study utilized confirmatory factor analysis (CFA), canonical correlation analysis (CCA), regression analysis (RA), and correlation analysis (CA) for first-round validation of the researcher's Dimensions of Intuition (DOI) instrument. The DOI examined 25 personal characteristics and situations purportedly predictive of intuition. Data was…

  9. When Educational Material Is Delivered: A Mixed Methods Content Validation Study of the Information Assessment Method

    PubMed Central

    2017-01-01

    Background The Information Assessment Method (IAM) allows clinicians to report the cognitive impact, clinical relevance, intention to use, and expected patient health benefits associated with clinical information received by email. More than 15,000 Canadian physicians and pharmacists use the IAM in continuing education programs. In addition, information providers can use IAM ratings and feedback comments from clinicians to improve their products. Objective Our general objective was to validate the IAM questionnaire for the delivery of educational material (ecological and logical content validity). Our specific objectives were to measure the relevance and evaluate the representativeness of IAM items for assessing information received by email. Methods A 3-part mixed methods study was conducted (convergent design). In part 1 (quantitative longitudinal study), the relevance of IAM items was measured. Participants were 5596 physician members of the Canadian Medical Association who used the IAM. A total of 234,196 ratings were collected in 2012. The relevance of IAM items with respect to their main construct was calculated using descriptive statistics (relevance ratio R). In part 2 (qualitative descriptive study), the representativeness of IAM items was evaluated. A total of 15 family physicians completed semistructured face-to-face interviews. For each construct, we evaluated the representativeness of IAM items using a deductive-inductive thematic qualitative data analysis. In part 3 (mixing quantitative and qualitative parts), results from quantitative and qualitative analyses were reviewed, juxtaposed in a table, discussed with experts, and integrated. Thus, our final results are derived from the views of users (ecological content validation) and experts (logical content validation). Results Of the 23 IAM items, 21 were validated for content, while 2 were removed. In part 1 (quantitative results), 21 items were deemed relevant, while 2 items were deemed not relevant

  10. When Educational Material Is Delivered: A Mixed Methods Content Validation Study of the Information Assessment Method.

    PubMed

    Badran, Hani; Pluye, Pierre; Grad, Roland

    2017-03-14

    The Information Assessment Method (IAM) allows clinicians to report the cognitive impact, clinical relevance, intention to use, and expected patient health benefits associated with clinical information received by email. More than 15,000 Canadian physicians and pharmacists use the IAM in continuing education programs. In addition, information providers can use IAM ratings and feedback comments from clinicians to improve their products. Our general objective was to validate the IAM questionnaire for the delivery of educational material (ecological and logical content validity). Our specific objectives were to measure the relevance and evaluate the representativeness of IAM items for assessing information received by email. A 3-part mixed methods study was conducted (convergent design). In part 1 (quantitative longitudinal study), the relevance of IAM items was measured. Participants were 5596 physician members of the Canadian Medical Association who used the IAM. A total of 234,196 ratings were collected in 2012. The relevance of IAM items with respect to their main construct was calculated using descriptive statistics (relevance ratio R). In part 2 (qualitative descriptive study), the representativeness of IAM items was evaluated. A total of 15 family physicians completed semistructured face-to-face interviews. For each construct, we evaluated the representativeness of IAM items using a deductive-inductive thematic qualitative data analysis. In part 3 (mixing quantitative and qualitative parts), results from quantitative and qualitative analyses were reviewed, juxtaposed in a table, discussed with experts, and integrated. Thus, our final results are derived from the views of users (ecological content validation) and experts (logical content validation). Of the 23 IAM items, 21 were validated for content, while 2 were removed. In part 1 (quantitative results), 21 items were deemed relevant, while 2 items were deemed not relevant (R=4.86% [N=234,196] and R=3.04% [n
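
    A minimal sketch of a descriptive relevance-ratio calculation; the exact definition of R used in the IAM study is not given in the abstract, so here R is simply assumed to be the proportion of ratings in which an item was endorsed, using toy data.

    ```python
    # Relevance ratio R assumed to be: endorsements of an item / total ratings.
    def relevance_ratio(ratings, item):
        """ratings: list of sets of endorsed item labels, one set per rating."""
        endorsed = sum(1 for r in ratings if item in r)
        return endorsed / len(ratings) if ratings else 0.0

    # Toy data: three ratings, each endorsing a subset of hypothetical item labels.
    ratings = [{"cognitive_impact", "relevance"}, {"relevance"}, {"intention_to_use"}]
    print(f"R(relevance) = {relevance_ratio(ratings, 'relevance'):.2%}")
    ```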

  11. Design and simulation of novel laparoscopic renal denervation system: a feasibility study.

    PubMed

    Ye, Eunbi; Baik, Jinhwan; Lee, Seunghyun; Ryu, Seon Young; Yang, Sunchoel; Choi, Eue-Keun; Song, Won Hoon; Yuk, Hyeong Dong; Jeong, Chang Wook; Park, Sung-Min

    2018-05-18

    In this study, we propose a novel laparoscopy-based renal denervation (RDN) system for treating patients with resistant hypertension. In this feasibility study, we investigated whether our proposed surgical instrument can ablate renal nerves from outside of the renal artery safely and effectively and can overcome the depth-related limitations of the previous catheter-based system with less damage to the arterial walls. We designed a looped bipolar electrosurgical instrument to be used with the laparoscopy-based RDN system. The tip of the instrument wraps around the renal artery and delivers radio-frequency (RF) energy. We evaluated the thermal distribution via a simulation study on a numerical model designed using histological data and validated the results in an in vitro study. Finally, to show the effectiveness of this system, we compared the performance of our system with that of a catheter-based RDN system through simulations. Simulation results were within the 95% confidence intervals of the in vitro experimental results. The validated results demonstrated that the proposed laparoscopy-based RDN system produces an effective thermal distribution for the removal of renal sympathetic nerves without damaging the arterial wall and addresses the depth limitation of the catheter-based RDN system. We developed a novel laparoscope-based electrosurgical RDN method for hypertension treatment. The feasibility of our system was confirmed through a simulation study as well as in vitro experiments. Our proposed method could be an effective treatment for resistant hypertension as well as central nervous system diseases.

  12. Validation study of a Chinese version of Partners in Health in Hong Kong (C-PIH HK).

    PubMed

    Chiu, Teresa Mei Lee; Tam, Katharine Tai Wo; Siu, Choi Fong; Chau, Phyllis Wai Ping; Battersby, Malcolm

    2017-01-01

    The Partners in Health (PIH) scale is a measure designed to assess the generic knowledge, attitudes, behaviors, and impacts of self-management. A cross-cultural adaptation of the PIH for use in Hong Kong was evaluated in this study. This paper reports the validity and reliability of the Chinese version of the PIH (C-PIH[HK]). A 12-item PIH was translated using a forward-backward translation technique and reviewed by individuals with chronic diseases and health professionals. A total of 209 individuals with chronic diseases completed the scale. The construct validity, internal consistency, and test-retest reliability were evaluated in two waves. The findings in Wave 1 (n = 73) indicated acceptable psychometric properties of the C-PIH(HK) but supported the adaptation of question 5 to improve the cultural relevance, validity, and reliability of the scale. An adapted version of the C-PIH(HK) was evaluated in Wave 2. The findings in Wave 2 (n = 136) demonstrated good construct validity and internal consistency of the C-PIH(HK). A principal component analysis with Oblimin rotation yielded a 3-factor solution, and the Cronbach's alphas of the subscales ranged from 0.773 to 0.845. Participants were asked whether they perceived the self-management workshops they attended and the education provided by health professionals as useful or not. The results showed that the C-PIH(HK) was able to discriminate between those who agreed and those who disagreed about the usefulness of individual health education (p < 0.0001 in all subscales) and workshops (p < 0.001 in the knowledge subscale), as hypothesized. The test-retest reliability was high (ICC = 0.818). A culturally adapted version of the PIH for use in Hong Kong was evaluated. The study supported good construct validity, discriminant validity, internal consistency, and test-retest reliability of the C-PIH(HK).
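
    A minimal sketch of the internal-consistency statistic (Cronbach's alpha) reported for the C-PIH(HK) subscales, computed here from synthetic item scores rather than study data.

    ```python
    # Cronbach's alpha for one subscale: alpha = k/(k-1) * (1 - sum(item var) / total var).
    import numpy as np

    def cronbach_alpha(items):
        """items: 2-D array, rows = respondents, columns = items of one subscale."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars / total_var)

    rng = np.random.default_rng(0)
    base = rng.integers(0, 9, size=(50, 1))               # shared trait level per respondent
    subscale = base + rng.integers(-1, 2, size=(50, 4))   # four correlated synthetic items
    print(f"alpha = {cronbach_alpha(subscale):.3f}")
    ```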

  13. Applied virtual reality in aerospace design

    NASA Technical Reports Server (NTRS)

    Hale, Joseph P.

    1995-01-01

    A virtual reality (VR) applications program has been under development at the Marshall Space Flight Center (MSFC) since 1989. The objectives of the MSFC VR Applications Program are to develop, assess, validate, and utilize VR in hardware development, operations development and support, mission operations training and science training. Before VR can be used with confidence in a particular application, VR must be validated for that class of applications. For that reason, specific validation studies for selected classes of applications have been proposed and are currently underway. These include macro-ergonomic 'control room class' design analysis, Spacelab stowage reconfiguration training, a full-body microgravity functional reach simulator, a gross anatomy teaching simulator, and micro-ergonomic design analysis. This paper describes the MSFC VR Applications Program and the validation studies.

  14. Priority issues, study designs and geographical distribution in nutrition journals.

    PubMed

    Ortiz-Moncada, R; González-Zapata, L; Ruiz-Cantero, M T; Clemente-Gómez, V

    2011-01-01

    The increasing number of articles published in nutrition reflects the field's relevance to the scientific community. The characteristics and quality of nutritional studies determine whether readers can obtain valid conclusions from them, as well as their usefulness for evidence-based strategic policies. To determine the characteristics of papers published in nutrition journals. Descriptive study design. We reviewed 330 original papers published between January and June 2007 in the American Journal of Clinical Nutrition (AJCN), Journal of Nutrition, European Journal of Nutrition, European Journal of Clinical Nutrition and Public Health Nutrition. We classified them according to the subjects studied, risk factors, study design and country of origin. About half the papers studied healthy people (53.3%). The most frequent illness was obesity (13.9%). Food consumption was the most frequently studied risk factor (63.3%). Social factors appeared in only 3.6% of the papers. Clinical trials were the most common analytical design (31.8%), mainly in the AJCN (45.6%). Cross-sectional studies were the most frequent type of observational design (37.9%). Ten countries produced over half of the papers (51.3%). The US published the highest number of papers (20.6%), whilst developing countries made only scarce contributions to the scientific literature on nutrition. Most of the papers had inferential power. They generally studied both healthy and sick subjects, coinciding with the aims of international scientific policies. However, the topics covered reflect a clear bias, prioritizing problems pertaining to developed countries. Social determinants of health should also be considered, along with behavioral and biological risk factors.

  15. Reversed phase HPLC for strontium ranelate: Method development and validation applying experimental design.

    PubMed

    Kovács, Béla; Kántor, Lajos Kristóf; Croitoru, Mircea Dumitru; Kelemen, Éva Katalin; Obreja, Mona; Nagy, Előd Ernő; Székely-Szentmiklósi, Blanka; Gyéresi, Árpád

    2018-06-01

    A reversed-phase HPLC (RP-HPLC) method was developed for strontium ranelate using a full factorial, screening experimental design. The analytical procedure was validated according to international guidelines for linearity, selectivity, sensitivity, accuracy and precision. A separate experimental design was used to demonstrate the robustness of the method. Strontium ranelate was eluted at 4.4 minutes and showed no interference with the excipients used in the formulation, at 321 nm. The method is linear in the range of 20-320 μg mL⁻¹ (R2 = 0.99998). Recovery, tested in the range of 40-120 μg mL⁻¹, was found to be 96.1-102.1 %. Intra-day and intermediate precision RSDs ranged from 1.0-1.4 % and 1.2-1.4 %, respectively. The limit of detection and limit of quantitation were 0.06 and 0.20 μg mL⁻¹, respectively. The proposed technique is fast, cost-effective, reliable and reproducible, and is recommended for the routine analysis of strontium ranelate.
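
    A minimal sketch of the residual-standard-deviation approach to LOD and LOQ from ICH Q2(R1) (LOD = 3.3σ/S, LOQ = 10σ/S, with S the calibration slope); the abstract does not state which ICH approach was applied, and the calibration data below are synthetic, not the study's measurements.

    ```python
    # LOD/LOQ from a calibration line: sigma = residual SD, S = slope (ICH Q2(R1) style).
    import numpy as np

    conc = np.array([20, 80, 140, 200, 260, 320], dtype=float)            # ug/mL
    area = 1.52 * conc + np.array([0.3, -0.2, 0.1, -0.4, 0.2, 0.0])       # synthetic peak areas

    slope, intercept = np.polyfit(conc, area, 1)
    residuals = area - (slope * conc + intercept)
    sigma = residuals.std(ddof=2)      # ddof=2: two fitted parameters

    lod = 3.3 * sigma / slope
    loq = 10.0 * sigma / slope
    print(f"LOD ≈ {lod:.3f} ug/mL, LOQ ≈ {loq:.3f} ug/mL")
    ```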

  16. Enabling Large-Scale Design, Synthesis and Validation of Small Molecule Protein-Protein Antagonists

    PubMed Central

    Koes, David; Khoury, Kareem; Huang, Yijun; Wang, Wei; Bista, Michal; Popowicz, Grzegorz M.; Wolf, Siglinde; Holak, Tad A.; Dömling, Alexander; Camacho, Carlos J.

    2012-01-01

    Although there is no shortage of potential drug targets, there are only a handful of known low-molecular-weight inhibitors of protein-protein interactions (PPIs). One problem is that current efforts are dominated by low-yield high-throughput screening, whose rigid framework is not suitable for the diverse chemotypes present in PPIs. Here, we developed a novel pharmacophore-based interactive screening technology that builds on the role that anchor residues, or deeply buried hot spots, play in PPIs, and redesigns these entry points with anchor-biased virtual multicomponent reactions, delivering tens of millions of readily synthesizable novel compounds. Application of this approach to the MDM2/p53 cancer target led to high hit rates, resulting in a large and diverse set of confirmed inhibitors, and co-crystal structures validate the designed compounds. Our unique open-access technology promises to expand chemical space and the exploration of the human interactome by leveraging in-house small-scale assays and user-friendly chemistry to rationally design ligands for PPIs with known structure. PMID:22427896

  17. Theoretical design and analysis of multivolume digital assays with wide dynamic range validated experimentally with microfluidic digital PCR.

    PubMed

    Kreutz, Jason E; Munson, Todd; Huynh, Toan; Shen, Feng; Du, Wenbin; Ismagilov, Rustem F

    2011-11-01

    This paper presents a protocol using theoretical methods and free software to design and analyze multivolume digital PCR (MV digital PCR) devices; the theory and software are also applicable to design and analysis of dilution series in digital PCR. MV digital PCR minimizes the total number of wells required for "digital" (single molecule) measurements while maintaining high dynamic range and high resolution. In some examples, multivolume designs with fewer than 200 total wells are predicted to provide dynamic range with 5-fold resolution similar to that of single-volume designs requiring 12,000 wells. Mathematical techniques were utilized and expanded to maximize the information obtained from each experiment and to quantify performance of devices and were experimentally validated using the SlipChip platform. MV digital PCR was demonstrated to perform reliably, and results from wells of different volumes agreed with one another. No artifacts due to different surface-to-volume ratios were observed, and single molecule amplification in volumes ranging from 1 to 125 nL was self-consistent. The device presented here was designed to meet the testing requirements for measuring clinically relevant levels of HIV viral load at the point-of-care (in plasma, <500 molecules/mL to >1,000,000 molecules/mL), and the predicted resolution and dynamic range was experimentally validated using a control sequence of DNA. This approach simplifies digital PCR experiments, saves space, and thus enables multiplexing using separate areas for each sample on one chip, and facilitates the development of new high-performance diagnostic tools for resource-limited applications. The theory and software presented here are general and are applicable to designing and analyzing other digital analytical platforms including digital immunoassays and digital bacterial analysis. It is not limited to SlipChip and could also be useful for the design of systems on platforms including valve-based and droplet
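
    A minimal sketch of the Poisson statistics underlying multivolume digital PCR: a well of volume v stays negative with probability exp(-cv), so the concentration c can be estimated by maximizing the binomial likelihood across all well volumes jointly. The well counts below are illustrative, not the SlipChip data from the paper.

    ```python
    # Joint maximum-likelihood estimate of concentration from multivolume digital PCR counts.
    import numpy as np
    from scipy.optimize import minimize_scalar

    volumes_nl = np.array([1.0, 5.0, 25.0, 125.0])       # well volumes (nL)
    wells      = np.array([160,  80,   40,    20])       # wells per volume
    positives  = np.array([  2,   6,   15,    18])       # positive wells observed

    def neg_log_likelihood(c):                            # c in molecules/nL
        p_pos = 1.0 - np.exp(-c * volumes_nl)
        p_pos = np.clip(p_pos, 1e-12, 1 - 1e-12)
        return -np.sum(positives * np.log(p_pos) +
                       (wells - positives) * np.log(1.0 - p_pos))

    fit = minimize_scalar(neg_log_likelihood, bounds=(1e-6, 10.0), method="bounded")
    print(f"estimated concentration ≈ {fit.x:.4f} molecules/nL")
    ```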

  18. Development and Validation of the Motivations for Selection of Medical Study (MSMS) Questionnaire in India.

    PubMed

    Goel, Sonu; Angeli, Federica; Singla, Neetu; Ruwaard, Dirk

    2016-01-01

    Understanding medical students' motivation to select medical studies is particularly salient to inform practice and policymaking in countries such as India, where a shortage of medical personnel poses crucial and chronic challenges to healthcare systems. This study aims to develop and validate a questionnaire to assess the motivation of medical students to select medical studies. A Motivation for Selection of Medical Study (MSMS) questionnaire was developed using an extensive literature review followed by the Delphi technique. The scale consisted of 12 items, 5 measuring intrinsic dimensions of motivation and 7 measuring extrinsic dimensions. Exploratory factor analysis (EFA), confirmatory factor analysis (CFA), validity, reliability and data quality checks were conducted on a sample of 636 medical students from six medical colleges of three North Indian states. The MSMS questionnaire consisted of 3 factors (subscales) and 8 items. The three principal factors that emerged after EFA were the scientific factor (e.g. research opportunities and the ability to use new cutting-edge technologies), the societal factor (e.g. job security) and the humanitarian factor (e.g. desire to help others). The CFA showed goodness-of-fit indices supporting the 3-factor model. The three extracted factors cut across the traditional dichotomy between intrinsic and extrinsic motivation and uncover a novel three-faceted motivation construct based on scientific factors, societal expectations and humanitarian needs. This validated instrument can be used to evaluate the motivational factors of medical students choosing medical study in India and similar settings and constitutes a powerful tool for policymakers to design measures able to increase selection of medical curricula.

  19. Parental infant jaundice colour card design successfully validated by comparing it with total serum bilirubin.

    PubMed

    Xue, Guo-Chang; Ren, Ming-Xing; Shen, Lin-Na; Zhang, Li-Wen

    2016-12-01

    We designed a jaundice colour card that could be used by the parents of neonates and validated it by comparing it with total serum bilirubin levels. There were 106 term Chinese neonates in the study. The majority weighed between 2500 g and 3499 g (63%) and had a gestational age of 37-40 weeks (77%). The jaundice colour card and photometric determination were used to screen for neonatal jaundice and compared with serum bilirubin. The bilirubin levels were measured by mothers using the jaundice colour card, and 67% of the measurements were taken at 11-20 days (range 3-30). The measurements at the infant's forehead, cheek and sternum showed strong correlations with total serum bilirubin. The mean differences between the total serum bilirubin and the jaundice colour card measurements from the forehead, cheek and sternum were 1.9 mg/dL, 0.3 mg/dL and 1.5 mg/dL, respectively. When total serum bilirubin >13 mg/dL was used as the cut-off point, the areas under the receiver operating characteristics curves were 0.934 for the forehead, 0.985 for the cheek and 0.966 for the sternum. We established the validity of the jaundice colour card as a parental measurement tool for jaundice in Chinese neonates, and the cheek was the best measurement site. ©2016 Foundation Acta Paediatrica. Published by John Wiley & Sons Ltd.
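
    A minimal sketch of how an area under the ROC curve, such as the 0.985 reported for the cheek, can be computed against the >13 mg/dL serum-bilirubin cut-off; the paired readings below are synthetic, not the study's measurements.

    ```python
    # ROC AUC for colour-card readings against a serum-bilirubin cut-off of 13 mg/dL.
    import numpy as np
    from sklearn.metrics import roc_auc_score

    serum_tsb = np.array([6.2, 9.8, 14.5, 12.1, 16.0, 8.4, 13.9, 11.2])   # mg/dL
    card_score = np.array([1.0, 2.0, 4.0, 3.0, 5.0, 2.0, 4.0, 3.0])       # colour-card reading

    y_true = (serum_tsb > 13.0).astype(int)     # hyperbilirubinaemia by the cut-off
    auc = roc_auc_score(y_true, card_score)
    print(f"AUC = {auc:.3f}")
    ```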

  20. Development and validation of the Alcohol Myopia Scale.

    PubMed

    Lac, Andrew; Berger, Dale E

    2013-09-01

    Alcohol myopia theory conceptualizes the ability of alcohol to narrow attention and how this demand on mental resources produces the impairments of self-inflation, relief, and excess. The current research was designed to develop and validate a scale based on this framework. People who were alcohol users rated items representing myopic experiences arising from drinking episodes in the past month. In Study 1 (N = 260), the preliminary 3-factor structure was supported by exploratory factor analysis. In Study 2 (N = 289), the 3-factor structure was substantiated with confirmatory factor analysis, and it was superior in fit to an empirically indefensible 1-factor structure. The final 14-item scale was evaluated with internal consistency reliability, discriminant validity, convergent validity, criterion validity, and incremental validity. The alcohol myopia scale (AMS) illuminates conceptual underpinnings of this theory and yields insights for understanding the tunnel vision that arises from intoxication.

  1. Reducing Bias and Increasing Precision by Adding Either a Pretest Measure of the Study Outcome or a Nonequivalent Comparison Group to the Basic Regression Discontinuity Design: An Example from Education

    ERIC Educational Resources Information Center

    Tang, Yang; Cook, Thomas D.; Kisbu-Sakarya, Yasemin

    2015-01-01

    Regression discontinuity design (RD) has been widely used to produce reliable causal estimates. Researchers have validated the accuracy of RD design using within-study comparisons (Cook, Shadish & Wong, 2008; Cook & Steiner, 2010; Shadish et al., 2011). Within-study comparisons examine the validity of a quasi-experiment by comparing its…

  2. Test of Creative Imagination: Validity and Reliability Study

    ERIC Educational Resources Information Center

    Gundogan, Aysun; Ari, Meziyet; Gonen, Mubeccel

    2013-01-01

    The purpose of this study was to investigate the validity and reliability of the test of creative imagination. This study was conducted with the participation of 1000 children, aged between 9 and 14, who were studying in six primary schools in the city center of Denizli Province, chosen by cluster ratio sampling. In the study, it was revealed that the…

  3. Development of an Independent Global Land Cover Validation Dataset

    NASA Astrophysics Data System (ADS)

    Sulla-Menashe, D. J.; Olofsson, P.; Woodcock, C. E.; Holden, C.; Metcalfe, M.; Friedl, M. A.; Stehman, S. V.; Herold, M.; Giri, C.

    2012-12-01

    Accurate information on the global distribution and dynamics of land cover is critical for a large number of global change science questions. A growing number of land cover products have been produced at regional to global scales, but the uncertainty in these products and the relative strengths and weaknesses among available products are poorly characterized. To address this limitation we are compiling a database of high spatial resolution imagery to support international land cover validation studies. Validation sites were selected based on a probability sample, and may therefore be used to estimate statistically defensible accuracy statistics and associated standard errors. Validation site locations were identified using a stratified random design based on 21 strata derived from an intersection of Köppen climate classes and a population density layer. In this way, the two major sources of global variation in land cover (climate and human activity) are explicitly included in the stratification scheme. At each site we are acquiring high spatial resolution (< 1-m) satellite imagery for 5-km x 5-km blocks. The response design uses an object-oriented hierarchical legend that is compatible with the UN FAO Land Cover Classification System. Using this response design, we are classifying each site using a semi-automated algorithm that blends image segmentation with a supervised RandomForest classification algorithm. In the long run, the validation site database is designed to support international efforts to validate land cover products. To illustrate, we use the site database to validate the MODIS Collection 4 Land Cover product, providing a prototype for validating the VIIRS Surface Type Intermediate Product scheduled to start operational production early in 2013. As part of our analysis we evaluate sources of error in coarse resolution products including semantic issues related to the class definitions, mixed pixels, and poor spectral separation between
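
    A minimal sketch of a design-based accuracy estimate from a stratified random sample of validation sites: overall accuracy as a stratum-weighted mean of per-stratum agreement, with a standard error from stratified-sampling theory. The stratum weights and counts below are placeholders, not values from the database.

    ```python
    # Stratified estimator of overall map accuracy with its standard error.
    import numpy as np

    area_weights = np.array([0.50, 0.30, 0.20])   # W_h: fraction of map area per stratum
    n_sites      = np.array([120,   90,   60])    # validation sites sampled per stratum
    n_correct    = np.array([102,   72,   51])    # sites where map label matched reference

    p_h = n_correct / n_sites                      # per-stratum agreement
    overall = np.sum(area_weights * p_h)
    se = np.sqrt(np.sum(area_weights**2 * p_h * (1 - p_h) / (n_sites - 1)))
    print(f"overall accuracy = {overall:.3f} ± {1.96 * se:.3f} (95% CI)")
    ```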

  4. On Internal Validity in Multiple Baseline Designs

    ERIC Educational Resources Information Center

    Pustejovsky, James E.

    2014-01-01

    Single-case designs are a class of research designs for evaluating intervention effects on individual cases. The designs are widely applied in certain fields, including special education, school psychology, clinical psychology, social work, and applied behavior analysis. The multiple baseline design (MBD) is the most frequently used single-case…

  5. Development and Validation of Targeted Next-Generation Sequencing Panels for Detection of Germline Variants in Inherited Diseases.

    PubMed

    Santani, Avni; Murrell, Jill; Funke, Birgit; Yu, Zhenming; Hegde, Madhuri; Mao, Rong; Ferreira-Gonzalez, Andrea; Voelkerding, Karl V; Weck, Karen E

    2017-06-01

    The number of targeted next-generation sequencing (NGS) panels for genetic diseases offered by clinical laboratories is rapidly increasing. Before an NGS-based test is implemented in a clinical laboratory, appropriate validation studies are needed to determine the performance characteristics of the test. This report provides examples of assay design and validation of targeted NGS gene panels for the detection of germline variants associated with inherited disorders. The approaches used by 2 clinical laboratories for the development and validation of targeted NGS gene panels are described, and important design and validation considerations are examined. Clinical laboratories must validate the performance specifications of each test prior to implementation. Test design specifications and validation data are provided, outlining important steps in the validation of targeted NGS panels by clinical diagnostic laboratories.

  6. In vitro validation of self designed "universal human Influenza A siRNA".

    PubMed

    Jain, Bhawana; Jain, Amita; Prakash, Om; Singh, Ajay Kr; Dangi, Tanushree; Singh, Mastan; Singh, K P

    2015-08-01

    The genomic variability of Influenza A virus (IAV) makes it difficult for existing vaccines or anti-influenza drugs to control. An siRNA targeting a viral gene induces the RNAi mechanism in the host and silences the gene by cleaving its mRNA. In this study, we developed a universal siRNA and validated its efficiency in vitro. The siRNA was designed rationally, targeting the most conserved region (delineated with the help of multiple sequence alignment) of the M gene of IAV strains. A three-level screening method was adopted, and the most efficient candidate was selected on the basis of its unique position in the conserved region. The siRNA efficacy was confirmed in vitro in the Madin Darby Canine Kidney (MDCK) cell line used for IAV propagation, with two clinical isolates, Influenza A/H3N2 and Influenza A/pdmH1N1. From a total of 168 strains worldwide and 33 strains from India, a 97 bp conserved region (positions 137-233) was identified. The longest ORF of the matrix gene was targeted by the selected siRNA, which showed 73.6% inhibition of replication of Influenza A/pdmH1N1 and 62.1% inhibition of replication of Influenza A/H3N2 at 48 h post infection in the MDCK cell line. This study provides a basis for the development of siRNAs that can be used as universal anti-IAV therapeutic agents.
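
    A minimal sketch of locating a conserved stretch in aligned sequences by scanning for alignment columns where every strain carries the same base; the toy alignment below stands in for the aligned M-gene sequences of the IAV strains.

    ```python
    # Find the longest run of fully conserved columns in a toy multiple sequence alignment.
    aligned = [
        "ATGAGTCTTCTAACCGAGGTCGAAACG",
        "ATGAGTCTTCTAACCGAGGTTGAAACG",
        "ATGAGTCTTCTAACCGAGGTCGAAACG",
    ]

    conserved = [len(set(col)) == 1 for col in zip(*aligned)]

    # Track the longest run of conserved columns as (start index, length).
    best, run, start = (0, 0), 0, 0
    for i, flag in enumerate(conserved + [False]):   # trailing False flushes the last run
        if flag:
            if run == 0:
                start = i
            run += 1
        else:
            if run > best[1]:
                best = (start, run)
            run = 0
    print(f"longest conserved block: positions {best[0]}-{best[0] + best[1] - 1}")
    ```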

  7. Cross-cultural adaptation, evaluation and validation of the Spouse Response Inventory: a study protocol.

    PubMed

    Kaiser, Ulrike; Steinmetz, Dorit; Scharnagel, Rüdiger; Jensen, Mark P; Balck, Friedrich; Sabatowski, Rainer

    2014-10-14

    Since the response of spouses has been shown to be an important reinforcement of pain behaviour and disability, it has been addressed in research and therapy. Fordyce suggested that pain behaviour and well behaviour be considered in explaining suffering in chronic pain patients. Among existing instruments concerning spouses' responses, the aspect of well behaviour has not been examined so far. The SRI (Spouse Response Inventory) considers both pain behaviour and well behaviour and appears acceptable because of its brevity and close proximity to daily language. The aim of the study is the translation of the SRI into German, followed by its evaluation and validation on a German sample of patients with chronic pain. The study is comprehensively designed: initially, the focus will lie on the translation of the instrument following the guidelines for cross-cultural translation and adaptation and on evaluation of the German version according to the source study. Subsequently, a validation addressing predictive, incremental and construct validity will be conducted using instruments based on similar or close but different constructs. Evaluation of the resulting SRI-G (SRI-German) will be conducted on a sample of at least 30 patients with chronic pain attending a comprehensive pain centre. For validation, at least 120 patients each with chronic headache, back pain, cancer-related pain and somatoform pain disorder shall be included, for a total of 480 patients. Separate analyses according to specific pain diagnoses will be performed to ensure psychometric properties, interpretability and control of diagnosis-specific limitations. Analyses will include comprehensive investigation of the psychometric properties of the scale by hierarchical regression analyses, correlation analyses, multivariate analysis of variance and exploratory factor analyses (SPSS). The study protocol was approved by the Ethics Committee of the University of Dresden (EK 335 122008) based on the Helsinki declaration.

  8. [Computerized system validation of clinical researches].

    PubMed

    Yan, Charles; Chen, Feng; Xia, Jia-lai; Zheng, Qing-shan; Liu, Daniel

    2015-11-01

    Validation is a documented process that provides a high degree of assurance that a computer system does exactly and consistently what it is designed to do, in a controlled manner, throughout its life cycle. The validation process begins with the system proposal/requirements definition and continues through application and maintenance until system retirement and retention of the e-records based on regulatory rules. The objective is to clearly specify that each application of information technology fulfills its purpose. Computer system validation (CSV) is essential in clinical studies according to the GCP standard, meeting the product's pre-determined attributes of specifications, quality, safety and traceability. This paper describes how to perform the validation process and determine the relevant stakeholders within an organization in the light of validation SOPs. Although specific accountabilities in the implementation of the validation process might be outsourced, the ultimate responsibility for the CSV remains on the shoulders of the business process owner (the sponsor). In order to show that compliance of the system validation has been properly attained, it is essential to set up comprehensive validation procedures and maintain adequate documentation as well as training records. The quality of the system validation should be controlled using both QC and QA means.

  9. Evaluation of an Adaptive Game that Uses EEG Measures Validated during the Design Process as Inputs to a Biocybernetic Loop.

    PubMed

    Ewing, Kate C; Fairclough, Stephen H; Gilleade, Kiel

    2016-01-01

    Biocybernetic adaptation is a form of physiological computing whereby real-time data streaming from the brain and body is used by a negative control loop to adapt the user interface. This article describes the development of an adaptive game system that is designed to maximize player engagement by utilizing changes in real-time electroencephalography (EEG) to adjust the level of game demand. The research consists of four main stages: (1) the development of a conceptual framework upon which to model the interaction between person and system; (2) the validation of the psychophysiological inference underpinning the loop; (3) the construction of a working prototype; and (4) an evaluation of the adaptive game. Two studies are reported. The first demonstrates the sensitivity of EEG power in the (frontal) theta and (parietal) alpha bands to changing levels of game demand. These variables were then reformulated within the working biocybernetic control loop designed to maximize player engagement. The second study evaluated the performance of an adaptive game of Tetris with respect to system behavior and user experience. Important issues for the design and evaluation of closed-loop interfaces are discussed.

  10. Evaluation of an Adaptive Game that Uses EEG Measures Validated during the Design Process as Inputs to a Biocybernetic Loop

    PubMed Central

    Ewing, Kate C.; Fairclough, Stephen H.; Gilleade, Kiel

    2016-01-01

    Biocybernetic adaptation is a form of physiological computing whereby real-time data streaming from the brain and body is used by a negative control loop to adapt the user interface. This article describes the development of an adaptive game system that is designed to maximize player engagement by utilizing changes in real-time electroencephalography (EEG) to adjust the level of game demand. The research consists of four main stages: (1) the development of a conceptual framework upon which to model the interaction between person and system; (2) the validation of the psychophysiological inference underpinning the loop; (3) the construction of a working prototype; and (4) an evaluation of the adaptive game. Two studies are reported. The first demonstrates the sensitivity of EEG power in the (frontal) theta and (parietal) alpha bands to changing levels of game demand. These variables were then reformulated within the working biocybernetic control loop designed to maximize player engagement. The second study evaluated the performance of an adaptive game of Tetris with respect to system behavior and user experience. Important issues for the design and evaluation of closed-loop interfaces are discussed. PMID:27242486
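
    A minimal sketch of a negative-feedback biocybernetic loop of the kind described in the two records above; the engagement index (frontal theta over parietal alpha) and the thresholds are assumptions for illustration, not the study's actual control law.

    ```python
    # Toy negative-feedback loop: an EEG-derived index nudges game demand up or down.
    def engagement_index(frontal_theta_power: float, parietal_alpha_power: float) -> float:
        # Assumed index: ratio of frontal theta to parietal alpha band power.
        return frontal_theta_power / max(parietal_alpha_power, 1e-9)

    def adapt_demand(current_level: int, index: float,
                     low: float = 0.8, high: float = 1.6) -> int:
        """Raise demand when the player looks under-engaged, lower it when overloaded."""
        if index < low:
            return current_level + 1          # under-engaged: make the game harder
        if index > high:
            return max(1, current_level - 1)  # overloaded: ease off
        return current_level                  # within the target band: hold steady

    level = 3
    for theta, alpha in [(4.0, 6.0), (5.0, 5.5), (9.0, 4.0)]:   # simulated EEG epochs
        level = adapt_demand(level, engagement_index(theta, alpha))
        print(f"theta/alpha = {theta/alpha:.2f} -> demand level {level}")
    ```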

  11. A reliability and validity study of the Palliative Performance Scale

    PubMed Central

    Ho, Francis; Lau, Francis; Downing, Michael G; Lesperance, Mary

    2008-01-01

    Background The Palliative Performance Scale (PPS) was first introduced in 1996 as a new tool for measurement of performance status in palliative care. The PPS has been used in many countries and has been translated into other languages. Methods This study evaluated the reliability and validity of the PPS. A web-based, case-scenarios study with a test-retest format was used to determine reliability. Fifty-three participants were recruited and randomly divided into two groups, each evaluating 11 cases at two time points. The validity study was based on content validation by 15 palliative care experts, conducted through telephone interviews, with discussion of five themes: the PPS as a clinical assessment tool, the usefulness of the PPS, PPS scores affecting decision making, the problems in using the PPS, and the adequacy of PPS instruction. Results The intraclass correlation coefficients for absolute agreement were 0.959 and 0.964 for Group 1, at Time-1 and Time-2, and 0.951 and 0.931 for Group 2, at Time-1 and Time-2, respectively. Results showed that the participants were consistent in their scoring over the two times, with a mean Cohen's kappa of 0.67 for Group 1 and 0.71 for Group 2. In the validity study, all experts agreed that the PPS is a valuable clinical assessment tool in palliative care. Many of them have already incorporated the PPS as part of their practice standard. Conclusion The results of the reliability study demonstrated that the PPS is a reliable tool. The validity study found that most experts did not feel a need to further modify the PPS, and only two experts requested that some performance status measures be defined more clearly. Areas of PPS use include prognostication, disease monitoring, care planning, hospital resource allocation, clinical teaching and research. The PPS is also a good communication tool between palliative care workers. PMID:18680590

  12. Non-clinical studies required for new drug development - Part I: early in silico and in vitro studies, new target discovery and validation, proof of principles and robustness of animal studies.

    PubMed

    Andrade, E L; Bento, A F; Cavalli, J; Oliveira, S K; Freitas, C S; Marcon, R; Schwanke, R C; Siqueira, J M; Calixto, J B

    2016-10-24

    This review presents a historical overview of drug discovery and the non-clinical stages of the drug development process, from initial target identification and validation, through in silico assays and high throughput screening (HTS), identification of lead molecules and their optimization, the selection of a candidate substance for clinical development, and the use of animal models during the early studies of proof-of-concept (or principle). This report also discusses the relevance of selecting validated and predictive animal models, as well as the correct use of animal tests with respect to experimental design, execution and interpretation, which affect the reproducibility, quality and reliability of the non-clinical studies needed to translate to and support clinical studies. Collectively, improving these aspects will certainly contribute to the robustness of both scientific publications and the translation of new substances to clinical development.

  13. [Pre-randomisation in study designs: getting past the taboo].

    PubMed

    Schellings, R; Kessels, A G; Sturmans, F

    2008-09-20

    In October 2006 the Dutch Ministry of Health, Welfare and Sport announced that the use of pre-randomisation in study designs is admissible and not in conflict with the Dutch Medical Research in Human Subjects Act. With pre-randomisation, the conventional sequence of obtaining informed consent followed by randomisation is reversed. According to the original pre-randomisation design (Zelen design), participants are randomised before they are asked to consent; after randomisation, only participants in the experimental group are asked to consent to treatment and effect measurement. In the past, pre-randomisation has seldom been used, and when it was, it was often under the wrong circumstances. Awareness regarding the ethical, legal and methodological objections to pre-randomisation is increasing. About a decade ago, we illustrated the applicability and acceptability of pre-randomisation by means of a fictitious heroin provision trial. In general, pre-randomisation is justified if valid evaluation of the effects of an intervention is impossible using a conventional randomised design, e.g., if knowledge of the intervention may lead to non-compliance or drop-out in the control group, or when the intervention is an educational programme. Other requirements for pre-randomisation include the following: the study has a clinically relevant objective, it is likely that the study will lead to important new insights, the informed consent procedure bears no potential harm to participants, at least standard care is offered to participants in the control group, and the approval of an independent research ethics committee is obtained.

  14. Integrating Materials, Manufacturing, Design and Validation for Sustainability in Future Transport Systems

    NASA Astrophysics Data System (ADS)

    Price, M. A.; Murphy, A.; Butterfield, J.; McCool, R.; Fleck, R.

    2011-05-01

    The predictive methods currently used for material specification, component design and the development of manufacturing processes need to evolve beyond the current 'metal centric' state of the art if advanced composites are to realise their potential in delivering sustainable transport solutions. There are, however, significant technical challenges associated with this process. Deteriorating environmental, political, economic and social conditions across the globe have resulted in unprecedented pressures to improve the operational efficiency of the manufacturing sector generally and to change perceptions regarding the environmental credentials of transport systems in particular. There is a need to apply new technologies and develop new capabilities to ensure commercial sustainability in the face of twenty-first century economic and climatic conditions as well as transport market demands. A major technology gap exists between design, analysis and manufacturing processes in both the OEMs and the smaller companies that make up the SME-based supply chain. As regulatory requirements align with environmental needs, manufacturers are increasingly responsible for the broader lifecycle aspects of vehicle performance. These include not only manufacture and supply but disposal and re-use or re-cycling. In order to make advances in the reduction of emissions coupled with improved economic efficiency through the provision of advanced lightweight vehicles, four key challenges are identified as follows: material systems, manufacturing systems, integrated design methods using digital manufacturing tools, and validation systems. This paper presents a project which has been designed to address these four key issues, using at its core a digital framework for the creation and management of key parameters related to the lifecycle performance of thermoplastic composite parts and structures. It aims to provide capability for the proposition, definition, evaluation and demonstration of

  15. Effort, symptom validity testing, performance validity testing and traumatic brain injury.

    PubMed

    Bigler, Erin D

    2014-01-01

    To understand the neurocognitive effects of brain injury, valid neuropsychological test findings are paramount. This review examines the research on what has been referred to as symptom validity testing (SVT). Performance above a designated cut-score signifies a 'passing' SVT result, which is likely the best indicator of valid neuropsychological test findings. Likewise, performance substantially below the cut-point, at or near chance, signifies invalid test performance. Performance significantly below chance is the sine qua non neuropsychological indicator of malingering. However, the interpretative problems with SVT performance below the cut-point yet far above chance are substantial, as pointed out in this review. This intermediate, border-zone performance on SVT measures is where substantial interpretative challenges exist. Case studies are used to highlight the many areas where additional research is needed. Historical perspectives are reviewed along with the neurobiology of effort. Reasons why performance validity testing (PVT) may be a better term than SVT are reviewed. Advances in neuroimaging techniques may be key to better understanding the meaning of border-zone SVT failure. The review demonstrates the problems with rigid interpretation of established cut-scores. A better understanding is needed of how certain types of neurological, neuropsychiatric and/or even test conditions may affect SVT performance.

  16. Degradation Kinetics Study of Alogliptin Benzoate in Alkaline Medium by Validated Stability-Indicating HPTLC Method.

    PubMed

    Bodiwala, Kunjan Bharatkumar; Shah, Shailesh; Thakor, Jeenal; Marolia, Bhavin; Prajapati, Pintu

    2016-11-01

    A rapid, sensitive, and stability-indicating high-performance thin-layer chromatographic method was developed and validated to study the degradation kinetics of alogliptin benzoate (ALG) in an alkaline medium. ALG was degraded under acidic, alkaline, oxidative, and thermal stress conditions. The degraded samples were chromatographed on silica gel 60F254 TLC plates, developed using a quaternary-solvent system (chloroform-methanol-ethyl acetate-triethyl amine, 9+1+1+0.5, v/v/v/v), and scanned at 278 nm. The developed method was validated per International Conference on Harmonization guidelines using validation parameters such as specificity, linearity and range, precision, accuracy, LOD, and LOQ. The linearity range for ALG was 100-500 ng/band (correlation coefficient = 0.9997) with an average recovery of 99.47%. The LOD and LOQ for ALG were 9.8 and 32.7 ng/band, respectively. The developed method was successfully applied for the quantitative estimation of ALG in its synthetic mixture with common excipients. The degradation kinetics of ALG in an alkaline medium were studied by degrading it at three different temperatures and three different concentrations of alkali. Degradation of ALG in the alkaline medium was found to follow first-order kinetics. Contour plots were generated to predict the degradation rate constant, half-life, and shelf life of ALG for various combinations of temperature and alkali concentration using Design Expert software.
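
    A minimal sketch of first-order degradation kinetics: a linear fit of ln(C) against time gives the rate constant k, from which the half-life (ln 2 / k) and shelf life t90 (ln(100/90) / k) follow; the time points and concentrations below are synthetic, not study data.

    ```python
    # Fit first-order kinetics ln(C) = ln(C0) - k*t and derive half-life and shelf life.
    import numpy as np

    t = np.array([0, 15, 30, 45, 60], dtype=float)      # minutes
    conc = np.array([100.0, 86.0, 74.0, 63.5, 54.7])    # % of initial ALG remaining

    slope, intercept = np.polyfit(t, np.log(conc), 1)
    k = -slope                                          # first-order rate constant (1/min)
    half_life = np.log(2) / k
    shelf_life_t90 = np.log(100.0 / 90.0) / k           # time to 10% degradation

    print(f"k = {k:.4f} 1/min, t1/2 = {half_life:.1f} min, t90 = {shelf_life_t90:.1f} min")
    ```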

  17. Design and validation of key text messages (Tonsil-Text-To-Me) to improve parent and child perioperative tonsillectomy experience: A modified Delphi study.

    PubMed

    Song, Jin Soo A; Wozney, Lori; Chorney, Jill; Ishman, Stacey L; Hong, Paul

    2017-11-01

    Parents can struggle while providing perioperative tonsillectomy care for their children at home. Short message service (SMS) technology is an accessible and direct modality to communicate timely, evidence-based recommendations to parents across the perioperative period. This study focused on validating an SMS protocol, Tonsil-Text-To-Me (TTTM), for parents of children undergoing tonsillectomy. This study used a modified Delphi expert consensus method. Participants were an international sample of 27 clinicians/researchers. Participants rated their level of agreement with recommendations across seven perioperative domains, derived systematically from the scientific and lay literature. A priori consensus analysis was conducted using a threshold criterion. A multidisciplinary team of local clinicians was also individually interviewed to consolidate the text messages and implement recurrent suggestions. In the modified Delphi panel, 30 statements reached threshold agreement (>3.0 of 4.0); recommendations surrounding diet (3.87) and hygiene (3.83) had the highest level of consensus, while recommendations regarding activity (3.42) and non-pharmacologic pain management (3.55) had the lowest consensus. The 30 statements were consolidated into 12 concise text messages. After further interviews with local clinicians, 14 final text messages were included in the SMS protocol, to be sent from two weeks preoperatively to one week postoperatively. This study illustrates the development of TTTM, which is designed to deliver key sequential text messages at optimal times across the perioperative period to parents caring for children undergoing tonsillectomy. Copyright © 2017 Elsevier B.V. All rights reserved.

  18. The cross-cultural adaptation, reliability, and validity of the Copenhagen Neck Functional Disability Scale in patients with chronic neck pain: Turkish version study.

    PubMed

    Yapali, Gökmen; Günel, Mintaze Kerem; Karahan, Sevilay

    2012-05-15

    The study design was a cross-cultural adaptation and an investigation of the reliability and validity of the Copenhagen Neck Functional Disability Scale (CNFDS). The aim of this study was to translate the CNFDS into Turkish and assess its reliability and validity among patients with neck pain in the Turkish population. The CNFDS is a reliable and valid evaluation instrument for disability, but no Turkish version of the CNFDS has been published. One hundred one subjects who had chronic neck pain were included in this study. The CNFDS, the Neck Pain and Disability Scale, and a visual analogue scale were administered to all subjects. Test-retest reliability was investigated by correlating CNFDS scores obtained at a 1-week interval; the intraclass correlation coefficient for test-retest reliability was 0.86 (95% confidence interval = 0.679-0.935), and there was no difference between test-retest scores (P < 0.001). For concurrent validity, the correlation between the total score of the CNFDS and the mean visual analogue scale score was r = 0.73 (P < 0.001); concurrent validity of the CNFDS was very good. For construct validity, the correlation between the total score of the CNFDS and the Neck Pain and Disability Scale was r = 0.78 (P < 0.001); construct validity of the CNFDS was also very good. Our results suggest that the Turkish version of the CNFDS is a reliable and valid instrument for Turkish people.

  19. Can We Study Autonomous Driving Comfort in Moving-Base Driving Simulators? A Validation Study.

    PubMed

    Bellem, Hanna; Klüver, Malte; Schrauf, Michael; Schöner, Hans-Peter; Hecht, Heiko; Krems, Josef F

    2017-05-01

    To lay the basis of studying autonomous driving comfort using driving simulators, we assessed the behavioral validity of two moving-base simulator configurations by contrasting them with a test-track setting. With increasing level of automation, driving comfort becomes increasingly important. Simulators provide a safe environment to study perceived comfort in autonomous driving. To date, however, no studies were conducted in relation to comfort in autonomous driving to determine the extent to which results from simulator studies can be transferred to on-road driving conditions. Participants (N = 72) experienced six differently parameterized lane-change and deceleration maneuvers and subsequently rated the comfort of each scenario. One group of participants experienced the maneuvers on a test-track setting, whereas two other groups experienced them in one of two moving-base simulator configurations. We could demonstrate relative and absolute validity for one of the two simulator configurations. Subsequent analyses revealed that the validity of the simulator highly depends on the parameterization of the motion system. Moving-base simulation can be a useful research tool to study driving comfort in autonomous vehicles. However, our results point at a preference for subunity scaling factors for both lateral and longitudinal motion cues, which might be explained by an underestimation of speed in virtual environments. In line with previous studies, we recommend lateral- and longitudinal-motion scaling factors of approximately 50% to 60% in order to obtain valid results for both active and passive driving tasks.

  20. Dimensionality and construct validity of an instrument designed to measure the metacognitive orientation of science classroom learning environments.

    PubMed

    Thomas, Gregory P

    2004-01-01

    The purpose of this study was to establish the factorial construct validity and dimensionality of the Metacognitive Orientation Learning Environment Scale-Science (MOLES-S), which was designed to measure the metacognitive orientation of science classroom learning environments. The metacognitive orientation of a science classroom learning environment is the extent to which psychosocial conditions that are known to enhance students' metacognition are evident within that classroom. The development of the items comprising this scale was based on a theoretical understanding of metacognition, learning environments and the development of previous learning environment instruments. Four possible hypothesized structural models, each consistent with the literature, were reviewed and their merits compared on the basis of empirical data drawn from two populations of 1026 and 1223 Hong Kong secondary school students using confirmatory factor analysis procedures. The scale was calibrated with the Rasch rating scale model using data from the 1223-student sample. The results suggest that there is strong evidence to support the factorial construct validity of the MOLES-S, but the Rasch analysis points to further refinement and improvement of the MOLES-S.