Sample records for design validation SPDV

  1. Pathogenesis and immune response in Atlantic salmon (Salmo salar L.) parr experimentally infected with salmon pancreas disease virus (SPDV).

    PubMed

    Desvignes, L; Quentel, C; Lamour, F; Le Ven, A

    2002-01-01

    Atlantic salmon parr were injected intraperitoneally with salmon pancreas disease virus (SPDV) grown on CHSE-214 cells. Viraemia, histopathological changes in target organs and selected immune parameters were measured at intervals up to 30 days post-infection (dpi). The earliest lesion was necrosis of the exocrine pancreas, appearing as early as 2 dpi; it progressed to complete tissue breakdown at 9 dpi before resolving gradually. Concurrent with this necrosis, a strong inflammatory response was evident from 9 dpi in the pancreatic area in a majority of fish. Necrosis of the myocardial cells of the ventricle occurred in infected fish mainly at 16 dpi and faded thereafter. Monitoring of the plasma viral load showed a rapid haematogenous spread of SPDV, peaking at 4 dpi, and the absence of a secondary viraemia. No interferon (IFN) was detected following infection of parr with SPDV, probably because IFN activity in Atlantic salmon was below the detection limit of the technique. Neutralising antibodies against SPDV were evident from 16 dpi and showed increasing titre and prevalence over time. Phagocytic activity in head-kidney leucocytes was always significantly higher in infected fish than in control fish, and was particularly high by 9 dpi. Lysozyme and complement levels were both increased, peaking significantly in infected fish at 9 and 16 dpi respectively. These results demonstrate that experimental infection of Atlantic salmon parr with SPDV stimulated both specific and non-specific immunity with regard to the viraemia and the histopathology.

  2. Development of Freshwater Grout Subsequent to the Bell Canyon Tests (BCT).

    DTIC Science & Technology

    1986-04-01

    specimens of those grouts cured and studied in the SL, to three years' age. Selected data from earlier tests of related fresh-water grouts are...specimens were either coated with a strippable plastic membrane, or sealed in plastic cylinders with tightly fitting lids. Sealed in plastic bags in...for expansion prisms, the strippable coating applied to SPDV specimens did not prevent water loss. Lower strength gain may be attributable to partial

  3. Fiber-channel audio video standard for military and commercial aircraft product lines

    NASA Astrophysics Data System (ADS)

    Keller, Jack E.

    2002-08-01

    Fibre channel is an emerging high-speed digital network technology that is making inroads into the avionics arena. The suitability of fibre channel for such applications is largely due to its flexibility in these key areas: network topologies can be configured in point-to-point, arbitrated loop or switched fabric connections; the physical layer supports either copper or fiber optic implementations with a bit error rate of less than 10^-12; multiple classes of service are available; multiple upper level protocols are supported; and multiple high-speed data rates offer open-ended growth paths, providing speed negotiation within a single network. Current speeds supported by commercially available hardware are 1 and 2 Gbps, providing effective data rates of 100 and 200 MBps respectively. Such networks lend themselves well to the transport of digital video and audio data. This paper summarizes an ANSI standard currently in the final approval cycle of the InterNational Committee for Information Technology Standards (INCITS). This standard defines a flexible mechanism whereby digital video, audio and ancillary data are systematically packaged for transport over a fibre channel network. The basic mechanism, called a container, houses audio and video content functionally grouped as elements of the container called objects. Featured in this paper is a specific container mapping called Simple Parametric Digital Video (SPDV), developed particularly to address digital video in avionics systems. SPDV provides pixel-based video with associated ancillary data, typically sourced by various sensors, to be processed and/or distributed in the cockpit for presentation via high-resolution displays. Also highlighted in this paper is a streamlined Upper Level Protocol (ULP) called Frame Header Control Procedure (FHCP), targeted for avionics systems where the functionality of a more complex ULP is not required.
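    The line-rate arithmetic quoted in this record (1 and 2 Gbps yielding roughly 100 and 200 MBps) follows from the 8b/10b line coding used by 1G and 2G fibre channel; a minimal sketch of the calculation (the function name is illustrative, not from the standard):

```python
def effective_mbps(line_rate_gbps):
    # 8b/10b line coding puts 10 bits on the wire for every 8 data bits,
    # so usable throughput is 80% of the line rate; divide by 8 bits/byte
    # and rescale from bit/s to Mbyte/s.
    return line_rate_gbps * 1e9 * 0.8 / 8 / 1e6

# 1 Gbps -> 100.0 MBps, 2 Gbps -> 200.0 MBps, matching the figures above.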

  4. 42 CFR 71.3 - Designation of yellow fever vaccination centers; Validation stamps.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ...; Validation stamps. 71.3 Section 71.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN... Designation of yellow fever vaccination centers; Validation stamps. (a) Designation of yellow fever... health department, may revoke designation. (b) Validation stamps. International Certificates of...

  5. 42 CFR 71.3 - Designation of yellow fever vaccination centers; Validation stamps.

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ...; Validation stamps. 71.3 Section 71.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN... Designation of yellow fever vaccination centers; Validation stamps. (a) Designation of yellow fever... health department, may revoke designation. (b) Validation stamps. International Certificates of...

  6. 42 CFR 71.3 - Designation of yellow fever vaccination centers; Validation stamps.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ...; Validation stamps. 71.3 Section 71.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN... Designation of yellow fever vaccination centers; Validation stamps. (a) Designation of yellow fever... health department, may revoke designation. (b) Validation stamps. International Certificates of...

  7. 42 CFR 71.3 - Designation of yellow fever vaccination centers; Validation stamps.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ...; Validation stamps. 71.3 Section 71.3 Public Health PUBLIC HEALTH SERVICE, DEPARTMENT OF HEALTH AND HUMAN... Designation of yellow fever vaccination centers; Validation stamps. (a) Designation of yellow fever... health department, may revoke designation. (b) Validation stamps. International Certificates of...

  8. Design Characteristics Influence Performance of Clinical Prediction Rules in Validation: A Meta-Epidemiological Study

    PubMed Central

    Ban, Jong-Wook; Emparanza, José Ignacio; Urreta, Iratxe; Burls, Amanda

    2016-01-01

    Background Many new clinical prediction rules are derived and validated, but the design and reporting quality of clinical prediction research has been less than optimal. We aimed to assess whether design characteristics of validation studies were associated with overestimation of clinical prediction rules’ performance. We also aimed to evaluate whether validation studies clearly reported important methodological characteristics. Methods Electronic databases were searched for systematic reviews of clinical prediction rule studies published between 2006 and 2010. Data were extracted from the eligible validation studies included in the systematic reviews. A meta-analytic meta-epidemiological approach was used to assess the influence of design characteristics on predictive performance. For each validation study, it was assessed whether 7 design and 7 reporting characteristics were properly described. Results A total of 287 validation studies of clinical prediction rules were collected from 15 systematic reviews (31 meta-analyses). Validation studies using a case-control design produced a summary diagnostic odds ratio (DOR) 2.2 times (95% CI: 1.2-4.3) larger than validation studies using a cohort or unclear design. When differential verification was used, the summary DOR was overestimated by twofold (95% CI: 1.2-3.1) compared to complete, partial and unclear verification. The summary relative DOR (RDOR) of validation studies with inadequate sample size was 1.9 (95% CI: 1.2-3.1) compared to studies with adequate sample size. Study site, reliability, and the clinical prediction rule were adequately described in 10.1%, 9.4%, and 7.0% of validation studies, respectively. Conclusion Validation studies with design shortcomings may overestimate the performance of clinical prediction rules. The quality of reporting among studies validating clinical prediction rules needs to be improved. PMID:26730980

  9. Design Characteristics Influence Performance of Clinical Prediction Rules in Validation: A Meta-Epidemiological Study.

    PubMed

    Ban, Jong-Wook; Emparanza, José Ignacio; Urreta, Iratxe; Burls, Amanda

    2016-01-01

    Many new clinical prediction rules are derived and validated, but the design and reporting quality of clinical prediction research has been less than optimal. We aimed to assess whether design characteristics of validation studies were associated with overestimation of clinical prediction rules' performance. We also aimed to evaluate whether validation studies clearly reported important methodological characteristics. Electronic databases were searched for systematic reviews of clinical prediction rule studies published between 2006 and 2010. Data were extracted from the eligible validation studies included in the systematic reviews. A meta-analytic meta-epidemiological approach was used to assess the influence of design characteristics on predictive performance. For each validation study, it was assessed whether 7 design and 7 reporting characteristics were properly described. A total of 287 validation studies of clinical prediction rules were collected from 15 systematic reviews (31 meta-analyses). Validation studies using a case-control design produced a summary diagnostic odds ratio (DOR) 2.2 times (95% CI: 1.2-4.3) larger than validation studies using a cohort or unclear design. When differential verification was used, the summary DOR was overestimated by twofold (95% CI: 1.2-3.1) compared to complete, partial and unclear verification. The summary relative DOR (RDOR) of validation studies with inadequate sample size was 1.9 (95% CI: 1.2-3.1) compared to studies with adequate sample size. Study site, reliability, and the clinical prediction rule were adequately described in 10.1%, 9.4%, and 7.0% of validation studies, respectively. Validation studies with design shortcomings may overestimate the performance of clinical prediction rules. The quality of reporting among studies validating clinical prediction rules needs to be improved.
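    The diagnostic odds ratio compared across study designs in this record comes from a standard 2x2 table of index-test results against the reference standard; a minimal sketch (the counts below are hypothetical, not taken from the study):

```python
def diagnostic_odds_ratio(tp, fp, fn, tn):
    # DOR = (TP/FN) / (FP/TN) = (TP*TN) / (FP*FN):
    # the odds of a positive test in the diseased group divided by
    # the odds of a positive test in the non-diseased group.
    return (tp * tn) / (fp * fn)

# Hypothetical table: 90 true positives, 10 false negatives,
# 20 false positives, 80 true negatives.
dor = diagnostic_odds_ratio(tp=90, fp=20, fn=10, tn=80)  # 36.0
```

    The relative DOR (RDOR) reported in the abstract is simply the ratio of two such summary DORs, one per design stratum.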

  10. 24 CFR 597.402 - Validation of designation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 3 2014-04-01 2013-04-01 true Validation of designation. 597.402 Section 597.402 Housing and Urban Development Regulations Relating to Housing and Urban Development... DESIGNATIONS Post-Designation Requirements § 597.402 Validation of designation. (a) Reevaluation of...

  11. 24 CFR 597.402 - Validation of designation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 3 2013-04-01 2013-04-01 false Validation of designation. 597.402 Section 597.402 Housing and Urban Development Regulations Relating to Housing and Urban Development... DESIGNATIONS Post-Designation Requirements § 597.402 Validation of designation. (a) Reevaluation of...

  12. 24 CFR 597.402 - Validation of designation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 3 2012-04-01 2012-04-01 false Validation of designation. 597.402 Section 597.402 Housing and Urban Development Regulations Relating to Housing and Urban Development... DESIGNATIONS Post-Designation Requirements § 597.402 Validation of designation. (a) Reevaluation of...

  13. 24 CFR 597.402 - Validation of designation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 3 2011-04-01 2010-04-01 true Validation of designation. 597.402 Section 597.402 Housing and Urban Development Regulations Relating to Housing and Urban Development... DESIGNATIONS Post-Designation Requirements § 597.402 Validation of designation. (a) Reevaluation of...

  14. The Universal Design for Play Tool: Establishing Validity and Reliability

    ERIC Educational Resources Information Center

    Ruffino, Amy Goetz; Mistrett, Susan G.; Tomita, Machiko; Hajare, Poonam

    2006-01-01

    The Universal Design for Play (UDP) Tool is an instrument designed to evaluate the presence of universal design (UD) features in toys. This study evaluated its psychometric properties, including content validity, construct validity, and test-retest reliability. The UDP tool was designed to assist in selecting toys most appropriate for children…

  15. Selecting and Improving Quasi-Experimental Designs in Effectiveness and Implementation Research.

    PubMed

    Handley, Margaret A; Lyles, Courtney R; McCulloch, Charles; Cattamanchi, Adithya

    2018-04-01

    Interventional researchers face many design challenges when assessing intervention implementation in real-world settings. Intervention implementation requires holding fast to internal validity needs while incorporating external validity considerations (such as uptake by diverse subpopulations, acceptability, cost, and sustainability). Quasi-experimental designs (QEDs) are increasingly employed to achieve a balance between internal and external validity. Although these designs are often referred to and summarized in terms of logistical benefits, there is still uncertainty about (a) selecting from among various QEDs and (b) developing strategies to strengthen the internal and external validity of QEDs. We focus here on commonly used QEDs (pre-post designs with nonequivalent control groups, interrupted time series, and stepped-wedge designs) and discuss several variants that maximize internal and external validity at the design, execution and implementation, and analysis stages.

  16. 24 CFR 598.425 - Validation of designation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 24 Housing and Urban Development 3 2011-04-01 2010-04-01 true Validation of designation. 598.425 Section 598.425 Housing and Urban Development Regulations Relating to Housing and Urban Development...-Designation Requirements § 598.425 Validation of designation. (a) On the basis of the periodic progress...

  17. 24 CFR 598.425 - Validation of designation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 24 Housing and Urban Development 3 2012-04-01 2012-04-01 false Validation of designation. 598.425 Section 598.425 Housing and Urban Development Regulations Relating to Housing and Urban Development...-Designation Requirements § 598.425 Validation of designation. (a) On the basis of the periodic progress...

  18. 7 CFR 25.404 - Validation of designation.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 7 Agriculture 1 2011-01-01 2011-01-01 false Validation of designation. 25.404 Section 25.404 Agriculture Office of the Secretary of Agriculture RURAL EMPOWERMENT ZONES AND ENTERPRISE COMMUNITIES Post-Designation Requirements § 25.404 Validation of designation. (a) Maintaining the principles of the program...

  19. 24 CFR 598.425 - Validation of designation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 24 Housing and Urban Development 3 2014-04-01 2013-04-01 true Validation of designation. 598.425 Section 598.425 Housing and Urban Development Regulations Relating to Housing and Urban Development...-Designation Requirements § 598.425 Validation of designation. (a) On the basis of the periodic progress...

  20. 7 CFR 25.404 - Validation of designation.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... 7 Agriculture 1 2014-01-01 2014-01-01 false Validation of designation. 25.404 Section 25.404 Agriculture Office of the Secretary of Agriculture RURAL EMPOWERMENT ZONES AND ENTERPRISE COMMUNITIES Post-Designation Requirements § 25.404 Validation of designation. (a) Maintaining the principles of the program...

  1. 24 CFR 598.425 - Validation of designation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 24 Housing and Urban Development 3 2013-04-01 2013-04-01 false Validation of designation. 598.425 Section 598.425 Housing and Urban Development Regulations Relating to Housing and Urban Development...-Designation Requirements § 598.425 Validation of designation. (a) On the basis of the periodic progress...

  2. 7 CFR 25.404 - Validation of designation.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... 7 Agriculture 1 2012-01-01 2012-01-01 false Validation of designation. 25.404 Section 25.404 Agriculture Office of the Secretary of Agriculture RURAL EMPOWERMENT ZONES AND ENTERPRISE COMMUNITIES Post-Designation Requirements § 25.404 Validation of designation. (a) Maintaining the principles of the program...

  3. 7 CFR 25.404 - Validation of designation.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... 7 Agriculture 1 2013-01-01 2013-01-01 false Validation of designation. 25.404 Section 25.404 Agriculture Office of the Secretary of Agriculture RURAL EMPOWERMENT ZONES AND ENTERPRISE COMMUNITIES Post-Designation Requirements § 25.404 Validation of designation. (a) Maintaining the principles of the program...

  4. Quantification of construction waste prevented by BIM-based design validation: Case studies in South Korea.

    PubMed

    Won, Jongsung; Cheng, Jack C P; Lee, Ghang

    2016-03-01

    Waste generated in construction and demolition processes comprised around 50% of the solid waste in South Korea in 2013. Many cases show that design validation based on building information modeling (BIM) is an effective means to reduce the amount of construction waste, since construction waste is mainly generated by improper design and by unexpected changes in the design and construction phases. However, the amount of construction waste that could be avoided by adopting BIM-based design validation has been unknown. This paper aims to estimate the amount of construction waste prevented by a BIM-based design validation process, based on the amount of construction waste that might be generated due to design errors. Two project cases in South Korea were studied, with 381 and 136 design errors detected, respectively, during BIM-based design validation. Each design error was categorized according to its cause and the likelihood of detection before construction. The case studies show that BIM-based design validation could prevent 4.3-15.2% of the construction waste that might have been generated without using BIM.

  5. Issues in developing valid assessments of speech pathology students' performance in the workplace.

    PubMed

    McAllister, Sue; Lincoln, Michelle; Ferguson, Alison; McAllister, Lindy

    2010-01-01

    Workplace-based learning is a critical component of professional preparation in speech pathology. A validated assessment of this learning is seen as 'the gold standard', but it is difficult to develop because of design and validation issues. These issues include the role and nature of judgement in assessment, challenges in measuring quality, and the relationship between assessment and learning. Valid assessment of workplace-based performance needs to capture the development of competence over time and account for both occupation-specific and generic competencies. This paper reviews important conceptual issues in the design of valid and reliable workplace-based assessments of competence, including assessment content, process, impact on learning, measurement issues, and validation strategies. It then shares what has been learned about quality assessment and validation of a workplace-based performance assessment using competency-based ratings. The outcomes of a four-year national development and validation of an assessment tool are described. A literature review of issues in conceptualizing, designing, and validating workplace-based assessments was conducted. Key factors to consider in the design of a new tool were identified and built into the cycle of design, trialling, and data analysis in the validation stages of the development process. This paper provides an accessible overview of factors to consider in the design and validation of workplace-based assessment tools. It presents strategies used in the development and national validation of COMPASS, a tool used in every speech pathology programme in Australia, New Zealand, and Singapore. The paper also describes Rasch analysis, a model-based statistical approach that is useful for establishing the validity and reliability of assessment tools. Through careful attention to conceptual and design issues in the development and trialling of workplace-based assessments, it has been possible to develop the world's first valid and reliable national assessment tool for the assessment of performance in speech pathology.
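    The Rasch analysis mentioned in this record models, in its simplest form, the probability of a correct (or endorsed) response as a logistic function of the difference between person ability and item difficulty; a minimal sketch (theta and b follow the usual notation, the function name is illustrative):

```python
import math

def rasch_probability(theta, b):
    # Dichotomous Rasch model:
    # P(X = 1) = exp(theta - b) / (1 + exp(theta - b)),
    # where theta is person ability and b is item difficulty.
    return 1.0 / (1.0 + math.exp(-(theta - b)))

# When ability equals difficulty, the probability is exactly 0.5;
# higher ability relative to difficulty pushes it toward 1.
p_equal = rasch_probability(0.0, 0.0)   # 0.5
p_abler = rasch_probability(2.0, 0.0)   # > 0.5
```

    Fitting the model to rating data (estimating theta and b jointly) is what dedicated Rasch software does; this sketch only shows the model's functional form.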

  6. Experimental validation of structural optimization methods

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.

    1992-01-01

    The topic of validating structural optimization methods by use of experimental results is addressed. The need to validate the methods, as a way of effecting greater and accelerated acceptance of formal optimization methods by practicing engineering designers, is described. The range of validation strategies is defined, which includes comparison of optimization results with more traditional design approaches, establishing the accuracy of the analyses used, and finally experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described, including experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum-weight design of a beam with frequency constraints; minimization of the vibration response of a helicopter rotor blade; minimum-weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low-vibration helicopter rotor.

  7. Design and validation of instruments to measure knowledge.

    PubMed

    Elliott, T E; Regal, R R; Elliott, B A; Renier, C M

    2001-01-01

    Measuring health care providers' learning after they have participated in educational interventions that use experimental designs requires valid, reliable, and practical instruments. A literature review was conducted. In addition, experience gained from designing and validating instruments for measuring the effect of an educational intervention informed this process. The eight main steps for designing, validating, and testing the reliability of instruments for measuring learning outcomes are presented. The key considerations and rationale for this process are discussed. Methods for critiquing and adapting existing instruments and creating new ones are offered. This study may help other investigators develop valid, reliable, and practical instruments for measuring the outcomes of educational activities.

  8. The Validity of the Comparative Interrupted Time Series Design for Evaluating the Effect of School-Level Interventions.

    PubMed

    Jacob, Robin; Somers, Marie-Andree; Zhu, Pei; Bloom, Howard

    2016-06-01

    In this article, we examine whether a well-executed comparative interrupted time series (CITS) design can produce valid inferences about the effectiveness of a school-level intervention. This article also explores the trade-off between bias reduction and precision loss across different methods of selecting comparison groups for the CITS design and assesses whether choosing matched comparison schools based only on preintervention test scores is sufficient to produce internally valid impact estimates. We conduct a validation study of the CITS design based on the federal Reading First program as implemented in one state using results from a regression discontinuity design as a causal benchmark. Our results contribute to the growing base of evidence regarding the validity of nonexperimental designs. We demonstrate that the CITS design can, in our example, produce internally valid estimates of program impacts when multiple years of preintervention outcome data (test scores in the present case) are available and when a set of reasonable criteria are used to select comparison organizations (schools in the present case).

  9. Designing and Validating a Measure of Teacher Knowledge of Universal Design for Assessment (UDA)

    ERIC Educational Resources Information Center

    Jamgochian, Elisa Megan

    2010-01-01

    The primary purpose of this study was to design and validate a measure of teacher knowledge of Universal Design for Assessment (TK-UDA). Guided by a validity framework, a number of inferences, assumptions, and evidences supported this investigation. By addressing a series of research questions, evidence was garnered for the use of the measure to…

  10. Design and validation of general biology learning program based on scientific inquiry skills

    NASA Astrophysics Data System (ADS)

    Cahyani, R.; Mardiana, D.; Noviantoro, N.

    2018-03-01

    Scientific inquiry is highly recommended for teaching science. The reality in schools and colleges is that many educators still have not implemented inquiry learning because of their lack of understanding. This study aims to 1) analyze students' difficulties in learning General Biology, 2) design a General Biology learning program based on multimedia-assisted scientific inquiry learning, and 3) validate the proposed design. The method used was Research and Development. The subjects of the study were 27 pre-service students of general elementary schools/Islamic elementary schools. The workflow of program design includes identifying learning difficulties in General Biology, designing course programs, and designing instruments and assessment rubrics. The program design is made for four lecture sessions. Validation of all learning tools was performed by expert judges. The results showed that: 1) there are some problems identified in General Biology lectures; 2) the designed products include learning programs, multimedia characteristics, worksheet characteristics, and scientific attitudes; and 3) expert validation shows that all program designs are valid and can be used with minor revisions.

  11. Development and Validation of Targeted Next-Generation Sequencing Panels for Detection of Germline Variants in Inherited Diseases.

    PubMed

    Santani, Avni; Murrell, Jill; Funke, Birgit; Yu, Zhenming; Hegde, Madhuri; Mao, Rong; Ferreira-Gonzalez, Andrea; Voelkerding, Karl V; Weck, Karen E

    2017-06-01

    - The number of targeted next-generation sequencing (NGS) panels for genetic diseases offered by clinical laboratories is rapidly increasing. Before an NGS-based test is implemented in a clinical laboratory, appropriate validation studies are needed to determine the performance characteristics of the test. - To provide examples of assay design and validation of targeted NGS gene panels for the detection of germline variants associated with inherited disorders. - The approaches used by 2 clinical laboratories for the development and validation of targeted NGS gene panels are described. Important design and validation considerations are examined. - Clinical laboratories must validate performance specifications of each test prior to implementation. Test design specifications and validation data are provided, outlining important steps in validation of targeted NGS panels by clinical diagnostic laboratories.

  12. An Engineering Method of Civil Jet Requirements Validation Based on Requirements Project Principle

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Gao, Dan; Mao, Xuming

    2018-03-01

    A method of requirements validation is developed and defined to meet the needs of civil jet requirements validation in product development. Based on the requirements project principle, this method does not affect the conventional design elements and can effectively connect the requirements with the design. It realizes the modern civil jet development concept that “requirement is the origin, design is the basis”. So far, the method has been successfully applied in civil jet aircraft development in China. Taking takeoff field length as an example, the validation process and the validation method for the requirements are introduced in detail, in the hope of providing this experience to other civil jet product designs.

  13. Regression Discontinuity and Beyond: Options for Studying External Validity in an Internally Valid Design

    ERIC Educational Resources Information Center

    Wing, Coady; Bello-Gomez, Ricardo A.

    2018-01-01

    Treatment effect estimates from a "regression discontinuity design" (RDD) have high internal validity. However, the arguments that support the design apply to a subpopulation that is narrower and usually different from the population of substantive interest in evaluation research. The disconnect between RDD population and the…

  14. Students' Initial Knowledge State and Test Design: Towards a Valid and Reliable Test Instrument

    ERIC Educational Resources Information Center

    CoPo, Antonio Roland I.

    2015-01-01

    Designing a good test instrument involves specifications, test construction, validation, try-out, analysis and revision. The initial knowledge state of forty (40) tertiary students enrolled in a Business Statistics course was determined, and the same test instrument underwent validation. The designed test instrument did not only reveal the baseline…

  15. A Design to Improve Internal Validity of Assessments of Teaching Demonstrations

    ERIC Educational Resources Information Center

    Bartsch, Robert A.; Engelhardt Bittner, Wendy M.; Moreno, Jesse E., Jr.

    2008-01-01

    Internal validity is important in assessing teaching demonstrations both for one's knowledge and for quality assessment demanded by outside sources. We describe a method to improve the internal validity of assessments of teaching demonstrations: a 1-group pretest-posttest design with alternative forms. This design is often more practical and…

  16. Establishing the Validity and Reliability of Course Evaluation Questionnaires

    ERIC Educational Resources Information Center

    Kember, David; Leung, Doris Y. P.

    2008-01-01

    This article uses the case of designing a new course questionnaire to discuss the issues of validity, reliability and diagnostic power in good questionnaire design. Validity is often not well addressed in course questionnaire design as there are no straightforward tests that can be applied to an individual instrument. The authors propose the…

  17. Validation of a Low-Thrust Mission Design Tool Using Operational Navigation Software

    NASA Technical Reports Server (NTRS)

    Englander, Jacob A.; Knittel, Jeremy M.; Williams, Ken; Stanbridge, Dale; Ellison, Donald H.

    2017-01-01

    Design of flight trajectories for missions employing solar electric propulsion requires a suitably high-fidelity design tool. In this work, the Evolutionary Mission Trajectory Generator (EMTG) is presented as a medium-high fidelity design tool that is suitable for mission proposals. EMTG is validated against the high-heritage deep-space navigation tool MIRAGE, demonstrating both the accuracy of EMTG's model and an operational mission design and navigation procedure using both tools. The validation is performed using a benchmark mission to the Jupiter Trojans.

  18. Creating and validating GIS measures of urban design for health research.

    PubMed

    Purciel, Marnie; Neckerman, Kathryn M; Lovasi, Gina S; Quinn, James W; Weiss, Christopher; Bader, Michael D M; Ewing, Reid; Rundle, Andrew

    2009-12-01

    Studies relating urban design to health have been impeded by the unfeasibility of conducting field observations across large areas and the lack of validated objective measures of urban design. This study describes measures for five dimensions of urban design - imageability, enclosure, human scale, transparency, and complexity - created using public geographic information systems (GIS) data from the US Census and city and state government. GIS measures were validated for a sample of 588 New York City block faces using a well-documented field observation protocol. Correlations between GIS and observed measures ranged from 0.28 to 0.89. Results show valid urban design measures can be constructed from digital sources.
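    The validation step described above (correlating GIS-derived scores against field-observed scores for each urban-design dimension) can be sketched as a Pearson correlation; the block-face scores below are hypothetical illustrations, not the study's data:

    ```python
    import math

    def pearson_r(x, y):
        """Pearson correlation between two equal-length lists of scores."""
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        sx = math.sqrt(sum((a - mx) ** 2 for a in x))
        sy = math.sqrt(sum((b - my) ** 2 for b in y))
        return cov / (sx * sy)

    # Hypothetical scores for one dimension (e.g., enclosure) on six block faces:
    gis_scores     = [2.1, 3.4, 1.8, 4.0, 2.9, 3.7]
    field_observed = [2.0, 3.1, 2.2, 3.8, 2.5, 3.9]
    r = pearson_r(gis_scores, field_observed)
    ```

    A correlation in the upper part of the study's reported 0.28-0.89 range would indicate that the GIS measure tracks the field observations closely for that dimension.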

  19. Creating and validating GIS measures of urban design for health research

    PubMed Central

    Purciel, Marnie; Neckerman, Kathryn M.; Lovasi, Gina S.; Quinn, James W.; Weiss, Christopher; Bader, Michael D.M.; Ewing, Reid; Rundle, Andrew

    2012-01-01

    Studies relating urban design to health have been impeded by the unfeasibility of conducting field observations across large areas and the lack of validated objective measures of urban design. This study describes measures for five dimensions of urban design – imageability, enclosure, human scale, transparency, and complexity – created using public geographic information systems (GIS) data from the US Census and city and state government. GIS measures were validated for a sample of 588 New York City block faces using a well-documented field observation protocol. Correlations between GIS and observed measures ranged from 0.28 to 0.89. Results show valid urban design measures can be constructed from digital sources. PMID:22956856

  20. Design of capability measurement instruments pedagogic content knowledge (PCK) for prospective mathematics teachers

    NASA Astrophysics Data System (ADS)

    Aminah, N.; Wahyuni, I.

    2018-05-01

    The purpose of this study is to describe the process of designing a valid and practical instrument for measuring the Pedagogical Content Knowledge (PCK) of prospective mathematics teachers. The design of the instrument followed a modified version of Plomp's development steps: (1) an initial assessment stage; (2) a design stage, in which the researchers drew up the blueprint of the PCK instrument; (3) a realization stage, in which the PCK measurement instrument was constructed; and (4) a test, evaluation, and revision stage, in which experts validated the instrument. The results show that the PCK measurement instrument is valid, as indicated by the expert validators' assessments, and that teachers and lecturers, assessing the design as users, strongly agreed that it can be used.

  1. Validation of the procedures. [integrated multidisciplinary optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Mantay, Wayne R.

    1989-01-01

    Validation strategies are described for procedures aimed at improving the rotor blade design process through a multidisciplinary optimization approach. Validation of the basic rotor environment prediction tools and the overall rotor design are discussed.

  2. Performance Ratings: Designs for Evaluating Their Validity and Accuracy.

    DTIC Science & Technology

    1986-07-01

    …ratees with substantial validity and with little bias due to the method for rating. Convergent validity and discriminant validity account for approximately… The expanded research design suggests that purpose for the ratings has little influence on the multitrait-multimethod properties of the ratings… Convergent and discriminant validity again account for substantial differences in the ratings of performance. Little method bias is present; both methods of…

  3. The Role of Structural Models in the Solar Sail Flight Validation Process

    NASA Technical Reports Server (NTRS)

    Johnston, John D.

    2004-01-01

    NASA is currently soliciting proposals via the New Millennium Program ST-9 opportunity for a potential Solar Sail Flight Validation (SSFV) experiment to develop and operate in space a deployable solar sail that can be steered and provides measurable acceleration. The approach planned for this experiment is to test and validate models and processes for solar sail design, fabrication, deployment, and flight. These models and processes would then be used to design, fabricate, and operate scaleable solar sails for future space science missions. There are six validation objectives planned for the ST9 SSFV experiment: 1) Validate solar sail design tools and fabrication methods; 2) Validate controlled deployment; 3) Validate in space structural characteristics (focus of poster); 4) Validate solar sail attitude control; 5) Validate solar sail thrust performance; 6) Characterize the sail's electromagnetic interaction with the space environment. This poster presents a top-level assessment of the role of structural models in the validation process for in-space structural characteristics.

  4. Identification and Validation of ESP Teacher Competencies: A Research Design

    ERIC Educational Resources Information Center

    Venkatraman, G.; Prema, P.

    2013-01-01

    The paper presents the research design used for identifying and validating a set of competencies required of ESP (English for Specific Purposes) teachers. The identification of the competencies and the three-stage validation process are also discussed. The observation of classes of ESP teachers for field-testing the validated competencies and…

  5. The Validity and Precision of the Comparative Interrupted Time-Series Design: Three Within-Study Comparisons

    ERIC Educational Resources Information Center

    St. Clair, Travis; Hallberg, Kelly; Cook, Thomas D.

    2016-01-01

    We explore the conditions under which short, comparative interrupted time-series (CITS) designs represent valid alternatives to randomized experiments in educational evaluations. To do so, we conduct three within-study comparisons, each of which uses a unique data set to test the validity of the CITS design by comparing its causal estimates to…

  6. Seaworthy Quantum Key Distribution Design and Validation (SEAKEY)

    DTIC Science & Technology

    2015-05-27

    Address: 10 Moulton Street, Cambridge, MA 02138. Title of the Project: Seaworthy Quantum Key Distribution Design and Validation (SEAKEY). …Technologies. Kathryn Carson, Program Manager, Quantum Information Processing.

  7. Automatic control system generation for robot design validation

    NASA Technical Reports Server (NTRS)

    Bacon, James A. (Inventor); English, James D. (Inventor)

    2012-01-01

    The specification and drawings present a new method, system, software product, and apparatus for generating a robotic validation system for a robot design. The robotic validation system for the robot design of a robotic system is generated automatically by converting the robot design into a generic robotic description in a predetermined format, then generating a control system from that generic robotic description, and finally updating the robot design parameters of the robotic system with an analysis tool that uses both the generic robot description and the control system.

  8. Resource Conservation and Recovery Act, Part B Permit Application [for the Waste Isolation Pilot Plant (WIPP)]. Volume 5, Chapter D, Appendix D1 (conclusion), Revision 3

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Cook, Neville G.W.; Heuze, Francois E.; Miller, Hamish D.S.

    1993-03-01

    The reference design for the underground facilities at the Waste Isolation Pilot Plant was developed using the best criteria available at the initiation of the detailed design effort. These design criteria are contained in the US Department of Energy document titled Design Criteria, Waste Isolation Pilot Plant (WIPP), Revised Mission Concept-IIA (RMC-IIA), Rev. 4, dated February 1984. The validation process described in the Design Validation Final Report has resulted in validation of the reference design of the underground openings based on these criteria. Future changes may necessitate modification of the Design Criteria document and/or the reference design. Validation of the reference design as presented in this report permits the consideration of future design or design criteria modifications necessitated by these changes or by experience gained at the WIPP. Any future modifications to the design criteria and/or the reference design will be governed by a DOE Standard Operating Procedure (SOP) covering underground design changes. This procedure will explain the process to be followed in describing, evaluating, and approving the change.

  9. LATUX: An Iterative Workflow for Designing, Validating, and Deploying Learning Analytics Visualizations

    ERIC Educational Resources Information Center

    Martinez-Maldonado, Roberto; Pardo, Abelardo; Mirriahi, Negin; Yacef, Kalina; Kay, Judy; Clayphan, Andrew

    2015-01-01

    Designing, validating, and deploying learning analytics tools for instructors or students is a challenge that requires techniques and methods from different disciplines, such as software engineering, human-computer interaction, computer graphics, educational design, and psychology. Whilst each has established its own design methodologies, we now…

  10. Bayesian cross-entropy methodology for optimal design of validation experiments

    NASA Astrophysics Data System (ADS)

    Jiang, X.; Mahadevan, S.

    2006-07-01

    An important concern in the design of validation experiments is how to incorporate the mathematical model in the design in order to allow conclusive comparisons of model prediction with experimental output in model assessment. The classical experimental design methods are more suitable for phenomena discovery and may result in a subjective, expensive, time-consuming and ineffective design that may adversely impact these comparisons. In this paper, an integrated Bayesian cross-entropy methodology is proposed to perform the optimal design of validation experiments incorporating the computational model. The expected cross entropy, an information-theoretic distance between the distributions of model prediction and experimental observation, is defined as a utility function to measure the similarity of two distributions. A simulated annealing algorithm is used to find optimal values of input variables through minimizing or maximizing the expected cross entropy. The measured data after testing with the optimum input values are used to update the distribution of the experimental output using Bayes theorem. The procedure is repeated to adaptively design the required number of experiments for model assessment, each time ensuring that the experiment provides effective comparison for validation. The methodology is illustrated for the optimal design of validation experiments for a three-leg bolted joint structure and a composite helicopter rotor hub component.
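    The search step in the methodology above can be sketched as follows. This is a toy stand-in: the quadratic utility, step size, and cooling schedule are hypothetical, whereas the paper's actual utility is the expected cross entropy between the model-prediction and experimental-output distributions:

    ```python
    import math
    import random

    def simulated_annealing(utility, x0, lo, hi, steps=5000, t0=1.0, seed=0):
        """Minimize `utility` over a scalar design variable by simulated annealing.

        Downhill moves are always accepted; uphill moves are accepted with
        probability exp(-delta/t), where t follows a simple 1/k cooling schedule.
        """
        rng = random.Random(seed)
        x = best = x0
        fx = fbest = utility(x0)
        for k in range(1, steps + 1):
            t = t0 / k                                   # cooling schedule
            cand = min(hi, max(lo, x + rng.gauss(0, 0.1)))
            fc = utility(cand)
            if fc < fx or rng.random() < math.exp(-(fc - fx) / t):
                x, fx = cand, fc
                if fx < fbest:
                    best, fbest = x, fx
        return best, fbest

    # Hypothetical utility with its minimum at x = 2 (stands in for the
    # expected cross entropy as a function of the validation-test input):
    best_x, best_u = simulated_annealing(lambda x: (x - 2.0) ** 2, x0=0.0, lo=-5, hi=5)
    ```

    In the paper's adaptive loop, each optimum found this way selects the next validation experiment, after which the output distribution is updated by Bayes' theorem and the search repeats.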

  11. The Validity and Precision of the Comparative Interrupted Time Series Design and the Difference-in-Difference Design in Educational Evaluation

    ERIC Educational Resources Information Center

    Somers, Marie-Andrée; Zhu, Pei; Jacob, Robin; Bloom, Howard

    2013-01-01

    In this paper, we examine the validity and precision of two nonexperimental study designs (NXDs) that can be used in educational evaluation: the comparative interrupted time series (CITS) design and the difference-in-difference (DD) design. In a CITS design, program impacts are evaluated by looking at whether the treatment group deviates from its…

  12. Fault-tolerant clock synchronization validation methodology. [in computer systems

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.

    1987-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.
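    The stochastic-parameter step above — estimating the probability that the clock read error exceeds its assumed upper bound from measured system data — can be sketched as a simple empirical tail estimate. The measurements and bound below are hypothetical, not SIFT data:

    ```python
    def exceedance_probability(samples, bound):
        """Empirical estimate of P(read error > bound) from measured samples."""
        return sum(1 for e in samples if e > bound) / len(samples)

    # Hypothetical clock read-error measurements (microseconds):
    measured = [1.2, 0.8, 1.5, 2.1, 0.9, 1.1, 1.7, 0.6, 1.3, 2.4]
    p = exceedance_probability(measured, bound=2.0)   # 2 of 10 samples exceed 2.0
    ```

    The resulting probability is the quantity that feeds into the system's overall reliability analysis alongside the deterministic skew parameters.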

  13. Accounting for Test Variability through Sizing Local Domains in Sequential Design Optimization with Concurrent Calibration-Based Model Validation

    DTIC Science & Technology

    2013-08-01

    …in Sequential Design Optimization with Concurrent Calibration-Based Model Validation. Dorin Drignei, Mathematics and Statistics Department… Authors: Dorin Drignei; Zissimos Mourelatos; Vijitashwa Pandey.

  14. Designing the Nuclear Energy Attitude Scale.

    ERIC Educational Resources Information Center

    Calhoun, Lawrence; And Others

    1988-01-01

    Presents a refined method for designing a valid and reliable Likert-type scale to test attitudes toward the generation of electricity from nuclear energy. Discusses various tests of validity that were used on the nuclear energy scale. Reports results of administration and concludes that the test is both reliable and valid. (CW)

  15. 24 CFR 597.402 - Validation of designation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... URBAN DEVELOPMENT COMMUNITY FACILITIES URBAN EMPOWERMENT ZONES AND ENTERPRISE COMMUNITIES: ROUND ONE... eligibility for and the validity of the designation of any Empowerment Zone or Enterprise Community. Determinations of whether any designated Empowerment Zone or Enterprise Community remains in good standing shall...

  16. Methodology Series Module 9: Designing Questionnaires and Clinical Record Forms - Part II.

    PubMed

    Setia, Maninder Singh

    2017-01-01

    This article is a continuation of the previous module on designing questionnaires and clinical record form in which we have discussed some basic points about designing the questionnaire and clinical record forms. In this section, we will discuss the reliability and validity of questionnaires. The different types of validity are face validity, content validity, criterion validity, and construct validity. The different types of reliability are test-retest reliability, inter-rater reliability, and intra-rater reliability. Some of these parameters are assessed by subject area experts. However, statistical tests should be used for evaluation of other parameters. Once the questionnaire has been designed, the researcher should pilot test the questionnaire. The items in the questionnaire should be changed based on the feedback from the pilot study participants and the researcher's experience. After the basic structure of the questionnaire has been finalized, the researcher should assess the validity and reliability of the questionnaire or the scale. If an existing standard questionnaire is translated in the local language, the researcher should assess the reliability and validity of the translated questionnaire, and these values should be presented in the manuscript. The decision to use a self- or interviewer-administered, paper- or computer-based questionnaire depends on the nature of the questions, literacy levels of the target population, and resources.

  17. Methodology Series Module 9: Designing Questionnaires and Clinical Record Forms – Part II

    PubMed Central

    Setia, Maninder Singh

    2017-01-01

    This article is a continuation of the previous module on designing questionnaires and clinical record form in which we have discussed some basic points about designing the questionnaire and clinical record forms. In this section, we will discuss the reliability and validity of questionnaires. The different types of validity are face validity, content validity, criterion validity, and construct validity. The different types of reliability are test-retest reliability, inter-rater reliability, and intra-rater reliability. Some of these parameters are assessed by subject area experts. However, statistical tests should be used for evaluation of other parameters. Once the questionnaire has been designed, the researcher should pilot test the questionnaire. The items in the questionnaire should be changed based on the feedback from the pilot study participants and the researcher's experience. After the basic structure of the questionnaire has been finalized, the researcher should assess the validity and reliability of the questionnaire or the scale. If an existing standard questionnaire is translated in the local language, the researcher should assess the reliability and validity of the translated questionnaire, and these values should be presented in the manuscript. The decision to use a self- or interviewer-administered, paper- or computer-based questionnaire depends on the nature of the questions, literacy levels of the target population, and resources. PMID:28584367

  18. 78 FR 17680 - Information Collection Request; Chemical Facility Anti-Terrorism Standards Personnel Surety Program

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-03-22

    ... visitors with access to restricted areas or critical assets, including, (i) Measures designed to verify and validate identity; (ii) Measures designed to check criminal history; (iii) Measures designed to verify and validate legal authorization to work; and (iv) Measures designed to identify people with terrorist ties...

  19. Development and validation of a building design waste reduction model.

    PubMed

    Llatas, C; Osmani, M

    2016-10-01

    Reduction in construction waste is a pressing need in many countries. The design of building elements is considered a pivotal process to achieve waste reduction at source, which enables an informed prediction of their wastage reduction levels. However the lack of quantitative methods linking design strategies to waste reduction hinders designing out waste practice in building projects. Therefore, this paper addresses this knowledge gap through the design and validation of a Building Design Waste Reduction Strategies (Waste ReSt) model that aims to investigate the relationships between design variables and their impact on onsite waste reduction. The Waste ReSt model was validated in a real-world case study involving 20 residential buildings in Spain. The validation process comprises three stages. Firstly, design waste causes were analyzed. Secondly, design strategies were applied leading to several alternative low waste building elements. Finally, their potential source reduction levels were quantified and discussed within the context of the literature. The Waste ReSt model could serve as an instrumental tool to simulate designing out strategies in building projects. The knowledge provided by the model could help project stakeholders to better understand the correlation between the design process and waste sources and subsequently implement design practices for low-waste buildings. Copyright © 2016 Elsevier Ltd. All rights reserved.

  20. Comparison Between Predicted and Experimentally Measured Flow Fields at the Exit of the SSME HPFTP Impeller

    NASA Technical Reports Server (NTRS)

    Bache, George

    1993-01-01

    Validation of CFD codes is a critical first step in the process of developing CFD design capability. The MSFC Pump Technology Team has recognized the importance of validation and has thus funded several experimental programs designed to obtain CFD quality validation data. The first data set to become available is for the SSME High Pressure Fuel Turbopump Impeller. LDV Data was taken at the impeller inlet (to obtain a reliable inlet boundary condition) and three radial positions at the impeller discharge. Our CFD code, TASCflow, is used within the Propulsion and Commercial Pump industry as a tool for pump design. The objective of this work, therefore, is to further validate TASCflow for application in pump design. TASCflow was used to predict flow at the impeller discharge for flowrates of 80, 100 and 115 percent of design flow. Comparison to data has been made with encouraging results.

  1. Psychometric testing on the NLN Student Satisfaction and Self-Confidence in Learning, Simulation Design Scale, and Educational Practices Questionnaire using a sample of pre-licensure novice nurses.

    PubMed

    Franklin, Ashley E; Burns, Paulette; Lee, Christopher S

    2014-10-01

    In 2006, the National League for Nursing published three measures related to novice nurses' beliefs about self-confidence, scenario design, and educational practices associated with simulation. Despite the extensive use of these measures, little is known about their reliability and validity. The psychometric properties of the Student Satisfaction and Self-Confidence in Learning Scale, Simulation Design Scale, and Educational Practices Questionnaire were studied among a sample of 2200 surveys completed by novice nurses from a liberal arts university in the southern United States. Psychometric tests included item analysis, confirmatory and exploratory factor analyses in randomly-split subsamples, concordant and discordant validity, and internal consistency. All three measures have sufficient reliability and validity to be used in education research. There is room for improvement in content validity with the Student Satisfaction and Self-Confidence in Learning and Simulation Design Scale. This work provides robust evidence to ensure that judgments made about self-confidence after simulation, simulation design and educational practices are valid and reliable. Copyright © 2014 Elsevier Ltd. All rights reserved.
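    One of the internal-consistency checks reported above can be illustrated with Cronbach's alpha, a standard reliability statistic for multi-item scales. The item scores below are hypothetical, not the NLN survey data:

    ```python
    def cronbach_alpha(items):
        """Cronbach's alpha for internal consistency.

        `items` is a list of item-score columns: one list per scale item,
        one entry per respondent.
        """
        k = len(items)
        n = len(items[0])

        def var(xs):
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

        item_var_sum = sum(var(col) for col in items)
        totals = [sum(col[i] for col in items) for i in range(n)]
        return (k / (k - 1)) * (1 - item_var_sum / var(totals))

    # Hypothetical 3-item, 5-respondent Likert data:
    scores = [[4, 5, 3, 4, 2],
              [4, 4, 3, 5, 2],
              [5, 4, 2, 4, 3]]
    alpha = cronbach_alpha(scores)
    ```

    Values around 0.7 or above are conventionally read as sufficient internal consistency for research use.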

  2. RF Systems in Space. Volume I. Space Antennas Frequency (SARF) Simulation.

    DTIC Science & Technology

    1983-04-01

    Lens SBR designs were investigated. The survivability of an SBR system was analyzed. The design of ground-based SBR validation experiments for large-aperture SBR concepts was investigated. SBR designs were investigated for ground target detection. …4. To analyze the survivability of space radar. 5. To design ground-based validation…

  3. Designing Interactive Electronic Module in Chemistry Lessons

    NASA Astrophysics Data System (ADS)

    Irwansyah, F. S.; Lubab, I.; Farida, I.; Ramdhani, M. A.

    2017-09-01

    This research aims to design an electronic module (e-module) oriented to the development of students' chemical literacy for the solution colligative properties material. The research proceeded through several stages: concept analysis, discourse analysis, storyboard design, design development, product packaging, validation, and a feasibility test. Overall, it comprised three main stages: Define (a preliminary study), Design (designing the e-module), and Develop (validation and a model trial). The concept presentation and visualization used in this e-module are oriented to chemical literacy skills, and the presentation order covers the scientific context, process, content, and attitude aspects. Chemistry and multimedia experts validated the initial quality of the product and gave feedback for its improvement. In the feasibility test, the content presentation and the display were judged valid and feasible to use, with scores of 85.77% and 87.94%, respectively. These values indicate that this e-module, oriented to students' chemical literacy skills for the solution colligative properties material, is feasible to use.

  4. A NASA Perspective and Validation and Testing of Design Hardening for the Natural Space Radiation Environment (GOMAC Tech 03)

    NASA Technical Reports Server (NTRS)

    Day, John H. (Technical Monitor); LaBel, Kenneth A.; Howard, James W.; Carts, Martin A.; Seidleck, Christine

    2003-01-01

    With the dearth of dedicated radiation-hardened foundries, new and novel techniques are being developed for hardening designs using non-dedicated foundry services. In this paper, we discuss the implications of validating these methods for the natural space radiation environment issues: total ionizing dose (TID) and single event effects (SEE). Topics of discussion include: the types of tests that are required; design coverage (i.e., design libraries: do they need validating for each application?); and a new task within NASA to compare existing design hardening approaches. This latter task is a new effort in FY03 that uses an 8051 microcontroller core from multiple design hardening developers as a test vehicle to evaluate each mitigative technique.

  5. Experimental design methodologies in the optimization of chiral CE or CEC separations: an overview.

    PubMed

    Dejaegher, Bieke; Mangelings, Debby; Vander Heyden, Yvan

    2013-01-01

    In this chapter, an overview of experimental designs used to develop chiral capillary electrophoresis (CE) and capillary electrochromatographic (CEC) methods is presented. Method development is generally divided into technique selection, method optimization, and method validation. The method optimization part often comprises two phases: a screening phase and an optimization phase. In method validation, the method is evaluated for its fitness for purpose; one validation item that also applies experimental designs is robustness testing. Screening designs are applied in the screening phase and in robustness testing, while response surface designs are used during the optimization phase. The different design types and their application steps are discussed in this chapter and illustrated with examples of chiral CE and CEC methods.
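    The screening designs mentioned above start from two-level factor settings. A minimal sketch of enumerating a two-level full factorial (of which fractional factorial screening designs select a subset of runs) is shown below; the factor names are hypothetical examples for a chiral CE method, not taken from the chapter:

    ```python
    from itertools import product

    def two_level_full_factorial(factors):
        """Enumerate a two-level full factorial design: every combination of
        low (-1) and high (+1) settings for the named factors."""
        return [dict(zip(factors, levels))
                for levels in product((-1, +1), repeat=len(factors))]

    # Hypothetical method factors: buffer pH, chiral-selector concentration,
    # and capillary temperature.
    design = two_level_full_factorial(["pH", "selector_conc", "temperature"])
    # 2**3 = 8 runs, from all-low to all-high
    ```

    Each dictionary is one experimental run; a robustness test would center these levels narrowly around the nominal method conditions instead.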

  6. Threats to the Internal Validity of Experimental and Quasi-Experimental Research in Healthcare.

    PubMed

    Flannelly, Kevin J; Flannelly, Laura T; Jankowski, Katherine R B

    2018-01-01

    The article defines, describes, and discusses the seven threats to the internal validity of experiments discussed by Donald T. Campbell in his classic 1957 article: history, maturation, testing, instrument decay, statistical regression, selection, and mortality. These concepts are said to be threats to the internal validity of experiments because they pose alternate explanations for the apparent causal relationship between the independent variable and dependent variable of an experiment if they are not adequately controlled. A series of simple diagrams illustrate three pre-experimental designs and three true experimental designs discussed by Campbell in 1957 and several quasi-experimental designs described in his book written with Julian C. Stanley in 1966. The current article explains why each design controls for or fails to control for these seven threats to internal validity.

  7. Fostering creativity in product and service development: validation in the domain of information technology.

    PubMed

    Zeng, Liang; Proctor, Robert W; Salvendy, Gavriel

    2011-06-01

    This research is intended to empirically validate a general model of creative product and service development proposed in the literature. A current research gap inspired construction of a conceptual model to capture fundamental phases and pertinent facilitating metacognitive strategies in the creative design process. The model also depicts the mechanism by which design creativity affects consumer behavior. The validity and assets of this model have not yet been investigated. Four laboratory studies were conducted to demonstrate the value of the proposed cognitive phases and associated metacognitive strategies in the conceptual model. Realistic product and service design problems were used in creativity assessment to ensure ecological validity. Design creativity was enhanced by explicit problem analysis, whereby one formulates problems from different perspectives and at different levels of abstraction. Remote association in conceptual combination spawned more design creativity than did near association. Abstraction led to greater creativity in conducting conceptual expansion than did specificity, which induced mental fixation. Domain-specific knowledge and experience enhanced design creativity, indicating that design can be of a domain-specific nature. Design creativity added integrated value to products and services and positively influenced customer behavior. The validity and value of the proposed conceptual model is supported by empirical findings. The conceptual model of creative design could underpin future theory development. Propositions advanced in this article should provide insights and approaches to facilitate organizations pursuing product and service creativity to gain competitive advantage.

  8. Quality Rating and Improvement System (QRIS) Validation Study Designs. CEELO FastFacts

    ERIC Educational Resources Information Center

    Schilder, D.

    2013-01-01

    In this "Fast Facts," a state has received Race to the Top Early Learning Challenge funds and is seeking information to inform the design of the Quality Rating and Improvement System (QRIS) validation study. The Center on Enhancing Early Learning Outcomes (CEELO) responds that according to Resnick (2012), validation of a QRIS is an…

  9. Real-Time Sensor Validation, Signal Reconstruction, and Feature Detection for an RLV Propulsion Testbed

    NASA Technical Reports Server (NTRS)

    Jankovsky, Amy L.; Fulton, Christopher E.; Binder, Michael P.; Maul, William A., III; Meyer, Claudia M.

    1998-01-01

    A real-time system for validating sensor health has been developed in support of the reusable launch vehicle program. This system was designed for use in a propulsion testbed as part of an overall effort to improve the safety, diagnostic capability, and cost of operation of the testbed. The sensor validation system was designed and developed at the NASA Lewis Research Center and integrated into a propulsion checkout and control system as part of an industry-NASA partnership, led by Rockwell International for the Marshall Space Flight Center. The system includes modules for sensor validation, signal reconstruction, and feature detection and was designed to maximize portability to other applications. Review of test data from initial integration testing verified real-time operation and showed the system to perform correctly on both hard and soft sensor failure test cases. This paper discusses the design of the sensor validation and supporting modules developed at LeRC and reviews results obtained from initial test cases.
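    The validate-and-reconstruct idea described above can be sketched, in much-simplified form, as a range check over redundant channels with reconstruction from the surviving readings. The thresholds and readings below are hypothetical; the LeRC system used more sophisticated health checks and feature detection:

    ```python
    def validate_and_reconstruct(readings, lo, hi):
        """Flag out-of-range sensor readings and reconstruct a best-estimate
        signal by averaging the remaining (presumed healthy) channels."""
        healthy = [r for r in readings if lo <= r <= hi]
        flagged = [r for r in readings if not (lo <= r <= hi)]
        estimate = sum(healthy) / len(healthy) if healthy else None
        return estimate, flagged

    # Hypothetical redundant pressure channels (psi); one hard-failed sensor
    # reading zero:
    est, bad = validate_and_reconstruct([101.2, 100.8, 101.0, 0.0], lo=50, hi=150)
    ```

    A hard failure (stuck at zero) is caught by the range check; soft failures such as slow drift require comparing each channel against the reconstructed estimate over time.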

  10. Reducing Threats to Validity by Design in a Nonrandomized Experiment of a School-Wide Prevention Model

    ERIC Educational Resources Information Center

    Sørlie, Mari-Anne; Ogden, Terje

    2014-01-01

    This paper reviews literature on the rationale, challenges, and recommendations for choosing a nonequivalent comparison (NEC) group design when evaluating intervention effects. After reviewing frequently addressed threats to validity, the paper describes recommendations for strengthening the research design and how the recommendations were…

  11. The Perils of Ignoring Design Effects in Experimental Studies: Lessons from a Mammography Screening Trial

    PubMed Central

    Glenn, Beth A.; Bastani, Roshan; Maxwell, Annette E.

    2013-01-01

    Objective Threats to external validity including pretest sensitization and the interaction of selection and an intervention are frequently overlooked by researchers despite their potential to significantly influence study outcomes. The purpose of this investigation was to conduct secondary data analyses to assess the presence of external validity threats in the setting of a randomized trial designed to promote mammography use in a high risk sample of women. Design During the trial, recruitment and intervention implementation took place in three cohorts (with different ethnic composition), utilizing two different designs (pretest-posttest control group design; posttest only control group design). Results Results reveal that the intervention produced different outcomes across cohorts, dependent upon the research design used and the characteristics of the sample. Conclusion These results illustrate the importance of weighing the pros and cons of potential research designs before making a selection and attending more closely to issues of external validity. PMID:23289517

  12. The development of thematic materials using project based learning for elementary school

    NASA Astrophysics Data System (ADS)

    Yuliana, M.; Wiryawan, S. A.; Riyadi

    2018-05-01

    Teaching materials are one of the important factors supporting the learning process. This paper discusses the development of thematic materials using project-based learning. Thematic materials are designed to make students active, creative, and cooperative, and to support their thinking in solving problems. The purpose of the research was to develop valid thematic material using project-based learning. The research followed the four-stage research and development model proposed by Thiagarajan, namely: (1) the definition stage, (2) the design stage, (3) the development stage, and (4) the dissemination stage. The first stage was research and information collection, in the form of a needs analysis using questionnaires, observation, interviews, and document analysis. The design stage was based on the competencies and indicators. The third, development, stage was used for product validation by experts. The validation involved a media validator, a material validator, and a linguistic validator. The expert validation of the thematic material showed a very good overall rating on a 5-point Likert scale: media validation had a mean score of 4.83, material validation a mean score of 4.68, and linguistic validation a mean score of 4.74. This showed that the thematic material using project-based learning was valid and feasible to implement in the context of thematic learning.

  13. Analytical procedure validation and the quality by design paradigm.

    PubMed

    Rozet, Eric; Lebrun, Pierre; Michiels, Jean-François; Sondag, Perceval; Scherder, Tara; Boulanger, Bruno

    2015-01-01

    Since the adoption of the ICH Q8 document concerning the development of pharmaceutical processes following a quality by design (QbD) approach, there have been many discussions on the opportunity for analytical procedure developments to follow a similar approach. While development and optimization of analytical procedures following QbD principles have been largely discussed and described, the place of analytical procedure validation in this framework has not been clarified. This article aims to show that analytical procedure validation is fully integrated into the QbD paradigm and is an essential step in developing analytical procedures that are effectively fit for purpose. Adequate statistical methodologies also have their role to play, such as design of experiments, statistical modeling, and probabilistic statements. The outcome of analytical procedure validation is also an analytical procedure design space, and from it, a control strategy can be set.
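    The "probabilistic statements" mentioned in the abstract can be made concrete with a small Monte Carlo sketch: given an assumed bias and intermediate precision for a procedure (both values hypothetical, not taken from the article), estimate the probability that a future reported result falls within ±5% acceptance limits.

    ```python
    # Hedged sketch of a probabilistic validation statement; bias and
    # precision values below are hypothetical, chosen for illustration.
    import random

    random.seed(0)
    true_conc = 100.0   # true concentration of the sample
    bias = 1.0          # assumed systematic error of the procedure
    sd = 2.0            # assumed intermediate precision (standard deviation)
    limit = 0.05        # +/-5% acceptance limit on relative error

    # Simulate future reported results and count those inside the limits.
    draws = [random.gauss(true_conc + bias, sd) for _ in range(100_000)]
    p_accept = sum(abs(d - true_conc) / true_conc <= limit for d in draws) / len(draws)
    print(f"P(result within +/-5%): {p_accept:.3f}")
    ```

    Analytically this probability is about 0.976 here, so the sketch would support a statement like "at least 95% of future results are expected to fall within the acceptance limits."
    
    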

  14. MRPrimer: a MapReduce-based method for the thorough design of valid and ranked primers for PCR

    PubMed Central

    Kim, Hyerin; Kang, NaNa; Chon, Kang-Wook; Kim, Seonho; Lee, NaHye; Koo, JaeHyung; Kim, Min-Soo

    2015-01-01

    Primer design is a fundamental technique that is widely used for polymerase chain reaction (PCR). Although many methods have been proposed for primer design, they require a great deal of manual effort to generate feasible and valid primers, including homology tests on off-target sequences using BLAST-like tools. That approach is inconvenient for the many target sequences of quantitative PCR (qPCR), because the same stringent and allele-invariant constraints must be considered for each of them. To address this issue, we propose an entirely new method called MRPrimer that can design all feasible and valid primer pairs existing in a DNA database at once, while simultaneously checking a multitude of filtering constraints and validating primer specificity. Furthermore, MRPrimer suggests the best primer pair for each target sequence, based on a ranking method. Through qPCR analysis using 343 primer pairs and the corresponding sequencing and comparative analyses, we showed that the primer pairs designed by MRPrimer are very stable and effective for qPCR. In addition, MRPrimer is computationally efficient and scalable and therefore useful for quickly constructing an entire collection of feasible and valid primers for frequently updated databases like RefSeq. Furthermore, we suggest that MRPrimer can be utilized conveniently for experiments requiring primer design, especially real-time qPCR. PMID:26109350
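    As an illustration of the kind of filtering constraints such pipelines check at scale, here is a minimal single-primer filter (length, GC content, Wallace-rule melting temperature). The thresholds are hypothetical and this is not MRPrimer's actual code or constraint set.

    ```python
    # Illustrative single-primer constraint checks; thresholds are made up
    # for the sketch, not taken from MRPrimer.
    def gc_content(seq):
        """Fraction of G and C bases in the primer."""
        return (seq.count("G") + seq.count("C")) / len(seq)

    def melting_temp_wallace(seq):
        """Wallace rule for short oligos: Tm = 2*(A+T) + 4*(G+C) degrees C."""
        at = seq.count("A") + seq.count("T")
        gc = seq.count("G") + seq.count("C")
        return 2 * at + 4 * gc

    def passes_filters(seq, len_range=(19, 23), gc_range=(0.4, 0.6), tm_range=(54, 64)):
        """True if the primer satisfies length, GC-content, and Tm constraints."""
        return (len_range[0] <= len(seq) <= len_range[1]
                and gc_range[0] <= gc_content(seq) <= gc_range[1]
                and tm_range[0] <= melting_temp_wallace(seq) <= tm_range[1])

    print(passes_filters("ATGCGTACGTTAGCCGATAGC"))  # → True
    print(passes_filters("AAAAAAAAAAAAAAAAAAAAA"))  # → False (GC content is 0)
    ```

    Real pipelines add pair-level checks (Tm difference, product size, dimers) and the homology tests against off-target sequences that the abstract describes.
    
    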

  15. Recommendations of the VAC2VAC workshop on the design of multi-centre validation studies.

    PubMed

    Halder, Marlies; Depraetere, Hilde; Delannois, Frédérique; Akkermans, Arnoud; Behr-Gross, Marie-Emmanuelle; Bruysters, Martijn; Dierick, Jean-François; Jungbäck, Carmen; Kross, Imke; Metz, Bernard; Pennings, Jeroen; Rigsby, Peter; Riou, Patrice; Balks, Elisabeth; Dobly, Alexandre; Leroy, Odile; Stirling, Catrina

    2018-03-01

    Within the Innovative Medicines Initiative 2 (IMI 2) project VAC2VAC (Vaccine batch to vaccine batch comparison by consistency testing), a workshop has been organised to discuss ways of improving the design of multi-centre validation studies and use the data generated for product-specific validation purposes. Moreover, aspects of validation within the consistency approach context were addressed. This report summarises the discussions and outlines the conclusions and recommendations agreed on by the workshop participants. Copyright © 2018.

  16. Life Satisfaction Questionnaire (Lisat-9): Reliability and Validity for Patients with Acquired Brain Injury

    ERIC Educational Resources Information Center

    Boonstra, Anne M.; Reneman, Michiel F.; Stewart, Roy E.; Balk, Gerlof A.

    2012-01-01

    The aim of this study was to determine the reliability and discriminant validity of the Dutch version of the life satisfaction questionnaire (Lisat-9 DV) to assess patients with an acquired brain injury. The reliability study used a test-retest design, and the validity study used a cross-sectional design. The setting was the general rehabilitation…

  17. Design, development, testing and validation of a Photonics Virtual Laboratory for the study of LEDs

    NASA Astrophysics Data System (ADS)

    Naranjo, Francisco L.; Martínez, Guadalupe; Pérez, Ángel L.; Pardo, Pedro J.

    2014-07-01

    This work presents the design, development, testing and validation of a Photonics Virtual Laboratory, highlighting the study of LEDs. The study was conducted from a conceptual, experimental and didactic standpoint, using e-learning and m-learning platforms. Specifically, teaching tools have been developed that help ensure that our students achieve meaningful learning. The work brings together the scientific aspect, such as the study of LEDs, with techniques for the generation and transfer of knowledge through the selection, hierarchization and structuring of information using concept maps. To validate the didactic materials developed, procedures with various assessment tools were used for the collection and processing of data, applied in the context of an experimental design. Additionally, a statistical analysis was performed to determine the validity of the materials developed. The assessment was designed to validate the contributions of the new materials over the traditional method of teaching, and to quantify the learning achieved by students, in order to draw conclusions that serve as a reference for their application in teaching and learning processes, and to comprehensively validate the work carried out.

  18. A Simulation Study of Threats to Validity in Quasi-Experimental Designs: Interrelationship between Design, Measurement, and Analysis.

    PubMed

    Holgado-Tello, Fco P; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A

    2016-01-01

    The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between the two models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity.

  19. A Simulation Study of Threats to Validity in Quasi-Experimental Designs: Interrelationship between Design, Measurement, and Analysis

    PubMed Central

    Holgado-Tello, Fco. P.; Chacón-Moscoso, Salvador; Sanduvete-Chaves, Susana; Pérez-Gil, José A.

    2016-01-01

    The Campbellian tradition provides a conceptual framework to assess threats to validity. On the other hand, different models of causal analysis have been developed to control estimation biases in different research designs. However, the link between design features, measurement issues, and concrete impact estimation analyses is weak. In order to provide an empirical solution to this problem, we use Structural Equation Modeling (SEM) as a first approximation to operationalize the analytical implications of threats to validity in quasi-experimental designs. Based on the analogies established between Classical Test Theory (CTT) and causal analysis, we describe an empirical study based on SEM in which range restriction and statistical power have been simulated in two different models: (1) A multistate model in the control condition (pre-test); and (2) A single-trait-multistate model in the control condition (post-test), adding a new mediator latent exogenous (independent) variable that represents a threat to validity. Results show, empirically, how the differences between the two models could be partially or totally attributed to these threats. Therefore, SEM provides a useful tool to analyze the influence of potential threats to validity. PMID:27378991
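    One of the threats simulated in this study, range restriction, is easy to demonstrate outside SEM: selecting subjects on a pretest attenuates the observed pretest-posttest correlation. A minimal simulation follows; it is illustrative only, and far simpler than the study's latent-variable models.

    ```python
    # Range restriction demo: a shared latent trait drives pre- and post-test
    # scores; keeping only subjects above the pretest median shrinks the
    # observed correlation even though the underlying relationship is unchanged.
    import random, statistics

    def corr(xs, ys):
        """Pearson correlation (population formula)."""
        mx, my = statistics.fmean(xs), statistics.fmean(ys)
        cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
        return cov / (statistics.pstdev(xs) * statistics.pstdev(ys) * len(xs))

    random.seed(42)
    trait = [random.gauss(0, 1) for _ in range(5000)]
    pre  = [t + random.gauss(0, 0.5) for t in trait]   # pre-test score
    post = [t + random.gauss(0, 0.5) for t in trait]   # post-test score

    r_full = corr(pre, post)
    cut = statistics.median(pre)
    kept = [(x, y) for x, y in zip(pre, post) if x > cut]  # restricted sample
    r_restricted = corr([x for x, _ in kept], [y for _, y in kept])
    print(round(r_full, 2), round(r_restricted, 2))
    ```

    With these (assumed) noise levels, the full-sample correlation is near 0.8 while the restricted-sample correlation drops noticeably, which is the kind of systematic difference the SEM comparison above attributes to the threat.
    
    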

  20. Integration of design and inspection

    NASA Astrophysics Data System (ADS)

    Simmonds, William H.

    1990-08-01

    Developments in advanced computer integrated manufacturing technology, coupled with the emphasis on Total Quality Management, are exposing needs for new techniques to integrate all functions from design through to support of the delivered product. One critical functional area that must be integrated into design is that embracing the measurement, inspection and test activities necessary for validation of the delivered product. This area is being tackled by a collaborative project supported by the UK Government Department of Trade and Industry. The project is aimed at developing techniques for analysing validation needs and for planning validation methods. Within the project an experimental Computer Aided Validation Expert system (CAVE) is being constructed. This operates with a generalised model of the validation process and helps with all design stages: specification of product requirements; analysis of the assurance provided by a proposed design and method of manufacture; development of the inspection and test strategy; and analysis of feedback data. The kernel of the system is a knowledge base containing knowledge of the manufacturing process capabilities and of the available inspection and test facilities. The CAVE system is being integrated into a real life advanced computer integrated manufacturing facility for demonstration and evaluation.

  1. VDA, a Method of Choosing a Better Algorithm with Fewer Validations

    PubMed Central

    Kluger, Yuval

    2011-01-01

    The multitude of bioinformatics algorithms designed for performing a particular computational task presents end-users with the problem of selecting the most appropriate computational tool for analyzing their biological data. The choice of the best available method is often based on expensive experimental validation of the results. We propose an approach to design validation sets for method comparison and performance assessment that are effective in terms of cost and discrimination power. Validation Discriminant Analysis (VDA) is a method for designing a minimal validation dataset to allow reliable comparisons between the performances of different algorithms. Implementation of our VDA approach achieves this reduction by selecting predictions that maximize the minimum Hamming distance between algorithmic predictions in the validation set. We show that VDA can be used to correctly rank algorithms according to their performances. These results are further supported by simulations and by realistic algorithmic comparisons in silico. VDA is a novel, cost-efficient method for minimizing the number of validation experiments necessary for reliable performance estimation and fair comparison between algorithms. Our VDA software is available at http://sourceforge.net/projects/klugerlab/files/VDA/ PMID:22046256
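    The core idea, choosing validation items that maximize the minimum Hamming distance between the algorithms' prediction vectors, can be sketched with a greedy heuristic. This is an illustrative approximation of the stated objective, not the authors' exact optimization, and the toy predictions below are invented.

    ```python
    # Greedy sketch of the VDA selection objective: pick validation items so
    # that every pair of algorithms still disagrees somewhere in the set.
    from itertools import combinations

    def hamming(a, b, idx):
        """Positions in idx where prediction vectors a and b disagree."""
        return sum(a[i] != b[i] for i in idx)

    def min_pairwise_distance(preds, idx):
        """Smallest Hamming distance over all algorithm pairs on chosen items."""
        return min(hamming(a, b, idx) for a, b in combinations(preds, 2))

    def greedy_validation_set(preds, k):
        """Greedily select k item indices keeping the algorithms distinguishable."""
        n = len(preds[0])
        chosen = []
        for _ in range(k):
            best = max((i for i in range(n) if i not in chosen),
                       key=lambda i: min_pairwise_distance(preds, chosen + [i]))
            chosen.append(best)
        return sorted(chosen)

    # Three algorithms' binary predictions on six candidate validation items:
    preds = [[0, 0, 1, 1, 0, 1],
             [0, 1, 1, 0, 0, 1],
             [1, 0, 0, 1, 0, 1]]
    sel = greedy_validation_set(preds, 3)
    print(sel, min_pairwise_distance(preds, sel))  # → [0, 1, 2] 1
    ```

    Items 4 and 5, where all three algorithms agree, are correctly never chosen: validating them would cost experiments without discriminating between the methods.
    
    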

  2. Temperature and heat flux datasets of a complex object in a fire plume for the validation of fire and thermal response codes.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Jernigan, Dann A.; Blanchat, Thomas K.

    It is necessary to improve understanding and develop temporally- and spatially-resolved integral scale validation data of the heat flux incident to a complex object in addition to measuring the thermal response of said object located within the fire plume for the validation of the SIERRA/FUEGO/SYRINX fire and SIERRA/CALORE codes. To meet this objective, a complex calorimeter with sufficient instrumentation to allow validation of the coupling between FUEGO/SYRINX/CALORE has been designed, fabricated, and tested in the Fire Laboratory for Accreditation of Models and Experiments (FLAME) facility. Validation experiments are specifically designed for direct comparison with the computational predictions. Making meaningful comparison between the computational and experimental results requires careful characterization and control of the experimental features or parameters used as inputs into the computational model. Validation experiments must be designed to capture the essential physical phenomena, including all relevant initial and boundary conditions. This report presents the data validation steps and processes, the results of the penlight radiant heat experiments (for the purpose of validating the CALORE heat transfer modeling of the complex calorimeter), and the results of the fire tests in FLAME.

  3. Examining the Internal Validity and Statistical Precision of the Comparative Interrupted Time Series Design by Comparison with a Randomized Experiment

    ERIC Educational Resources Information Center

    St.Clair, Travis; Cook, Thomas D.; Hallberg, Kelly

    2014-01-01

    Although evaluators often use an interrupted time series (ITS) design to test hypotheses about program effects, there are few empirical tests of the design's validity. We take a randomized experiment on an educational topic and compare its effects to those from a comparative ITS (CITS) design that uses the same treatment group as the experiment…

  4. Two-Method Planned Missing Designs for Longitudinal Research

    ERIC Educational Resources Information Center

    Garnier-Villarreal, Mauricio; Rhemtulla, Mijke; Little, Todd D.

    2014-01-01

    We examine longitudinal extensions of the two-method measurement design, which uses planned missingness to optimize cost-efficiency and validity of hard-to-measure constructs. These designs use a combination of two measures: a "gold standard" that is highly valid but expensive to administer, and an inexpensive (e.g., survey-based)…

  5. Validity of High School Physic Module With Character Values Using Process Skill Approach In STKIP PGRI West Sumatera

    NASA Astrophysics Data System (ADS)

    Anaperta, M.; Helendra, H.; Zulva, R.

    2018-04-01

    This study aims to describe the validity of a character-values-oriented physics module using a process skills approach for dynamic electricity material in SMA/MA and SMK high school physics. The type of research is development research. The module development follows the model proposed by Plomp, which consists of (1) the preliminary research phase, (2) the prototyping phase, and (3) the assessment phase. This research covers the initial investigation and design phases. Validation data were collected through observation and questionnaires. In the initial investigation phase, curriculum analysis, student analysis, and concept analysis were conducted. The design and realization phase produced the module design for SMA/MA and SMK subjects on dynamic electricity material. After that, formative evaluation was carried out, including self-evaluation and prototyping (expert reviews, one-to-one, and small group evaluation); at this stage the validity assessment was performed. The research data were obtained through the module validation sheet, resulting in a valid module.

  6. The perils of ignoring design effects in experimental studies: lessons from a mammography screening trial.

    PubMed

    Glenn, Beth A; Bastani, Roshan; Maxwell, Annette E

    2013-01-01

    Threats to external validity, including pretest sensitisation and the interaction of selection and an intervention, are frequently overlooked by researchers despite their potential to significantly influence study outcomes. The purpose of this investigation was to conduct secondary data analyses to assess the presence of external validity threats in the setting of a randomised trial designed to promote mammography use in a high-risk sample of women. During the trial, recruitment and intervention implementation took place in three cohorts (with different ethnic composition), utilising two different designs (pretest-posttest control group design and posttest only control group design). Results reveal that the intervention produced different outcomes across cohorts, dependent upon the research design used and the characteristics of the sample. These results illustrate the importance of weighing the pros and cons of potential research designs before making a selection and attending more closely to issues of external validity.

  7. AMOVA ["Accumulative Manifold Validation Analysis"]: An Advanced Statistical Methodology Designed to Measure and Test the Validity, Reliability, and Overall Efficacy of Inquiry-Based Psychometric Instruments

    ERIC Educational Resources Information Center

    Osler, James Edward, II

    2015-01-01

    This monograph provides an epistemological rationale for the Accumulative Manifold Validation Analysis [also referred to by the acronym "AMOVA"] statistical methodology designed to test psychometric instruments. This form of inquiry is a form of mathematical optimization in the discipline of linear stochastic modelling. AMOVA is an in-depth…

  8. Development, validation and utilisation of food-frequency questionnaires - a review.

    PubMed

    Cade, Janet; Thompson, Rachel; Burley, Victoria; Warm, Daniel

    2002-08-01

    The purpose of this review is to provide guidance on the development, validation and use of food-frequency questionnaires (FFQs) for different study designs. It does not include any recommendations about the most appropriate method for dietary assessment (e.g. food-frequency questionnaire versus weighed record). A comprehensive search of electronic databases was carried out for publications from 1980 to 1999. Findings from the review were then commented upon and added to by a group of international experts. Recommendations have been developed to aid in the design, validation and use of FFQs. Specific details of each of these areas are discussed in the text. FFQs are being used in a variety of ways and different study designs. There is no gold standard for directly assessing the validity of FFQs. Nevertheless, the outcome of this review should help those wishing to develop or adapt an FFQ to validate it for its intended use.

  9. Applied virtual reality in aerospace design

    NASA Technical Reports Server (NTRS)

    Hale, Joseph P.

    1995-01-01

    A virtual reality (VR) applications program has been under development at the Marshall Space Flight Center (MSFC) since 1989. The objectives of the MSFC VR Applications Program are to develop, assess, validate, and utilize VR in hardware development, operations development and support, mission operations training and science training. Before VR can be used with confidence in a particular application, VR must be validated for that class of applications. For that reason, specific validation studies for selected classes of applications have been proposed and are currently underway. These include macro-ergonomic 'control room class' design analysis, Spacelab stowage reconfiguration training, a full-body microgravity functional reach simulator, a gross anatomy teaching simulator, and micro-ergonomic design analysis. This paper describes the MSFC VR Applications Program and the validation studies.

  10. System-Level Experimental Validations for Supersonic Commercial Transport Aircraft Entering Service in the 2018-2020 Time Period

    NASA Technical Reports Server (NTRS)

    Magee, Todd E.; Wilcox, Peter A.; Fugal, Spencer R.; Acheson, Kurt E.; Adamson, Eric E.; Bidwell, Alicia L.; Shaw, Stephen G.

    2013-01-01

    This report describes the work conducted by The Boeing Company under American Recovery and Reinvestment Act (ARRA) and NASA funding to experimentally validate the conceptual design of a supersonic airliner feasible for entry into service in the 2018 to 2020 timeframe (NASA N+2 generation). The report discusses the design, analysis and development of a low-boom concept that meets aggressive sonic boom and performance goals for a cruise Mach number of 1.8. The design is achieved through integrated multidisciplinary optimization tools. The report also describes the detailed design and fabrication of both sonic boom and performance wind tunnel models of the low-boom concept. Additionally, a description of the detailed validation wind tunnel testing that was performed with the wind tunnel models is provided along with validation comparisons with pretest Computational Fluid Dynamics (CFD). Finally, the report describes the evaluation of existing NASA sonic boom pressure rail measurement instrumentation and a detailed description of new sonic boom measurement instrumentation that was constructed for the validation wind tunnel testing.

  11. Model-based verification and validation of the SMAP uplink processes

    NASA Astrophysics Data System (ADS)

    Khan, M. O.; Dubos, G. F.; Tirona, J.; Standley, S.

    Model-Based Systems Engineering (MBSE) is being used increasingly within the spacecraft design community because of its benefits when compared to document-based approaches. As the complexity of projects expands dramatically with continually increasing computational power and technology infusion, the time and effort needed for verification and validation (V&V) increases geometrically. Using simulation to perform design validation with system-level models earlier in the life cycle stands to bridge the gap between design of the system (based on system-level requirements) and verifying those requirements/validating the system as a whole. This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V&V processes by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process. Also discussed are the advantages and disadvantages of the methods employed and the lessons learned, which are intended to benefit future model-based and simulation-based development efforts.

  12. Validation of the F-18 high alpha research vehicle flight control and avionics systems modifications

    NASA Technical Reports Server (NTRS)

    Chacon, Vince; Pahle, Joseph W.; Regenie, Victoria A.

    1990-01-01

    The verification and validation process is a critical portion of the development of a flight system. Verification, the steps taken to assure the system meets the design specification, has become a reasonably understood and straightforward process. Validation is the method used to ensure that the system design meets the needs of the project. As systems become more integrated and more critical in their functions, the validation process becomes more complex and important. The tests, tools, and techniques which are being used for the validation of the high alpha research vehicle (HARV) turning vane control system (TVCS) are discussed, and the problems and their solutions are documented. The emphasis of this paper is on the validation of integrated systems.

  13. Validation of the F-18 high alpha research vehicle flight control and avionics systems modifications

    NASA Technical Reports Server (NTRS)

    Chacon, Vince; Pahle, Joseph W.; Regenie, Victoria A.

    1990-01-01

    The verification and validation process is a critical portion of the development of a flight system. Verification, the steps taken to assure the system meets the design specification, has become a reasonably understood and straightforward process. Validation is the method used to ensure that the system design meets the needs of the project. As systems become more integrated and more critical in their functions, the validation process becomes more complex and important. The tests, tools, and techniques which are being used for the validation of the high alpha research vehicle (HARV) turning vane control system (TVCS) are discussed and the problems and their solutions are documented. The emphasis of this paper is on the validation of integrated systems.

  14. Experimental Design and Some Threats to Experimental Validity: A Primer

    ERIC Educational Resources Information Center

    Skidmore, Susan

    2008-01-01

    Experimental designs are distinguished as the best method to respond to questions involving causality. The purpose of the present paper is to explicate the logic of experimental design and why it is so vital to questions that demand causal conclusions. In addition, types of internal and external validity threats are discussed. To emphasize the…

  15. Seaworthy Quantum Key Distribution Design and Validation (SEAKEY)

    DTIC Science & Technology

    2016-03-10

    Contractor Address: 10 Moulton Street, Cambridge, MA 02138. Title of the Project: Seaworthy Quantum Key Distribution Design and Validation (SEAKEY). …Technologies. Kathryn Carson, Program Manager, Quantum Information Processing. …we have continued work calculating the key rates achievable parametrically with receiver performance. In addition, we describe the initial designs

  16. WINCADRE (COMPUTER-AIDED DATA REVIEW AND EVALUATION)

    EPA Science Inventory

    WinCADRE (Computer-Aided Data Review and Evaluation) is a Windows-based program designed for computer-assisted data validation. WinCADRE is a powerful tool which significantly decreases data validation turnaround time. The electronic-data-deliverable format has been designed ...

  17. MRPrimer: a MapReduce-based method for the thorough design of valid and ranked primers for PCR.

    PubMed

    Kim, Hyerin; Kang, NaNa; Chon, Kang-Wook; Kim, Seonho; Lee, NaHye; Koo, JaeHyung; Kim, Min-Soo

    2015-11-16

    Primer design is a fundamental technique that is widely used for polymerase chain reaction (PCR). Although many methods have been proposed for primer design, they require a great deal of manual effort to generate feasible and valid primers, including homology tests on off-target sequences using BLAST-like tools. That approach is inconvenient for the many target sequences of quantitative PCR (qPCR), because the same stringent and allele-invariant constraints must be considered for each of them. To address this issue, we propose an entirely new method called MRPrimer that can design all feasible and valid primer pairs existing in a DNA database at once, while simultaneously checking a multitude of filtering constraints and validating primer specificity. Furthermore, MRPrimer suggests the best primer pair for each target sequence, based on a ranking method. Through qPCR analysis using 343 primer pairs and the corresponding sequencing and comparative analyses, we showed that the primer pairs designed by MRPrimer are very stable and effective for qPCR. In addition, MRPrimer is computationally efficient and scalable and therefore useful for quickly constructing an entire collection of feasible and valid primers for frequently updated databases like RefSeq. Furthermore, we suggest that MRPrimer can be utilized conveniently for experiments requiring primer design, especially real-time qPCR. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. System design from mission definition to flight validation

    NASA Technical Reports Server (NTRS)

    Batill, S. M.

    1992-01-01

    Considerations related to the engineering systems design process and an approach taken to introduce undergraduate students to that process are presented. The paper includes details on a particular capstone design course. This course is a team oriented aircraft design project which requires the students to participate in many phases of the system design process, from mission definition to validation of their design through flight testing. To accomplish this in a single course requires special types of flight vehicles. Relatively small-scale, remotely piloted vehicles have provided the class of aircraft considered in this course.

  19. Design for validation: An approach to systems validation

    NASA Technical Reports Server (NTRS)

    Carter, William C.; Dunham, Janet R.; Laprie, Jean-Claude; Williams, Thomas; Howden, William; Smith, Brian; Lewis, Carl M. (Editor)

    1989-01-01

    Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of ad hoc methods by many individuals. As the cost of the changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in future digital-system-based avionics and space systems is to be minimized. A suggested framework and timetable for the transformation are presented. Basic working definitions of two pivotal ideas (validation and the system life-cycle) are provided, along with a description of how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.

  20. A Note on Economic Content and Test Validity.

    ERIC Educational Resources Information Center

    Soper, John C.; Brenneke, Judith Staley

    1987-01-01

    Offers practical tips on how teachers can determine whether classroom tests are actually measuring what they are designed to measure. Discusses criterion-related validity, construct validity, and content validity. Demonstrates how to determine the degree of content validity a particular test may have for a particular course or unit. (Author/DH)

  1. Survey of Verification and Validation Techniques for Small Satellite Software Development

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    2015-01-01

    The purpose of this paper is to provide an overview of the current trends and practices in small-satellite software verification and validation. This document is not intended to promote a specific software assurance method. Rather, it seeks to present an unbiased survey of software assurance methods used to verify and validate small satellite software and to make mention of the benefits and value of each approach. These methods include simulation and testing, verification and validation with model-based design, formal methods, and fault-tolerant software design with run-time monitoring. Although the literature reveals that simulation and testing has by far the longest legacy, model-based design methods are proving to be useful for software verification and validation. Some work in formal methods, though not widely used for any satellites, may offer new ways to improve small satellite software verification and validation. These methods need to be further advanced to deal with the state-explosion problem and to become usable enough that small-satellite software engineers can apply them regularly to software verification. Last, it is explained how run-time monitoring, combined with fault-tolerant software design methods, provides an important means to detect and correct software errors that escape the verification process or those errors that are produced after launch through the effects of ionizing radiation.

  2. Design and Validation of Implantable Passive Mechanisms for Orthopedic Surgery

    DTIC Science & Technology

    2017-10-01

    AWARD NUMBER: W81XWH-16-1-0794. TITLE: Design and Validation of Implantable Passive Mechanisms for Orthopedic Surgery. PRINCIPAL INVESTIGATOR. ...have post-surgery? Please put the designated grading next to each picture. 2. When comparing to the force applied by the index finger, what percentage...system, when compared with using the direct suture. This concept is inspired by the use of such mechanisms in the design of "underactuated" robotic...

  3. Design and Control of Compliant Tensegrity Robots Through Simulation and Hardware Validation

    NASA Technical Reports Server (NTRS)

    Caluwaerts, Ken; Despraz, Jeremie; Iscen, Atil; Sabelhaus, Andrew P.; Bruce, Jonathan; Schrauwen, Benjamin; Sunspiral, Vytas

    2014-01-01

    To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center has developed and validated two different software environments for the analysis, simulation, and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced by their appearance in many biological systems, tensegrity ("tensile-integrity") structures have unique physical properties that make them ideal for interaction with uncertain environments. Yet these characteristics, such as variable structural compliance and global multi-path load distribution through the tension network, make design and control of bio-inspired tensegrity robots extremely challenging. This work presents progress in using these two tools to tackle the design and control challenges. The results of this analysis include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures. The current hardware prototype of a six-bar tensegrity, code-named ReCTeR, is presented in the context of this validation.

  4. SERENITY in Air Traffic Management

    NASA Astrophysics Data System (ADS)

    Felici, Massimo; Meduri, Valentino; Tedeschi, Alessandra; Riccucci, Carlo

    This chapter is concerned with the validation of an implementation of the SERENITY Runtime Framework (SRF) tailored for the Air Traffic Management (ATM) domain. It reports our experience in the design and validation phases of a tool, which relies on the SRF in order to support Security and Dependability (S&D) Patterns in work practices. In particular, this chapter pinpoints the activities concerning the identification of S&D Patterns, the design of an ATM prototype, and its validation. The validation activities involve qualitative as well as quantitative approaches. These activities as a whole highlight the validation process for adopting S&D Patterns within the ATM domain. Moreover, they stress how S&D Patterns enhance and relate to critical features within an industry domain. The empirical results point out that S&D Patterns relate to work practices. Furthermore, they highlight design and validation activities in order to tailor systems relying on S&D Patterns to specific application domains. This strengthens and supports the adoption of S&D Patterns in order to address AmI (Ambient Intelligence) requirements (e.g., awareness, proactiveness, resilience, etc.) within the ATM domain.

  5. WINCADRE INORGANIC (WINDOWS COMPUTER-AIDED DATA REVIEW AND EVALUATION)

    EPA Science Inventory

    WinCADRE (Computer-Aided Data Review and Evaluation) is a Windows-based program designed for computer-assisted data validation. WinCADRE is a powerful tool which significantly decreases data validation turnaround time. The electronic-data-deliverable format has been designed in...

  6. Validation of NASA Thermal Ice Protection Computer Codes. Part 1; Program Overview

    NASA Technical Reports Server (NTRS)

    Miller, Dean; Bond, Thomas; Sheldon, David; Wright, William; Langhals, Tammy; Al-Khalil, Kamel; Broughton, Howard

    1996-01-01

    The Icing Technology Branch at NASA Lewis has been involved in an effort to validate two thermal ice protection codes developed at the NASA Lewis Research Center: LEWICE/Thermal (electrothermal deicing & anti-icing) and ANTICE (hot-gas & electrothermal anti-icing). The Thermal Code Validation effort was designated as a priority during a 1994 'peer review' of the NASA Lewis Icing program, and was implemented as a cooperative effort with industry. During April 1996, the first of a series of experimental validation tests was conducted in the NASA Lewis Icing Research Tunnel (IRT). The purpose of the April 96 test was to validate the electrothermal predictive capabilities of both LEWICE/Thermal and ANTICE. A heavily instrumented test article was designed and fabricated for this test, with the capability of simulating electrothermal de-icing and anti-icing modes of operation. Thermal measurements were then obtained over a range of test conditions, for comparison with analytical predictions. This paper will present an overview of the test, including a detailed description of: (1) the validation process; (2) test article design; (3) test matrix development; and (4) test procedures. Selected experimental results will be presented for de-icing and anti-icing modes of operation. Finally, the status of the validation effort at this point will be summarized. Detailed comparisons between analytical predictions and experimental results are contained in the following two papers: 'Validation of NASA Thermal Ice Protection Computer Codes: Part 2-The Validation of LEWICE/Thermal' and 'Validation of NASA Thermal Ice Protection Computer Codes: Part 3-The Validation of ANTICE'.

  7. The development and validation of a meta-tool for quality appraisal of public health evidence: Meta Quality Appraisal Tool (MetaQAT).

    PubMed

    Rosella, L; Bowman, C; Pach, B; Morgan, S; Fitzpatrick, T; Goel, V

    2016-07-01

    Most quality appraisal tools were developed for clinical medicine and tend to be study-specific with a strong emphasis on risk of bias. In order to be more relevant to public health, an appropriate quality appraisal tool needs to be less reliant on the evidence hierarchy and consider practice applicability. Given the broad range of study designs used in public health, the objective of this study was to develop and validate a meta-tool that combines public health-focused principles of appraisal coupled with a set of design-specific companion tools. Several design methods were used to develop and validate the tool including literature review, synthesis, and validation with a reference standard. A search of critical appraisal tools relevant to public health was conducted; core concepts were collated. The resulting framework was piloted during three feedback sessions with public health practitioners. Following subsequent revisions, the final meta-tool, the Meta Quality Appraisal Tool (MetaQAT), was then validated through a content analysis of appraisals conducted by two groups of experienced public health researchers (MetaQAT vs generic appraisal form). The MetaQAT framework consists of four domains: relevancy, reliability, validity, and applicability. In addition, a companion tool was assembled from existing critical appraisal tools to provide study design-specific guidance on validity appraisal. Content analysis showed similar methodological and generalizability concerns were raised by both groups; however, the MetaQAT appraisers commented more extensively on applicability to public health practice. Critical appraisal tools designed for clinical medicine have limitations for use in the context of public health. The meta-tool structure of the MetaQAT allows for rigorous appraisal, while allowing users to simultaneously appraise the multitude of study designs relevant to public health research and assess non-standard domains, such as applicability. 
Copyright © 2015 The Authors. Published by Elsevier Ltd. All rights reserved.

  8. Further Validation of the Coach Identity Prominence Scale

    ERIC Educational Resources Information Center

    Pope, J. Paige; Hall, Craig R.

    2014-01-01

    This study was designed to examine select psychometric properties of the Coach Identity Prominence Scale (CIPS), including the reliability, factorial validity, convergent validity, discriminant validity, and predictive validity. Coaches (N = 338) who averaged 37 (SD = 12.27) years of age, had a mean of 13 (SD = 9.90) years of coaching experience,…

  9. Continual Response Measurement: Design and Validation.

    ERIC Educational Resources Information Center

    Baggaley, Jon

    1987-01-01

    Discusses reliability and validity of continual response measurement (CRM), a computer-based measurement technique, and its use in social science research. Highlights include the importance of criterion-referencing the data, guidelines for designing studies using CRM, examples typifying their deductive and inductive functions, and a discussion of…

  10. Adding Design Elements to Improve Time Series Designs: No Child Left behind as an Example of Causal Pattern-Matching

    ERIC Educational Resources Information Center

    Wong, Manyee; Cook, Thomas D.; Steiner, Peter M.

    2015-01-01

    Some form of a short interrupted time series (ITS) is often used to evaluate state and national programs. An ITS design with a single treatment group assumes that the pretest functional form can be validly estimated and extrapolated into the postintervention period where it provides a valid counterfactual. This assumption is problematic. Ambiguous…

  11. A Performance Management Framework for Civil Engineering

    DTIC Science & Technology

    1990-09-01

    cultural change. A non-equivalent control group design was chosen to augment the case analysis. Figure 3.18 shows the form of the quasi-experiment. The...The non-equivalent control group design controls the following obstacles to internal validity: history, maturation, testing, and instrumentation. The...and Stanley, 1963:48,50) Table 7. Validity of Quasi-Experiment. The non-equivalent control group experimental design controls the following obstacles to

  12. Perceptions vs Reality: A Longitudinal Experiment in Influenced Judgement Performance

    DTIC Science & Technology

    2003-03-25

    validity were manifested equally between treatment and control groups, thereby lending further validity to the experimental research design. External...Stanley (1975) identify this as a True Experimental Design: Pretest-Posttest Control Group Design. However, due to the longitudinal aspect required to...1975:43). Nonequivalence will be ruled out as pretest equivalence is shown between treatment and control groups (1975:47). For quasi

  13. Excavator Design Validation

    NASA Technical Reports Server (NTRS)

    Pholsiri, Chalongrath; English, James; Seberino, Charles; Lim, Yi-Je

    2010-01-01

    The Excavator Design Validation tool verifies excavator designs by automatically generating control systems and modeling their performance in an accurate simulation of their expected environment. Part of this software design includes interfacing with human operators, who can be included in simulation-based studies and validation. This is essential for assessing productivity, versatility, and reliability. This software combines automatic control system generation from CAD (computer-aided design) models, rapid validation of complex mechanism designs, and detailed models of the environment including soil, dust, temperature, remote supervision, and communication latency to create a system of high value. Unique algorithms have been created for controlling and simulating complex robotic mechanisms automatically from just a CAD description. These algorithms are implemented as a commercial cross-platform C++ software toolkit that is configurable using the Extensible Markup Language (XML). The algorithms work with virtually any mobile robotic mechanisms using module descriptions that adhere to the XML standard. In addition, high-fidelity, real-time physics-based simulation algorithms have also been developed that include models of internal forces and the forces produced when a mechanism interacts with the outside world. This capability is combined with an innovative organization for simulation algorithms, new regolith simulation methods, and a unique control and study architecture to make powerful tools with the potential to transform the way NASA verifies and compares excavator designs. Energid's Actin software has been leveraged for this design validation. The architecture includes parametric and Monte Carlo studies tailored for validation of excavator designs and their control by remote human operators. It also includes the ability to interface with third-party software and human-input devices.
Two types of simulation models have been adapted: high-fidelity discrete element models and fast analytical models. By using the first to establish parameters for the second, a system has been created that can be executed in real time, or faster than real time, on a desktop PC. This allows Monte Carlo simulations to be performed on a computer platform available to all researchers, and it allows human interaction to be included in a real-time simulation process. Metrics on excavator performance are established that work with the simulation architecture. Both static and dynamic metrics are included.

  14. Validation of Statistical Sampling Algorithms in Visual Sample Plan (VSP): Summary Report

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nuffer, Lisa L; Sego, Landon H.; Wilson, John E.

    2009-02-18

    The U.S. Department of Homeland Security, Office of Technology Development (OTD) contracted with a set of U.S. Department of Energy national laboratories, including the Pacific Northwest National Laboratory (PNNL), to write a Remediation Guidance for Major Airports After a Chemical Attack. The report identifies key activities and issues that should be considered by a typical major airport following an incident involving release of a toxic chemical agent. Four experimental tasks were identified that would require further research in order to supplement the Remediation Guidance. One of the tasks, Task 4, OTD Chemical Remediation Statistical Sampling Design Validation, dealt with statistical sampling algorithm validation. This report documents the results of the sampling design validation conducted for Task 4. In 2005, the Government Accountability Office (GAO) performed a review of the past U.S. responses to Anthrax terrorist cases. Part of the motivation for this PNNL report was a major GAO finding that there was a lack of validated sampling strategies in the U.S. response to Anthrax cases. The report (GAO 2005) recommended that probability-based methods be used for sampling design in order to address confidence in the results, particularly when all sample results showed no remaining contamination. The GAO also expressed a desire that the methods be validated, which is the main purpose of this PNNL report. The objective of this study was to validate probability-based statistical sampling designs and the algorithms pertinent to within-building sampling that allow the user to prescribe or evaluate confidence levels of conclusions based on data collected as guided by the statistical sampling designs. Specifically, the designs found in the Visual Sample Plan (VSP) software were evaluated. VSP was used to calculate the number of samples and the sample location for a variety of sampling plans applied to an actual release site. 
Most of the sampling designs validated are probability based, meaning samples are located randomly (or on a randomly placed grid) so no bias enters into the placement of samples, and the number of samples is calculated such that IF the amount and spatial extent of contamination exceeds levels of concern, at least one of the samples would be taken from a contaminated area, at least X% of the time. Hence, "validation" of the statistical sampling algorithms is defined herein to mean ensuring that the "X%" (confidence) is actually met.
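The confidence statement above has a simple closed form under random sampling: if a fraction f of the area is contaminated and n samples are placed at random, the probability that all of them miss is (1 - f)^n. A minimal sketch of the resulting sample-size calculation (an illustration of the general principle, not VSP's implementation):

```python
import math

def required_samples(confidence: float, contaminated_fraction: float) -> int:
    """Smallest n such that, if at least `contaminated_fraction` of the
    area is contaminated, at least one random sample lands in a
    contaminated spot with probability at least `confidence`.
    Solves (1 - f)^n <= 1 - C for n."""
    return math.ceil(math.log(1 - confidence)
                     / math.log(1 - contaminated_fraction))

# e.g. 95% confidence of detecting contamination covering >= 5% of the area
print(required_samples(0.95, 0.05))  # 59
```

Gridded designs and hot-spot searches use related but geometry-dependent formulas, which is part of what the validation exercise above checks.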

  15. Statistical methodology: II. Reliability and validity assessment in study design, Part B.

    PubMed

    Karras, D J

    1997-02-01

    Validity measures the correspondence between a test and other purported measures of the same or similar qualities. When a reference standard exists, a criterion-based validity coefficient can be calculated. If no such standard is available, the concepts of content and construct validity may be used, but quantitative analysis may not be possible. The Pearson and Spearman tests of correlation are often used to assess the correspondence between tests, but do not account for measurement biases and may yield misleading results. Techniques that measure interest differences may be more meaningful in validity assessment, and the kappa statistic is useful for analyzing categorical variables. Questionnaires often can be designed to allow quantitative assessment of reliability and validity, although this may be difficult. Inclusion of homogeneous questions is necessary to assess reliability. Analysis is enhanced by using Likert scales or similar techniques that yield ordinal data. Validity assessment of questionnaires requires careful definition of the scope of the test and comparison with previously validated tools.
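The kappa statistic mentioned above corrects raw inter-rater agreement for the agreement expected by chance. A minimal sketch with hypothetical ratings (not data from the article):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' categorical judgements:
    (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement from each rater's marginal category frequencies
    expected = sum(counts_a[c] * counts_b[c]
                   for c in set(rater_a) | set(rater_b)) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical categorical ratings from two raters
a = ["pos", "pos", "neg", "neg", "pos", "neg", "pos", "neg"]
b = ["pos", "neg", "neg", "neg", "pos", "neg", "pos", "pos"]
print(cohens_kappa(a, b))  # 0.5 (75% raw agreement, 50% expected by chance)
```

Unlike a Pearson or Spearman correlation, kappa penalizes systematic disagreement between raters even when their rankings covary.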

  16. Hyper-X: Flight Validation of Hypersonic Airbreathing Technology

    NASA Technical Reports Server (NTRS)

    Rausch, Vincent L.; McClinton, Charles R.; Crawford, J. Larry

    1997-01-01

    This paper provides an overview of NASA's focused hypersonic technology program, i.e., the Hyper-X program. This program is designed to move hypersonic air-breathing vehicle technology from the laboratory environment to the flight environment, the last stage preceding prototype development. This paper presents some history leading to the flight test program, along with research objectives, approach, schedule, and status. A substantial experimental database and concept validation have been completed. The program is concentrating on Mach 7 vehicle development, verification, and validation in preparation for wind tunnel testing in 1998 and flight testing in 1999. It is also concentrating on finalization of the Mach 5 and 10 vehicle designs. Detailed evaluation of the Mach 7 vehicle at the flight conditions is nearing completion and will provide a database for validation of design methods once flight test data are available.

  17. Validation experiments to determine radiation partitioning of heat flux to an object in a fully turbulent fire.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ricks, Allen; Blanchat, Thomas K.; Jernigan, Dann A.

    2006-06-01

    It is necessary to improve understanding and develop validation data of the heat flux incident to an object located within the fire plume for the validation of SIERRA/FUEGO/SYRINX fire and SIERRA/CALORE. One key aspect of the validation data sets is the determination of the relative contribution of the radiative and convective heat fluxes. To meet this objective, a cylindrical calorimeter with sufficient instrumentation to measure total and radiative heat flux had been designed and fabricated. This calorimeter will be tested both in the controlled radiative environment of the Penlight facility and in a fire environment in the FLAME/Radiant Heat (FRH) facility. Validation experiments are specifically designed for direct comparison with the computational predictions. Making meaningful comparisons between the computational and experimental results requires careful characterization and control of the experimental features or parameters used as inputs into the computational model. Validation experiments must be designed to capture the essential physical phenomena, including all relevant initial and boundary conditions. A significant question of interest to modeling heat flux incident to an object in or near a fire is the contribution of the radiation and convection modes of heat transfer. The series of experiments documented in this test plan is designed to provide data on the radiation partitioning, defined as the fraction of the total heat flux that is due to radiation.

  18. Development, calibration, and validation of performance prediction models for the Texas M-E flexible pavement design system.

    DOT National Transportation Integrated Search

    2010-08-01

    This study was intended to recommend future directions for the development of TxDOT's Mechanistic-Empirical (TexME) design system. For stress predictions, a multi-layer linear elastic system was evaluated and its validity was verified by compar...

  19. Development of an Independent Global Land Cover Validation Dataset

    NASA Astrophysics Data System (ADS)

    Sulla-Menashe, D. J.; Olofsson, P.; Woodcock, C. E.; Holden, C.; Metcalfe, M.; Friedl, M. A.; Stehman, S. V.; Herold, M.; Giri, C.

    2012-12-01

    Accurate information on the global distribution and dynamics of land cover is critical for a large number of global change science questions. A growing number of land cover products have been produced at regional to global scales, but the uncertainty in these products and the relative strengths and weaknesses among available products are poorly characterized. To address this limitation we are compiling a database of high spatial resolution imagery to support international land cover validation studies. Validation sites were selected based on a probability sample, and may therefore be used to estimate statistically defensible accuracy statistics and associated standard errors. Validation site locations were identified using a stratified random design based on 21 strata derived from an intersection of Köppen climate classes and a population density layer. In this way, the two major sources of global variation in land cover (climate and human activity) are explicitly included in the stratification scheme. At each site we are acquiring high spatial resolution (< 1-m) satellite imagery for 5-km x 5-km blocks. The response design uses an object-oriented hierarchical legend that is compatible with the UN FAO Land Cover Classification System. Using this response design, we are classifying each site using a semi-automated algorithm that blends image segmentation with a supervised RandomForest classification algorithm. In the long run, the validation site database is designed to support international efforts to validate land cover products. To illustrate, we use the site database to validate the MODIS Collection 4 Land Cover product, providing a prototype for validating the VIIRS Surface Type Intermediate Product scheduled to start operational production early in 2013. 
As part of our analysis we evaluate sources of error in coarse resolution products including semantic issues related to the class definitions, mixed pixels, and poor spectral separation between classes.
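Accuracy statistics of the kind described above are computed from a confusion matrix cross-tabulating map and reference labels. A minimal sketch for the simple-random-sampling case, with hypothetical counts (a stratified design like the one used here would require stratum-weighted estimators):

```python
import math

def overall_accuracy(confusion):
    """Overall accuracy and its standard error for a validation sample
    drawn by simple random sampling. `confusion[i][j]` counts pixels
    mapped as class i whose reference label is class j."""
    n = sum(sum(row) for row in confusion)
    correct = sum(confusion[i][i] for i in range(len(confusion)))
    p = correct / n
    se = math.sqrt(p * (1 - p) / n)   # binomial standard error
    return p, se

# Hypothetical 3-class confusion matrix (rows = map, cols = reference)
cm = [[45, 3, 2],
      [4, 38, 5],
      [1, 6, 46]]
acc, se = overall_accuracy(cm)
print(round(acc, 3), round(se, 3))  # 0.86 0.028
```

The probability design used for the site database is what makes such standard errors statistically defensible; per-class user's and producer's accuracies follow from row and column sums of the same matrix.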

  20. Computational protein design-the next generation tool to expand synthetic biology applications.

    PubMed

    Gainza-Cirauqui, Pablo; Correia, Bruno Emanuel

    2018-05-02

    One powerful approach to engineer synthetic biology pathways is the assembly of proteins sourced from one or more natural organisms. However, synthetic pathways often require custom functions or biophysical properties not displayed by natural proteins, limitations that could be overcome through modern protein engineering techniques. Structure-based computational protein design is a powerful tool to engineer new functional capabilities in proteins, and it is beginning to have a profound impact in synthetic biology. Here, we review efforts to increase the capabilities of synthetic biology using computational protein design. We focus primarily on computationally designed proteins not only validated in vitro, but also shown to modulate different activities in living cells. Efforts made to validate computational designs in cells can illustrate both the challenges and opportunities in the intersection of protein design and synthetic biology. We also highlight protein design approaches, which although not validated as conveyors of new cellular function in situ, may have rapid and innovative applications in synthetic biology. We foresee that in the near-future, computational protein design will vastly expand the functional capabilities of synthetic cells. Copyright © 2018. Published by Elsevier Ltd.

  1. Design and validation of the eyesafe ladar testbed (ELT) using the LadarSIM system simulator

    NASA Astrophysics Data System (ADS)

    Neilsen, Kevin D.; Budge, Scott E.; Pack, Robert T.; Fullmer, R. Rees; Cook, T. Dean

    2009-05-01

    The development of an experimental full-waveform LADAR system has been enhanced with the assistance of the LadarSIM system simulation software. The Eyesafe LADAR Test-bed (ELT) was designed as a raster scanning, single-beam, energy-detection LADAR with the capability of digitizing and recording the return pulse waveform at up to 2 GHz for 3D off-line image formation research in the laboratory. To assist in the design phase, the full-waveform LADAR simulation in LadarSIM was used to simulate the expected return waveforms for various system design parameters, target characteristics, and target ranges. Once the design was finalized and the ELT constructed, the measured specifications of the system and experimental data captured from the operational sensor were used to validate the behavior of the system as predicted during the design phase. This paper presents the methodology used, and lessons learned from this "design, build, validate" process. Simulated results from the design phase are presented, and these are compared to simulated results using measured system parameters and operational sensor data. The advantages of this simulation-based process are also presented.

  2. The Teenage Nonviolence Test: Concurrent and Discriminant Validity.

    ERIC Educational Resources Information Center

    Konen, Kristopher; Mayton, Daniel M., II; Delva, Zenita; Sonnen, Melinda; Dahl, William; Montgomery, Richard

    This study was designed to document the validity of the Teenage Nonviolence Test (TNT). In this study, the concurrent validity of the TNT was assessed in various ways, along with its validity using known groups and its discriminant validity, evaluated through its relationships with other psychological constructs. The results showed that the…

  3. Instruments to measure cancer management knowledge of rural health care providers.

    PubMed

    Elliott, T E; Regal, R R; Renier, C M; Crouse, B J; Gangeness, D E; Pharmd; Elliott, B A; Witrak, M

    2001-01-01

    Instruments to measure cancer management knowledge of rural physicians, nurses, and pharmacists were needed to evaluate the effect of an educational intervention. Because such instruments did not exist, the authors designed and validated a new instrument for each discipline. The design and validation process for these instruments is described. These three instruments were shown to be practical and to have high content and construct validity. Content validation demonstrated that all items were rated as essential or useful by 90% or more of the respondents. Construct validation showed highly significant differences in mean scores among several levels of learners and practitioners, as expected. These instruments may be useful to other investigators for measuring cancer management knowledge of rural physicians, nurses, and pharmacists.

  4. External model validation of binary clinical risk prediction models in cardiovascular and thoracic surgery.

    PubMed

    Hickey, Graeme L; Blackstone, Eugene H

    2016-08-01

    Clinical risk-prediction models serve an important role in healthcare. They are used for clinical decision-making and measuring the performance of healthcare providers. To establish confidence in a model, external model validation is imperative. When designing such an external model validation study, thought must be given to patient selection, risk factor and outcome definitions, missing data, and the transparent reporting of the analysis. In addition, there are a number of statistical methods available for external model validation. Execution of a rigorous external validation study rests in proper study design, application of suitable statistical methods, and transparent reporting. Copyright © 2016 The American Association for Thoracic Surgery. Published by Elsevier Inc. All rights reserved.

  5. Nursing Intensive-Care Satisfaction Scale [NICSS]: Development and validation of a patient-centred instrument.

    PubMed

    Romero-García, Marta; de la Cueva-Ariza, Laura; Benito-Aracil, Llucia; Lluch-Canut, Teresa; Trujols-Albet, Joan; Martínez-Momblan, Maria Antonia; Juvé-Udina, Maria-Eulàlia; Delgado-Hito, Pilar

    2018-06-01

    The aim of this study was to develop and validate the Nursing Intensive-Care Satisfaction Scale, which measures satisfaction with nursing care from the critical care patient's perspective. Instruments that measure satisfaction with nursing care have previously been designed and validated without taking the patient's perspective into consideration. Despite the benefits and advances in measuring satisfaction with nursing care, no instrument is specifically designed to assess satisfaction in intensive care units. Instrument development. The population comprised all patients discharged (January 2013 - January 2015) from three intensive care units of a tertiary hospital (N = 200). All assessment instruments were given to discharged patients and, 48 hours later, the questionnaire alone was given again to analyse temporal stability. The validation process included analysis of internal consistency and temporal stability; construct validity through confirmatory factor analysis; and criterion validity. Reliability was 0.95. The intraclass correlation coefficient for the total scale was 0.83, indicating good temporal stability. Construct validity showed an acceptable fit and a factorial structure with four factors, in accordance with the theoretical model, with the Consequences factor correlating best with the other factors. Criterion validity showed correlations ranging from low to high (0.42-0.68). The scale has been designed and validated incorporating the perspective of critical care patients. Given its reliability and validity, this questionnaire can be used both in research and in clinical practice. The scale offers a means to assess, and to develop interventions to improve, patient satisfaction with nursing care. © 2018 John Wiley & Sons Ltd.
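    Internal-consistency reliability of the kind reported above (0.95) is conventionally computed as Cronbach's alpha; the abstract does not name the coefficient, so treat that identification as an assumption. A minimal sketch:

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """item_scores: one list per item, each holding respondents' scores.
    alpha = k/(k-1) * (1 - sum(item variances) / variance of totals)."""
    k = len(item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]
    item_var = sum(pvariance(item) for item in item_scores)
    return k / (k - 1) * (1 - item_var / pvariance(totals))
```

    Perfectly parallel items yield alpha = 1; values above roughly 0.9 are usually read as high internal consistency for a clinical scale.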

  6. Structural exploration for the refinement of anticancer matrix metalloproteinase-2 inhibitor designing approaches through robust validated multi-QSARs

    NASA Astrophysics Data System (ADS)

    Adhikari, Nilanjan; Amin, Sk. Abdul; Saha, Achintya; Jha, Tarun

    2018-03-01

    Matrix metalloproteinase-2 (MMP-2) is a promising pharmacological target for designing potential anticancer drugs. MMP-2 plays a critical role in apoptosis by cleaving the DNA repair enzyme poly (ADP-ribose) polymerase (PARP). Moreover, MMP-2 expression triggers the vascular endothelial growth factor (VEGF), which positively influences tumor size, invasion, and angiogenesis. There is therefore an urgent need to develop potential MMP-2 inhibitors with reduced toxicity and better pharmacokinetic properties. In this article, robust validated multi-quantitative structure-activity relationship (QSAR) modeling approaches were applied to a dataset of 222 MMP-2 inhibitors to explore the structural and pharmacophoric requirements for higher MMP-2 inhibition. Different validated regression- and classification-based QSARs, pharmacophore mapping, and 3D-QSAR techniques were performed. The resulting models were subjected to further validation against 24 in-house MMP-2 inhibitors to judge their reliability. All models were validated internally as well as externally, and they supported one another. The results were further justified by molecular docking analysis. The modeling techniques adopted here help not only to explore the necessary structural and pharmacophoric requirements but also to provide overall validation and refinement strategies for designing potential MMP-2 inhibitors.

  7. [Valuating public health in some zoos in Colombia. Phase 1: designing and validating instruments].

    PubMed

    Agudelo-Suárez, Angela N; Villamil-Jiménez, Luis C

    2009-10-01

    Designing and validating instruments for identifying public health problems in some zoological parks in Colombia, thereby allowing them to be evaluated. Four instruments were designed and validated with the participation of five zoos. The instruments were validated for appearance, content, sensitivity to change, and reliability, and their usefulness was determined. An evaluation scale was created which assigned a maximum of 400 points, with the following evaluation intervals: 350-400 points meant good public health management, 100-349 points regular management, and 0-99 points deficient management. The instruments were applied to the five zoos as part of the validation, forming a baseline for future evaluation of public health in them. Four valid and useful instruments were obtained for evaluating public health in zoos in Colombia. The five zoos presented regular public health management. The baseline obtained when validating the instruments led to identifying strengths and weaknesses in public health management in the zoos. The instruments evaluated public health management both generally and specifically; they led to diagnosing, identifying, quantifying and scoring zoos in Colombia in terms of public health. The baseline provided a starting point for making comparisons and enabling future follow-up of public health in Colombian zoos.

  8. Design, Implementation and Validation of the Three-Wheel Holonomic Motion System of the Assistant Personal Robot (APR).

    PubMed

    Moreno, Javier; Clotet, Eduard; Lupiañez, Ruben; Tresanchez, Marcel; Martínez, Dani; Pallejà, Tomàs; Casanovas, Jordi; Palacín, Jordi

    2016-10-10

    This paper presents the design, implementation and validation of the three-wheel holonomic motion system of a mobile robot designed to operate in homes. The holonomic motion system is described in terms of mechanical design and electronic control. The paper analyzes the kinematics of the motion system and validates the trajectory estimation by comparing the displacement estimated with the internal odometry of the motors against the displacement estimated with a SLAM procedure based on LIDAR information. Results obtained in different experiments have shown a difference of less than 30 mm between the positions estimated with SLAM and odometry, and a difference in the angular orientation of the mobile robot of less than 5° in absolute displacements of up to 1000 mm.
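    The odometry used in such comparisons rests on the base's kinematic model. The abstract does not give the APR's wheel geometry, so the sketch below uses a generic three-omni-wheel inverse-kinematic model with assumed wheel angles (120° apart) and base radius.

```python
import math

def wheel_speeds(vx, vy, omega, radius=0.2, angles_deg=(0.0, 120.0, 240.0)):
    """Inverse kinematics of a three-omni-wheel holonomic base: each
    wheel's rim speed as a projection of the body twist (vx, vy, omega).
    radius is the distance from the base center to each wheel (assumed)."""
    speeds = []
    for a in angles_deg:
        t = math.radians(a)
        speeds.append(-math.sin(t) * vx + math.cos(t) * vy + radius * omega)
    return speeds
```

    Integrating the forward (inverse of this) mapping over encoder ticks yields the odometric trajectory that the paper checks against LIDAR SLAM.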

  9. Design, Implementation and Validation of the Three-Wheel Holonomic Motion System of the Assistant Personal Robot (APR)

    PubMed Central

    Moreno, Javier; Clotet, Eduard; Lupiañez, Ruben; Tresanchez, Marcel; Martínez, Dani; Pallejà, Tomàs; Casanovas, Jordi; Palacín, Jordi

    2016-01-01

    This paper presents the design, implementation and validation of the three-wheel holonomic motion system of a mobile robot designed to operate in homes. The holonomic motion system is described in terms of mechanical design and electronic control. The paper analyzes the kinematics of the motion system and validates the trajectory estimation by comparing the displacement estimated with the internal odometry of the motors against the displacement estimated with a SLAM procedure based on LIDAR information. Results obtained in different experiments have shown a difference of less than 30 mm between the positions estimated with SLAM and odometry, and a difference in the angular orientation of the mobile robot of less than 5° in absolute displacements of up to 1000 mm. PMID:27735857

  10. Reducing Bias and Increasing Precision by Adding Either a Pretest Measure of the Study Outcome or a Nonequivalent Comparison Group to the Basic Regression Discontinuity Design: An Example from Education

    ERIC Educational Resources Information Center

    Tang, Yang; Cook, Thomas D.; Kisbu-Sakarya, Yasemin

    2015-01-01

    Regression discontinuity (RD) design has been widely used to produce reliable causal estimates. Researchers have validated the accuracy of RD designs using within-study comparisons (Cook, Shadish & Wong, 2008; Cook & Steiner, 2010; Shadish et al., 2011). Within-study comparisons examine the validity of a quasi-experiment by comparing its…
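    A sharp RD estimate can be illustrated in a few lines: fit separate regressions on either side of the assignment cutoff and take the gap between the fitted lines at the cutoff. This is a generic textbook sketch, not the authors' analysis.

```python
def linear_fit(xs, ys):
    """Ordinary least squares for y = a + b*x; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return my - b * mx, b

def rd_effect(xs, ys, cutoff):
    """Sharp regression-discontinuity estimate: the jump between the
    two side-specific fitted lines, evaluated at the cutoff."""
    left = [(x, y) for x, y in zip(xs, ys) if x < cutoff]
    right = [(x, y) for x, y in zip(xs, ys) if x >= cutoff]
    a_l, b_l = linear_fit(*zip(*left))
    a_r, b_r = linear_fit(*zip(*right))
    return (a_r + b_r * cutoff) - (a_l + b_l * cutoff)
```

    Adding a pretest covariate or a nonequivalent comparison group, as the article proposes, tightens the standard error of exactly this discontinuity estimate.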

  11. Drug designs fulfilling the requirements of clinical trials aiming at personalizing medicine

    PubMed Central

    Mandrekar, Sumithra J.; Sargent, Daniel J.

    2014-01-01

    In the current era of stratified medicine and biomarker-driven therapies, the focus has shifted from predictions based on the traditional anatomic staging systems to guide the choice of treatment for an individual patient to an integrated approach using the genetic makeup of the tumor and the genotype of the patient. The clinical trial designs utilized in the developmental pathway for biomarkers and biomarker-directed therapies from discovery to clinical practice are rapidly evolving. While several issues need careful consideration, two critical issues that surround the validation of biomarkers are the choice of the clinical trial design (which is based on the strength of the preliminary evidence and marker prevalence), and biomarker assay related issues surrounding the marker assessment methods, such as the reliability and reproducibility of the assay. In this review, we focus on trial designs aiming at personalized medicine in the context of early phase trials for initial marker validation, as well as in the context of larger definitive trials. Designs for biomarker validation are broadly classified as retrospective (i.e., using data from previously well-conducted randomized controlled trials (RCTs)) versus prospective (enrichment, all-comers, hybrid or adaptive). We believe that the systematic evaluation and implementation of these design strategies are essential to accelerate the clinical validation of biomarker guided therapy. PMID:25414851

  12. IR-drop analysis for validating power grids and standard cell architectures in sub-10nm node designs

    NASA Astrophysics Data System (ADS)

    Ban, Yongchan; Wang, Chenchen; Zeng, Jia; Kye, Jongwook

    2017-03-01

    Since chip performance and power are highly dependent on the operating voltage, a robust power distribution network (PDN) is of utmost importance in designs to deliver a reliable voltage without voltage (IR) drop. However, the rapid increase of parasitic resistance and capacitance (RC) in interconnects makes IR-drop much worse with technology scaling. This paper presents various IR-drop analyses in sub-10nm designs. The major objectives are to validate standard cell architectures, where different sizes of power/ground and metal tracks are evaluated, and to validate the PDN architecture, where types of power hook-up approaches are evaluated with IR-drop calculation. To estimate IR-drops in 10nm-and-below technologies, we first prepare physically routed designs from standard cell libraries: we use an open RISC RTL, synthesize the CPU, and apply placement & routing with process-design kits (PDKs). Then, static and dynamic IR-drop flows are set up with commercial tools. Using the IR-drop flow, we compare standard cell architectures and analyze impacts on performance, power, and area (PPA) relative to previous-technology-node designs. With this IR-drop flow, we can optimize the PDN structure against IR-drops as well as the choice of standard cell library.
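    At its core, static IR-drop analysis is Ohm's law applied over the parasitic resistance of the power network. The commercial flows mentioned above solve the full 2-D grid; the sketch below is a deliberately simplified one-dimensional rail model with hypothetical segment resistances and cell tap currents, just to show how drops accumulate away from the power pad.

```python
def rail_ir_drop(segment_res, node_currents):
    """Static IR-drop along a single power rail fed from one pad.
    segment_res[k]: resistance (ohms) of the segment ending at node k.
    node_currents[k]: current (A) drawn at node k.
    The current through segment k is the total draw downstream of it,
    so the drop at node k is the running sum of I*R toward the far end."""
    drops = []
    cumulative = 0.0
    for k, r in enumerate(segment_res):
        downstream = sum(node_currents[k:])
        cumulative += downstream * r
        drops.append(cumulative)
    return drops
```

    Even this toy model shows why the far end of a long rail with many taps sees the worst drop, which is what wider power tracks and denser hook-ups mitigate.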

  13. Design, Validation, and Use of an Evaluation Instrument for Monitoring Systemic Reform.

    ERIC Educational Resources Information Center

    Scantlebury, Kathryn; Boone, William; Kahle, Jane Butler; Fraser, Barry J.

    2001-01-01

    Describes the design, development, validation, and use of an instrument that measures student attitudes and several environmental dimensions (i.e., standards-based teaching, home support, and peer support). Indicates that the classroom environment (standards-based teaching practices) was the strongest independent predictor of both achievement and…

  14. Flexible Programmes in Higher Professional Education: Expert Validation of a Flexible Educational Model

    ERIC Educational Resources Information Center

    Schellekens, Ad; Paas, Fred; Verbraeck, Alexander; van Merrienboer, Jeroen J. G.

    2010-01-01

    In a preceding case study, a process-focused demand-driven approach for organising flexible educational programmes in higher professional education (HPE) was developed. Operations management and instructional design contributed to designing a flexible educational model by means of discrete-event simulation. Educational experts validated the model…

  15. Design, Development, and Validation of Learning Objects

    ERIC Educational Resources Information Center

    Nugent, Gwen; Soh, Leen-Kiat; Samal, Ashok

    2006-01-01

    A learning object is a small, stand-alone, mediated content resource that can be reused in multiple instructional contexts. In this article, we describe our approach to design, develop, and validate Shareable Content Object Reference Model (SCORM) compliant learning objects for undergraduate computer science education. We discuss the advantages of…

  16. 29 CFR 1607.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... in the design of the study and their effects identified. (5) Statistical relationships. The degree of...; or such factors should be included in the design of the study and their effects identified. (f... arduous effort involving a series of research studies, which include criterion related validity studies...

  17. 29 CFR 1607.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... in the design of the study and their effects identified. (5) Statistical relationships. The degree of...; or such factors should be included in the design of the study and their effects identified. (f... arduous effort involving a series of research studies, which include criterion related validity studies...

  18. 29 CFR 1607.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... in the design of the study and their effects identified. (5) Statistical relationships. The degree of...; or such factors should be included in the design of the study and their effects identified. (f... arduous effort involving a series of research studies, which include criterion related validity studies...

  19. 29 CFR 1607.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... in the design of the study and their effects identified. (5) Statistical relationships. The degree of...; or such factors should be included in the design of the study and their effects identified. (f... arduous effort involving a series of research studies, which include criterion related validity studies...

  20. 29 CFR 1607.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... in the design of the study and their effects identified. (5) Statistical relationships. The degree of...; or such factors should be included in the design of the study and their effects identified. (f... arduous effort involving a series of research studies, which include criterion related validity studies...

  1. 76 FR 4360 - Guidance for Industry on Process Validation: General Principles and Practices; Availability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-25

    ... elements of process validation for the manufacture of human and animal drug and biological products... process validation for the manufacture of human and animal drug and biological products, including APIs. This guidance describes process validation activities in three stages: In Stage 1, Process Design, the...

  2. Validation of a Scalable Solar Sailcraft

    NASA Technical Reports Server (NTRS)

    Murphy, D. M.

    2006-01-01

    The NASA In-Space Propulsion (ISP) program sponsored intensive solar sail technology and systems design, development, and hardware demonstration activities over the past 3 years. Efforts to validate a scalable solar sail system by functional demonstration in relevant environments, together with test-analysis correlation activities, have recently been successfully completed. A review of the program is given, with descriptions of the design, results of testing, and analytical model validations of component and assembly functional, strength, stiffness, shape, and dynamic behavior. The scaled performance of the validated system is projected to demonstrate applicability to flight demonstration and important NASA road-map missions.

  3. Design and validation of a comprehensive fecal incontinence questionnaire.

    PubMed

    Macmillan, Alexandra K; Merrie, Arend E H; Marshall, Roger J; Parry, Bryan R

    2008-10-01

    Fecal incontinence can have a profound effect on quality of life. Its prevalence remains uncertain because of stigma, lack of a consistent definition, and a dearth of validated measures. This study was designed to develop a valid clinical and epidemiologic questionnaire, building on current literature and expertise. Patients and experts undertook face validity testing. Construct validity, criterion validity, and test-retest reliability were then assessed. Construct validity comprised factor analysis and internal consistency of the quality of life scale. Known-groups validity was tested against 77 control subjects using regression models. Questionnaire results were compared with a stool diary for criterion validity. Test-retest reliability was calculated from repeated questionnaire completion. The questionnaire achieved good face validity. It was completed by 104 patients. The quality of life scale had four underlying traits (factor analysis) and high internal consistency (overall Cronbach alpha = 0.97). Patients and control subjects answered the questionnaire significantly differently (P < 0.01) in known-groups validity testing. Criterion validity assessment found mean differences close to zero. Median reliability for the whole questionnaire was 0.79 (range, 0.35-1). This questionnaire compares favorably with other available instruments, although the interpretation of stool consistency requires further research. Its sensitivity to treatment still needs to be investigated.

  4. Design and control of compliant tensegrity robots through simulation and hardware validation

    PubMed Central

    Caluwaerts, Ken; Despraz, Jérémie; Işçen, Atıl; Sabelhaus, Andrew P.; Bruce, Jonathan; Schrauwen, Benjamin; SunSpiral, Vytas

    2014-01-01

    To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center, Moffett Field, CA, USA, has developed and validated two software environments for the analysis, simulation and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced from their appearance in many biological systems, tensegrity (‘tensile–integrity’) structures have unique physical properties that make them ideal for interaction with uncertain environments. Yet, these characteristics make design and control of bioinspired tensegrity robots extremely challenging. This work presents the progress our tools have made in tackling the design and control challenges of spherical tensegrity structures. We focus on this shape since it lends itself to rolling locomotion. The results of our analyses include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures that have been tested in simulation. A hardware prototype of a spherical six-bar tensegrity, the Reservoir Compliant Tensegrity Robot, is used to empirically validate the accuracy of simulation. PMID:24990292

  5. Aeroacoustic Validation of Installed Low Noise Propulsion for NASA's N+2 Supersonic Airliner

    NASA Technical Reports Server (NTRS)

    Bridges, James

    2018-01-01

    An aeroacoustic test was conducted at NASA Glenn Research Center on an integrated propulsion system designed to meet noise regulations of ICAO Chapter 4 with 10EPNdB cumulative margin. The test had two objectives: to demonstrate that the aircraft design did meet the noise goal, and to validate the acoustic design tools used in the design. Variations in the propulsion system design and its installation were tested and the results compared against predictions. Far-field arrays of microphones measured the acoustic spectral directivity, which was transformed to full scale as noise certification levels. Phased array measurements confirmed that the shielding of the installation model adequately simulated the full aircraft and provided data for validating RANS-based noise prediction tools. Particle image velocimetry confirmed that the flow field around the nozzle on the jet rig mimicked that of the full aircraft and produced flow data to validate the RANS solutions used in the noise predictions. The far-field acoustic measurements confirmed the empirical predictions for the noise. Results provided here detail the steps taken to ensure accuracy of the measurements and give insights into the physics of exhaust noise from installed propulsion systems in future supersonic vehicles.

  6. A design procedure for the handling qualities optimization of the X-29A aircraft

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.; Cox, Timothy H.

    1989-01-01

    The techniques used to improve the pitch-axis handling qualities of the X-29A wing-canard-planform fighter aircraft are reviewed. The aircraft and its FCS are briefly described, and the design method, which works within the existing FCS architecture, is characterized in detail. Consideration is given to the selection of design goals and design variables, the definition and calculation of the cost function, the validation of the mathematical model on the basis of flight-test data, and the validation of the improved design by means of nonlinear simulations. Flight tests of the improved design are shown to verify the simulation results.

  7. Towards a full integration of optimization and validation phases: An analytical-quality-by-design approach.

    PubMed

    Hubert, C; Houari, S; Rozet, E; Lebrun, P; Hubert, Ph

    2015-05-22

    When using an analytical method, defining an analytical target profile (ATP) focused on quantitative performance represents a key input, and this will drive the method development process. In this context, two case studies were selected in order to demonstrate the potential of a quality-by-design (QbD) strategy when applied to two specific phases of the method lifecycle: the pre-validation study and the validation step. The first case study focused on the improvement of a liquid chromatography (LC) coupled to mass spectrometry (MS) stability-indicating method by means of the QbD concept. The design of experiments (DoE) conducted during the optimization step (i.e. determination of the qualitative design space (DS)) was performed a posteriori. Additional experiments were performed in order to simultaneously conduct the pre-validation study and to assist in defining the DoE to be conducted during the formal validation step. This predicted protocol was compared to the one used during the formal validation. A second case study based on the LC/MS-MS determination of glucosamine and galactosamine in human plasma was considered in order to illustrate an innovative strategy allowing the QbD methodology to be incorporated during the validation phase. An operational space, defined by the qualitative DS, was considered during the validation process rather than a specific set of working conditions as conventionally performed. Results of all the validation parameters conventionally studied were compared to those obtained with this innovative approach for glucosamine and galactosamine. Using this strategy, qualitative and quantitative information were obtained. Consequently, an analyst using this approach would be able to select with great confidence several working conditions within the operational space rather than a given condition for the routine use of the method. This innovative strategy combines both a learning process and a thorough assessment of the risk involved. Copyright © 2015 Elsevier B.V. All rights reserved.

  8. The Model Human Processor and the Older Adult: Parameter Estimation and Validation Within a Mobile Phone Task

    PubMed Central

    Jastrzembski, Tiffany S.; Charness, Neil

    2009-01-01

    The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies. PMID:18194048
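    Keystroke-level modeling of the kind validated above predicts task time by summing per-operator durations along a task's operator sequence. The operator times below are illustrative round numbers in the spirit of the Card, Moran, and Newell conventions, and the older-adult scalings are hypothetical, not the parameters estimated in the paper.

```python
# Illustrative operator times in seconds (K = keystroke, M = mental
# preparation, P = point, H = home hands). The "old" column is a
# hypothetical slowed-down profile for demonstration only.
OPERATOR_TIMES = {
    "young": {"K": 0.28, "M": 1.35, "P": 1.10, "H": 0.40},
    "old":   {"K": 0.42, "M": 2.00, "P": 1.50, "H": 0.55},
}

def klm_predict(sequence, group="young"):
    """Keystroke-level prediction: total task time is the sum of the
    operator times in the sequence, e.g. 'MKKKK' = think, then 4 keys."""
    times = OPERATOR_TIMES[group]
    return sum(times[op] for op in sequence)
```

    Comparing the same operator sequence under the two parameter sets mirrors how age-specific parameters expose design points where cognitive operators dominate.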

  9. The Model Human Processor and the older adult: parameter estimation and validation within a mobile phone task.

    PubMed

    Jastrzembski, Tiffany S; Charness, Neil

    2007-12-01

    The authors estimate weighted mean values for nine information processing parameters for older adults using the Card, Moran, and Newell (1983) Model Human Processor model. The authors validate a subset of these parameters by modeling two mobile phone tasks using two different phones and comparing model predictions to a sample of younger (N = 20; Mage = 20) and older (N = 20; Mage = 69) adults. Older adult models fit keystroke-level performance at the aggregate grain of analysis extremely well (R = 0.99) and produced equivalent fits to previously validated younger adult models. Critical path analyses highlighted points of poor design as a function of cognitive workload, hardware/software design, and user characteristics. The findings demonstrate that estimated older adult information processing parameters are valid for modeling purposes, can help designers understand age-related performance using existing interfaces, and may support the development of age-sensitive technologies.

  10. The Integrated Airframe/Propulsion Control System Architecture program (IAPSA)

    NASA Technical Reports Server (NTRS)

    Palumbo, Daniel L.; Cohen, Gerald C.; Meissner, Charles W.

    1990-01-01

    The Integrated Airframe/Propulsion Control System Architecture program (IAPSA) is a two-phase program which was initiated by NASA in the early 80s. The first phase, IAPSA 1, studied different architectural approaches to the problem of integrating engine control systems with airframe control systems in an advanced tactical fighter. One of the conclusions of IAPSA 1 was that the technology to construct a suitable system was available, yet the ability to create these complex computer architectures has outpaced the ability to analyze the resulting system's performance. With this in mind, the second phase of IAPSA approached the same problem with the added constraint that the system be designed for validation. The intent of the design for validation requirement is that validation requirements should be shown to be achievable early in the design process. IAPSA 2 has demonstrated that despite diligent efforts, integrated systems can retain characteristics which are difficult to model and, therefore, difficult to validate.

  11. Assessing the validity of sales self-efficacy: a cautionary tale.

    PubMed

    Gupta, Nina; Ganster, Daniel C; Kepes, Sven

    2013-07-01

    We developed a focused, context-specific measure of sales self-efficacy and assessed its incremental validity against the broad Big 5 personality traits with department store salespersons, using (a) both a concurrent and a predictive design and (b) both objective sales measures and supervisory ratings of performance. We found that in the concurrent study, sales self-efficacy predicted objective and subjective measures of job performance more than did the Big 5 measures. Significant differences between the predictability of subjective and objective measures of performance were not observed. Predictive validity coefficients were generally lower than concurrent validity coefficients. The results suggest that there are different dynamics operating in concurrent and predictive designs and between broad and contextualized measures; they highlight the importance of distinguishing between these designs and measures in meta-analyses. The results also point to the value of focused, context-specific personality predictors in selection research. PsycINFO Database Record (c) 2013 APA, all rights reserved.

  12. Standing wave design and experimental validation of a tandem simulated moving bed process for insulin purification.

    PubMed

    Xie, Yi; Mun, Sungyong; Kim, Jinhyun; Wang, Nien-Hwa Linda

    2002-01-01

    A tandem simulated moving bed (SMB) process for insulin purification has been proposed and validated experimentally. The mixture to be separated consists of insulin, high molecular weight proteins, and zinc chloride. A systematic approach based on the standing wave design, rate model simulations, and experiments was used to develop this multicomponent separation process. The standing wave design was applied to specify the SMB operating conditions of a lab-scale unit with 10 columns. The design was validated with rate model simulations prior to experiments. The experimental results show 99.9% purity and 99% yield, which closely agree with the model predictions and the standing wave design targets. The agreement proves that the standing wave design can ensure high purity and high yield for the tandem SMB process. Compared to a conventional batch SEC process, the tandem SMB has 10% higher yield, 400% higher throughput, and 72% lower eluant consumption. In contrast, a design that ignores the effects of mass transfer and nonideal flow cannot meet the purity requirement and gives less than 96% yield.

  13. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods.

    PubMed

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J Sunil

    2014-08-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called "Patient Recursive Survival Peeling" is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called "combined" cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication.
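    The "combined" idea above, pooling held-out samples across the cross-validation loops and scoring once rather than averaging per-fold scores, can be sketched generically. This is a simplified non-survival illustration with hypothetical `fit`/`predict` callables, not the authors' recursive-peeling implementation.

```python
def kfold_indices(n, k):
    """Split range(n) into k contiguous folds of near-equal size."""
    folds, start = [], 0
    for i in range(k):
        size = n // k + (1 if i < n % k else 0)
        folds.append(list(range(start, start + size)))
        start += size
    return folds

def combined_cv(xs, ys, fit, predict, k=3):
    """Pool the held-out predictions from all folds, returning one
    combined (predictions, truths) pair to be scored once, instead of
    computing a separate score inside each fold."""
    pooled_pred, pooled_true = [], []
    for test_idx in kfold_indices(len(xs), k):
        train_idx = [i for i in range(len(xs)) if i not in test_idx]
        model = fit([xs[i] for i in train_idx], [ys[i] for i in train_idx])
        pooled_pred += [predict(model, xs[i]) for i in test_idx]
        pooled_true += [ys[i] for i in test_idx]
    return pooled_pred, pooled_true
```

    Scoring the pooled set once is what makes statistics that need many events per evaluation (such as survival estimates within a decision box) feasible under cross-validation.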

  14. Cross-Validation of Survival Bump Hunting by Recursive Peeling Methods

    PubMed Central

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil

    2015-01-01

    We introduce a survival/risk bump hunting framework to build a bump hunting model with a possibly censored time-to-event type of response and to validate model estimates. First, we describe the use of adequate survival peeling criteria to build a survival/risk bump hunting model based on recursive peeling methods. Our method called “Patient Recursive Survival Peeling” is a rule-induction method that makes use of specific peeling criteria such as hazard ratio or log-rank statistics. Second, to validate our model estimates and improve survival prediction accuracy, we describe a resampling-based validation technique specifically designed for the joint task of decision rule making by recursive peeling (i.e. decision-box) and survival estimation. This alternative technique, called “combined” cross-validation is done by combining test samples over the cross-validation loops, a design allowing for bump hunting by recursive peeling in a survival setting. We provide empirical results showing the importance of cross-validation and replication. PMID:26997922

  15. Verification and Validation Process for Progressive Damage and Failure Analysis Methods in the NASA Advanced Composites Consortium

    NASA Technical Reports Server (NTRS)

    Wanthal, Steven; Schaefer, Joseph; Justusson, Brian; Hyder, Imran; Engelstad, Stephen; Rose, Cheryl

    2017-01-01

    The Advanced Composites Consortium is a US Government/Industry partnership supporting technologies to enable timeline and cost reduction in the development of certified composite aerospace structures. A key component of the consortium's approach is the development and validation of improved progressive damage and failure analysis methods for composite structures. These methods will enable increased use of simulations in design trade studies and detailed design development, and thereby enable more targeted physical test programs to validate designs. To accomplish this goal with confidence, a rigorous verification and validation process was developed. The process was used to evaluate analysis methods and associated implementation requirements to ensure calculation accuracy and to gage predictability for composite failure modes of interest. This paper introduces the verification and validation process developed by the consortium during the Phase I effort of the Advanced Composites Project. Specific structural failure modes of interest are first identified, and a subset of standard composite test articles are proposed to interrogate a progressive damage analysis method's ability to predict each failure mode of interest. Test articles are designed to capture the underlying composite material constitutive response as well as the interaction of failure modes representing typical failure patterns observed in aerospace structures.

  16. Sonic Boom Research at NASA Dryden: Objectives and Flight Results from the Lift and Nozzle Change Effects on Tail Shock (LaNCETS) Project

    NASA Technical Reports Server (NTRS)

    Moes, Timothy R.

    2009-01-01

    The principal objective of the Supersonics Project is to develop and validate multidisciplinary physics-based predictive design, analysis and optimization capabilities for supersonic vehicles. For aircraft, the focus will be on eliminating the efficiency, environmental and performance barriers to practical supersonic flight. Previous flight projects found that a shaped sonic boom could propagate all the way to the ground (F-5 SSBD experiment) and validated design tools for forebody shape modifications (F-5 SSBD and Quiet Spike experiments). The current project, Lift and Nozzle Change Effects on Tail Shock (LaNCETS) seeks to obtain flight data to develop and validate design tools for low-boom tail shock modifications. Attempts will be made to alter the shock structure of NASA's NF-15B TN/837 by changing the lift distribution by biasing the canard positions, changing the plume shape by under- and over-expanding the nozzles, and changing the plume shape using thrust vectoring. Additional efforts will measure resulting shocks with a probing aircraft (F-15B TN/836) and use the results to validate and update predictive tools. Preliminary flight results are presented and are available to provide truth data for developing and validating the CFD tools required to design low-boom supersonic aircraft.

  17. Human factors engineering and design validation for the redesigned follitropin alfa pen injection device.

    PubMed

    Mahony, Mary C; Patterson, Patricia; Hayward, Brooke; North, Robert; Green, Dawne

    2015-05-01

    To demonstrate, using human factors engineering (HFE), that a redesigned, pre-filled, ready-to-use, pre-assembled follitropin alfa pen can be used to administer prescribed follitropin alfa doses safely and accurately. A failure modes and effects analysis identified hazards and harms potentially caused by use errors; risk-control measures were implemented to ensure acceptable device use risk management. Participants were women with infertility, their significant others, and fertility nurse (FN) professionals. Preliminary testing included 'Instructions for Use' (IFU) and pre-validation studies. Validation studies used simulated injections in a representative use environment; participants received prior training on pen use. User performance in preliminary testing led to IFU revisions and a change to outer needle cap design to mitigate needle-stick potential. In the first validation study (49 users, 343 simulated injections), in the FN group, one observed critical use error resulted in a device design modification and another in an IFU change. A second validation study tested the mitigation strategies; previously reported use errors were not repeated. Through an iterative process involving a series of studies, modifications were made to the pen design and IFU. Simulated-use testing demonstrated that the redesigned pen can be used to administer follitropin alfa effectively and safely.
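    A failure modes and effects analysis of this kind typically rates each use error on severity, occurrence, and detectability. The sketch below computes the classic risk priority number (RPN) used to rank such hazards; the failure modes and ratings shown are hypothetical examples, not values from the study:

```python
def risk_priority_number(severity, occurrence, detection):
    """Classic FMEA metric: each factor is rated 1-10; higher = riskier.
    (Illustrative; the study's actual scoring scheme is not specified.)"""
    for rating in (severity, occurrence, detection):
        if not 1 <= rating <= 10:
            raise ValueError("ratings must be in 1..10")
    return severity * occurrence * detection

# Rank hypothetical use errors for an injection pen by RPN.
failure_modes = {
    "needle-stick during cap removal": (7, 4, 3),
    "wrong dose dialed": (9, 3, 5),
    "injection before priming": (6, 2, 6),
}
ranked = sorted(failure_modes.items(),
                key=lambda kv: risk_priority_number(*kv[1]),
                reverse=True)
```

    Risk-control measures would then target the highest-RPN modes first, as was done here with the outer needle cap redesign.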

  18. Validating EHR clinical models using ontology patterns.

    PubMed

    Martínez-Costa, Catalina; Schulz, Stefan

    2017-12-01

    Clinical models are artefacts that specify how information is structured in electronic health records (EHRs). However, the makeup of clinical models is not guided by any formal constraint beyond a semantically vague information model. We address this gap by advocating ontology design patterns as a mechanism that makes the semantics of clinical models explicit. This paper demonstrates how ontology design patterns can validate existing clinical models using SHACL. Based on the Clinical Information Modelling Initiative (CIMI), we show how ontology patterns detect both modeling and terminology binding errors in CIMI models. SHACL, a W3C constraint language for the validation of RDF graphs, builds on the concept of "Shape", a description of data in terms of expected cardinalities, datatypes and other restrictions. SHACL, as opposed to OWL, subscribes to the Closed World Assumption (CWA) and is therefore more suitable for the validation of clinical models. We have demonstrated the feasibility of the approach by manually describing the correspondences between six CIMI clinical models represented in RDF and two SHACL ontology design patterns. Using a Java-based SHACL implementation, we found at least eleven modeling and binding errors within these CIMI models. This demonstrates the usefulness of ontology design patterns not only as a modeling tool but also as a tool for validation. Copyright © 2017 Elsevier Inc. All rights reserved.
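    The "Shape" concept can be illustrated without a full RDF stack. The following Python sketch applies a closed-world check of cardinalities and datatypes to a dict-based record; it is an analogue of SHACL-style validation, not SHACL itself, and the blood-pressure model and property names are invented:

```python
def validate_shape(record, shape):
    """Closed-world check of a dict 'record' against a 'shape':
    {property: (min_count, max_count, datatype)}. Properties absent
    from the shape are violations (CWA), unlike OWL's open-world
    reading, where unexpected assertions are simply ignored."""
    violations = []
    for prop, (lo, hi, dtype) in shape.items():
        values = record.get(prop, [])
        if not lo <= len(values) <= hi:
            violations.append(
                f"{prop}: cardinality {len(values)} not in [{lo},{hi}]")
        violations.extend(f"{prop}: {v!r} is not {dtype.__name__}"
                          for v in values if not isinstance(v, dtype))
    violations.extend(f"{prop}: not allowed by shape"
                      for prop in record if prop not in shape)
    return violations

# A toy clinical model: a blood-pressure record (invented for illustration).
bp_shape = {"systolic": (1, 1, int), "diastolic": (1, 1, int)}
```

    A conforming record returns an empty violation list; an over-filled or extended record is flagged, which mirrors how SHACL shapes catch modeling and binding errors in the CIMI models.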

  19. Validation of the AVM Blast Computational Modeling and Simulation Tool Set

    DTIC Science & Technology

    2015-08-04

    by-construction" methodology is powerful and would not be possible without high-level design languages to support validation and verification. [1,4...to enable the making of informed design decisions. Enable rapid exploration of the design trade-space for high-fidelity requirements tradeoffs...live-fire tests, the jump height of the target structure is recorded by using either high-speed cameras or a string pot. A simple projectile motion

  20. Seaworthy Quantum Key Distribution Design and Validation (SEAKEY)

    DTIC Science & Technology

    2015-11-12

    polarization control and the CV state and the LO state are separated at a polarizing beam splitter. The CV state is delayed relative to the LO state, and... splitter or loss imperfections. We have identified a number of risks associated with implementing this design. The two most critical risks are: • The...Contractor Address: 10 Moulton Street, Cambridge, MA 02138 Title of the Project: Seaworthy Quantum Key Distribution Design and Validation (SEAKEY

  1. Vibration Characterization and Health Risk Assessment of the Vermont Army National Guard UH-72 Lakota and HH-60M MEDEVAC

    DTIC Science & Technology

    2014-04-01

    For assessing comfort reaction, the overall vibration total value (oVTV) was calculated as the vector sum of the weighted triaxial seat pan and...the health symptoms require investigation in order to develop or improve effective exposure criteria, ergonomic design requirements, and mitigation...effects, seat design, and validation testing. However, appropriate science- and technology-based guidelines on exposure, seat design, and validation

  2. A Psychometric Validation of the Internal and External Motivation to Respond without Prejudice toward People with Disabilities Scale

    ERIC Educational Resources Information Center

    Pruett, Steven R.; Deiches, Jon; Pfaller, Joseph; Moser, Erin; Chan, Fong

    2014-01-01

    Objective: To determine the factorial validity of the Internal and External Motivation to Respond without Prejudice toward People with Disabilities Scale (D-IMS/EMS). Design: A quantitative descriptive design using factor analysis. Participants: 233 rehabilitation counseling and rehabilitation services students. Results: Both exploratory and…

  3. School Climate of Educational Institutions: Design and Validation of a Diagnostic Scale

    ERIC Educational Resources Information Center

    Becerra, Sandra

    2016-01-01

    School climate is recognized as a relevant factor for the improvement of educative processes, favoring the administrative processes and optimum school performance. The present article is the result of a quantitative research model which had the objective of psychometrically designing and validating a scale to diagnose the organizational climate of…

  4. Perception of Parents Scale: Development and Validation.

    ERIC Educational Resources Information Center

    Wintre, Maxine Gallander; Yaffe, Marvin

    This paper describes the development and validation of the Perception of Parents Scale (POPS), which was designed to measure the transformation in parent-child relations from the initial positions of authority and obedience to the mature position of mutual reciprocity. A 51-item, 4-point Likert scale was designed. Items were divided into three…

  5. Parent-Reported Social Support for Child's Fruit and Vegetable Intake: Validity of Measures

    ERIC Educational Resources Information Center

    Dave, Jayna M.; Evans, Alexandra E.; Condrasky, Marge D.; Williams, Joel E.

    2012-01-01

    Objective: To develop and validate measures of parental social support to increase their child's fruit and vegetable (FV) consumption. Design: Cross-sectional study design. Setting: School and home. Participants: Two hundred three parents with at least 1 elementary school-aged child. Main Outcome Measure: Parents completed a questionnaire that…

  6. Design and Validation of the Quantum Mechanics Conceptual Survey

    ERIC Educational Resources Information Center

    McKagan, S. B.; Perkins, K. K.; Wieman, C. E.

    2010-01-01

    The Quantum Mechanics Conceptual Survey (QMCS) is a 12-question survey of students' conceptual understanding of quantum mechanics. It is intended to be used to measure the relative effectiveness of different instructional methods in modern physics courses. In this paper, we describe the design and validation of the survey, a process that included…

  7. Design of Mobile e-Books as a Teaching Tool for Diabetes Education

    ERIC Educational Resources Information Center

    Guo, Sophie Huey-Ming

    2017-01-01

    To facilitate people with diabetes adopting information technologies, a tool of mobile eHealth education for diabetes was described in this paper, presenting the validity of mobile eBook for diabetes educators. This paper describes the design concepts and validity of this mobile eBook for diabetes educators delivering diabetes electronic…

  8. Designing Mouse Behavioral Tasks Relevant to Autistic-Like Behaviors

    ERIC Educational Resources Information Center

    Crawley, Jacqueline N.

    2004-01-01

    The importance of genetic factors in autism has prompted the development of mutant mouse models to advance our understanding of biological mechanisms underlying autistic behaviors. Mouse models of human neuropsychiatric diseases are designed to optimize (1) face validity, i.e., resemblance to the human symptoms; (2) construct validity, i.e.,…

  9. Preparation for implementation of the mechanistic-empirical pavement design guide in Michigan, part 3 : local calibration and validation of the pavement-ME performance models.

    DOT National Transportation Integrated Search

    2014-11-01

    The main objective of Part 3 was to locally calibrate and validate the mechanistic-empirical pavement : design guide (Pavement-ME) performance models to Michigan conditions. The local calibration of the : performance models in the Pavement-ME is a ch...

  10. An extended protocol for usability validation of medical devices: Research design and reference model.

    PubMed

    Schmettow, Martin; Schnittker, Raphaela; Schraagen, Jan Maarten

    2017-05-01

    This paper proposes and demonstrates an extended protocol for usability validation testing of medical devices. A review of currently used methods for the usability evaluation of medical devices revealed two main shortcomings. Firstly, the lack of methods to closely trace the interaction sequences and derive performance measures. Secondly, a prevailing focus on cross-sectional validation studies, ignoring the issues of learnability and training. The U.S. Federal Drug and Food Administration's recent proposal for a validation testing protocol for medical devices is then extended to address these shortcomings: (1) a novel process measure 'normative path deviations' is introduced that is useful for both quantitative and qualitative usability studies and (2) a longitudinal, completely within-subject study design is presented that assesses learnability, training effects and allows analysis of diversity of users. A reference regression model is introduced to analyze data from this and similar studies, drawing upon generalized linear mixed-effects models and a Bayesian estimation approach. The extended protocol is implemented and demonstrated in a study comparing a novel syringe infusion pump prototype to an existing design with a sample of 25 healthcare professionals. Strong performance differences between designs were observed with a variety of usability measures, as well as varying training-on-the-job effects. We discuss our findings with regard to validation testing guidelines, reflect on the extensions and discuss the perspectives they add to the validation process. Copyright © 2017 Elsevier Inc. All rights reserved.
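    The 'normative path deviations' measure suggests comparing an observed interaction sequence against the normative task path. One plausible operationalization (an assumption on our part, not the paper's definition) is the edit distance between the two action sequences; the action names below are hypothetical:

```python
def edit_distance(a, b):
    """Levenshtein distance between two action sequences, computed with
    a rolling one-row dynamic program."""
    prev = list(range(len(b) + 1))
    for i, x in enumerate(a, 1):
        cur = [i]
        for j, y in enumerate(b, 1):
            cur.append(min(prev[j] + 1,        # delete x
                           cur[j - 1] + 1,     # insert y
                           prev[j - 1] + (x != y)))  # substitute
        prev = cur
    return prev[-1]

def path_deviation(observed, normative):
    """Count deviations of an observed interaction sequence from the
    normative task path (illustrative operationalization)."""
    return edit_distance(observed, normative)
```

    For example, one superfluous step inserted into an otherwise correct sequence yields a deviation score of 1, giving a process measure usable in both quantitative and qualitative analyses.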

  11. A Model for Estimating the Reliability and Validity of Criterion-Referenced Measures.

    ERIC Educational Resources Information Center

    Edmonston, Leon P.; Randall, Robert S.

    A decision model designed to determine the reliability and validity of criterion referenced measures (CRMs) is presented. General procedures which pertain to the model are discussed as to: Measures of relationship, Reliability, Validity (content, criterion-oriented, and construct validation), and Item Analysis. The decision model is presented in…

  12. 41 CFR 60-3.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... likely to affect validity differences; or that these factors are included in the design of the study and... construct validity is both an extensive and arduous effort involving a series of research studies, which... validity studies. 60-3.14 Section 60-3.14 Public Contracts and Property Management Other Provisions...

  13. 41 CFR 60-3.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... likely to affect validity differences; or that these factors are included in the design of the study and... construct validity is both an extensive and arduous effort involving a series of research studies, which... validity studies. 60-3.14 Section 60-3.14 Public Contracts and Property Management Other Provisions...

  14. 41 CFR 60-3.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... likely to affect validity differences; or that these factors are included in the design of the study and... construct validity is both an extensive and arduous effort involving a series of research studies, which... validity studies. 60-3.14 Section 60-3.14 Public Contracts and Property Management Other Provisions...

  15. 41 CFR 60-3.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... likely to affect validity differences; or that these factors are included in the design of the study and... construct validity is both an extensive and arduous effort involving a series of research studies, which... validity studies. 60-3.14 Section 60-3.14 Public Contracts and Property Management Other Provisions...

  16. 41 CFR 60-3.14 - Technical standards for validity studies.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... likely to affect validity differences; or that these factors are included in the design of the study and... construct validity is both an extensive and arduous effort involving a series of research studies, which... validity studies. 60-3.14 Section 60-3.14 Public Contracts and Property Management Other Provisions...

  17. Improved specific energy Ni-H2 cell

    NASA Astrophysics Data System (ADS)

    Miller, L.

    1985-07-01

    Design optimization activities which have evolved and validated the necessary technology to produce Ni-H2 battery cells exhibiting a specific energy of 75-80 Whr/Kg (energy density approximately 73 Whr/L) are summarized. Final design validation is currently underway with the production of battery cells for qualification and life testing. The INTELSAT type Ni-H2 battery cell design has been chosen for expository purposes. However, it should be recognized that portions of the improved technology could be applied to the Air Force type Ni-H2 battery cell design with equal benefit.

  18. Improved Specific Energy Ni-h2 Cell

    NASA Technical Reports Server (NTRS)

    Miller, L.

    1985-01-01

    Design optimization activities which have evolved and validated the necessary technology to produce Ni-H2 battery cells exhibiting a specific energy of 75-80 Whr/Kg (energy density approximately 73 Whr/L) are summarized. Final design validation is currently underway with the production of battery cells for qualification and life testing. The INTELSAT type Ni-H2 battery cell design has been chosen for expository purposes. However, it should be recognized that portions of the improved technology could be applied to the Air Force type Ni-H2 battery cell design with equal benefit.

  19. Exploring the Benefits of Respite Services to Family Caregivers: Methodological Issues and Current Findings

    PubMed Central

    Zarit, Steven H.; Liu, Yin; Bangerter, Lauren R.; Rovine, Michael J.

    2017-01-01

    Objectives There is growing emphasis on empirical validation of the efficacy of community-based services for older people and their families, but research on services such as respite care faces methodological challenges that have limited the growth of outcome studies. We identify problems associated with the usual research approaches for studying respite care, with the goal of stimulating use of novel and more appropriate research designs that can lead to improved studies of community-based services. Method Using the concept of research validity, we evaluate the methodological approaches in the current literature on respite services, including adult day services, in-home respite and overnight respite. Results Although randomized control trials (RCTs) are possible in community settings, validity is compromised by practical limitations of randomization and other problems. Quasi-experimental and interrupted time series designs offer comparable validity to RCTs and can be implemented effectively in community settings. Conclusion An emphasis on RCTs by funders and researchers is not supported by scientific evidence. Alternative designs can lead to development of a valid body of research on community services such as respite. PMID:26729467

  20. Exploring the benefits of respite services to family caregivers: methodological issues and current findings.

    PubMed

    Zarit, Steven H; Bangerter, Lauren R; Liu, Yin; Rovine, Michael J

    2017-03-01

    There is growing emphasis on empirical validation of the efficacy of community-based services for older people and their families, but research on services such as respite care faces methodological challenges that have limited the growth of outcome studies. We identify problems associated with the usual research approaches for studying respite care, with the goal of stimulating use of novel and more appropriate research designs that can lead to improved studies of community-based services. Using the concept of research validity, we evaluate the methodological approaches in the current literature on respite services, including adult day services, in-home respite and overnight respite. Although randomized control trials (RCTs) are possible in community settings, validity is compromised by practical limitations of randomization and other problems. Quasi-experimental and interrupted time series designs offer comparable validity to RCTs and can be implemented effectively in community settings. An emphasis on RCTs by funders and researchers is not supported by scientific evidence. Alternative designs can lead to development of a valid body of research on community services such as respite.
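    The logic of the interrupted time series designs advocated above can be sketched in a few lines: fit the pre-intervention trend, project it forward as the counterfactual, and read the effect off the deviation of the observed post-intervention values. A minimal illustration (not a substitute for proper segmented regression with autocorrelation handling):

```python
from statistics import mean

def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept."""
    mx, my = mean(xs), mean(ys)
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def its_effect(series, t0):
    """Fit the pre-intervention trend (t < t0), project it over the
    post period as the counterfactual, and return the mean deviation
    of the observed post values from that projection."""
    slope, intercept = linear_fit(range(t0), series[:t0])
    return mean(series[t] - (slope * t + intercept)
                for t in range(t0, len(series)))
```

    A series that rises by one unit per period and then jumps by four units at the intervention yields an estimated effect of 4.0, without any randomization being required.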

  1. Designing and validation of a yoga-based intervention for schizophrenia.

    PubMed

    Govindaraj, Ramajayam; Varambally, Shivarama; Sharma, Manjunath; Gangadhar, Bangalore Nanjundaiah

    2016-06-01

    Schizophrenia is a chronic mental illness which causes significant distress and dysfunction. Yoga has been found to be effective as an add-on therapy in schizophrenia. Modules of yoga used in previous studies were based on individual researchers' experience. This study aimed to develop and validate a generic yoga-based intervention module for patients with schizophrenia. The study was conducted at NIMHANS Integrated Centre for Yoga (NICY). A yoga module was designed based on traditional and contemporary yoga literature as well as published studies. The yoga module along with three case vignettes of adult patients with schizophrenia was sent to 10 yoga experts for their validation. Experts (n = 10) judged the module useful for patients with schizophrenia, subject to some modifications. In total, 87% (13 of 15 items) of the items in the initial module were retained, with modification in the remainder as suggested by the experts. A specific yoga-based module for schizophrenia was designed and validated by experts. Further studies are needed to confirm efficacy and clinical utility of the module. Additional clinical validation is suggested.

  2. Design and validation of a critical pathway for hospital management of patients with severe traumatic brain injury.

    PubMed

    Espinosa-Aguilar, Amilcar; Reyes-Morales, Hortensia; Huerta-Posada, Carlos E; de León, Itzcoatl Limón-Pérez; López-López, Fernando; Mejía-Hernández, Margarita; Mondragón-Martínez, María A; Calderón-Téllez, Ligia M; Amezcua-Cuevas, Rosa L; Rebollar-González, Jorge A

    2008-05-01

    Critical pathways for the management of patients with severe traumatic brain injury (STBI) may contribute to reducing the incidence of hospital complications, length of hospitalization stay, and cost of care. Such pathways have previously been developed for departments with significant resource availability. In Mexico, STBI is the most important cause of complications and length of stay in neurotrauma services at public hospitals. Although current treatment is designed basically in accordance with the Brain Trauma Foundation guidelines, shortfalls in the availability of local resources make it difficult to comply with these standards, and no critical pathway is available that accords with the resources of public hospitals. The purpose of the present study was to design and to validate a critical pathway for managing STBI patients that would be suitable for implementation in neurotrauma departments of middle-income level countries. The study comprised two phases: design (through literature review and design plan) and validation (content, construct, and appearance) of the critical pathway. The validated critical pathway for managing STBI patients entails four sequential subprocesses summarizing the hospital's care procedures, and includes three components: (1) nodes and criteria (in some cases, indicators are also included); (2) health team members in charge of the patient; (3) maximum estimated time for compliance with recommendations. This validated critical pathway is based on the current scientific evidence and accords with the availability of resources of middle-income countries.

  3. A Design Rationale Capture Tool to Support Design Verification and Re-use

    NASA Technical Reports Server (NTRS)

    Hooey, Becky Lee; Da Silva, Jonny C.; Foyle, David C.

    2012-01-01

    A design rationale tool (DR tool) was developed to capture design knowledge to support design verification and design knowledge re-use. The design rationale tool captures design drivers and requirements, and documents the design solution including: intent (why it is included in the overall design); features (why it is designed the way it is); information about how the design components support design drivers and requirements; and, design alternatives considered but rejected. For design verification purposes, the tool identifies how specific design requirements were met and instantiated within the final design, and which requirements have not been met. To support design re-use, the tool identifies which design decisions are affected when design drivers and requirements are modified. To validate the design tool, the design knowledge from the Taxiway Navigation and Situation Awareness (T-NASA; Foyle et al., 1996) system was captured and the DR tool was exercised to demonstrate its utility for validation and re-use.

  4. On validation of the rain climatic zone designations for Nigeria

    NASA Astrophysics Data System (ADS)

    Obiyemi, O. O.; Ibiyemi, T. S.; Ojo, J. S.

    2017-07-01

    In this paper, validation of rain climatic zone classifications for Nigeria is presented based on global radio-climatic models by the International Telecommunication Union-Radiocommunication (ITU-R) and Crane. Rain rate estimates deduced from several ground-based measurements and those earlier estimated from the precipitation index on the Tropical Rain Measurement Mission (TRMM) were employed for the validation exercise. Although earlier classifications indicated that Nigeria falls into zones P, Q, N, and K for the ITU-R designations, and zones E and H for Crane's climatic zone designations, the results confirmed that the rain climatic zones across Nigeria can only be classified into four, namely P, Q, M, and N for the ITU-R designations, while the designations by Crane exhibited only three zones, namely E, G, and H. The ITU-R classification was found to be more suitable for planning microwave and millimeter wave links across Nigeria. The research outcomes are vital in boosting the confidence level of system designers in using the ITU-R designations as presented in the map developed for the rain zone designations for estimating the attenuation induced by rain along satellite and terrestrial microwave links over Nigeria.

  5. [Design and validation of scales to measure adolescent attitude toward eating and toward physical activity].

    PubMed

    Lima-Serrano, Marta; Lima-Rodríguez, Joaquín Salvador; Sáez-Bueno, Africa

    2012-01-01

    Several authors suggest that attitude mediates behavior change and is therefore a predictor of behavior. The aim of this study was to design and validate two scales measuring adolescent attitude toward healthy eating and toward healthy physical activity. The scales were designed based on a literature review and then validated using an on-line Delphi panel with eighteen experts, a pretest, and a pilot test with a sample of 188 high school students. Comprehensibility, content validity, and adequacy were assessed, as well as reliability (Cronbach's alpha) and construct validity (exploratory factor analysis). The scales validated by the experts were considered appropriate in the pretest. In the pilot test, the ten-item Attitude to Eating Scale obtained α=0.72 and the eight-item Attitude to Physical Activity Scale obtained α=0.86. Both showed evidence of a one-dimensional structure after factor analysis: a) all items had loadings r>0.30 on the first factor before rotation; b) the first factor explained a significant proportion of variance before rotation; and c) the total variance explained by the main factors extracted was greater than 50%. The scales demonstrated reliability and validity. They could be employed to assess attitude in these priority intervention areas among Spanish adolescents and to evaluate this intermediate outcome of health interventions and health programs.
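    Cronbach's alpha, the reliability coefficient reported above, is computed from item-level scores as k/(k-1) * (1 - sum of item variances / variance of the respondents' totals). A minimal sketch:

```python
from statistics import variance

def cronbach_alpha(items):
    """Cronbach's alpha for internal consistency. 'items' is a list of
    item-score columns (one list per scale item, same respondents in
    the same order)."""
    k = len(items)
    totals = [sum(scores) for scores in zip(*items)]  # per-respondent sums
    item_var = sum(variance(col) for col in items)
    return k / (k - 1) * (1 - item_var / variance(totals))
```

    Perfectly correlated items give alpha = 1.0; values of 0.72 and 0.86, as found here, indicate acceptable to good internal consistency.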

  6. Face and Content Validity of the MacArthur Competence Assessment Tool for the Treatment of Iranian Patients.

    PubMed

    Saber, Ali; Tabatabaei, Seyed Mahmoud; Akasheh, Godarz; Sehat, Mojtaba; Zanjani, Zahra; Larijani, Bagher

    2017-01-01

    There is no validated Persian tool for measuring the decision-making competency of patients. The aim of this study was to evaluate the face and content validity of the MacArthur Competence Assessment Tool for the treatment of Iranian Persian-speaking patients. To assess the validity of the Persian version of the tool, a self-administered questionnaire was designed, and the Lawshe method was used to assess each item. The content validity ratio (CVR) and content validity index (CVI) were used to assess content validity quantitatively. Based on the experts' judgment, questions with CVR ≥0.62 were retained and those with CVR <0.62 were discarded; the questions were designed so as to achieve the desired result (CVR ≥0.62). The scale-level content validity index (S-CVI/Ave) was 0.94, above the accepted threshold of 0.79, so content validity was confirmed. Since capacity assessments are usually based on a physician's subjective judgment, they are prone to bias; with this suitably validated tool, we can improve the judgment of physicians and health-care providers in out- and in-patient cases.
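    The Lawshe CVR used here is (ne - N/2) / (N/2), where ne of N panelists rate an item 'essential'; 0.62 is the usual critical value for a 10-expert panel. A minimal sketch, including the averaging form of the scale-level CVI:

```python
def content_validity_ratio(n_essential, n_experts):
    """Lawshe's CVR = (ne - N/2) / (N/2); ranges from -1 to +1.
    With a 10-expert panel, the usual critical value is 0.62."""
    half = n_experts / 2
    return (n_essential - half) / half

def scvi_ave(item_cvis):
    """Scale-level CVI, averaging method (S-CVI/Ave): mean of the
    item-level CVIs."""
    return sum(item_cvis) / len(item_cvis)
```

    With 10 experts, an item rated essential by 9 of them scores CVR = 0.8 and is retained, while one rated essential by 8 scores 0.6 and falls below the 0.62 cutoff.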

  7. CMLLite: a design philosophy for CML

    PubMed Central

    2011-01-01

    CMLLite is a collection of definitions and processes which provide strong and flexible validation for a document in Chemical Markup Language (CML). It consists of an updated CML schema (schema3), conventions specifying rules in both human and machine-understandable forms and a validator available both online and offline to check conformance. This article explores the rationale behind the changes which have been made to the schema, explains how conventions interact and how they are designed, formulated, implemented and tested, and gives an overview of the validation service. PMID:21999395

  8. Analysis and optimization of hybrid excitation permanent magnet synchronous generator for stand-alone power system

    NASA Astrophysics Data System (ADS)

    Wang, Huijun; Qu, Zheng; Tang, Shaofei; Pang, Mingqi; Zhang, Mingju

    2017-08-01

    In this paper, the electromagnetic design and permanent magnet shape optimization of a hybrid-excitation permanent magnet synchronous generator are investigated. Based on the generator structure and operating principle, a design outline is presented for obtaining high efficiency and low voltage fluctuation. To enable rapid design, equivalent magnetic circuits for the permanent magnets and iron poles are developed, complemented by finite element analysis. Furthermore, the permanent magnet shape is optimized by means of the design of experiments (DOE) method to reduce voltage waveform distortion. Finally, the validity of the proposed design methods is confirmed by analytical and experimental results.

  9. Quality by Design: Multidimensional exploration of the design space in high performance liquid chromatography method development for better robustness before validation.

    PubMed

    Monks, K; Molnár, I; Rieger, H-J; Bogáti, B; Szabó, E

    2012-04-06

    Robust HPLC separations lead to fewer analysis failures and better method transfer as well as providing an assurance of quality. This work presents the systematic development of an optimal, robust, fast UHPLC method for the simultaneous assay of two APIs of an eye drop sample and their impurities, in accordance with Quality by Design principles. Chromatography software is employed to effectively generate design spaces (Method Operable Design Regions), which are subsequently employed to determine the final method conditions and to evaluate robustness prior to validation. Copyright © 2011 Elsevier B.V. All rights reserved.

  10. Assessing Adolescent Mindfulness: Validation of an Adapted Mindful Attention Awareness Scale in Adolescent Normative and Psychiatric Populations

    ERIC Educational Resources Information Center

    Brown, Kirk Warren; West, Angela Marie; Loverich, Tamara M.; Biegel, Gina M.

    2011-01-01

    Interest in mindfulness-based interventions for children and adolescents is burgeoning, bringing with it the need for validated instruments to assess mindfulness in youths. The present studies were designed to validate among adolescents a measure of mindfulness previously validated for adults (e.g., Brown & Ryan, 2003), which we herein call…

  11. Development and Validation of a Project Package for Junior Secondary School Basic Science

    ERIC Educational Resources Information Center

    Udofia, Nsikak-Abasi

    2014-01-01

    This was a Research and Developmental study designed to develop and validate projects for Junior Secondary School Basic Science instruction and evaluation. The projects were developed using the project blueprint and sent for validation by experts in science education and measurement and evaluation; using a project validation scale. They were to…

  12. The Intuitive Eating Scale: Development and Preliminary Validation

    ERIC Educational Resources Information Center

    Hawks, Steven; Merrill, Ray M.; Madanat, Hala N.

    2004-01-01

    This article describes the development and validation of an instrument designed to measure the concept of intuitive eating. To ensure face and content validity for items used in the Likert-type Intuitive Eating Scale (IES), content domain was clearly specified and a panel of experts assessed the validity of each item. Based on responses from 391…

  13. Teachers' Cloud-Based Learning Designs: The Development of a Guiding Rubric Using the TPACK Framework

    ERIC Educational Resources Information Center

    Al-Harthi, Aisha Salim Ali; Campbell, Chris; Karimi, Arafeh

    2018-01-01

    This study aimed to develop, validate, and trial a rubric for evaluating the cloud-based learning designs (CBLD) that were developed by teachers using virtual learning environments. The rubric was developed using the technological pedagogical content knowledge (TPACK) framework, with rubric development including content and expert validation of…

  14. The Design and Validation of a Parent-Report Questionnaire for Assessing the Characteristics and Quality of Early Intervention over Time

    ERIC Educational Resources Information Center

    Young, Alys; Gascon-Ramos, Maria; Campbell, Malcolm; Bamford, John

    2009-01-01

    This article concerns a parent-report repeat questionnaire to evaluate the quality of multiprofessional early intervention following early identification of deafness. It discusses the rationale for the design of the instrument, its theoretical underpinnings, its psychometric properties, and its usability. Results for the validity and reliability…

  15. Design, Development and Validation of a Model of Problem Solving for Egyptian Science Classes

    ERIC Educational Resources Information Center

    Shahat, Mohamed A.; Ohle, Annika; Treagust, David F.; Fischer, Hans E.

    2013-01-01

    Educators and policymakers envision the future of education in Egypt as enabling learners to acquire scientific inquiry and problem-solving skills. In this article, we describe the validation of a model for problem solving and the design of instruments for evaluating new teaching methods in Egyptian science classes. The instruments were based on…

  16. Designing and Piloting a Leadership Daily Practice Log: Using Logs to Study the Practice of Leadership

    ERIC Educational Resources Information Center

    Spillane, James P.; Zuberi, Anita

    2009-01-01

    Purpose: This article aims to validate the Leadership Daily Practice (LDP) log, an instrument for conducting research on leadership in schools. Research Design: Using a combination of data sources--namely, a daily practice log, observations, and open-ended cognitive interviews--the authors evaluate the validity of the LDP log. Participants: Formal…

  17. Development and Validation of Teaching Practice Evaluation Instrument for Assessing Chemistry Students' Teaching Skills

    ERIC Educational Resources Information Center

    Ezeudu, F. O.; Chiaha, G. T. U.; Eze, J. U.

    2013-01-01

    The study was designed to develop and factorially validate an instrument for measuring teaching practice skills of chemistry student-teachers in University of Nigeria, Nsukka. Two research questions guided the study. The design of the study was instrumentation. All the chemistry student-teachers in the Department of Science Education, University…

  18. Validity of a Competing Food Choice Construct regarding Fruit and Vegetable Consumption among Urban College Freshmen

    ERIC Educational Resources Information Center

    Yeh, Ming-Chin; Matsumori, Brandy; Obenchain, Janel; Viladrich, Anahi; Das, Dhiman; Navder, Khursheed

    2010-01-01

    Objective: This paper presents the reliability and validity of a "competing food choice" construct designed to assess whether factors related to consumption of less-healthful food were perceived to be barriers to fruit and vegetable consumption in college freshmen. Design: Cross-sectional, self-administered survey. Setting: An urban public college…

  19. Preparation of the implementation plan of AASHTO Mechanistic-Empirical Pavement Design Guide (M-EPDG) in Connecticut : Phase II : expanded sensitivity analysis and validation with pavement management data.

    DOT National Transportation Integrated Search

    2017-02-08

    The study re-evaluates distress prediction models using the Mechanistic-Empirical Pavement Design Guide (MEPDG) and expands the sensitivity analysis to a wide range of pavement structures and soils. In addition, an extensive validation analysis of th...

  20. Conceptual Design of a Hypervelocity Asteroid Intercept Vehicle (HAIV) Flight Validation Mission

    NASA Technical Reports Server (NTRS)

    Barbee, Brent W.; Wie, Bong; Steiner, Mark; Getzandanner, Kenneth

    2013-01-01

    In this paper we present a detailed overview of the MDL study results and subsequent advances in the design of GNC algorithms for accurate terminal guidance during hypervelocity NEO intercept. The MDL study produced a conceptual configuration of the two-body HAIV and its subsystems; a mission scenario and trajectory design for a notional flight validation mission to a selected candidate target NEO; GNC results regarding the ability of the HAIV to reliably intercept small (50 m) NEOs at hypervelocity (typically greater than 10 km/s); candidate launch vehicle selection; a notional operations concept and cost estimate for the flight validation mission; and a list of topics to address during the remainder of our NIAC Phase II study.

  1. The NASA Hyper-X Program

    NASA Technical Reports Server (NTRS)

    Freeman, Delman C., Jr.; Reubush, David E.; McClinton, Charles R.; Rausch, Vincent L.; Crawford, J. Larry

    1997-01-01

    This paper provides an overview of NASA's Hyper-X Program; a focused hypersonic technology effort designed to move hypersonic, airbreathing vehicle technology from the laboratory environment to the flight environment. This paper presents an overview of the flight test program, research objectives, approach, schedule and status. Substantial experimental database and concept validation have been completed. The program is currently concentrating on the first, Mach 7, vehicle development, verification and validation in preparation for wind-tunnel testing in 1998 and flight testing in 1999. Parallel to this effort the Mach 5 and 10 vehicle designs are being finalized. Detailed analytical and experimental evaluation of the Mach 7 vehicle at the flight conditions is nearing completion, and will provide a database for validation of design methods once flight test data are available.

  2. Measuring safety climate in health care.

    PubMed

    Flin, R; Burns, C; Mearns, K; Yule, S; Robertson, E M

    2006-04-01

    To review quantitative studies of safety climate in health care and to examine the psychometric properties of the questionnaires designed to measure this construct, a systematic literature review was undertaken covering sample and questionnaire design characteristics (source, number of items, scale type), construct validity (content validity, factor structure and internal reliability, concurrent validity), within-group agreement, and level of analysis. Twelve studies were examined. There was a lack of explicit theoretical underpinning for most questionnaires, and some instruments did not report standard psychometric criteria. Where this information was available, several questionnaires appeared to have limitations. More consideration should be given to psychometric factors in the design of healthcare safety climate instruments, especially as these are beginning to be used in large-scale surveys across healthcare organisations.

  3. Validity and power of association testing in family-based sampling designs: evidence for and against the common wisdom.

    PubMed

    Knight, Stacey; Camp, Nicola J

    2011-04-01

    Current common wisdom posits that association analyses using family-based designs have inflated type 1 error rates (if relationships are ignored) and that independent controls are more powerful than familial controls. We explore these suppositions. We show theoretically that family-based designs can in fact have deflated type 1 error rates. Through simulation, we examine the validity and power of family designs for several scenarios: cases from randomly or selectively ascertained pedigrees, with familial or independent controls. Family structures considered are as follows: sibships, nuclear families, moderate-sized and extended pedigrees. Three methods were considered with the χ² test for trend: variance correction (VC), weighted (weights assigned to account for genetic similarity), and naïve (ignoring relatedness), as well as the Modified Quasi-likelihood Score (MQLS) test. Selectively ascertained pedigrees had similar levels of disease enrichment; random ascertainment had no such restriction. Data for 1,000 cases and 1,000 controls were created under the null and alternate models. The VC and MQLS methods were always valid. The naïve method was anti-conservative if independent controls were used and valid or conservative in designs with familial controls. The weighted association method was generally valid for independent controls, and was conservative for familial controls. With regard to power, independent controls were more powerful for small-to-moderate selectively ascertained pedigrees, but familial and independent controls were equivalent in the extended pedigrees and familial controls were consistently more powerful for all randomly ascertained pedigrees. These results suggest a more complex situation than previously assumed, which has important implications for study design and analysis. © 2011 Wiley-Liss, Inc.

  4. Validation Methods for Fault-Tolerant avionics and control systems, working group meeting 1

    NASA Technical Reports Server (NTRS)

    1979-01-01

    The proceedings of the first working group meeting on validation methods for fault tolerant computer design are presented. The state of the art in fault tolerant computer validation was examined in order to provide a framework for future discussions concerning research issues for the validation of fault tolerant avionics and flight control systems. The development of positions concerning critical aspects of the validation process are given.

  5. Design and control of compliant tensegrity robots through simulation and hardware validation.

    PubMed

    Caluwaerts, Ken; Despraz, Jérémie; Işçen, Atıl; Sabelhaus, Andrew P; Bruce, Jonathan; Schrauwen, Benjamin; SunSpiral, Vytas

    2014-09-06

    To better understand the role of tensegrity structures in biological systems and their application to robotics, the Dynamic Tensegrity Robotics Lab at NASA Ames Research Center, Moffett Field, CA, USA, has developed and validated two software environments for the analysis, simulation and design of tensegrity robots. These tools, along with new control methodologies and the modular hardware components developed to validate them, are presented as a system for the design of actuated tensegrity structures. As evidenced from their appearance in many biological systems, tensegrity ('tensile-integrity') structures have unique physical properties that make them ideal for interaction with uncertain environments. Yet, these characteristics make design and control of bioinspired tensegrity robots extremely challenging. This work presents the progress our tools have made in tackling the design and control challenges of spherical tensegrity structures. We focus on this shape since it lends itself to rolling locomotion. The results of our analyses include multiple novel control approaches for mobility and terrain interaction of spherical tensegrity structures that have been tested in simulation. A hardware prototype of a spherical six-bar tensegrity, the Reservoir Compliant Tensegrity Robot, is used to empirically validate the accuracy of simulation. © 2014 The Author(s) Published by the Royal Society. All rights reserved.

  6. NDARC - NASA Design and Analysis of Rotorcraft Validation and Demonstration

    NASA Technical Reports Server (NTRS)

    Johnson, Wayne

    2010-01-01

    Validation and demonstration results from the development of the conceptual design tool NDARC (NASA Design and Analysis of Rotorcraft) are presented. The principal tasks of NDARC are to design a rotorcraft to satisfy specified design conditions and missions, and then analyze the performance of the aircraft for a set of off-design missions and point operating conditions. The aircraft chosen as NDARC development test cases are the UH-60A single main-rotor and tail-rotor helicopter, the CH-47D tandem helicopter, the XH-59A coaxial lift-offset helicopter, and the XV-15 tiltrotor. These aircraft were selected because flight performance data, a weight statement, detailed geometry information, and a correlated comprehensive analysis model are available for each. Validation consists of developing the NDARC models for these aircraft by using geometry and weight information, airframe wind tunnel test data, engine decks, rotor performance tests, and comprehensive analysis results; and then comparing the NDARC results for aircraft and component performance with flight test data. Based on the calibrated models, the capability of the code to size rotorcraft is explored.

  7. Supersonic, nonlinear, attached-flow wing design for high lift with experimental validation

    NASA Technical Reports Server (NTRS)

    Pittman, J. L.; Miller, D. S.; Mason, W. H.

    1984-01-01

    Results of the experimental validation are presented for the three-dimensional cambered wing designed to achieve attached supercritical cross flow at lifting conditions typical of supersonic maneuver. The design point was a lift coefficient of 0.4 at Mach 1.62 and 12 deg angle of attack. Results from the nonlinear full potential method are presented to show the validity of the design process, along with results from linear theory codes. Longitudinal force and moment data and static pressure data were obtained in the Langley Unitary Plan Wind Tunnel at Mach numbers of 1.58, 1.62, 1.66, 1.70, and 2.00 over an angle-of-attack range of 0 to 14 deg at a Reynolds number of 2.0 × 10⁶ per foot. Oil flow photographs of the upper surface were obtained at M = 1.62 for α ≈ 8, 10, 12, and 14 deg.

  8. Methodological convergence of program evaluation designs.

    PubMed

    Chacón-Moscoso, Salvador; Anguera, M Teresa; Sanduvete-Chaves, Susana; Sánchez-Martín, Milagrosa

    2014-01-01

    The dichotomous confrontation between experimental/quasi-experimental and non-experimental/ethnographic studies still persists; yet despite the extensive use of non-experimental/ethnographic studies, the most systematic work on methodological quality has been developed from experimental and quasi-experimental studies. This hinders evaluators' and planners' practice of empirical program evaluation, a sphere in which the distinction between types of study is continually shifting and less clear. Based on the classical validity framework of experimental/quasi-experimental studies, we review the literature to analyze the convergence of design elements bearing on methodological quality in primary studies in systematic reviews and in ethnographic research. We specify the relevant design elements that should be taken into account in order to improve validity and generalization in program evaluation practice across different methodologies, from a practical, complementary methodological view. We recommend ways to improve design elements so as to enhance validity and generalization in program evaluation practice.

  9. Preliminary Structural Sensitivity Study of Hypersonic Inflatable Aerodynamic Decelerator Using Probabilistic Methods

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.

    2014-01-01

    Acceptance of new spacecraft structural architectures and concepts requires validated design methods to minimize the expense involved with technology validation via flight testing. This paper explores the implementation of probabilistic methods in the sensitivity analysis of the structural response of a Hypersonic Inflatable Aerodynamic Decelerator (HIAD). HIAD architectures are attractive for spacecraft deceleration because they are lightweight, store compactly, and utilize the atmosphere to decelerate a spacecraft during re-entry. However, designers are hesitant to include these inflatable approaches for large payloads or spacecraft because of the lack of flight validation. In the example presented here, the structural parameters of an existing HIAD model have been varied to illustrate the design approach utilizing uncertainty-based methods. Surrogate models have been used to reduce computational expense by several orders of magnitude. The suitability of the design is based on assessing variation in the resulting cone angle; the acceptable cone angle variation would be determined by the aerodynamic requirements.

  10. Idealized gas turbine combustor for performance research and validation of large eddy simulations.

    PubMed

    Williams, Timothy C; Schefer, Robert W; Oefelein, Joseph C; Shaddix, Christopher R

    2007-03-01

    This paper details the design of a premixed, swirl-stabilized combustor built for the express purpose of obtaining validation-quality data for the development of large eddy simulations (LES) of gas turbine combustors. The combustor features nonambiguous boundary conditions, a geometrically simple design that retains the essential fluid dynamics and thermochemical processes occurring in actual gas turbine combustors, and unrestricted access for laser and optical diagnostic measurements. After the design details are discussed, a preliminary investigation of the performance and operating envelope of the combustor is presented. With the combustor operating on premixed methane/air, both the equivalence ratio and the inlet velocity were systematically varied and the flame structure was recorded via digital photography. Interesting lean flame blowout and resonance characteristics were observed. In addition, the combustor exhibited a large region of stable, acoustically clean combustion that is suitable for preliminary validation of LES models.

  11. Scenario-based design: A method for connecting information system design with public health operations and emergency management

    PubMed Central

    Reeder, Blaine; Turner, Anne M

    2011-01-01

    Responding to public health emergencies requires rapid and accurate assessment of workforce availability under adverse and changing circumstances. However, public health information systems to support resource management during both routine and emergency operations are currently lacking. We applied scenario-based design as an approach to engage public health practitioners in the creation and validation of an information design to support routine and emergency public health activities. Methods: Using semi-structured interviews we identified the information needs and activities of senior public health managers of a large municipal health department during routine and emergency operations. Results: Interview analysis identified twenty-five information needs for public health operations management. The identified information needs were used in conjunction with scenario-based design to create twenty-five scenarios of use and a public health manager persona. Scenarios of use and persona were validated and modified based on follow-up surveys with study participants. Scenarios were used to test and gain feedback on a pilot information system. Conclusion: The method of scenario-based design was applied to represent the resource management needs of senior-level public health managers under routine and disaster settings. Scenario-based design can be a useful tool for engaging public health practitioners in the design process and to validate an information system design. PMID:21807120

  12. Development and testing of the ‘Culture of Care Barometer’ (CoCB) in healthcare organisations: a mixed methods study

    PubMed Central

    Rafferty, Anne Marie; Philippou, Julia; Fitzpatrick, Joanne M; Pike, Geoff; Ball, Jane

    2017-01-01

    Objective Concerns about care quality have prompted calls to create workplace cultures conducive to high-quality, safe and compassionate care and to provide a supportive environment in which staff can operate effectively. How healthcare organisations assess their culture of care is an important first step in creating such cultures. This article reports on the development and validation of a tool, the Culture of Care Barometer, designed to assess perceptions of a caring culture among healthcare workers preliminary to culture change. Design/setting/participants An exploratory mixed methods study designed to develop and test the validity of a tool to measure ‘culture of care’ through focus groups and questionnaires. Questionnaire development was facilitated through: a literature review, experts generating items of interest and focus group discussions with healthcare staff across specialities, roles and seniority within three types of public healthcare organisations in the UK. The tool was designed to be multiprofessional and pilot tested with a sample of 467 nurses and healthcare support workers in acute care and then validated with a sample of 1698 staff working across acute, mental health and community services in England. Exploratory factor analysis was used to identify dimensions underlying the Barometer. Results Psychometric testing resulted in the development of a 30-item questionnaire linked to four domains with retained items loading to four factors: organisational values (α=0.93, valid n=1568, M=3.7), team support (α=0.93, valid n=1557, M=3.2), relationships with colleagues (α=0.84, valid n=1617, M=4.0) and job constraints (α=0.70, valid n=1616, M=3.3). Conclusions The study developed a valid and reliable instrument with which to gauge the different attributes of care culture perceived by healthcare staff with potential for organisational benchmarking. PMID:28821526

  13. PRA (Probabilistic Risk Assessments) Participation versus Validation

    NASA Technical Reports Server (NTRS)

    DeMott, Diana; Banke, Richard

    2013-01-01

    Probabilistic Risk Assessments (PRAs) are performed for projects or programs where the consequences of failure are highly undesirable. PRAs primarily address the level of risk those projects or programs pose during operations. PRAs are often developed after the design has been completed. Design and operational details used to develop models include approved and accepted design information regarding equipment, components, systems and failure data. This methodology basically validates the risk parameters of the project or system design. For high-risk or high-dollar projects, using PRA methodologies during the design process provides new opportunities to influence the design early in the project life cycle to identify, eliminate or mitigate potential risks. Identifying risk drivers before the design has been set allows the design engineers to understand the inherent risk of their current design and consider potential risk mitigation changes. This can become an iterative process in which the PRA model is used to determine whether a mitigation technique is effective in reducing risk, resulting in more efficient and cost-effective design changes. PRA methodology can be used to assess the risk of design alternatives and can demonstrate how major design changes or program modifications impact the overall program or project risk. PRA has been used for the last two decades to validate risk predictions and acceptability. By providing risk information that can positively influence final system and equipment design, the PRA tool can also participate in design development, helping deliver a safe and cost-effective product.

  14. Pump CFD code validation tests

    NASA Technical Reports Server (NTRS)

    Brozowski, L. A.

    1993-01-01

    Pump CFD code validation tests were accomplished by obtaining nonintrusive flow characteristic data at key locations in generic current liquid rocket engine turbopump configurations. Data were obtained with a laser two-focus (L2F) velocimeter at scaled design flow. Three components were surveyed: a 1970's-designed impeller, a 1990's-designed impeller, and a four-bladed unshrouded inducer. Two-dimensional velocities were measured upstream and downstream of the two impellers. Three-dimensional velocities were measured upstream, downstream, and within the blade row of the unshrouded inducer.

  15. Conference Proceedings on Validation of Computational Fluid Dynamics. Volume 2. Poster Papers Held in Lisbon, Portugal on 2-5 May 1988

    DTIC Science & Technology

    1988-05-01

    affordable manpower investment. On the basis of our current experience it seems that the basic design principles are valid. The system developed will... system is operational on various computer networks, and in both industrial and research environments. The design principles for the construction of...to a useful numerical simulation and design system for very complex configurations and flows. 7. REFERENCES 1. Bartlett G. W. , "An experimental

  16. A newly developed tool for classifying study designs in systematic reviews of interventions and exposures showed substantial reliability and validity.

    PubMed

    Seo, Hyun-Ju; Kim, Soo Young; Lee, Yoon Jae; Jang, Bo-Hyoung; Park, Ji-Eun; Sheen, Seung-Soo; Hahn, Seo Kyung

    2016-02-01

    To develop a study Design Algorithm for Medical Literature on Intervention (DAMI) and test its interrater reliability, construct validity, and ease of use. We developed and then revised the DAMI to include detailed instructions. To test the DAMI's reliability, we used a purposive sample of 134 primary, mainly nonrandomized studies. We then compared the study designs as classified by the original authors and through the DAMI. Unweighted kappa statistics were computed to test interrater reliability and construct validity based on the level of agreement between the original and DAMI classifications. Assessment time was also recorded to evaluate ease of use. The DAMI includes 13 study designs, covering experimental and observational studies of interventions and exposures. Both the interrater reliability (unweighted kappa = 0.67; 95% CI [0.64-0.75]) and construct validity (unweighted kappa = 0.63, 95% CI [0.52-0.67]) were substantial. Mean classification time using the DAMI was 4.08 ± 2.44 minutes (range, 0.51-10.92). The DAMI showed substantial interrater reliability and construct validity. Furthermore, given its ease of use, it could be used to accurately classify medical literature for systematic reviews of interventions while minimizing disagreement between authors of such reviews. Copyright © 2016 Elsevier Inc. All rights reserved.
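
The unweighted kappa reported above measures agreement between two raters' categorical classifications beyond what chance alone would produce. A minimal sketch with hypothetical study-design labels (illustrative only, not the DAMI data):

```python
from collections import Counter

def unweighted_kappa(rater_a, rater_b):
    """Cohen's unweighted kappa for two raters' categorical labels."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed proportion of agreement
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance-expected agreement from each rater's marginal label frequencies
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical classifications of six studies by two reviewers
a = ["RCT", "cohort", "cohort", "case-control", "RCT", "cohort"]
b = ["RCT", "cohort", "case-control", "case-control", "RCT", "cohort"]
kappa = unweighted_kappa(a, b)   # 5/6 observed agreement, 1/3 expected -> 0.75
```

Values between 0.61 and 0.80 are conventionally read as "substantial" agreement, which is the interpretation the authors give their kappa of 0.67.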

  17. Ocean power technology design optimization

    DOE PAGES

    van Rij, Jennifer; Yu, Yi -Hsiang; Edwards, Kathleen; ...

    2017-07-18

    For this study, the National Renewable Energy Laboratory and Ocean Power Technologies (OPT) conducted a collaborative code validation and design optimization study for OPT's PowerBuoy wave energy converter (WEC). NREL utilized WEC-Sim, an open-source WEC simulator, to compare four design variations of OPT's PowerBuoy. As an input to the WEC-Sim models, viscous drag coefficients for the PowerBuoy floats were first evaluated using computational fluid dynamics. The resulting WEC-Sim PowerBuoy models were then validated with experimental power output and fatigue load data provided by OPT. The validated WEC-Sim models were then used to simulate the power performance and loads for operational conditions, extreme conditions, and directional waves, for each of the four PowerBuoy design variations, assuming the wave environment of Humboldt Bay, California. And finally, ratios of power-to-weight, power-to-fatigue-load, power-to-maximum-extreme-load, power-to-water-plane-area, and power-to-wetted-surface-area were used to make a final comparison of the potential PowerBuoy WEC designs. Lastly, the design comparison methodologies developed and presented in this study are applicable to other WEC devices and may be useful as a framework for future WEC design development projects.

  18. Ocean power technology design optimization

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    van Rij, Jennifer; Yu, Yi-Hsiang; Edwards, Kathleen

    For this study, the National Renewable Energy Laboratory and Ocean Power Technologies (OPT) conducted a collaborative code validation and design optimization study for OPT's PowerBuoy wave energy converter (WEC). NREL utilized WEC-Sim, an open-source WEC simulator, to compare four design variations of OPT's PowerBuoy. As an input to the WEC-Sim models, viscous drag coefficients for the PowerBuoy floats were first evaluated using computational fluid dynamics. The resulting WEC-Sim PowerBuoy models were then validated with experimental power output and fatigue load data provided by OPT. The validated WEC-Sim models were then used to simulate the power performance and loads for operational conditions, extreme conditions, and directional waves, for each of the four PowerBuoy design variations, assuming the wave environment of Humboldt Bay, California. Finally, ratios of power-to-weight, power-to-fatigue-load, power-to-maximum-extreme-load, power-to-water-plane-area, and power-to-wetted-surface-area were used to make a final comparison of the potential PowerBuoy WEC designs. The design comparison methodologies developed and presented in this study are applicable to other WEC devices and may be useful as a framework for future WEC design development projects.

  19. The assessment of health policy changes using the time-reversed crossover design.

    PubMed Central

    Sollecito, W A; Gillings, D B

    1986-01-01

    The time-reversed crossover design is a quasi-experimental design which can be applied to evaluate the impact of a change in health policy on a large population. This design makes use of separate sampling and analysis strategies to improve the validity of conclusions drawn from such an evaluation. The properties of the time-reversed crossover design are presented including the use of stratification on outcome in the sampling stage, which is intended to improve external validity. It is demonstrated that, although this feature of the design introduces internal validity threats due to regression toward the mean in extreme-outcome strata, these effects can be measured and eliminated from the test of significance of treatment effects. Methods for within- and across-stratum estimation and hypothesis-testing are presented which are similar to those which have been developed for the traditional two-period crossover design widely used in clinical trials. The procedures are illustrated using data derived from a study conducted by the United Mine Workers of America Health and Retirement Funds to measure the impact of cost-sharing on health care utilization among members of its health plan. PMID:3081465
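
    The within-stratum estimation the authors liken to the two-period crossover design can be illustrated with the classic period-difference estimator; the utilization figures below are hypothetical, not the United Mine Workers data:

```python
import statistics

# Hypothetical utilization outcomes (visits/year) as (period 1, period 2) pairs.
# group_ab is exposed to the policy in period 1 only; group_ba in period 2 only.
group_ab = [(5.1, 4.0), (6.2, 4.8), (4.9, 4.1), (5.5, 4.3)]
group_ba = [(4.2, 5.6), (3.9, 5.1), (4.5, 5.9), (4.1, 5.3)]

d_ab = [p1 - p2 for p1, p2 in group_ab]   # within-subject period differences
d_ba = [p1 - p2 for p1, p2 in group_ba]

# Half the difference of the mean period differences estimates the policy
# (treatment) effect free of a common period effect.
effect = (statistics.mean(d_ab) - statistics.mean(d_ba)) / 2
print(round(effect, 4))
```

    Because each subject serves as its own control, a shared secular trend cancels in the subtraction; the regression-toward-the-mean correction for extreme-outcome strata described in the abstract would be layered on top of this basic estimator.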

  20. Controls-Structures Interaction (CSI) technology program summary. Earth orbiting platforms program area of the space platforms technology program

    NASA Technical Reports Server (NTRS)

    Newsom, Jerry R.

    1991-01-01

    Control-Structures Interaction (CSI) technology embraces the understanding of the interaction between the spacecraft structure and the control system, and the creation and validation of concepts, techniques, and tools, for enabling the interdisciplinary design of an integrated structure and control system, rather than the integration of a structural design and a control system design. The goal of this program is to develop validated CSI technology for integrated design/analysis and qualification of large flexible space systems and precision space structures. A description of the CSI technology program is presented.

  1. A unified approach to validation, reliability, and education study design for surgical technical skills training.

    PubMed

    Sweet, Robert M; Hananel, David; Lawrenz, Frances

    2010-02-01

    To present modern educational psychology theory and apply these concepts to validity and reliability of surgical skills training and assessment. In a series of cross-disciplinary meetings, and in light of recent advances in behavioral psychology and statistics, we applied a unified set of behavioral science principles and theory to medical technical skills education. While validation of the individual simulation tools is important, it is only one piece of a multimodal curriculum that in and of itself deserves examination and study. We propose concurrent validation throughout the design of simulation-based curriculum rather than once it is complete. We embrace the concept that validity and curriculum development are interdependent, ongoing processes that are never truly complete. Individual predictive, construct, content, and face validity aspects should not be considered separately but as interdependent and complementary toward an end application. Such an approach could help guide our acceptance and appropriate application of these exciting new training and assessment tools for technical skills training in medicine.

  2. Virtual Model Validation of Complex Multiscale Systems: Applications to Nonlinear Elastostatics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oden, John Tinsley; Prudencio, Ernest E.; Bauman, Paul T.

    We propose a virtual statistical validation process as an aid to the design of experiments for the validation of phenomenological models of the behavior of material bodies, with focus on those cases in which knowledge of the fabrication process used to manufacture the body can provide information on the micro-molecular-scale properties underlying macroscale behavior. One example is given by models of elastomeric solids fabricated using polymerization processes. We describe a framework for model validation that involves Bayesian updates of parameters in statistical calibration and validation phases. The process enables the quantification of uncertainty in quantities of interest (QoIs) and the determination of model consistency using tools of statistical information theory. We assert that microscale information drawn from molecular models of the fabrication of the body provides a valuable source of prior information on parameters as well as a means for estimating model bias and designing virtual validation experiments to provide information gain over calibration posteriors.
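
    A minimal sketch of the calibration-then-validation Bayesian update, using a conjugate normal model for a single scalar parameter (the prior and data values are illustrative, not from the paper):

```python
def gaussian_update(prior_mean, prior_var, obs_mean, obs_var):
    """Conjugate normal-normal posterior for an unknown mean parameter."""
    post_var = 1.0 / (1.0 / prior_var + 1.0 / obs_var)
    post_mean = post_var * (prior_mean / prior_var + obs_mean / obs_var)
    return post_mean, post_var

# Microscale (fabrication-process) information supplies the prior;
# the parameter is then updated in a calibration phase, then a validation phase.
prior = (1.0, 0.5)                                   # (mean, variance)
calib = gaussian_update(*prior, obs_mean=1.4, obs_var=0.2)
valid = gaussian_update(*calib, obs_mean=1.3, obs_var=0.3)
print(calib, valid)
```

    Each phase shrinks the posterior variance, and the gap between calibration and validation posteriors is one crude proxy for the information gain the authors quantify with information-theoretic tools.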

  3. Validation (not just verification) of Deep Space Missions

    NASA Technical Reports Server (NTRS)

    Duren, Riley M.

    2006-01-01

    Verification & Validation (V&V) is a widely recognized and critical systems engineering function. However, the often used definition 'Verification proves the design is right; validation proves it is the right design' is rather vague. And while Verification is a reasonably well standardized systems engineering process, Validation is a far more abstract concept and the rigor and scope applied to it varies widely between organizations and individuals. This is reflected in the findings in recent Mishap Reports for several NASA missions, in which shortfalls in Validation (not just Verification) were cited as root or contributing factors in catastrophic mission loss. Furthermore, although there is strong agreement in the community that Test is the preferred method for V&V, many people equate 'V&V' with 'Test', such that Analysis and Modeling aren't given comparable attention. Another strong motivator is a realization that the rapid growth in complexity of deep-space missions (particularly Planetary Landers and Space Observatories given their inherent unknowns) is placing greater demands on systems engineers to 'get it right' with Validation.

  4. Demonstration of automated proximity and docking technologies

    NASA Astrophysics Data System (ADS)

    Anderson, Robert L.; Tsugawa, Roy K.; Bryan, Thomas C.

    An autodock was demonstrated using straightforward techniques and real sensor hardware. A simulation testbed was established and validated. The sensor design was refined with improved optical performance and image processing noise mitigation techniques, and the sensor is ready for production from off-the-shelf components. The autonomous spacecraft architecture is defined. The areas of sensors, docking hardware, propulsion, and avionics are included in the design. The Guidance Navigation and Control architecture and requirements are developed. Modular structures suitable for automated control are used. The spacecraft system manager functions including configuration, resource, and redundancy management are defined. The requirements for an autonomous spacecraft executive are defined. High level decision-making, mission planning, and mission contingency recovery are a part of this. The next step is to do flight demonstrations. After the presentation the following question was asked. How do you define validation? There are two components to validation definition: software simulation with formal and rigorous validation, and hardware and facility performance validated with respect to software already validated against analytical profile.

  5. Validation Database Based Thermal Analysis of an Advanced RPS Concept

    NASA Technical Reports Server (NTRS)

    Balint, Tibor S.; Emis, Nickolas D.

    2006-01-01

    Advanced RPS concepts can be conceived, designed and assessed using high-end computational analysis tools. These predictions may provide an initial insight into the potential performance of these models, but verification and validation are necessary and required steps to gain confidence in the numerical analysis results. This paper discusses the findings from a numerical validation exercise for a small advanced RPS concept, based on a thermal analysis methodology developed at JPL and on a validation database obtained from experiments performed at Oregon State University. Both the numerical and experimental configurations utilized a design enabled by a single GPHS module, resembling a Mod-RTG concept. The analysis focused on operating and environmental conditions during the storage phase only. This validation exercise helped to refine key thermal analysis and modeling parameters, such as heat transfer coefficients, and conductivity and radiation heat transfer values. Improved understanding of the Mod-RTG concept through validation of the thermal model allows for future improvements to this power system concept.

  6. Validation of Autoclave Protocols for Successful Decontamination of Category A Medical Waste Generated from Care of Patients with Serious Communicable Diseases

    PubMed Central

    Reimers, Mallory; Ernst, Neysa; Bova, Gregory; Nowakowski, Elaine; Bukowski, James; Ellis, Brandon C.; Smith, Chris; Sauer, Lauren; Dionne, Kim; Carroll, Karen C.; Maragakis, Lisa L.; Parrish, Nicole M.

    2016-01-01

    In response to the Ebola outbreak in 2014, many hospitals designated specific areas to care for patients with Ebola and other highly infectious diseases. The safe handling of category A infectious substances is a unique challenge in this environment. One solution is on-site waste treatment with a steam sterilizer or autoclave. The Johns Hopkins Hospital (JHH) installed two pass-through autoclaves in its biocontainment unit (BCU). The JHH BCU and The Johns Hopkins biosafety level 3 (BSL-3) clinical microbiology laboratory designed and validated waste-handling protocols with simulated patient trash to ensure adequate sterilization. The results of the validation process revealed that autoclave factory default settings are potentially ineffective for certain types of medical waste and highlighted the critical role of waste packaging in successful sterilization. The lessons learned from the JHH validation process can inform the design of waste management protocols to ensure effective treatment of highly infectious medical waste. PMID:27927920

  7. Initial design and performance of the near surface unmanned aircraft system sensor suite in support of the GOES-R field campaign

    NASA Astrophysics Data System (ADS)

    Pearlman, Aaron J.; Padula, Francis; Shao, Xi; Cao, Changyong; Goodman, Steven J.

    2016-09-01

    One of the main objectives of the Geostationary Operational Environmental Satellite R-Series (GOES-R) field campaign is to validate the SI traceability of the Advanced Baseline Imager. The campaign plans include a feasibility demonstration study for new near surface unmanned aircraft system (UAS) measurement capability that is being developed to meet the challenges of validating geostationary sensors. We report our progress in developing our initial systems by presenting the design and preliminary characterization results of the sensor suite. The design takes advantage of off-the-shelf technologies and fiber-based optical components to make hemispheric directional measurements from a UAS. The characterization results - including laboratory measurements of temperature effects and polarization sensitivity - are used to refine the radiometric uncertainty budget towards meeting the validation objectives for the campaign. These systems will foster improved validation capabilities for the GOES-R field campaign and other next generation satellite systems.
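
    Refining a radiometric uncertainty budget of the kind described typically combines independent 1-sigma components in root-sum-of-squares fashion; a sketch with hypothetical component values (percent of measured radiance):

```python
import math

# Hypothetical 1-sigma uncertainty components for the UAS sensor suite
components = {
    "calibration_reference": 1.2,
    "temperature_effects": 0.8,
    "polarization_sensitivity": 0.5,
    "stray_light": 0.4,
}

# Independent components combine in quadrature (root-sum-of-squares)
combined = math.sqrt(sum(u ** 2 for u in components.values()))
print(round(combined, 3))
```

    Laboratory characterization, like the temperature and polarization measurements mentioned above, tightens individual entries, which is how the overall budget is driven toward the campaign's validation requirement.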

  8. Validation of Autoclave Protocols for Successful Decontamination of Category A Medical Waste Generated from Care of Patients with Serious Communicable Diseases.

    PubMed

    Garibaldi, Brian T; Reimers, Mallory; Ernst, Neysa; Bova, Gregory; Nowakowski, Elaine; Bukowski, James; Ellis, Brandon C; Smith, Chris; Sauer, Lauren; Dionne, Kim; Carroll, Karen C; Maragakis, Lisa L; Parrish, Nicole M

    2017-02-01

    In response to the Ebola outbreak in 2014, many hospitals designated specific areas to care for patients with Ebola and other highly infectious diseases. The safe handling of category A infectious substances is a unique challenge in this environment. One solution is on-site waste treatment with a steam sterilizer or autoclave. The Johns Hopkins Hospital (JHH) installed two pass-through autoclaves in its biocontainment unit (BCU). The JHH BCU and The Johns Hopkins biosafety level 3 (BSL-3) clinical microbiology laboratory designed and validated waste-handling protocols with simulated patient trash to ensure adequate sterilization. The results of the validation process revealed that autoclave factory default settings are potentially ineffective for certain types of medical waste and highlighted the critical role of waste packaging in successful sterilization. The lessons learned from the JHH validation process can inform the design of waste management protocols to ensure effective treatment of highly infectious medical waste. Copyright © 2017 American Society for Microbiology.

  9. Using Patient Feedback to Optimize the Design of a Certolizumab Pegol Electromechanical Self-Injection Device: Insights from Human Factors Studies.

    PubMed

    Domańska, Barbara; Stumpp, Oliver; Poon, Steven; Oray, Serkan; Mountian, Irina; Pichon, Clovis

    2018-01-01

    We incorporated patient feedback from human factors studies (HFS) in the patient-centric design and validation of ava®, an electromechanical device (e-Device) for self-injecting the anti-tumor necrosis factor certolizumab pegol (CZP). Healthcare professionals, caregivers, healthy volunteers, and patients with rheumatoid arthritis, psoriatic arthritis, ankylosing spondylitis, or Crohn's disease participated in 11 formative HFS to optimize the e-Device design through intended user feedback; nine studies involved simulated injections. Formative participant questionnaire feedback was collected following e-Device prototype handling. Validation HFS (one EU study and one US study) assessed the safe and effective setup and use of the e-Device using 22 predefined critical tasks. Task outcomes were categorized as "failures" if participants did not succeed within three attempts. Two hundred eighty-three participants entered formative (163) and validation (120) HFS; 260 participants performed one or more simulated e-Device self-injections. Design changes following formative HFS included alterations to buttons and the graphical user interface screen. All validation HFS participants completed critical tasks necessary for CZP dose delivery, with minimal critical task failures (12 of 572 critical tasks, 2.1%, in the EU study, and 2 of 5310 critical tasks, less than 0.1%, in the US study). CZP e-Device development was guided by intended user feedback through HFS, ensuring the final design addressed patients' needs. In both validation studies, participants successfully performed all critical tasks, demonstrating safe and effective e-Device self-injections. UCB Pharma. Plain language summary available on the journal website.

  10. Bayesian Adaptive Trial Design for a Newly Validated Surrogate Endpoint

    PubMed Central

    Renfro, Lindsay A.; Carlin, Bradley P.; Sargent, Daniel J.

    2011-01-01

    The evaluation of surrogate endpoints for primary use in future clinical trials is an increasingly important research area, due to demands for more efficient trials coupled with recent regulatory acceptance of some surrogates as 'valid.' However, little consideration has been given to how a trial which utilizes a newly-validated surrogate endpoint as its primary endpoint might be appropriately designed. We propose a novel Bayesian adaptive trial design that allows the new surrogate endpoint to play a dominant role in assessing the effect of an intervention, while remaining realistically cautious about its use. By incorporating multi-trial historical information on the validated relationship between the surrogate and clinical endpoints, then subsequently evaluating accumulating data against this relationship as the new trial progresses, we adaptively guard against an erroneous assessment of treatment based upon a truly invalid surrogate. When the joint outcomes in the new trial seem plausible given similar historical trials, we proceed with the surrogate endpoint as the primary endpoint, and do so adaptively, perhaps stopping the trial for early success or inferiority of the experimental treatment, or for futility. Otherwise, we discard the surrogate and switch adaptive determinations to the original primary endpoint. We use simulation to test the operating characteristics of this new design compared to a standard O'Brien-Fleming approach, as well as the ability of our design to discriminate trustworthy from untrustworthy surrogates in hypothetical future trials. Furthermore, we investigate possible benefits using patient-level data from 18 adjuvant therapy trials in colon cancer, where disease-free survival is considered a newly-validated surrogate endpoint for overall survival. PMID:21838811
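
    The core adaptive check, asking whether the new trial's joint outcomes remain plausible under the historical surrogate-clinical relationship, can be sketched as a predictive-band test. This is a deliberate simplification of the authors' Bayesian model: the linear relationship and all numbers below are hypothetical:

```python
def consistent_with_history(surr_effect, clin_effect,
                            slope, intercept, pred_sd, z=1.96):
    """Is the observed (surrogate, clinical) effect pair within a z*sd
    predictive band around the historical meta-analytic regression line?"""
    predicted = intercept + slope * surr_effect
    return abs(clin_effect - predicted) <= z * pred_sd

# Hypothetical historical relationship: clinical effect ≈ 0.8 * surrogate effect
use_surrogate = consistent_with_history(0.30, 0.25,
                                        slope=0.8, intercept=0.0, pred_sd=0.05)
print(use_surrogate)  # → True: keep the surrogate as primary endpoint
```

    When the check fails, the design "discards the surrogate and switches adaptive determinations to the original primary endpoint," as the abstract puts it.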

  11. CFD validation experiments for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Marvin, Joseph G.

    1992-01-01

    A roadmap for CFD code validation is introduced. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building block and benchmark validation experiments are identified along with their test conditions and measurements. Based on an evaluation criteria, recommendations for an initial CFD validation data base are given and gaps identified where future experiments could provide new validation data.

  12. Pre-engineering Spaceflight Validation of Environmental Models and the 2005 HZETRN Simulation Code

    NASA Technical Reports Server (NTRS)

    Nealy, John E.; Cucinotta, Francis A.; Wilson, John W.; Badavi, Francis F.; Dachev, Ts. P.; Tomov, B. T.; Walker, Steven A.; DeAngelis, Giovanni; Blattnig, Steve R.; Atwell, William

    2006-01-01

    The HZETRN code has been identified by NASA for engineering design in the next phase of space exploration highlighting a return to the Moon in preparation for a Mars mission. In response, a new series of algorithms beginning with 2005 HZETRN, will be issued by correcting some prior limitations and improving control of propagated errors along with established code verification processes. Code validation processes will use new/improved low Earth orbit (LEO) environmental models with a recently improved International Space Station (ISS) shield model to validate computational models and procedures using measured data aboard ISS. These validated models will provide a basis for flight-testing the designs of future space vehicles and systems of the Constellation program in the LEO environment.

  13. Design and Validation of a Rubric to Assess the Use of American Psychological Association Style in Scientific Articles

    ERIC Educational Resources Information Center

    Merma Molina, Gladys; Peña Alfaro, Hilda; Peña Alfaro González, Silvia Rosa

    2017-01-01

    In this study, the researchers will explore the process of designing and validating a rubric to evaluate the adaptation of scientific articles in the format of the "American Psychological Association" (APA). The rubric will evaluate certain aspects of the APA format that allow authors, editors, and evaluators to decide if the scientific…

  14. Measurement of Spatial Ability: Construction and Validation of the Spatial Reasoning Instrument for Middle School Students

    ERIC Educational Resources Information Center

    Ramful, Ajay; Lowrie, Thomas; Logan, Tracy

    2017-01-01

    This article describes the development and validation of a newly designed instrument for measuring the spatial ability of middle school students (11-13 years old). The design of the Spatial Reasoning Instrument (SRI) is based on three constructs (mental rotation, spatial orientation, and spatial visualization) and is aligned to the type of spatial…

  15. Design, Implementation and Validation of a Europe-Wide Pedagogical Framework for E-Learning

    ERIC Educational Resources Information Center

    Granic, Andrina; Mifsud, Charles; Cukusic, Maja

    2009-01-01

    Within the context of a Europe-wide project UNITE, a number of European partners set out to design, implement and validate a pedagogical framework (PF) for e- and m-Learning in secondary schools. The process of formulating and testing the PF was an evolutionary one that reflected the experiences and skills of the various European partners and…

  16. Brief Assessment of Motor Function: Content Validity and Reliability of the Upper Extremity Gross Motor Scale

    ERIC Educational Resources Information Center

    Cintas, Holly Lea; Parks, Rebecca; Don, Sarah; Gerber, Lynn

    2011-01-01

    Content validity and reliability of the Brief Assessment of Motor Function (BAMF) Upper Extremity Gross Motor Scale (UEGMS) were evaluated in this prospective, descriptive study. The UEGMS is one of five BAMF ordinal scales designed for quick documentation of gross, fine, and oral motor skill levels. Designed to be independent of age and…

  17. Development and Validation of an Instrument to Assess Student Attitudes toward Science across Grades 5 through 10

    ERIC Educational Resources Information Center

    Summers, Ryan; Abd-El-Khalick, Fouad

    2018-01-01

    The aim of the present study is to enable future studies into students' attitudes toward science, and related constructs, by developing and validating an instrument suitable for cross-sectional designs. Following a thorough review of the literature it was determined that many extant instruments included design aspects that appeared to be limited…

  18. The Design and Evaluation of Class Exercises as Active Learning Tools in Software Verification and Validation

    ERIC Educational Resources Information Center

    Wu, Peter Y.; Manohar, Priyadarshan A.; Acharya, Sushil

    2016-01-01

    It is well known that interesting questions can stimulate thinking and invite participation. Class exercises are designed to make use of questions to engage students in active learning. In a project toward building a community skilled in software verification and validation (SV&V), we critically review and further develop course materials in…

  19. Investigating the Quality of the School Technology Needs Assessment (STNA) 3.0: A Validity and Reliability Study

    ERIC Educational Resources Information Center

    Corn, Jenifer O.

    2010-01-01

    Schools and districts should use a well-designed needs assessment to inform important decisions about a range of technology program areas. Presently, there is a lack of valid and reliable instruments available and accessible to schools to effectively assess their educational needs to better design and evaluate their projects and initiatives. The…

  20. Design and Validation of a Photographic Expressive Persian Grammar Test for Children Aged 4-6 Years

    ERIC Educational Resources Information Center

    Haresabadi, Fatemeh; Ebadi, Abbas; Shirazi, Tahereh Sima; Dastjerdi Kazemi, Mehdi

    2016-01-01

    Syntax has a high importance among linguistic parameters, and syntax-related problems are the most common in language disorders. Therefore, the present study aimed to design a Photographic Expressive Persian Grammar Test for Iranian children in the age group of 4-6 years and to determine its validity and reliability. First, the target…

  1. Shape Optimization by Bayesian-Validated Computer-Simulation Surrogates

    NASA Technical Reports Server (NTRS)

    Patera, Anthony T.

    1997-01-01

    A nonparametric-validated, surrogate approach to optimization has been applied to the computational optimization of eddy-promoter heat exchangers and to the experimental optimization of a multielement airfoil. In addition to the baseline surrogate framework, a surrogate-Pareto framework has been applied to the two-criteria, eddy-promoter design problem. The Pareto analysis improves the predictability of the surrogate results, preserves generality, and provides a means to rapidly determine design trade-offs. Significant contributions have been made in the geometric description used for the eddy-promoter inclusions as well as to the surrogate framework itself. A level-set-based geometric description has been developed to define the shape of the eddy-promoter inclusions. The level-set technique allows for topology changes (from single-body, eddy-promoter configurations to two-body configurations) without requiring any additional logic. The continuity of the output responses for input variations that cross the boundary between topologies has been demonstrated. Input-output continuity is required for the straightforward application of surrogate techniques in which simplified, interpolative models are fitted through a construction set of data. The surrogate framework developed previously has been extended in a number of ways. First, the formulation for a general, two-output, two-performance metric problem is presented. Surrogates are constructed and validated for the outputs. The performance metrics can be functions of both outputs, as well as explicitly of the inputs, and serve to characterize the design preferences. By segregating the outputs and the performance metrics, an additional level of flexibility is provided to the designer. The validated outputs can be used in future design studies and the error estimates provided by the output validation step still apply, and require no additional appeals to the expensive analysis.
Second, a candidate-based a posteriori error analysis capability has been developed which provides probabilistic error estimates on the true performance for a design randomly selected near the surrogate-predicted optimal design.
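
    The surrogate idea, fitting a cheap interpolative model through a construction set of expensive analyses and then validating it at held-out points, can be sketched as follows; the analysis function is a hypothetical stand-in for the expensive simulation:

```python
import math

def expensive_analysis(x):
    # Stand-in for the costly simulation (hypothetical response surface)
    return math.sin(2 * x) + 0.5 * x

def lagrange_surrogate(xs, ys):
    """Interpolative surrogate through a construction set (Lagrange form)."""
    def s(x):
        total = 0.0
        for i, (xi, yi) in enumerate(zip(xs, ys)):
            w = 1.0
            for j, xj in enumerate(xs):
                if j != i:
                    w *= (x - xj) / (xi - xj)
            total += yi * w
        return total
    return s

# Construction set: 9 evaluations of the expensive analysis on [0, 2]
xs = [0.0, 0.25, 0.5, 0.75, 1.0, 1.25, 1.5, 1.75, 2.0]
ys = [expensive_analysis(x) for x in xs]
surrogate = lagrange_surrogate(xs, ys)

# Validation step: check surrogate error at points outside the construction set
max_err = max(abs(surrogate(x) - expensive_analysis(x))
              for x in [0.1, 0.37, 0.9, 1.31, 1.9])
```

    Once validated, design studies query the cheap surrogate instead of the expensive analysis, with the validation error bound carried along, mirroring the framework's claim that no additional appeals to the expensive analysis are required.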

  2. Validation of Reverse-Engineered and Additive-Manufactured Microsurgical Instrument Prototype.

    PubMed

    Singh, Ramandeep; Suri, Ashish; Anand, Sneh; Baby, Britty

    2016-12-01

    With advancements in imaging techniques, neurosurgical procedures are becoming highly precise and minimally invasive, thus demanding development of new ergonomically aesthetic instruments. Conventionally, neurosurgical instruments are manufactured using subtractive manufacturing methods. Such a process is complex, time-consuming, and impractical for prototype development and validation of new designs. Therefore, an alternative design process has been used utilizing blue light scanning, computer-aided designing, and additive manufacturing direct metal laser sintering (DMLS) for microsurgical instrument prototype development. Deviations of the DMLS-fabricated instrument were studied by superimposing scan data of the fabricated instrument with the computer-aided designing model. Content and concurrent validity of the fabricated prototypes was assessed by a group of 15 neurosurgeons performing sciatic nerve anastomosis in small laboratory animals. Comparative scoring was obtained for the control and study instruments. A t test was applied to the individual parameters, and P values for force (P < .0001) and surface roughness (P < .01) were found to be statistically significant. These 2 parameters were further analyzed using objective measures. The results show that additive manufacturing by DMLS provides an effective method for prototype development. However, direct application of these additive-manufactured instruments in the operating room requires further validation. © The Author(s) 2016.
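
    The deviation study, superimposing scan data on the CAD model, amounts to computing point-wise deviations once the two point sets are aligned; a sketch with hypothetical matched points (in mm):

```python
import math

# Hypothetical matched point pairs after alignment: CAD model vs. scan of the
# DMLS-fabricated instrument (coordinates in mm)
cad  = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (1.0, 1.0, 0.0), (0.0, 1.0, 0.1)]
scan = [(0.02, -0.01, 0.0), (1.01, 0.0, 0.02), (0.98, 1.0, 0.0), (0.0, 1.02, 0.1)]

def rms_deviation(a, b):
    """Root-mean-square point-to-point deviation between two aligned point sets."""
    sq = [sum((p - q) ** 2 for p, q in zip(pa, pb)) for pa, pb in zip(a, b)]
    return math.sqrt(sum(sq) / len(sq))

rms = rms_deviation(cad, scan)
print(round(rms, 4))
```

    In practice the scan cloud would first be registered to the CAD model (e.g. by an iterative closest point step) before deviations are summarized this way.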

  3. Design and validation of pictograms in a pediatric anaphylaxis action plan.

    PubMed

    Mok, Garrick; Vaillancourt, Régis; Irwin, Danica; Wong, Alexandre; Zemek, Roger; Alqurashi, Waleed

    2015-05-01

    Current anaphylaxis action plans (AAPs) are based on written instructions without inclusion of pictograms. To develop an AAP with pictorial aids and to prospectively validate the pictogram components of this plan. Participants recruited from the emergency department and allergy clinic participated in a questionnaire to validate pictograms depicting key counseling points of an anaphylactic reaction. Children ≥ 10 years of age and caregivers of children < 10 years with acute anaphylaxis or who carried an epinephrine auto-injector for a confirmed allergy were eligible. Guessability, translucency, and recall were assessed for 11 pictogram designs. Pictograms identified as correct or partially correct by at least 85% of participants were considered valid. Three independent reviewers assessed these outcome measures. Of the 115 total participants, 73 (63%) were female, 76 (66%) were parents/guardians, and 39 (34%) were children aged 10-17. Overall, 10 pictograms (91%) reached ≥ 85% for correct guessability, translucency, and recall. Four pictograms were redesigned to reach the preset validation target. One pictogram depicting symptom management (5-min wait time after first epinephrine treatment) reached 82% translucency after redesign. However, it reached 98% correct guessability and 100% recall. We prospectively designed and validated a set of pictograms to be included in an AAP. The incorporation of validated pictograms into an AAP may potentially increase comprehension of the triggers, signs and symptoms, and management of an anaphylactic reaction. © 2015 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
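
    The 85% validation criterion can be applied directly to response counts; the pictogram names and counts below are hypothetical, not the study's data:

```python
# Hypothetical guessability responses per pictogram:
# (correct, partially_correct, incorrect) out of 115 participants
responses = {
    "use_epinephrine": (101, 9, 5),
    "wait_5_min": (80, 14, 21),
}

def is_valid(correct, partial, incorrect, threshold=0.85):
    """Pictogram passes if correct + partially correct reaches the threshold."""
    total = correct + partial + incorrect
    return (correct + partial) / total >= threshold

validated = {name: is_valid(*counts) for name, counts in responses.items()}
print(validated)
```

    Pictograms failing the criterion, like the hypothetical `wait_5_min` entry here, would be redesigned and retested, matching the study's iterative redesign of four pictograms.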

  4. Connecting Technological Innovation in Artificial Intelligence to Real-world Medical Practice through Rigorous Clinical Validation: What Peer-reviewed Medical Journals Could Do

    PubMed Central

    2018-01-01

    Artificial intelligence (AI) is projected to substantially influence clinical practice in the foreseeable future. However, despite the excitement around these technologies, examples of robust clinical validation remain rare and, as a result, very few are currently in clinical use. A thorough, systematic validation of AI technologies using adequately designed clinical research studies before their integration into clinical practice is critical to ensure patient benefit and safety while avoiding any inadvertent harms. We would like to suggest several specific points regarding the role that peer-reviewed medical journals can play, in terms of study design, registration, and reporting, to help achieve proper and meaningful clinical validation of AI technologies designed to make medical diagnosis and prediction, focusing on the evaluation of diagnostic accuracy efficacy. Peer-reviewed medical journals can encourage investigators who wish to validate the performance of AI systems for medical diagnosis and prediction to pay closer attention to the factors listed in this article by emphasizing their importance. Thereby, peer-reviewed medical journals can ultimately facilitate translating the technological innovations into real-world practice while securing patient safety and benefit. PMID:29805337

  5. On various metrics used for validation of predictive QSAR models with applications in virtual screening and focused library design.

    PubMed

    Roy, Kunal; Mitra, Indrani

    2011-07-01

    Quantitative structure-activity relationships (QSARs) have important applications in drug discovery research, environmental fate modeling, property prediction, etc. Validation has been recognized as a very important step in QSAR model development. Because one important objective of QSAR modeling is to predict the activity/property/toxicity of new chemicals falling within the applicability domain of the developed models, and because QSARs are used for regulatory decisions, checking the reliability of the models and the confidence of their predictions is essential; both can be judged during the validation process. One prime application of a statistically significant QSAR model is virtual screening for molecules with improved potency, based on the pharmacophoric features and the descriptors appearing in the QSAR model. Validated QSAR models may also be utilized for the design of focused libraries, which may subsequently be screened for the selection of hits. The present review focuses on various metrics used for validation of predictive QSAR models, together with an overview of the application of QSAR models in the fields of virtual screening and focused library design for diverse series of compounds, with citation of some recent examples.
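    One widely used external-validation metric of the kind surveyed in this review is the predictive R-squared (often written Q2F1), which compares the test-set prediction error with the spread of the test observations about the training-set mean. A minimal sketch, using hypothetical activity values:

    ```python
    import numpy as np

    def q2_ext(y_test, y_pred, y_train_mean):
        """External predictive R^2 (Q^2_F1): 1 - PRESS / SS about the training mean."""
        y_test = np.asarray(y_test, dtype=float)
        y_pred = np.asarray(y_pred, dtype=float)
        press = np.sum((y_test - y_pred) ** 2)      # prediction error sum of squares
        ss = np.sum((y_test - y_train_mean) ** 2)   # spread about the training-set mean
        return 1.0 - press / ss

    # Hypothetical test-set activities and model predictions
    print(round(q2_ext([4.8, 5.6, 6.1, 4.2, 5.9],
                       [4.9, 5.4, 6.0, 4.5, 5.7],
                       y_train_mean=5.0), 3))  # → 0.938
    ```

    Values near 1 indicate good external predictivity; values below roughly 0.5 are commonly taken as inadequate.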

  6. The development and validation of three videos designed to psychologically prepare patients for coronary bypass surgery.

    PubMed

    Mahler, H I; Kulik, J A

    1995-02-01

    The purpose of this study was to demonstrate the validation of videotape interventions that were designed to prepare patients for coronary artery bypass graft (CABG) surgery. First, three videotapes were developed. Two of the tapes featured the experiences of three actual CABG patients and were constructed to present either an optimistic portrayal of the recovery period (mastery tape) or a portrayal designed to inoculate patients against potential problems (coping tape). The third videotape contained the more general nurse scenes and narration used in the other two tapes, but did not include the experiences of particular patients. We then conducted a study to establish the convergent and discriminant validity of the three tapes. That is, we sought to demonstrate both that the tapes did differ along the mastery-coping dimension, and that they did not differ in other respects (such as in the degree of information provided or the perceived credibility of the narrator). The validation study, conducted with 42 males who had previously undergone CABG, demonstrated that the intended equivalences and differences between the tapes were achieved. The importance of establishing the validity of health-related interventions is discussed.

  7. Statistical considerations on prognostic models for glioma

    PubMed Central

    Molinaro, Annette M.; Wrensch, Margaret R.; Jenkins, Robert B.; Eckel-Passow, Jeanette E.

    2016-01-01

    Given the lack of beneficial treatments in glioma, there is a need for prognostic models for therapeutic decision making and life planning. Recently several studies defining subtypes of glioma have been published. Here, we review the statistical considerations of how to build and validate prognostic models, explain the models presented in the current glioma literature, and discuss advantages and disadvantages of each model. The 3 statistical considerations to establishing clinically useful prognostic models are: study design, model building, and validation. Careful study design helps to ensure that the model is unbiased and generalizable to the population of interest. During model building, a discovery cohort of patients can be used to choose variables, construct models, and estimate prediction performance via internal validation. Via external validation, an independent dataset can assess how well the model performs. It is imperative that published models properly detail the study design and methods for both model building and validation. This provides readers the information necessary to assess the bias in a study, compare other published models, and determine the model's clinical usefulness. As editors, reviewers, and readers of the relevant literature, we should be cognizant of the needed statistical considerations and insist on their use. PMID:26657835
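    The internal-validation step described above, estimating prediction performance within a discovery cohort before an external dataset is available, is commonly implemented as k-fold cross-validation. A minimal numpy sketch with a simulated single-variable cohort (the data and model are hypothetical, not the glioma models reviewed):

    ```python
    import numpy as np

    def kfold_r2(X, y, k=5, seed=0):
        """Internal validation: k-fold cross-validated R^2 of an OLS model."""
        rng = np.random.default_rng(seed)
        idx = rng.permutation(len(y))
        press, ss = 0.0, 0.0
        for fold in np.array_split(idx, k):
            train = np.setdiff1d(idx, fold)
            # fit ordinary least squares (with intercept) on the training folds
            A = np.column_stack([np.ones(len(train)), X[train]])
            beta, *_ = np.linalg.lstsq(A, y[train], rcond=None)
            pred = np.column_stack([np.ones(len(fold)), X[fold]]) @ beta
            press += np.sum((y[fold] - pred) ** 2)   # held-out prediction error
            ss += np.sum((y[fold] - y[train].mean()) ** 2)
        return 1.0 - press / ss

    # Hypothetical discovery cohort: one prognostic variable plus noise
    rng = np.random.default_rng(1)
    X = rng.normal(size=(100, 1))
    y = 2.0 * X[:, 0] + rng.normal(scale=0.5, size=100)
    print(kfold_r2(X, y))  # close to the true signal fraction, ~0.9
    ```

    External validation then repeats the evaluation on a dataset that played no part in model building.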

  8. Connecting Technological Innovation in Artificial Intelligence to Real-world Medical Practice through Rigorous Clinical Validation: What Peer-reviewed Medical Journals Could Do.

    PubMed

    Park, Seong Ho; Kressel, Herbert Y

    2018-05-28

    Artificial intelligence (AI) is projected to substantially influence clinical practice in the foreseeable future. However, despite the excitement around the technologies, it is yet rare to see examples of robust clinical validation of the technologies and, as a result, very few are currently in clinical use. A thorough, systematic validation of AI technologies using adequately designed clinical research studies before their integration into clinical practice is critical to ensure patient benefit and safety while avoiding any inadvertent harms. We would like to suggest several specific points regarding the role that peer-reviewed medical journals can play, in terms of study design, registration, and reporting, to help achieve proper and meaningful clinical validation of AI technologies designed to make medical diagnosis and prediction, focusing on the evaluation of diagnostic accuracy efficacy. Peer-reviewed medical journals can encourage investigators who wish to validate the performance of AI systems for medical diagnosis and prediction to pay closer attention to the factors listed in this article by emphasizing their importance. Thereby, peer-reviewed medical journals can ultimately facilitate translating the technological innovations into real-world practice while securing patient safety and benefit.

  9. Reliability and validity of electrothermometers and associated thermocouples.

    PubMed

    Jutte, Lisa S; Knight, Kenneth L; Long, Blaine C

    2008-02-01

    Examine thermocouple model uncertainty (reliability + validity). First, a 3x3 repeated-measures design with independent variables electrothermometer and thermocouple model. Second, a 1x3 repeated-measures design with independent variable subprobe. Three electrothermometers, 3 thermocouple models, a multi-sensor probe, and a mercury thermometer measured a stable water bath. Temperature and absolute temperature differences between thermocouples and a mercury thermometer. Thermocouple uncertainty was greater than manufacturers' claims. For all thermocouple models, validity and reliability were better in the Iso-Themex than the Datalogger, but there were no practical differences between models within an electrothermometer. Validity of multi-sensor probes and thermocouples within a probe was not different but was greater than manufacturers' claims. Reliability of multiprobes and thermocouples within a probe was within manufacturers' claims. Thermocouple models vary in reliability and validity. Scientists should test and report the uncertainty of their equipment rather than depending on manufacturers' claims.
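    The two quantities in this abstract can be computed directly: validity as the mean absolute difference from a reference thermometer, and reliability as the spread of repeated readings of a stable bath. A minimal sketch with hypothetical readings:

    ```python
    import statistics

    def uncertainty(readings, reference):
        """Validity = mean absolute difference from a reference thermometer;
        reliability = standard deviation of repeated readings of a stable bath."""
        validity = statistics.mean(abs(r - reference) for r in readings)
        reliability = statistics.stdev(readings)
        return validity, reliability

    # Hypothetical repeated thermocouple readings of a stable 18.0 C water bath
    validity, reliability = uncertainty([18.2, 18.1, 18.3, 18.2, 18.1], 18.0)
    print(f"validity = {validity:.2f} C, reliability = {reliability:.3f} C")
    ```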

  10. Directed Design of Experiments for Validating Probability of Detection Capability of a Testing System

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R. (Inventor)

    2012-01-01

    A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
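    The patent abstract describes grouping hit/miss observations into flaw-size classes of an optimal width. The per-class empirical POD computation can be sketched generically as below; the fixed-width binning is an illustration, not the patented optimal-class-width algorithm, and the inspection data are hypothetical:

    ```python
    import numpy as np

    def pod_by_class(sizes, hits, class_width):
        """Empirical probability of detection (POD) per flaw-size class.

        sizes: flaw sizes; hits: 1 = detected, 0 = missed;
        class_width: the width of each flaw-size bin.
        Returns (class center, POD) pairs for non-empty classes.
        """
        sizes = np.asarray(sizes, dtype=float)
        hits = np.asarray(hits, dtype=float)
        edges = np.arange(sizes.min(), sizes.max() + class_width, class_width)
        pod = []
        for lo in edges[:-1]:
            in_bin = (sizes >= lo) & (sizes < lo + class_width)
            if in_bin.any():
                pod.append((lo + class_width / 2, hits[in_bin].mean()))
        return pod

    # Hypothetical hit/miss inspection data: detection improves with flaw size
    sizes = [0.5, 0.8, 1.0, 1.6, 1.8, 2.0, 2.6, 2.8]
    hits  = [0,   0,   1,   0,   1,   1,   1,   1  ]
    for center, p in pod_by_class(sizes, hits, class_width=1.0):
        print(f"flaw size ~{center:.1f}: POD = {p:.2f}")
    ```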

  11. Internal Validity: A Must in Research Designs

    ERIC Educational Resources Information Center

    Cahit, Kaya

    2015-01-01

    In experimental research, internal validity refers to what extent researchers can conclude that changes in dependent variable (i.e. outcome) are caused by manipulations in independent variable. The causal inference permits researchers to meaningfully interpret research results. This article discusses (a) internal validity threats in social and…

  12. Validation of the Juhnke-Balkin Life Balance Inventory

    ERIC Educational Resources Information Center

    Davis, R. J.; Balkin, Richard S.; Juhnke, Gerald A.

    2014-01-01

    Life balance is an important construct within the counseling profession. A validation study utilizing exploratory factor analysis and multiple regression was conducted on the Juhnke-Balkin Life Balance Inventory. Results from the study serve as evidence of validity for an assessment instrument designed to measure life balance.

  13. 49 CFR 236.905 - Railroad Safety Program Plan (RSPP).

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... to be used in the verification and validation process, consistent with appendix C to this part. The...; and (iv) The identification of the safety assessment process. (2) Design for verification and validation. The RSPP must require the identification of verification and validation methods for the...

  14. 49 CFR 236.905 - Railroad Safety Program Plan (RSPP).

    Code of Federal Regulations, 2014 CFR

    2014-10-01

    ... to be used in the verification and validation process, consistent with appendix C to this part. The...; and (iv) The identification of the safety assessment process. (2) Design for verification and validation. The RSPP must require the identification of verification and validation methods for the...

  15. 49 CFR 236.905 - Railroad Safety Program Plan (RSPP).

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... to be used in the verification and validation process, consistent with appendix C to this part. The...; and (iv) The identification of the safety assessment process. (2) Design for verification and validation. The RSPP must require the identification of verification and validation methods for the...

  16. 49 CFR 236.905 - Railroad Safety Program Plan (RSPP).

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... to be used in the verification and validation process, consistent with appendix C to this part. The...; and (iv) The identification of the safety assessment process. (2) Design for verification and validation. The RSPP must require the identification of verification and validation methods for the...

  17. Ambient Assisted Living spaces validation by services and devices simulation.

    PubMed

    Fernández-Llatas, Carlos; Mocholí, Juan Bautista; Sala, Pilar; Naranjo, Juan Carlos; Pileggi, Salvatore F; Guillén, Sergio; Traver, Vicente

    2011-01-01

    The design of Ambient Assisted Living (AAL) products is a very demanding challenge. AAL product creation is a complex iterative process that must satisfy exhaustive prerequisites for accessibility and usability. In this process the early detection of errors is crucial to creating cost-effective systems. Computer-assisted tools can provide vital help to usability designers in avoiding design errors. Specifically, computer simulation of products in AAL environments can be used in all the design phases to support validation. In this paper, a computer simulation tool for supporting usability designers in the creation of innovative AAL products is presented. This application will benefit their work, saving time and improving the final system's functionality.

  18. X-56A MUTT: Aeroservoelastic Modeling

    NASA Technical Reports Server (NTRS)

    Ouellette, Jeffrey A.

    2015-01-01

    For the NASA X-56A Program, Armstrong Flight Research Center has been developing a set of linear state-space models that integrate the flight dynamics and structural dynamics. These high-order models are needed for control design, control evaluation, and test input design. The current focus has been on developing stiff-wing models to validate the current modeling approach. Extending the modeling approach to the flexible wings requires only a change in the structural model. Individual subsystem models (actuators, inertial properties, etc.) have been validated by component-level ground tests. Closed-loop simulation of maneuvers designed to validate the flight dynamics of these models correlates very well with flight test data. The open-loop structural dynamics are also shown to correlate well with the flight test data.

  19. Modeling and simulation of soft sensor design for real-time speed estimation, measurement and control of induction motor.

    PubMed

    Etien, Erik

    2013-05-01

    This paper deals with the design of a speed soft sensor for an induction motor. The sensor is based on the physical model of the motor. Because the validation step highlights the fact that the sensor cannot be validated for all operating points, the model is modified in order to obtain a fully validated sensor over the whole speed range. An original feature of the proposed approach is that the modified model is derived from a stability analysis using automatic control theory. Copyright © 2012 ISA. Published by Elsevier Ltd. All rights reserved.

  20. A CFD validation roadmap for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Marvin, Joseph G.

    1992-01-01

    A roadmap for computational fluid dynamics (CFD) code validation is developed. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building-block and benchmark validation experiments are identified along with their test conditions and measurements. Based on evaluation criteria, recommendations for an initial CFD validation database are given, and gaps are identified where future experiments would provide the needed validation data.

  1. A CFD validation roadmap for hypersonic flows

    NASA Technical Reports Server (NTRS)

    Marvin, Joseph G.

    1993-01-01

    A roadmap for computational fluid dynamics (CFD) code validation is developed. The elements of the roadmap are consistent with air-breathing vehicle design requirements and related to the important flow path components: forebody, inlet, combustor, and nozzle. Building-block and benchmark validation experiments are identified along with their test conditions and measurements. Based on evaluation criteria, recommendations for an initial CFD validation database are given, and gaps are identified where future experiments would provide the needed validation data.

  2. Aeroelastic tailoring and integrated wing design

    NASA Technical Reports Server (NTRS)

    Love, Mike H.; Bohlmann, Jon

    1989-01-01

    Much has been learned from the TSO optimization code over the years in determining aeroelastic tailoring's place in the integrated design process. Indeed, it has become apparent that aeroelastic tailoring is and should be deeply embedded in design. Aeroelastic tailoring can have tremendous effects on the design loads, and design loads affect every aspect of the design process. While optimization enables the evaluation of design sensitivities, valid computational simulations are required to make these sensitivities valid. Aircraft maneuvers simulated must adequately cover the plane's intended flight envelope, realistic design criteria must be included, and models among the various disciplines must be calibrated among themselves and with any hard-core (e.g., wind tunnel) data available. The information gained and benefits derived from aeroelastic tailoring provide a focal point for the various disciplines to become involved and communicate with one another to reach the best design possible.

  3. Flight Test Validation of Optimal Input Design and Comparison to Conventional Inputs

    NASA Technical Reports Server (NTRS)

    Morelli, Eugene A.

    1997-01-01

    A technique for designing optimal inputs for aerodynamic parameter estimation was flight tested on the F-18 High Angle of Attack Research Vehicle (HARV). Model parameter accuracies calculated from flight test data were compared on an equal basis for optimal input designs and conventional inputs at the same flight condition. In spite of errors in the a priori input design models and distortions of the input form by the feedback control system, the optimal inputs increased estimated parameter accuracies compared to conventional 3-2-1-1 and doublet inputs. In addition, the tests using optimal input designs demonstrated enhanced design flexibility, allowing the optimal input design technique to use a larger input amplitude to achieve further increases in estimated parameter accuracy without departing from the desired flight test condition. This work validated the analysis used to develop the optimal input designs, and demonstrated the feasibility and practical utility of the optimal input design technique.

  4. Development of a Conservative Model Validation Approach for Reliable Analysis

    DTIC Science & Technology

    2015-01-01

    CIE 2015 August 2-5, 2015, Boston, Massachusetts, USA [DRAFT] DETC2015-46982 DEVELOPMENT OF A CONSERVATIVE MODEL VALIDATION APPROACH FOR RELIABLE...obtain a conservative simulation model for reliable design even with limited experimental data. Very little research has taken into account the...3, the proposed conservative model validation is briefly compared to the conventional model validation approach. Section 4 describes how to account

  5. Construction and Validation of a Professional Suitability Scale for Social Work Practice

    ERIC Educational Resources Information Center

    Tam, Dora M. Y.; Coleman, Heather

    2009-01-01

    This article reports on the construction and validation of a professional suitability scale, designed for assessing students' suitability for social work practice. Data were collected from 188 field supervisors who provided usable questionnaires, representing a response rate of 74%. Construct validation by exploratory factor analysis identified a…

  6. The Metacognitive Awareness Listening Questionnaire: Development and Validation

    ERIC Educational Resources Information Center

    Vandergrift, Larry; Goh, Christine C. M.; Mareschal, Catherine J.; Tafaghodtari, Marzieh H.

    2006-01-01

    This article describes the development and validation of a listening questionnaire designed to assess second language (L2) listeners' metacognitive awareness and perceived use of strategies while listening to oral texts. The process of instrument development and validation is described, along with a review of the relevant literature related to…

  7. Empirical Validation and Application of the Computing Attitudes Survey

    ERIC Educational Resources Information Center

    Dorn, Brian; Elliott Tew, Allison

    2015-01-01

    Student attitudes play an important role in shaping learning experiences. However, few validated instruments exist for measuring student attitude development in a discipline-specific way. In this paper, we present the design, development, and validation of the computing attitudes survey (CAS). The CAS is an extension of the Colorado Learning…

  8. Validating long-term satellite-derived disturbance products: the case of burned areas

    NASA Astrophysics Data System (ADS)

    Boschetti, L.; Roy, D. P.

    2015-12-01

    The potential research, policy and management applications of satellite products place a high priority on providing statements about their accuracy. A number of NASA, ESA and EU funded global and continental burned area products have been developed using coarse spatial resolution satellite data, and have the potential to become part of a long-term fire Climate Data Record. These products have usually been validated by comparison with reference burned area maps derived by visual interpretation of Landsat or similar spatial resolution data selected on an ad hoc basis. More optimally, a design-based validation method should be adopted, characterized by the selection of reference data via a probability sampling that can subsequently be used to compute accuracy metrics, taking into account the sampling probability. Design-based techniques have been used for annual land cover and land cover change product validation, but have not been widely used for burned area products, or for the validation of global products that are highly variable in time and space (e.g. snow, floods or other non-permanent phenomena). This has been due to the challenge of designing an appropriate sampling strategy, and to the cost of collecting independent reference data. We propose a tri-dimensional sampling grid that allows for probability sampling of Landsat data in time and in space. To sample the globe in the spatial domain with non-overlapping sampling units, the Thiessen Scene Area (TSA) tessellation of the Landsat WRS path/rows is used. The TSA grid is then combined with the 16-day Landsat acquisition calendar to provide tri-dimensional elements (voxels). This allows the implementation of a sampling design where not only the location but also the time interval of the reference data is explicitly drawn by probability sampling. The proposed sampling design is a stratified random sampling, with two-level stratification of the voxels based on biomes and fire activity (Figure 1). 
The novel validation approach, used for the validation of the MODIS and forthcoming VIIRS global burned area products, is a general one, and could be used for the validation of other global products that are highly variable in space and time, as is required to assess the accuracy of climate records. The approach is demonstrated using a 1-year dataset of MODIS fire products.
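    The stratified random sampling of space-time voxels described above can be sketched generically; the voxel layout, strata, and sample size below are hypothetical stand-ins for the biome/fire-activity stratification:

    ```python
    import random

    def stratified_sample(units, stratum_of, n_per_stratum, seed=42):
        """Draw a fixed-size simple random sample from every stratum."""
        rng = random.Random(seed)
        by_stratum = {}
        for u in units:
            by_stratum.setdefault(stratum_of(u), []).append(u)
        sample = []
        for label in sorted(by_stratum):
            members = by_stratum[label]
            sample.extend(rng.sample(members, min(n_per_stratum, len(members))))
        return sample

    # Hypothetical voxels: (TSA cell, 16-day period, biome, fire-activity class)
    voxels = [(cell, period, "forest" if cell < 5 else "savanna",
               "high" if cell % 2 else "low")
              for cell in range(10) for period in range(23)]
    picked = stratified_sample(voxels, stratum_of=lambda v: (v[2], v[3]),
                               n_per_stratum=5)
    print(len(picked))  # → 20 (5 voxels from each of 4 strata)
    ```

    Because each unit's inclusion probability is known from the design, unbiased accuracy estimators can weight the reference comparisons accordingly.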

  9. Quantitative impurity analysis of monoclonal antibody size heterogeneity by CE-LIF: example of development and validation through a quality-by-design framework.

    PubMed

    Michels, David A; Parker, Monica; Salas-Solano, Oscar

    2012-03-01

    This paper describes a quality-by-design framework applied to the development, optimization and validation of a sensitive capillary electrophoresis-sodium dodecyl sulfate (CE-SDS) assay for monitoring impurities, produced in the manufacture of therapeutic MAb products, that potentially impact drug efficacy or patient safety. Drug substance or drug product samples are derivatized with fluorogenic 3-(2-furoyl)quinoline-2-carboxaldehyde and nucleophilic cyanide before separation by CE-SDS coupled to LIF detection. Three design-of-experiments studies enabled critical labeling parameters to meet method requirements for detecting minor impurities while building precision and robustness into the assay during development. The screening design predicted optimal conditions to control labeling artifacts, while two full factorial designs demonstrated method robustness through control of the temperature and cyanide parameters within the normal operating range. Subsequent validation according to the guidelines of the International Conference on Harmonization showed the CE-SDS/LIF assay was specific, accurate, and precise (RSD ≤ 0.8%) for relative peak distribution, and linear (R > 0.997) over the range of 0.5-1.5 mg/mL with LOD and LOQ of 10 ng/mL and 35 ng/mL, respectively. Validation confirmed the system suitability criteria used as a level of control to ensure reliable method performance. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
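    A full factorial design of the kind used here for the temperature and cyanide parameters simply enumerates every combination of factor levels. A minimal sketch; the factor names and level values below are illustrative, not the paper's actual set points:

    ```python
    from itertools import product

    def full_factorial(levels):
        """Enumerate every run of a full factorial design.

        levels: dict mapping factor name -> list of that factor's levels.
        """
        names = list(levels)
        return [dict(zip(names, combo))
                for combo in product(*(levels[n] for n in names))]

    # Illustrative two-level design for two labeling factors
    design = full_factorial({"temperature_C": [20, 30], "cyanide_mM": [2, 4]})
    for run in design:
        print(run)  # 2^2 = 4 runs
    ```

    Running the assay at each combination and checking that responses stay within requirements is what demonstrates robustness over the normal operating range.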

  10. Scenario-based design: a method for connecting information system design with public health operations and emergency management.

    PubMed

    Reeder, Blaine; Turner, Anne M

    2011-12-01

    Responding to public health emergencies requires rapid and accurate assessment of workforce availability under adverse and changing circumstances. However, public health information systems to support resource management during both routine and emergency operations are currently lacking. We applied scenario-based design as an approach to engage public health practitioners in the creation and validation of an information design to support routine and emergency public health activities. Using semi-structured interviews we identified the information needs and activities of senior public health managers of a large municipal health department during routine and emergency operations. Interview analysis identified 25 information needs for public health operations management. The identified information needs were used in conjunction with scenario-based design to create 25 scenarios of use and a public health manager persona. Scenarios of use and persona were validated and modified based on follow-up surveys with study participants. Scenarios were used to test and gain feedback on a pilot information system. The method of scenario-based design was applied to represent the resource management needs of senior-level public health managers under routine and disaster settings. Scenario-based design can be a useful tool for engaging public health practitioners in the design process and to validate an information system design. Copyright © 2011 Elsevier Inc. All rights reserved.

  11. Utility of the MMPI-2-RF (Restructured Form) Validity Scales in Detecting Malingering in a Criminal Forensic Setting: A Known-Groups Design

    ERIC Educational Resources Information Center

    Sellbom, Martin; Toomey, Joseph A.; Wygant, Dustin B.; Kucharski, L. Thomas; Duncan, Scott

    2010-01-01

    The current study examined the utility of the recently released Minnesota Multiphasic Personality Inventory-2 Restructured Form (MMPI-2-RF; Ben-Porath & Tellegen, 2008) validity scales to detect feigned psychopathology in a criminal forensic setting. We used a known-groups design with the Structured Interview of Reported Symptoms (SIRS;…

  12. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Rhinefrank, Kenneth E.; Lenee-Bluhm, Pukha; Prudell, Joseph H.

    The most prudent path to a full-scale design, build and deployment of a wave energy conversion (WEC) system involves establishing validated numerical models through physical experiments in a methodical scaling program. This project provides essential additional rounds of wave tank testing at 1:33 scale and ocean/bay testing at 1:7 scale, necessary to validate the numerical modeling that is essential to a utility-scale WEC design and its associated certification.

  13. Curriculum Design Orientations Preference Scale of Teachers: Validity and Reliability Study

    ERIC Educational Resources Information Center

    Bas, Gokhan

    2013-01-01

    The purpose of this study was to develop a valid and reliable scale for preferences of teachers in regard of their curriculum design orientations. Because there was no scale development study similar to this one in Turkey, it was considered as an urgent need to develop such a scale in the study. The sample of the research consisted of 300…

  14. Requirements and feasibility study of flight demonstration of Active Controls Technology (ACT) on the NASA 515 airplane

    NASA Technical Reports Server (NTRS)

    Gordon, C. K.

    1975-01-01

    A preliminary design study was conducted to evaluate the suitability of the NASA 515 airplane as a flight demonstration vehicle, and to develop plans, schedules, and budget costs for fly-by-wire/active controls technology flight validation in the NASA 515 airplane. The preliminary design and planning were accomplished for two phases of flight validation.

  15. Empirical Assessment of Effect of Publication Bias on a Meta-Analysis of Validity Studies on University Matriculation Examinations in Nigeria

    ERIC Educational Resources Information Center

    Adeyemo, Emily Oluseyi

    2012-01-01

    This study examined the impact of publication bias on a meta-analysis of empirical studies on validity of University Matriculation Examinations in Nigeria with a view to determine the level of difference between published and unpublished articles. Specifically, the design was an ex-post facto, a causal comparative design. The sample size consisted…

  16. Occupant Protection during Orion Crew Exploration Vehicle Landings

    NASA Technical Reports Server (NTRS)

    Gernhardt, Michael L.; Jones, J. A.; Granderson, B. K.; Somers, J. T.

    2009-01-01

    The Constellation Program is evaluating current vehicle design capabilities for nominal water landings and contingency land landings of the Orion Crew Exploration Vehicle. The Orion Landing Strategy tiger team was formed to lead the technical effort; its associated activities include reviewing the current vehicle design and its susceptibility to roll control and tip-over, reviewing methods for assessing occupant injury during ascents, aborts, and landings, developing an alternate seat/attenuation design that improves occupant protection and operability, and testing the seat/attenuation system designs to ensure valid results. The EVA Physiology, Systems and Performance (EPSP) project is leading the effort, under the authority of the Tiger Team steering committee, to develop, verify, validate and accredit biodynamics models using a variety of crash and injury databases, including NASCAR, Indy Car and military aircraft data. The validated biodynamics models will be used by the Constellation Program to evaluate a variety of vehicle, seat and restraint designs in the context of multiple nominal and off-nominal landing scenarios. The models will be used in conjunction with Acceptable Injury Risk definitions to provide new occupant protection requirements for the Constellation Program.

  17. GPS Auto-Navigation Design for Unmanned Air Vehicles

    NASA Technical Reports Server (NTRS)

    Nilsson, Caroline C. A.; Heinzen, Stearns N.; Hall, Charles E., Jr.; Chokani, Ndaona

    2003-01-01

    A GPS auto-navigation system is designed for unmanned air vehicles. The objective is to enable the air vehicle to be used as a test-bed for novel flow control concepts. The navigation system uses pre-programmed GPS waypoints. The actual GPS position, heading, and velocity are collected by the flight computer, a PC104 system running Real-Time Linux, and compared with the desired waypoint. The navigator then determines the necessity of a heading correction and outputs the correction in the form of a commanded bank angle, for a level coordinated turn, to the controller system. This controller system consists of five controllers (pitch-rate PID, yaw damper, bank-angle PID, velocity hold, and altitude hold) designed for a closed-loop non-linear aircraft model with linear aerodynamic coefficients. The ability and accuracy of using GPS data are validated by a GPS flight. The autopilots are also validated in flight. The autopilot unit flight validations show that the autopilots function as designed. The aircraft model, generated in MATLAB Simulink, is also enhanced by the flight data to accurately represent the actual aircraft.
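    The navigator's core computation, mapping heading error to a commanded bank angle for a level coordinated turn, follows from the standard relation tan(phi) = V*omega/g. A minimal sketch; the proportional gain and bank limit are assumed values, not the paper's:

    ```python
    import math

    def bank_command(heading_deg, target_deg, speed_mps, k=0.3, max_bank_deg=30.0):
        """Map wrapped heading error to a commanded bank angle.

        Commands a turn rate proportional to heading error (gain k, assumed),
        then converts it via the coordinated-turn relation tan(phi) = V*omega/g.
        """
        g = 9.81
        err = (target_deg - heading_deg + 180.0) % 360.0 - 180.0  # wrap to [-180, 180)
        omega = math.radians(k * err)             # commanded turn rate, rad/s
        phi = math.degrees(math.atan(speed_mps * omega / g))
        return max(-max_bank_deg, min(max_bank_deg, phi))

    # 20 degrees right of course at 25 m/s
    print(round(bank_command(350.0, 10.0, 25.0), 1))  # → 14.9
    ```

    Saturating the output keeps the wings-level controller inside a benign flight regime even for large course errors.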

  18. Development of an ultra high performance liquid chromatography method for determining triamcinolone acetonide in hydrogels using the design of experiments/design space strategy in combination with process capability index.

    PubMed

    Oliva, Alexis; Monzón, Cecilia; Santoveña, Ana; Fariña, José B; Llabrés, Matías

    2016-07-01

    An ultra high performance liquid chromatography method was developed and validated for the quantitation of triamcinolone acetonide in an injectable ophthalmic hydrogel, in order to determine the contribution of analytical method error to the content uniformity measurement. During the development phase, the design of experiments/design space strategy was used. For this, the free R environment was used as an alternative to commercial software and proved a fast, efficient tool for data analysis. The process capability index was used to find the permitted level of variation for each factor and to define the design space. All these aspects were analyzed and discussed under different experimental conditions using the Monte Carlo simulation method. Second, a pre-study validation procedure was performed in accordance with the International Conference on Harmonization guidelines. The validated method was applied to the determination of uniformity of dosage units, and the sources of variability (inhomogeneity and analytical method error) were analyzed based on the overall uncertainty. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
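    The process capability index used to bound each factor's permitted variation has a standard closed form; a minimal sketch, with invented replicate data for illustration:

```python
from statistics import mean, stdev

def cpk(samples, lsl, usl):
    """Process capability index: distance from the process mean to the
    nearer specification limit, in units of three standard deviations.
    Cpk >= 1.33 is a commonly used acceptance threshold."""
    mu, sigma = mean(samples), stdev(samples)
    return min(usl - mu, mu - lsl) / (3.0 * sigma)

# Hypothetical retention-time replicates against spec limits 9.7-10.3 min
print(cpk([9.9, 10.0, 10.1, 10.0, 9.95, 10.05], 9.7, 10.3))
```

    In a design-space study, the same index can be recomputed under Monte Carlo perturbations of each factor to find how much variation still keeps Cpk acceptable.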

  19. Methodologies for pre-validation of biofilters and wetlands for stormwater treatment.

    PubMed

    Zhang, Kefeng; Randelovic, Anja; Aguiar, Larissa M; Page, Declan; McCarthy, David T; Deletic, Ana

    2015-01-01

    Water Sensitive Urban Design (WSUD) systems such as biofilters (bio-retention systems and rain gardens) and wetlands are frequently used as part of stormwater harvesting treatment trains. However, validation frameworks for such systems do not exist, limiting their adoption for end-uses such as drinking water. The first stage in the validation framework is pre-validation, which prepares information for further validation monitoring. A pre-validation roadmap, consisting of five steps, is suggested in this paper. Detailed methods for investigating target micropollutants in stormwater, and for determining challenge conditions for biofilters and wetlands, are provided. A literature review was undertaken to identify and quantify micropollutants in stormwater. MUSIC V5.1 was used to simulate the behaviour of the systems based on 30-year rainfall data in three distinct climate zones; outputs were evaluated to identify thresholds of operational variables, including length of dry periods (LDPs) and volume of water treated per event. The paper highlights that a number of micropollutants were found in stormwater at levels above various worldwide drinking water guidelines (eight pesticides, benzene, benzo(a)pyrene, pentachlorophenol, di-(2-ethylhexyl)-phthalate and total polychlorinated biphenyls). The 95th percentile LDP was exponentially related to system design area, while the 5th percentile LDP remained within short durations (2-8 hours). The 95th percentile volume of water treated per event was exponentially related to system design area expressed as a percentage of the impervious catchment area. The outcomes of this study show that pre-validation can be completed through a roadmap consisting of a series of steps; this will help in the validation of stormwater treatment systems.
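    The percentile analysis of operational variables can be sketched generically. The event series below is invented for illustration; the percentile routine follows the common linear-interpolation definition:

```python
def dry_periods_hours(event_start_times_h):
    """Lengths of dry periods between successive runoff events, in hours."""
    return [t1 - t0 for t0, t1 in zip(event_start_times_h, event_start_times_h[1:])]

def percentile(data, p):
    """Linear-interpolation percentile of a sample (0 <= p <= 100)."""
    s = sorted(data)
    k = (len(s) - 1) * p / 100.0
    f = int(k)
    c = min(f + 1, len(s) - 1)
    return s[f] + (s[c] - s[f]) * (k - f)

# Hypothetical event start times from a rainfall-runoff simulation
ldps = dry_periods_hours([0.0, 6.0, 30.0, 31.0, 80.0])
challenge_ldp = percentile(ldps, 95)  # long-dry-spell challenge condition
```

    Applied to a 30-year simulated event series, the 95th percentile LDP defines the long-dry-spell challenge condition and the 5th percentile the short one.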

  20. Methodologies for Pre-Validation of Biofilters and Wetlands for Stormwater Treatment

    PubMed Central

    Zhang, Kefeng; Randelovic, Anja; Aguiar, Larissa M.; Page, Declan; McCarthy, David T.; Deletic, Ana

    2015-01-01

    Background Water Sensitive Urban Design (WSUD) systems such as biofilters (bio-retention systems and rain gardens) and wetlands are frequently used as part of stormwater harvesting treatment trains. However, validation frameworks for such systems do not exist, limiting their adoption for end-uses such as drinking water. The first stage in the validation framework is pre-validation, which prepares information for further validation monitoring. Objectives A pre-validation roadmap, consisting of five steps, is suggested in this paper. Detailed methods for investigating target micropollutants in stormwater, and for determining challenge conditions for biofilters and wetlands, are provided. Methods A literature review was undertaken to identify and quantify micropollutants in stormwater. MUSIC V5.1 was used to simulate the behaviour of the systems based on 30-year rainfall data in three distinct climate zones; outputs were evaluated to identify thresholds of operational variables, including length of dry periods (LDPs) and volume of water treated per event. Results A number of micropollutants were found in stormwater at levels above various worldwide drinking water guidelines (eight pesticides, benzene, benzo(a)pyrene, pentachlorophenol, di-(2-ethylhexyl)-phthalate and total polychlorinated biphenyls). The 95th percentile LDP was exponentially related to system design area, while the 5th percentile LDP remained within short durations (2–8 hours). The 95th percentile volume of water treated per event was exponentially related to system design area expressed as a percentage of the impervious catchment area. Conclusions The outcomes of this study show that pre-validation can be completed through a roadmap consisting of a series of steps; this will help in the validation of stormwater treatment systems. PMID:25955688

  1. Application of a Computer Model to Various Specifications of Fuel Injection System for DI Diesel Engines

    NASA Astrophysics Data System (ADS)

    Yamanishi, Manabu

    A combined experimental and computational investigation was performed in order to evaluate the effects of various design parameters of an in-line injection pump on the nozzle exit characteristics for DI diesel engines. Measurements of the pump chamber pressure and the delivery valve lift were included for validation, using specially designed transducers installed inside the pump. The results confirm that the simulation model is capable of predicting the pump operation for all the different designs and operating conditions investigated. Following the successful validation of this model, parametric studies were performed that allow for improved fuel injection system design.

  2. Design and validation of an open-source library of dynamic reference frames for research and education in optical tracking.

    PubMed

    Brown, Alisa; Uneri, Ali; Silva, Tharindu De; Manbachi, Amir; Siewerdsen, Jeffrey H

    2018-04-01

    Dynamic reference frames (DRFs) are a common component of modern surgical tracking systems; however, the limited number of commercially available DRFs poses a constraint in developing systems, especially for research and education. This work presents the design and validation of a large, open-source library of DRFs compatible with passive, single-face tracking systems, such as Polaris stereoscopic infrared trackers (NDI, Waterloo, Ontario). An algorithm was developed to create new DRF designs consistent with intra- and intertool design constraints and convert to computer-aided design (CAD) files suitable for three-dimensional printing. A library of 10 such groups, each with 6 to 10 DRFs, was produced and tracking performance was validated in comparison to a standard commercially available reference, including pivot calibration, fiducial registration error (FRE), and target registration error (TRE). Pivot tests showed calibration error [Formula: see text], indistinguishable from the reference. FRE was [Formula: see text], and TRE in a CT head phantom was [Formula: see text], both equivalent to the reference. The library of DRFs offers a useful resource for surgical navigation research and could be extended to other tracking systems and alternative design constraints.
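    The fiducial registration error reported above is the RMS residual after a least-squares rigid registration of measured marker positions onto the design geometry. A minimal sketch using the standard Kabsch/Horn solution; the function names and data are our own, not the paper's implementation:

```python
import numpy as np

def rigid_register(src, dst):
    """Least-squares rotation R and translation t mapping src points onto
    dst points (Kabsch/Horn method)."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)           # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cd - R @ cs
    return R, t

def fre(src, dst):
    """Fiducial registration error: RMS residual after rigid registration."""
    R, t = rigid_register(src, dst)
    res = np.asarray(dst, float) - (np.asarray(src, float) @ R.T + t)
    return float(np.sqrt((res ** 2).sum(axis=1).mean()))
```

    Target registration error is computed the same way, except that the residual is evaluated at target points not used to estimate the transform.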

  3. Base Flow Model Validation

    NASA Technical Reports Server (NTRS)

    Sinha, Neeraj; Brinckman, Kevin; Jansen, Bernard; Seiner, John

    2011-01-01

    A method was developed for obtaining propulsive base flow data in both hot and cold jet environments, at Mach numbers and altitudes relevant to NASA launcher designs. The base flow data were used to perform computational fluid dynamics (CFD) turbulence model assessments of base flow predictive capabilities, in order to provide increased confidence in the base thermal and pressure load predictions obtained from computational modeling efforts. Predictive CFD analyses were used in the design of the experiments, available propulsive models were used to reduce program costs and increase the likelihood of success, and a wind tunnel facility was employed. The data obtained allowed assessment of CFD/turbulence models in a complex flow environment within a building-block approach to validation, in which cold, non-reacting test data were first used for validation, followed by more complex reacting base flow validation.

  4. Mechanistic design concepts for conventional flexible pavements

    NASA Astrophysics Data System (ADS)

    Elliott, R. P.; Thompson, M. R.

    1985-02-01

    Mechanistic design concepts for conventional flexible pavements (asphalt concrete (AC) surface plus granular base/subbase) for highways are proposed and validated. The procedure is based on ILLI-PAVE, a stress-dependent finite element computer program, coupled with appropriate transfer functions. Two design criteria are considered: AC flexural fatigue cracking and subgrade rutting. Algorithms were developed relating pavement response parameters (stresses, strains, deflections) to AC thickness, AC moduli, granular layer thickness, and subgrade moduli. Extensive analyses of the AASHO Road Test flexible pavement data are presented supporting the validity of the proposed concepts.

  5. Conceptual Design of a Flight Validation Mission for a Hypervelocity Asteroid Intercept Vehicle

    NASA Technical Reports Server (NTRS)

    Barbee, Brent W.; Wie, Bong; Steiner, Mark; Getzandanner, Kenneth

    2013-01-01

    Near-Earth Objects (NEOs) are asteroids and comets whose orbits approach or cross Earth's orbit. NEOs have collided with our planet in the past, sometimes to devastating effect, and continue to do so today. Collisions with NEOs large enough to do significant damage to the ground are fortunately infrequent, but such events can occur at any time, and we therefore need to develop and validate the techniques and technologies necessary to prevent the Earth impact of an incoming NEO. In this paper we provide background on the hazard posed to Earth by NEOs and present the results of a recent study performed by the NASA/Goddard Space Flight Center's Mission Design Lab (MDL), in collaboration with Iowa State University's Asteroid Deflection Research Center (ADRC), to design a flight validation mission for a Hypervelocity Asteroid Intercept Vehicle (HAIV) as part of a Phase 2 NASA Innovative Advanced Concepts (NIAC) research project. The HAIV is a two-body vehicle consisting of a leading kinetic impactor and a trailing follower carrying a Nuclear Explosive Device (NED) payload. The HAIV detonates the NED inside the crater created in the NEO's surface by the lead kinetic impactor portion of the vehicle, effecting a powerful subsurface detonation to disrupt the NEO. For the flight validation mission, only a simple mass proxy for the NED is carried in the HAIV. Ongoing and future research topics are discussed following the presentation of the detailed flight validation mission design results produced in the MDL.

  6. Proposal of an Extended Taxonomy of Serious Games for Health Rehabilitation.

    PubMed

    Rego, Paula Alexandra; Moreira, Pedro Miguel; Reis, Luís Paulo

    2018-06-29

    Serious Games is a field of research that has evolved substantially, with valuable contributions to many application domains and areas. Patients often consider traditional rehabilitation approaches repetitive and boring, making it difficult for them to maintain interest and complete the treatment program. Since the publication of our first taxonomy of Serious Games for Health Rehabilitation (SGHR), many studies have been published with game prototypes in this area. Based on a literature review, our goal is to propose an updated taxonomy that takes into account the works, updates, and innovations in game criteria researched since our first publication in 2010, and to present the validation mechanism used for the proposed extended taxonomy. Drawing on that review and on an analysis of the contributions made by other researchers, we propose an extended taxonomy for SGHR. Because we found that, beyond the mechanisms associated with adopting a given taxonomy, no validation mechanisms had been reported for previous proposals, we designed one for ours: a questionnaire addressed to a sample of researchers and professionals with experience and expertise in domains of knowledge interrelated with SGHR, such as Computer Graphics, Game Design, Interaction Design, Computer Programming, and Health Rehabilitation. The extended taxonomy proposal for health rehabilitation serious games provides the research community with a tool to fully characterize serious games, and the mechanism designed for validating the taxonomy proposal is another contribution of this work.

  7. A high power ion thruster for deep space missions

    NASA Astrophysics Data System (ADS)

    Polk, James E.; Goebel, Dan M.; Snyder, John S.; Schneider, Analyn C.; Johnson, Lee K.; Sengupta, Anita

    2012-07-01

    The Nuclear Electric Xenon Ion System ion thruster was developed for potential outer planet robotic missions using nuclear electric propulsion (NEP). This engine was designed to operate at power levels ranging from 13 to 28 kW at specific impulses of 6000-8500 s and for burn times of up to 10 years. State-of-the-art performance and life assessment tools were used to design the thruster, which featured 57-cm-diameter carbon-carbon composite grids operating at voltages of 3.5-6.5 kV. Preliminary validation of the thruster performance was accomplished with a laboratory model thruster, while in parallel, a flight-like development model (DM) thruster was completed and two DM thrusters fabricated. The first thruster completed full performance testing and a 2000-h wear test. The second successfully completed vibration tests at the full protoflight levels defined for this NEP program and then passed performance validation testing. The thruster design, performance, and the experimental validation of the design tools are discussed in this paper.

  8. A high power ion thruster for deep space missions.

    PubMed

    Polk, James E; Goebel, Dan M; Snyder, John S; Schneider, Analyn C; Johnson, Lee K; Sengupta, Anita

    2012-07-01

    The Nuclear Electric Xenon Ion System ion thruster was developed for potential outer planet robotic missions using nuclear electric propulsion (NEP). This engine was designed to operate at power levels ranging from 13 to 28 kW at specific impulses of 6000-8500 s and for burn times of up to 10 years. State-of-the-art performance and life assessment tools were used to design the thruster, which featured 57-cm-diameter carbon-carbon composite grids operating at voltages of 3.5-6.5 kV. Preliminary validation of the thruster performance was accomplished with a laboratory model thruster, while in parallel, a flight-like development model (DM) thruster was completed and two DM thrusters fabricated. The first thruster completed full performance testing and a 2000-h wear test. The second successfully completed vibration tests at the full protoflight levels defined for this NEP program and then passed performance validation testing. The thruster design, performance, and the experimental validation of the design tools are discussed in this paper.

  9. Design and landing dynamic analysis of reusable landing leg for a near-space manned capsule

    NASA Astrophysics Data System (ADS)

    Yue, Shuai; Nie, Hong; Zhang, Ming; Wei, Xiaohui; Gan, Shengyong

    2018-06-01

    To improve the landing performance of a near-space manned capsule under various landing conditions, a novel landing system is designed that employs double-chamber and single-chamber dampers in the primary and auxiliary struts, respectively. A dynamic model of the landing system is established, and the damper parameters are determined using the design method. A single-leg drop test with different initial pitch angles is then conducted to compare against and validate the simulation model. Based on the validated simulation model, seven critical landing conditions for nine crucial landing responses are found by combining a radial basis function (RBF) surrogate model with the adaptive simulated annealing (ASA) optimization method. Subsequently, the adaptability of the landing system under critical landing conditions is analyzed. The results show that the simulation results match the test results well, which validates the accuracy of the dynamic model. In addition, all of the crucial responses under their corresponding critical landing conditions satisfy the design specifications, demonstrating the feasibility of the landing system.
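    The surrogate-plus-annealing search for critical landing conditions can be sketched generically. This is not the paper's RBF/ASA implementation; the Gaussian kernel, step sizes, and cooling schedule below are illustrative assumptions:

```python
import numpy as np

def fit_rbf(X, y, eps=4.0):
    """Fit a Gaussian RBF interpolant to responses y sampled at points X."""
    X = np.asarray(X, float)
    r2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    w = np.linalg.solve(np.exp(-eps * r2), np.asarray(y, float))
    def surrogate(x):
        d2 = ((X - np.asarray(x, float)) ** 2).sum(-1)
        return float(np.exp(-eps * d2) @ w)
    return surrogate

def anneal_max(f, bounds, iters=2000, seed=0):
    """Simulated-annealing search for the maximum (worst case) of f."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    x = rng.uniform(lo, hi)
    fx = f(x)
    best_x, best_f = x, fx
    for i in range(iters):
        T = max(1e-3, 1.0 - i / iters)                   # linear cooling
        cand = np.clip(x + rng.normal(0.0, 0.1 * (hi - lo)), lo, hi)
        fc = f(cand)
        if fc > fx or rng.random() < np.exp((fc - fx) / T):
            x, fx = cand, fc
            if fx > best_f:
                best_x, best_f = x, fx
    return best_x, best_f
```

    Each crucial response gets its own surrogate; the search then hunts the landing-condition combination that maximizes that response, and the resulting extremes are re-checked with the full dynamic model.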

  10. Design and Implementation Content Validity Study: Development of an instrument for measuring Patient-Centered Communication

    PubMed Central

    Zamanzadeh, Vahid; Ghahramanian, Akram; Rassouli, Maryam; Abbaszadeh, Abbas; Alavi-Majd, Hamid; Nikanfar, Ali-Reza

    2015-01-01

    Introduction: The importance of content validity in instrument psychometrics, and its relevance to reliability, have made it an essential step in instrument development. This article gives an overview of the content validity process and explains its complexity by introducing an example. Methods: We carried out a methodological study to examine the content validity of the patient-centered communication instrument through a two-step process (development and judgment). In the first step, domain determination, sampling (item generation) and instrument formation were performed; in the second step, the content validity ratio, content validity index and modified kappa statistic were computed. Suggestions of the expert panel and item impact scores were used to examine the instrument's face validity. Results: From a set of 188 items, the content validity process identified seven dimensions: trust building (eight items), informational support (seven items), emotional support (five items), problem solving (seven items), patient activation (10 items), intimacy/friendship (six items) and spirituality strengthening (14 items). The content validity study revealed that this instrument enjoys an appropriate level of content validity. The overall content validity index of the instrument using the universal agreement approach was low; however, the instrument can be advocated with respect to the high number of content experts, which makes consensus difficult, and the high value of the S-CVI with the average approach, which was equal to 0.93. Conclusion: This article illustrates acceptable quantitative indices for the content validity of a new instrument and outlines them through the design and psychometric evaluation of a patient-centered communication measuring instrument. PMID:26161370
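    The item- and scale-level indices named above have simple closed forms; a sketch following the conventions commonly attributed to Polit and Beck, with invented expert-panel counts:

```python
from math import comb

def i_cvi(relevant, n_experts):
    """Item-level CVI: share of experts rating the item relevant
    (3 or 4 on a 4-point relevance scale)."""
    return relevant / n_experts

def modified_kappa(relevant, n_experts):
    """I-CVI corrected for chance agreement: k* = (I-CVI - pc) / (1 - pc),
    with pc the binomial probability of that agreement arising by chance."""
    pc = comb(n_experts, relevant) * 0.5 ** n_experts
    return (i_cvi(relevant, n_experts) - pc) / (1 - pc)

def s_cvi_ave(relevant_per_item, n_experts):
    """Scale-level CVI, averaging approach: mean of the I-CVIs."""
    return sum(i_cvi(v, n_experts) for v in relevant_per_item) / len(relevant_per_item)

def s_cvi_ua(relevant_per_item, n_experts):
    """Scale-level CVI, universal agreement approach: share of items that
    every expert rated relevant."""
    return sum(v == n_experts for v in relevant_per_item) / len(relevant_per_item)
```

    The contrast in the abstract falls out directly: with many experts, items rarely achieve universal agreement, so S-CVI/UA drops even while S-CVI/Ave stays high (here 0.93).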

  11. The Hyper-X Flight Systems Validation Program

    NASA Technical Reports Server (NTRS)

    Redifer, Matthew; Lin, Yohan; Bessent, Courtney Amos; Barklow, Carole

    2007-01-01

    For the Hyper-X/X-43A program, the development of a comprehensive validation test plan played an integral part in the success of the mission. The goal was to demonstrate hypersonic propulsion technologies by flight testing an airframe-integrated scramjet engine. Preparation for flight involved both verification and validation testing. By definition, verification is the process of assuring that the product meets design requirements; whereas validation is the process of assuring that the design meets mission requirements for the intended environment. This report presents an overview of the program with emphasis on the validation efforts. It includes topics such as hardware-in-the-loop, failure modes and effects, aircraft-in-the-loop, plugs-out, power characterization, antenna pattern, integration, combined systems, captive carry, and flight testing. Where applicable, test results are also discussed. The report provides a brief description of the flight systems onboard the X-43A research vehicle and an introduction to the ground support equipment required to execute the validation plan. The intent is to provide validation concepts that are applicable to current, follow-on, and next generation vehicles that share the hybrid spacecraft and aircraft characteristics of the Hyper-X vehicle.

  12. ICP-MS Data Validation

    EPA Pesticide Factsheets

    Document designed to offer data reviewers guidance in determining the validity of analytical data generated through the USEPA Contract Laboratory Program Statement of Work (SOW) ISM01.X Inorganic Superfund Methods (Multi-Media, Multi-Concentration).

  13. Measuring Eating Competence: Psychometric Properties and Validity of the ecSatter Inventory

    ERIC Educational Resources Information Center

    Lohse, Barbara; Satter, Ellyn; Horacek, Tanya; Gebreselassie, Tesfayi; Oakland, Mary Jane

    2007-01-01

    Objective: Assess validity of the ecSatter Inventory (ecSI) to measure eating competence (EC). Design: Concurrent administration of ecSI with validated measures of eating behaviors using on-line and paper-pencil formats. Setting: The on-line survey was completed by 370 participants; 462 completed the paper version. Participants: Participants…

  14. The Development and Preliminary Validation of the Behavior, Environment, and Changeability Survey (BECS)

    ERIC Educational Resources Information Center

    Walsh, Jennifer R.; Hebert, Angel; Byrd-Bredbenner, Carol; Carey, Gale; Colby, Sarah; Brown-Esters, Onikia N.; Greene, Geoffrey; Hoerr, Sharon; Horacek, Tanya; Kattelmann, Kendra; Kidd, Tandalayo; Koenings, Mallory; Phillips, Beatrice; Shelnutt, Karla P.; White, Adrienne A.

    2012-01-01

    Objective: To develop and test the validity of the Behavior, Environment, and Changeability Survey (BECS) for identifying the importance and changeability of nutrition, exercise, and stress management behavior and related aspects of the environment. Design: A cross-sectional, online survey of the BECS and selected validated instruments. Setting:…

  15. Development and Initial Validation of the Performance Perfectionism Scale for Sport (PPS-S)

    ERIC Educational Resources Information Center

    Hill, Andrew P.; Appleton, Paul R.; Mallinson, Sarah H.

    2016-01-01

    Valid and reliable instruments are required to appropriately study perfectionism. With this in mind, three studies are presented that describe the development and initial validation of a new instrument designed to measure multidimensional performance perfectionism for use in sport (Performance Perfectionism Scale--Sport [PPS-S]). The instrument is…

  16. Construct and Concurrent Validity of a Prototype Questionnaire to Survey Public Attitudes toward Stuttering

    ERIC Educational Resources Information Center

    St. Louis, Kenneth O.; Reichel, Isabella K.; Yaruss, J. Scott; Lubker, Bobbie Boyd

    2009-01-01

    Purpose: Construct validity and concurrent validity were investigated in a prototype survey instrument, the "Public Opinion Survey of Human Attributes-Experimental Edition" (POSHA-E). The POSHA-E was designed to measure public attitudes toward stuttering within the context of eight other attributes, or "anchors," assumed to range from negative…

  17. Dragons and Dinosaurs: Directing Inquiry in Biology Using the Notions of "Milieu" and "Validation"

    ERIC Educational Resources Information Center

    Achiam, Marianne; Solberg, Jan; Evans, Robert

    2013-01-01

    This article describes how inquiry teaching can be directed towards specific content learning goals while allowing for student exploration and validation of hypotheses. Drawing from the Theory of Didactical Situations, the concepts of "milieu" and "validation" are illustrated through two sample biology lessons designed to engage and challenge…

  18. The Physical Education and School Sport Environment Inventory: Preliminary Validation and Reliability

    ERIC Educational Resources Information Center

    Fairclough, Stuart J.; Hilland, Toni A.; Vinson, Don; Stratton, Gareth

    2012-01-01

    The study purpose was to assess preliminary validity and reliability of the Physical Education and School Sport Environment Inventory (PESSEI), which was designed to audit physical education (PE) and school sport spaces and resources. PE teachers from eight English secondary schools completed the PESSEI. Criterion validity was assessed by…

  19. Following Phaedrus: Alternate Choices in Surmounting the Reliability/Validity Dilemma

    ERIC Educational Resources Information Center

    Slomp, David H.; Fuite, Jim

    2004-01-01

    Specialists in the field of large-scale, high-stakes writing assessment have, over the last forty years alternately discussed the issue of maximizing either reliability or validity in test design. Factors complicating the debate--such as Messick's (1989) expanded definition of validity, and the ethical implications of testing--are explored. An…

  20. Relative validity of a semiquantitative food frequency questionnaire designed for schoolchildren in western Greece

    PubMed Central

    Roumelioti, Maria; Leotsinidis, Michalis

    2009-01-01

    Background The use of food frequency questionnaires (FFQs) has become increasingly important in epidemiologic studies. During the past few decades, a wide variety of nutritional studies have used the semiquantitative FFQ as a tool for assessing and evaluating dietary intake. One of the main concerns in a dietary analysis is the validity of the collected dietary data. Methods This paper discusses several methodological and statistical issues related to the validation of a semiquantitative FFQ. This questionnaire was used to assess the nutritional habits of schoolchildren in western Greece. For validation purposes, we selected 200 schoolchildren and contacted their respective parents. We evaluated the relative validity of 400 FFQs (200 children's FFQs and 200 parents' FFQs). Results The correlations between the children's and the parents' questionnaire responses showed that the questionnaire we designed was appropriate for fulfilling the purposes of our study and for ranking subjects according to food group intake. Conclusion Our study shows that the semiquantitative FFQ provides a reasonably reliable measure of dietary intake and corroborates the relative validity of our questionnaire. PMID:19196469
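    Agreement between the children's and parents' responses in a validation study of this kind is typically quantified with a rank correlation; a self-contained sketch (the abstract does not state which coefficient was used, and the paired data here are invented):

```python
def ranks(xs):
    """1-based ranks, with ties sharing their average rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2.0 + 1.0
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den if den else 0.0
```

    A high coefficient means the questionnaire ranks subjects by food-group intake consistently across the two informants, which is the relative-validity claim the study makes.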

  1. Development and validation of a questionnaire to measure preferences and expectations of patients undergoing palliative chemotherapy: EXPECT questionnaire.

    PubMed

    Patil, V M; Chakraborty, S; Jithin, T K; Dessai, S; Sajith Babu, T P; Raghavan, V; Geetha, M; Kumar, T Shiva; Biji, M S; Bhattacharjee, A; Nair, C

    2016-01-01

    The objective was to design and validate a questionnaire for capturing preferences and expectations related to palliative chemotherapy, in a single-arm, unicentric, prospective observational study. The EXPECT questionnaire was designed to capture the preferences and expectations of patients undergoing palliative chemotherapy. The questionnaire underwent linguistic validation and was then tested in patients. Ten patients undergoing chemotherapy for solid tumors who fulfilled the inclusion and exclusion criteria self-administered the EXPECT questionnaire in their regional language, followed by the quick questionnaire-10 (QQ-10). SPSS version 16 (IBM, New York) was used for analysis. The completion rate of the EXPECT questionnaire was calculated, and its feasibility, face validity, utility and time to completion were assessed. The completion rate was 100%, and all patients completed the questionnaire within 5 min. The QQ-10 tool confirmed the feasibility, face validity and utility of the questionnaire. The EXPECT questionnaire was validated in the regional language and is an effective tool for capturing patients' preferences and expectations of chemotherapy.

  2. Practical Aspects of Designing and Conducting Validation Studies Involving Multi-study Trials.

    PubMed

    Coecke, Sandra; Bernasconi, Camilla; Bowe, Gerard; Bostroem, Ann-Charlotte; Burton, Julien; Cole, Thomas; Fortaner, Salvador; Gouliarmou, Varvara; Gray, Andrew; Griesinger, Claudius; Louhimies, Susanna; Gyves, Emilio Mendoza-de; Joossens, Elisabeth; Prinz, Maurits-Jan; Milcamps, Anne; Parissis, Nicholaos; Wilk-Zasadna, Iwona; Barroso, João; Desprez, Bertrand; Langezaal, Ingrid; Liska, Roman; Morath, Siegfried; Reina, Vittorio; Zorzoli, Chiara; Zuang, Valérie

    This chapter focuses on practical aspects of conducting prospective in vitro validation studies, and in particular, by laboratories that are members of the European Union Network of Laboratories for the Validation of Alternative Methods (EU-NETVAL) that is coordinated by the EU Reference Laboratory for Alternatives to Animal Testing (EURL ECVAM). Prospective validation studies involving EU-NETVAL, comprising a multi-study trial involving several laboratories or "test facilities", typically consist of two main steps: (1) the design of the validation study by EURL ECVAM and (2) the execution of the multi-study trial by a number of qualified laboratories within EU-NETVAL, coordinated and supported by EURL ECVAM. The approach adopted in the conduct of these validation studies adheres to the principles described in the OECD Guidance Document on the Validation and International Acceptance of new or updated test methods for Hazard Assessment No. 34 (OECD 2005). The context and scope of conducting prospective in vitro validation studies is dealt with in Chap. 4. Here we focus mainly on the processes followed to carry out a prospective validation of in vitro methods involving different laboratories with the ultimate aim of generating a dataset that can support a decision in relation to the possible development of an international test guideline (e.g. by the OECD) or the establishment of performance standards.

  3. Validating the Vocabulary Levels Test with Fourth and Fifth Graders to Identify Students At-Risk in Vocabulary Development Using a Quasiexperimental Single Group Design

    ERIC Educational Resources Information Center

    Dunn, Suzanna

    2012-01-01

    This quasiexperimental single group design study investigated the validity of the Vocabulary Levels Test (VLT) to identify fourth and fifth grade students who are at-risk in vocabulary development. The subjects of the study were 88 fourth and fifth grade students at one elementary school in Washington State. The Group Reading Assessment and…

  4. HPLC-MS/MS method for dexmedetomidine quantification with Design of Experiments approach: application to pediatric pharmacokinetic study.

    PubMed

    Szerkus, Oliwia; Struck-Lewicka, Wiktoria; Kordalewska, Marta; Bartosińska, Ewa; Bujak, Renata; Borsuk, Agnieszka; Bienert, Agnieszka; Bartkowska-Śniatkowska, Alicja; Warzybok, Justyna; Wiczling, Paweł; Nasal, Antoni; Kaliszan, Roman; Markuszewski, Michał Jan; Siluk, Danuta

    2017-02-01

    The purpose of this work was to develop and validate a rapid and robust LC-MS/MS method for the determination of dexmedetomidine (DEX) in plasma, suitable for analysis of a large number of samples. A systematic approach, Design of Experiments, was applied to optimize ESI source parameters and to evaluate method robustness; as a result, a rapid, stable and cost-effective assay was developed. The method was validated according to US FDA guidelines. The LLOQ was determined at 5 pg/ml, and the assay was linear over the examined concentration range (5-2500 pg/ml; R² > 0.98). The accuracies and the intra- and interday precisions were within 15%. The stability data confirmed reliable behavior of DEX under the tested conditions. Application of the Design of Experiments approach allowed for fast and efficient analytical method development and validation, as well as reduced usage of the chemicals necessary for routine method optimization. The proposed method was applied to the determination of DEX pharmacokinetics in pediatric patients undergoing long-term sedation in the intensive care unit.
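    The linearity figure (R² over the calibration range) comes from an ordinary least-squares fit of detector response versus concentration; a minimal sketch with invented calibration standards:

```python
def linfit(x, y):
    """Ordinary least-squares calibration line y = a*x + b."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    a = sxy / sxx
    return a, my - a * mx

def r_squared(x, y):
    """Coefficient of determination of the calibration fit."""
    a, b = linfit(x, y)
    my = sum(y) / len(y)
    ss_res = sum((yi - (a * xi + b)) ** 2 for xi, yi in zip(x, y))
    ss_tot = sum((yi - my) ** 2 for yi in y)
    return 1.0 - ss_res / ss_tot

# Hypothetical calibration standards across the 5-2500 pg/ml range
conc = [5.0, 50.0, 250.0, 1000.0, 2500.0]
area = [0.012, 0.11, 0.53, 2.1, 5.2]
```

    The fitted slope and intercept convert measured peak areas in study samples back to concentrations; R² > 0.98 over 5-2500 pg/ml is the linearity acceptance the abstract reports.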

  5. A novel integrated framework and improved methodology of computer-aided drug design.

    PubMed

    Chen, Calvin Yu-Chian

    2013-01-01

    Computer-aided drug design (CADD) is a critical initiating step of drug development, but a single model capable of covering all design aspects has yet to be established. Hence, we developed a drug design modeling framework that integrates multiple approaches, including machine-learning-based quantitative structure-activity relationship (QSAR) analysis, 3D-QSAR, Bayesian networks, pharmacophore modeling, and structure-based docking algorithms. Restrictions for each model were defined to improve individual and overall accuracy. An integration method was applied to join the results from each model to minimize bias and errors. In addition, the integrated model adopts both static and dynamic analysis to validate the intermolecular stabilities of the receptor-ligand conformation. The proposed protocol was applied to identifying HER2 inhibitors from traditional Chinese medicine (TCM) as an example to validate the new protocol. Eight potent leads were identified from six TCM sources. A joint validation system comprising comparative molecular field analysis, comparative molecular similarity indices analysis, and molecular dynamics simulation further characterized the candidates into three potential binding conformations and validated the binding stability of each protein-ligand complex. Ligand pathway analysis was also performed to predict how the ligand enters and exits the binding site. In summary, we propose a novel systematic CADD methodology for the identification, analysis, and characterization of drug-like candidates.

  6. Space Technology 5: Changing the Mission Design without Changing the Hardware

    NASA Technical Reports Server (NTRS)

    Carlisle, Candace C.; Webb, Evan H.; Slavin, James A.

    2005-01-01

    The Space Technology 5 (ST-5) Project is part of NASA's New Millennium Program. The validation objectives are to demonstrate the research-quality science capability of the ST-5 spacecraft; to operate the three spacecraft as a constellation; and to design, develop, test and flight-validate three capable micro-satellites with new technologies. A three-month flight demonstration phase is planned, beginning in March 2006. This year, the mission was re-planned for a Pegasus XL dedicated launch into an elliptical polar orbit (instead of the originally planned geosynchronous transfer orbit). The re-plan allows the mission to achieve the same high-level technology validation objectives with a different launch vehicle. The new mission design involves a revised science validation strategy, a new orbit and a different communication strategy, while minimizing changes to the ST-5 spacecraft itself. The constellation operations concepts have also been refined. While the system engineers, orbit analysts, and operations teams were re-planning the mission, the implementation team continued to make progress on the flight hardware. Most components have been delivered, and the first spacecraft is well into integration and test.

  7. Optimization and Validation of a Sensitive Method for HPLC-PDA Simultaneous Determination of Torasemide and Spironolactone in Human Plasma using Central Composite Design.

    PubMed

    Subramanian, Venkatesan; Nagappan, Kannappan; Sandeep Mannemala, Sai

    2015-01-01

    A sensitive, accurate, precise and rapid HPLC-PDA method was developed and validated for the simultaneous determination of torasemide and spironolactone in human plasma using Design of Experiments. A central composite design was used to optimize the method, with the acetonitrile content, buffer concentration and mobile-phase pH as independent variables, and the retention factor of spironolactone, the resolution between torasemide and phenobarbitone, and the retention time of phenobarbitone as dependent variables. Chromatographic separation was achieved on a Phenomenex C(18) column with a mobile phase comprising 20 mM potassium dihydrogen orthophosphate buffer (pH 3.2) and acetonitrile (82.5:17.5 v/v) pumped at a flow rate of 1.0 mL min(-1). The method was validated according to USFDA guidelines in terms of selectivity, linearity, accuracy, precision, recovery and stability. The limits of quantitation were 80 and 50 ng mL(-1) for torasemide and spironolactone, respectively. Furthermore, the sensitivity and simplicity of the method support its suitability for routine clinical studies.
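
    The optimization scheme described above can be made concrete: a central composite design in k coded factors consists of 2^k factorial corner points, 2k axial (star) points at ±α, and replicated centre points. A minimal sketch of generating such a coded design matrix; the factor names in the comment are illustrative assumptions, not details from the study:

```python
from itertools import product

def central_composite(k, alpha=1.682, n_center=1):
    """Coded central composite design for k factors: 2**k factorial
    corner points, 2*k axial points at +/-alpha, n_center center points."""
    factorial = [list(p) for p in product([-1.0, 1.0], repeat=k)]
    axial = []
    for i in range(k):
        for a in (-alpha, alpha):
            point = [0.0] * k
            point[i] = a
            axial.append(point)
    center = [[0.0] * k for _ in range(n_center)]
    return factorial + axial + center

# Three factors, e.g. %acetonitrile, buffer concentration, mobile-phase pH
design = central_composite(3)
print(len(design))  # 2**3 + 2*3 + 1 = 15 runs
```

    Each row is one chromatographic run in coded units; the coded levels are mapped back to real factor ranges before running the experiments.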

  8. Regression discontinuity was a valid design for dichotomous outcomes in three randomized trials.

    PubMed

    van Leeuwen, Nikki; Lingsma, Hester F; Mooijaart, Simon P; Nieboer, Daan; Trompet, Stella; Steyerberg, Ewout W

    2018-06-01

    Regression discontinuity (RD) is a quasi-experimental design that may provide valid estimates of treatment effects in the case of continuous outcomes. We aimed to evaluate validity and precision of the RD design for dichotomous outcomes. We performed validation studies in three large randomized controlled trials (RCTs) (Corticosteroid Randomization After Significant Head Injury [CRASH], the Global Utilization of Streptokinase and Tissue Plasminogen Activator for Occluded Coronary Arteries [GUSTO], and the PROspective Study of Pravastatin in elderly individuals at risk of vascular disease [PROSPER]). To mimic the RD design, we selected patients above and below a cutoff (e.g., age 75 years) randomized to treatment and control, respectively. Adjusted logistic regression models using restricted cubic splines (RCS) and polynomials and local logistic regression models estimated the odds ratio (OR) for treatment, with 95% confidence intervals (CIs) to indicate precision. In CRASH, treatment increased mortality with OR 1.22 [95% CI 1.06-1.40] in the RCT. The RD estimates were 1.42 (0.94-2.16) and 1.13 (0.90-1.40) with RCS adjustment and local regression, respectively. In GUSTO, treatment reduced mortality (OR 0.83 [0.72-0.95]), with more extreme estimates in the RD analysis (OR 0.57 [0.35-0.92] and 0.67 [0.51-0.86]). In PROSPER, similar RCT and RD estimates were found, again with less precision in the RD designs. We conclude that the RD design provides similar but substantially less precise treatment effect estimates compared with an RCT, with local regression being the preferred method of analysis. Copyright © 2018 Elsevier Inc. All rights reserved.
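
    The cutoff logic used to mimic the RD design can be sketched directly. The fragment below is only a schematic: it compares raw outcome rates just above and just below the cutoff within a bandwidth, rather than fitting the adjusted spline or local logistic models the study actually used, and the records are invented:

```python
def rd_effect(data, cutoff, bandwidth):
    """Crude sharp-RD sketch: difference in outcome rates between
    subjects just above the cutoff (treated) and just below (control)."""
    above = [y for x, y in data if cutoff <= x < cutoff + bandwidth]
    below = [y for x, y in data if cutoff - bandwidth <= x < cutoff]
    return sum(above) / len(above) - sum(below) / len(below)

# Invented (age, outcome) records around a cutoff of 75 years
data = [(72, 0), (73, 0), (74, 1), (76, 0), (77, 1), (78, 1)]
print(round(rd_effect(data, cutoff=75, bandwidth=5), 3))  # 0.333
```

    In the real analysis the outcome model is adjusted for the running variable on both sides of the cutoff, which is what gives RD its validity near the threshold.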

  9. A comprehensive scoring system to measure healthy community design in land use plans and regulations.

    PubMed

    Maiden, Kristin M; Kaplan, Marina; Walling, Lee Ann; Miller, Patricia P; Crist, Gina

    2017-02-01

    Comprehensive land use plans and their corresponding regulations play a role in determining the nature of the built environment and community design, factors that influence population health and health disparities. To determine the extent to which a plan addresses healthy living and active design, a systematic, reliable and valid method of analyzing and scoring health-related content in plans and regulations is needed. This paper describes the development and validation of a scoring tool designed to measure the strength and comprehensiveness of health-related content found in land use plans and the corresponding regulations. The measures are scored based on the presence of a specific item and the specificity and action-orientation of language. To establish reliability and validity, 42 land use plans and regulations from across the United States were scored January-April 2016. Results of the psychometric analysis indicate the scorecard is a reliable scoring tool for land use plans and regulations related to healthy living and active design. Intraclass correlation coefficient (ICC) scores showed strong inter-rater reliability for total strength and comprehensiveness. ICC scores for total implementation scores showed acceptable consistency among scorers. Cronbach's alpha values for all focus areas were acceptable. Strong content validity was established through a committee vetting process. The development of this tool has far-reaching implications, bringing standardization of measurement to the field of land use plan assessment, and paving the way for systematic inclusion of health-related design principles, policies, and requirements in land use plans and their corresponding regulations. Copyright © 2016 Elsevier Inc. All rights reserved.

  10. Verification and Validation of Digitally Upgraded Control Rooms

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Boring, Ronald; Lau, Nathan

    2015-09-01

    As nuclear power plants undertake main control room modernization, a challenge is the lack of a clearly defined human factors process to follow. Verification and validation (V&V) as applied in the nuclear power community has tended to involve efforts such as integrated system validation, which comes at the tail end of the design stage. To fill in guidance gaps and create a step-by-step process for control room modernization, we have developed the Guideline for Operational Nuclear Usability and Knowledge Elicitation (GONUKE). This approach builds on best practices in the software industry, which prescribe an iterative user-centered approach featuring multiple cycles of design and evaluation. Nuclear regulatory guidance for control room design emphasizes summative evaluation—which occurs after the design is complete. In the GONUKE approach, evaluation is also performed at the formative stage of design—early in the design cycle using mockups and prototypes for evaluation. The evaluation may involve expert review (e.g., software heuristic evaluation at the formative stage and design verification against human factors standards like NUREG-0700 at the summative stage). The evaluation may also involve user testing (e.g., usability testing at the formative stage and integrated system validation at the summative stage). An additional, often overlooked component of evaluation is knowledge elicitation, which captures operator insights into the system. In this report we outline these evaluation types across design phases that support the overall modernization process. The objective is to provide industry-suitable guidance for steps to be taken in support of the design and evaluation of a new human-machine interface (HMI) in the control room. We suggest the value of early-stage V&V and highlight how this early-stage V&V can help improve the design process for control room modernization. We argue that there is a need to overcome two shortcomings of V&V in current practice—the propensity for late-stage V&V and the use of increasingly complex psychological assessment measures for V&V.

  11. Ensuring the Quality of Evidence: Using the Best Design to Answer Health IT Questions.

    PubMed

    Weir, Charlene R

    2016-01-01

    The quality of logic in a research design determines the value of the results and our confidence regarding the validity of the findings. The purpose of this contribution is to review the principles of research design as they apply to research and evaluation in health IT. We review the architecture of research design, the definitions of cause, sources of bias and confounds, and the importance of measurement as related to the various types of health IT questions. The goal is to provide practitioners with a roadmap for making decisions for their own specific study. The contribution is organized around the Threats to Validity taxonomy and explains how different design models address these threats through the use of blocking, factorial design, control groups and time series analysis. The contribution discusses randomized experiments, regression discontinuity designs and various quasi-experimental designs, with a special emphasis on how to improve pre/post designs. At the end, general recommendations are provided for improving weaker designs and general research procedures.

  12. ICP-AES Data Validation

    EPA Pesticide Factsheets

    Document designed to offer data reviewers guidance in determining the validity of analytical data generated through the USEPA Contract Laboratory Program (CLP) Statement of Work (SOW) ISM01.X Inorganic Superfund Methods (Multi-Media, Multi-Concentration)

  13. Trace Volatile Data Validation

    EPA Pesticide Factsheets

    Document designed to offer data reviewers guidance in determining the validity of analytical data generated through the USEPA Contract Laboratory Program (CLP) Statement of Work (SOW) ISM01.X Inorganic Superfund Methods (Multi-Media, Multi-Concentration)

  14. Characterizing problematic hypoglycaemia: iterative design and preliminary psychometric validation of the Hypoglycaemia Awareness Questionnaire (HypoA-Q).

    PubMed

    Speight, J; Barendse, S M; Singh, H; Little, S A; Inkster, B; Frier, B M; Heller, S R; Rutter, M K; Shaw, J A M

    2016-03-01

    To design and conduct preliminary validation of a measure of hypoglycaemia awareness and problematic hypoglycaemia, the Hypoglycaemia Awareness Questionnaire. Exploratory and cognitive debriefing interviews were conducted with 17 adults (nine of whom were women) with Type 1 diabetes (mean ± sd age 48 ± 10 years). Questionnaire items were modified in consultation with diabetologists/psychologists. Psychometric validation was undertaken using data from 120 adults (53 women) with Type 1 diabetes (mean ± sd age 44 ± 16 years; 50% with clinically diagnosed impaired awareness of hypoglycaemia), who completed the following questionnaires: the Hypoglycaemia Awareness Questionnaire, the Gold score, the Clarke questionnaire and the Problem Areas in Diabetes questionnaire. Iterative design resulted in 33 items eliciting responses about awareness of hypoglycaemia when awake/asleep and hypoglycaemia frequency, severity and impact (healthcare utilization). Psychometric analysis identified three subscales reflecting 'impaired awareness', 'symptom level' and 'symptom frequency'. Convergent validity was indicated by strong correlations between the 'impaired awareness' subscale and existing measures of awareness (Gold: rs = 0.75, P < 0.01; Clarke: rs = 0.76, P < 0.01). Divergent validity was indicated by weaker correlations with diabetes-related distress (Problem Areas in Diabetes: rs = 0.25, P < 0.01) and HbA1c (rs = -0.05, non-significant). The 'impaired awareness' subscale and other items discriminated between those with impaired and intact awareness (Gold score). The 'impaired awareness' subscale and other items contributed significantly to models explaining the occurrence of severe hypoglycaemia and hypoglycaemia when asleep. This preliminary validation shows the Hypoglycaemia Awareness Questionnaire has robust face and content validity; satisfactory structure; internal reliability; and convergent, divergent and known-groups validity. The impaired awareness subscale and other items contribute significantly to models explaining recall of severe and nocturnal hypoglycaemia. Prospective validation, including determination of a threshold to identify impaired awareness, is now warranted. © 2015 The Authors. Diabetic Medicine © 2015 Diabetes UK.

  15. Using wound care algorithms: a content validation study.

    PubMed

    Beitz, J M; van Rijswijk, L

    1999-09-01

    Valid and reliable heuristic devices facilitating optimal wound care are lacking. The objectives of this study were to establish content validation data for a set of wound care algorithms, to identify their associated strengths and weaknesses, and to gain insight into the wound care decision-making process. Forty-four registered nurse wound care experts were surveyed and interviewed at national and regional educational meetings. Using a cross-sectional study design and an 83-item, 4-point Likert-type scale, this purposive sample was asked to quantify the degree of validity of the algorithms' decisions and components. Participants' comments were tape-recorded and transcribed, and themes were derived. On a scale of 1 to 4, the mean score of the entire instrument was 3.47 (SD 0.87), the instrument's Content Validity Index was 0.86, and the individual Content Validity Index of 34 of 44 participants was > 0.8. Item scores were lower for those related to packing deep wounds (P < .001). No other significant differences were observed. Qualitative data analysis revealed themes of difficulty associated with wound assessment and care issues, that is, the absence of valid and reliable definitions. The wound care algorithms studied proved valid. However, the lack of valid and reliable wound assessment and care definitions hinders optimal use of these instruments. Further research documenting their clinical use is warranted. Research-based practice recommendations should direct the development of future valid and reliable algorithms designed to help nurses provide optimal wound care.
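
    The Content Validity Index reported here is conventionally computed as the proportion of experts rating an item 3 or 4 on the 4-point scale, averaged across items for a scale-level value. A minimal sketch under that conventional definition; the ratings are invented for illustration, not data from the study:

```python
def item_cvi(ratings):
    """Item-level CVI: fraction of expert ratings of 3 or 4 on a 1-4 scale."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def scale_cvi(items):
    """Scale-level CVI: mean of the item-level CVIs."""
    return sum(item_cvi(r) for r in items) / len(items)

# Hypothetical ratings from five experts on three items
items = [[4, 4, 3, 3, 2], [4, 3, 3, 4, 4], [2, 2, 4, 3, 3]]
print(round(scale_cvi(items), 2))  # 0.8
```

    A scale-level CVI of 0.8 or higher is the threshold commonly cited as acceptable, which is why the study reports how many participants exceeded 0.8.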

  16. [Design and validation of a questionnaire for psychosocial nursing diagnosis in Primary Care].

    PubMed

    Brito-Brito, Pedro Ruymán; Rodríguez-Álvarez, Cristobalina; Sierra-López, Antonio; Rodríguez-Gómez, José Ángel; Aguirre-Jaime, Armando

    2012-01-01

    To develop a valid, reliable and easy-to-use questionnaire for psychosocial nursing diagnosis. The study was performed in two phases: first, questionnaire design and construction; second, validity and reliability testing. A bank of items was constructed using the NANDA classification as a theoretical framework. Each item was assigned a Likert-scale or dichotomous response. Combinations of responses to the items constituted the diagnostic rules for assigning up to 28 labels. A group of experts carried out the content validity test. Other validated scales were used as reference standards for the criterion validity tests. Forty-five nurses administered the questionnaire on three separate occasions over a period of three weeks, and the other validated scales once, to 188 randomly selected patients in Primary Care centres in Tenerife (Spain). Construct validity tests confirmed the six dimensions of the questionnaire, with 91% of total variance explained. Criterion validity tests showed a specificity of 66%-100%, and high correlations with the reference scales when the questionnaire was assigning nursing diagnoses. Reliability tests showed agreement of 56%-91% (P<.001) and 93% internal consistency. The Questionnaire for Psychosocial Nursing Diagnosis was named CdePS and includes 61 items. The CdePS is a valid, reliable and easy-to-use tool for improving the assignment of psychosocial nursing diagnoses in Primary Care centres. Copyright © 2011 Elsevier España, S.L. All rights reserved.
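
    The mechanism described, where a combination of item responses triggers a diagnostic label, can be sketched as a simple rule table. The item identifiers, thresholds and labels below are hypothetical illustrations, not the actual CdePS rules:

```python
# Hypothetical diagnostic rules: each label fires when its item
# conditions are met (item ids and cutoffs invented for illustration).
RULES = {
    "anxiety": lambda a: a["item_3"] >= 3 and a["item_7"],
    "social_isolation": lambda a: a["item_12"] >= 4,
}

def assign_labels(answers):
    """Return every diagnostic label whose rule matches the answers."""
    return [label for label, rule in RULES.items() if rule(answers)]

# One respondent: Likert items as integers, dichotomous items as booleans
answers = {"item_3": 4, "item_7": True, "item_12": 2}
print(assign_labels(answers))  # ['anxiety']
```

    Encoding the rules as data rather than scattered conditionals makes it straightforward to audit each label against the reference scales, as the criterion validity tests require.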

  17. System-Level Experimental Validations for Supersonic Commercial Transport Aircraft Entering Service in the 2018-2020 Time Period

    NASA Technical Reports Server (NTRS)

    Magee, Todd E.; Fugal, Spencer R.; Fink, Lawrence E.; Adamson, Eric E.; Shaw, Stephen G.

    2015-01-01

    This report describes the work conducted under NASA funding for the Boeing N+2 Supersonic Experimental Validation project to experimentally validate the conceptual design of a supersonic airliner feasible for entry into service in the 2018-to-2020 timeframe (NASA N+2 generation). The primary goal of the project was to develop a low-boom configuration optimized for minimum sonic boom signature (65 to 70 PLdB). This was a very aggressive goal that could be achieved only through integrated multidisciplinary optimization tools validated in relevant ground and, later, flight environments. The project was split into two phases. Phase I of the project covered the detailed aerodynamic design of a low-boom airliner as well as the wind tunnel tests to validate that design (ref. 1). This report covers Phase II of the project, which continued the design methodology development of Phase I with a focus on the propulsion integration aspects as well as the testing involved to validate those designs. One of the major airplane configuration features of the Boeing N+2 low-boom design was the overwing nacelle. The location of the nacelle allowed for a minimal effect on the boom signature; however, it added a level of difficulty to designing an inlet with acceptable performance in the overwing flow field. Using the Phase I work as the starting point, the goals of the Phase II project were to design and verify inlet performance while maintaining a low-boom signature. The Phase II project was successful in meeting all contract objectives. New modular nacelles were built for the larger Performance Model along with a propulsion rig with an electrically-actuated mass flow plug. Two new mounting struts were built for the smaller Boom Model, along with new nacelles. Propulsion integration testing was performed using an instrumented fan face and a mass flow plug, while boom signatures were measured using a wall-mounted pressure rail.
A side study of testing in different wind tunnels was completed as a precursor to the selection of the facilities used for validation testing. As facility schedules allowed, the propulsion testing was done at the NASA Glenn Research Center (GRC) 8 x 6-Foot wind tunnel, while boom and force testing was done at the NASA Ames Research Center (ARC) 9 x 7-Foot wind tunnel. During boom testing, a live balance was used for gathering force data. This report is broken down into nine sections. The first technical section (Section 2) covers the general scope of the Phase II activities, goals, a description of the design and testing efforts, and the project plan and schedule. Section 3 covers the details of the propulsion system concepts and design evolution. A series of short tests to evaluate the suitability of different wind tunnels for boom, propulsion, and force testing was also performed under the Phase 2 effort, with the results covered in Section 4. The propulsion integration testing is covered in Section 5 and the boom and force testing in Section 6. CFD comparisons and analyses are included in Section 7. Section 8 includes the conclusions and lessons learned.

  18. Validation Methods Research for Fault-Tolerant Avionics and Control Systems: Working Group Meeting, 2

    NASA Technical Reports Server (NTRS)

    Gault, J. W. (Editor); Trivedi, K. S. (Editor); Clary, J. B. (Editor)

    1980-01-01

    The validation process comprises the activities required to ensure the agreement of system realization with system specification. A preliminary validation methodology for fault-tolerant systems is documented. A general framework for a validation methodology is presented along with a set of specific tasks intended for the validation of two specimen systems, SIFT and FTMP. Two major areas of research are identified: first, the activities required to support the ongoing development of the validation process itself; second, the activities required to support the design, development, and understanding of fault-tolerant systems.

  19. The reliability and validity of three questionnaires: The Student Satisfaction and Self-Confidence in Learning Scale, Simulation Design Scale, and Educational Practices Questionnaire.

    PubMed

    Unver, Vesile; Basak, Tulay; Watts, Penni; Gaioso, Vanessa; Moss, Jacqueline; Tastan, Sevinc; Iyigun, Emine; Tosun, Nuran

    2017-02-01

    The purpose of this study was to adapt the "Student Satisfaction and Self-Confidence in Learning Scale" (SCLS), "Simulation Design Scale" (SDS), and "Educational Practices Questionnaire" (EPQ) developed by Jeffries and Rizzolo into Turkish and to establish the reliability and validity of the translated scales. A sample of 87 nursing students participated in this study. The scales were cross-culturally adapted through a process including translation, comparison with the original versions, back-translation, and pretesting. Construct validity was evaluated by factor analysis, and criterion validity was evaluated using the Perceived Learning Scale, Patient Intervention Self-confidence/Competency Scale, and Educational Belief Scale. Cronbach's alpha values were 0.77-0.85 for the SCLS, 0.73-0.86 for the SDS, and 0.61-0.86 for the EPQ. The results of this study show that the Turkish versions of all three scales are valid and reliable measurement tools.
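
    The internal-consistency figures quoted here are Cronbach's alpha, α = k/(k−1) · (1 − Σ item variances / total-score variance). A minimal sketch that reproduces the computation from raw item scores; the sample responses are invented for illustration:

```python
from statistics import pvariance

def cronbach_alpha(scores):
    """scores: one list of k item scores per respondent."""
    k = len(scores[0])
    items = list(zip(*scores))                       # per-item columns
    item_var = sum(pvariance(col) for col in items)  # sum of item variances
    total_var = pvariance([sum(row) for row in scores])
    return k / (k - 1) * (1 - item_var / total_var)

# Four hypothetical respondents answering three items on a 1-5 scale
responses = [[4, 5, 4], [3, 4, 3], [5, 5, 5], [2, 3, 2]]
print(round(cronbach_alpha(responses), 2))  # 0.98
```

    Values around 0.7 or above, like most of those reported for the translated scales, are conventionally read as acceptable internal consistency.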

  20. An observational examination of the literature in diagnostic anatomic pathology.

    PubMed

    Foucar, Elliott; Wick, Mark R

    2005-05-01

    Original research published in the medical literature confronts the reader with three very basic and closely linked questions--are the authors' conclusions true in the contextual setting in which the work was performed (internally valid); if so, are the conclusions also applicable in other practice settings (externally valid); and, if the conclusions of the study are bona fide, do they represent an important contribution to medical practice or are they true-but-insignificant? Most publications attempt to convince readers that the researchers' conclusions are both internally valid and important, and occasionally papers also directly address external validity. Developing standardized methods to facilitate the prospective determination of research importance would be useful to both journals and their readers, but has proven difficult. In contrast, the evidence-based medicine (EBM) movement has had more success with understanding and codifying factors thought to promote research validity. Of the many variables that can influence research validity, research design is the one that has received the most attention. The present paper reviews the contributions of EBM to understanding research validity, looking for areas where EBM's body of knowledge is applicable to the anatomic pathology (AP) literature. As part of this project, the authors performed a pilot observational analysis of a representative sample of the current pertinent literature on diagnostic tissue pathology. The results of that review showed that most of the latter publications employ one of the four categories of "observational" research design that have been delineated by the EBM movement, and that the most common of these observational designs is a "cross-sectional" comparison. Pathologists do not presently use the "experimental" research designs so admired by advocates of EBM. Slightly > 50% of AP observational studies employed statistical evaluations to support their final conclusions. 
Comparison of the current AP literature with a selected group of papers published in 1977 shows a discernible change over that period that has affected not just technological procedures, but also research design and use of statistics. Although we feel that advocates of EBM deserve credit for bringing attention to the close link between research design and research validity, much of the EBM effort has centered on refining "experimental" methodology, and the complexities of observational research have often been treated in an inappropriately dismissive manner. For advocates of EBM, an observational study is what you are relegated to as a second choice when you are unable to do an experimental study. The latter viewpoint may be true for evaluating new chemotherapeutic agents, but is unacceptable to pathologists, whose research advances are currently completely dependent on well-conducted observational research. Rather than succumb to randomization envy and accept EBM's assertion that observational research is second best, the challenge to AP is to develop and adhere to standards for observational research that will allow our patients to benefit from the full potential of this time-tested approach to developing valid insights into disease.

  1. Mercury and Cyanide Data Validation

    EPA Pesticide Factsheets

    Document designed to offer data reviewers guidance in determining the validity of analytical data generated through the USEPA Contract Laboratory Program (CLP) Statement of Work (SOW) ISM01.X Inorganic Superfund Methods (Multi-Media, Multi-Concentration)

  2. School Age Populations Research Needs - NCS Dietary Assessment Literature Review

    Cancer.gov

    Drawing conclusions about the validity of available dietary assessment instruments in school age children is hampered by the differences in instruments, research design, reference methods, and populations in the validation literature.

  3. Region 9 Superfund Data Evaluation/Validation Guide

    EPA Pesticide Factsheets

    This guidance document is designed by the EPA Region 9 Quality Assurance Office to provide assistance to project officers, Superfund contractors, and Superfund grantees in performing timely data evaluation and/or validation of laboratory data.

  4. 38 CFR 1.15 - Standards for program evaluation.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... program operates. (3) Validity. The degree of statistical validity should be assessed within the research... decisions. (4) Reliability. Use of the same research design by others should yield the same findings. (g...

  5. Low/Medium Volatile Data Validation

    EPA Pesticide Factsheets

    Document designed to offer data reviewers guidance in determining the validity of analytical data generated through the US EPA Contract Laboratory Program Statement of Work ISM01.X Inorganic Superfund Methods (Multi-Media, Multi-Concentration)

  6. [Design and Validation of a Questionnaire on Vaccination in Students of Health Sciences, Spain].

    PubMed

    Fernández-Prada, María; Ramos-Martín, Pedro; Madroñal-Menéndez, Jaime; Martínez-Ortega, Carmen; González-Cabrera, Joaquín

    2016-11-07

    Immunization rates among medicine and nursing students, and among health professionals in general, during hospital training are low, and the causes of these low rates need to be investigated. The objective of this study was to design and validate a questionnaire exploring the attitudes and behaviours of medicine and nursing students toward immunization against vaccine-preventable diseases. An instrument validation study. The sample included 646 nursing and medicine students at the University of Oviedo, Spain, selected by non-random sampling. After the content validation process, a 24-item questionnaire was designed to assess attitudes and behaviours/behavioural intentions. Reliability (ordinal alpha), internal validity (exploratory factor analysis with parallel analysis), ANOVA and mediational model tests were performed. Exploratory factor analysis yielded two factors which accounted for 48.8% of total variance. Ordinal alpha for the total score was 0.92. Differences were observed across academic years in the dimensions of attitudes (F(5,447) = 3.728) and knowledge (F(5,448) = 65.59), but not in behaviours/behavioural intentions (F(5,461) = 1.680). Attitudes were shown to mediate the relationship between knowledge and behaviours/behavioural intentions (indirect effect B = 0.15; SD = 0.3; 95% CI: 0.09-0.19). We developed a questionnaire with sufficient evidence of reliability and internal validity. Scores on attitudes and knowledge increase with the academic year. Attitudes act as a mediating variable between knowledge and behaviours/behavioural intentions.

  7. Development of Servo Motor Trainer for Basic Control System in Laboratory of Electrical Engineering Control System Faculty of Engineering Universitas Negeri Surabaya

    NASA Astrophysics Data System (ADS)

    Endryansyah; Wanarti Rusimamto, Puput; Ridianto, Adam; Sugiarto, Hariyadi

    2018-04-01

    The Department of Electrical Engineering, FT Unesa, offers three study programs: S1 Electrical Engineering Education, S1 Electrical Engineering, and D3 Electrical Engineering. The Basic Control Systems course is part of the curriculum of all three programs. The course's lecturer team pursued a learning innovation focused on developing a trainer for student practicum in the control systems laboratory. The trainer developed is a servo motor unit accompanied by a lab module containing a range of servo motor theory and a practicum guide. This is development research using the Research & Development (R&D) method, applied in the following steps: identify the potential and existing problems; gather information and study the literature; design the product; validate the design; revise the design; and conduct a limited trial. Validation of the learning materials, in the form of the module and trainer, gave the following results: the learning device scored 3.64; the Servo Motor lab module scored 3.47; and the student response questionnaire scored 3.73. All validation values lie in the interval from 3.25 to 4, corresponding to the category "Very Valid", so it can be concluded that all instruments have a validity level of "Very Valid" and are suitable for use in further learning.

  8. Experimental Validation of an Integrated Controls-Structures Design Methodology

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliot, Kenny B.; Walz, Joseph E.

    1996-01-01

    The first experimental validation of an integrated controls-structures design methodology for a class of large-order, flexible space structures is described. Integrated redesign of the controls-structures-interaction evolutionary model, a laboratory testbed at NASA Langley, was described earlier. The redesigned structure was fabricated, assembled in the laboratory, and experimentally tested against the original structure. Experimental results indicate that the structure redesigned using the integrated design methodology requires significantly less average control power than the nominal structure with control-optimized designs, while maintaining the required line-of-sight pointing performance. Thus, the superiority of the integrated design methodology over the conventional design approach is experimentally demonstrated. Furthermore, amenability of the integrated design structure to other control strategies is evaluated, both analytically and experimentally. Using Linear-Quadratic-Gaussian optimal dissipative controllers, it is observed that the redesigned structure leads to significantly improved performance with alternate controllers as well.

  9. Implementation and Initial Validation of the APS English Test [and] The APS English-Writing Test at Golden West College: Evidence for Predictive Validity.

    ERIC Educational Resources Information Center

    Isonio, Steven

    In May 1991, Golden West College (California) conducted a validation study of the English portion of the Assessment and Placement Services for Community Colleges (APS), followed by a predictive validity study in July 1991. The initial study was designed to aid in the implementation of the new test at GWC by comparing data on APS use at other…

  10. Novel Composites for Wing and Fuselage Applications. Task 1; Novel Wing Design Concepts

    NASA Technical Reports Server (NTRS)

    Suarez, J. A.; Buttitta, C.; Flanagan, G.; DeSilva, T.; Egensteiner, W.; Bruno, J.; Mahon, J.; Rutkowski, C.; Collins, R.; Fidnarick, R.; hide

    1996-01-01

    Design trade studies were conducted to arrive at advanced wing designs that integrated new material forms with innovative structural concepts and cost-effective fabrication methods. A representative spar was selected for design, fabrication, and test to validate the predicted performance. Textile processes, such as knitting, weaving and stitching, were used to produce fiber preforms that were later fabricated into composite spars through epoxy Resin Transfer Molding (RTM), Resin Film Infusion (RFI), and consolidation of commingled thermoplastic and graphite tows. The target design ultimate strain level for these innovative structural design concepts was 6000 mu in. per in. The spars were subjected to four-point beam bending to validate their structural performance. The various material form/processing combination Y-spars were rated for their structural efficiency and acquisition cost. The acquisition cost elements were material, tooling, and labor.

  11. Advanced Concept Studies for Supersonic Commercial Transports Entering Service in the 2018 to 2020 Period

    NASA Technical Reports Server (NTRS)

    Morgenstern, John; Norstrud, Nicole; Sokhey, Jack; Martens, Steve; Alonso, Juan J.

    2013-01-01

    Lockheed Martin Aeronautics Company (LM), working in conjunction with General Electric Global Research (GE GR), Rolls-Royce Liberty Works (RRLW), and Stanford University, herein presents results from the "N+2 Supersonic Validations" contract's initial 22-month phase, addressing the NASA solicitation "Advanced Concept Studies for Supersonic Commercial Transports Entering Service in the 2018 to 2020 Period." This report version adds documentation of an additional three-month low boom test task. The key technical objective of this effort was to validate integrated airframe and propulsion technologies and design methodologies capable of producing a viable supersonic vehicle design with acceptable environmental and performance characteristics. Supersonic testing of both airframe and propulsion technologies (including LM3: 97-023 low boom testing and April-June nozzle acoustic testing) verified LM's supersonic low-boom design methodologies and both GE's and RRLW's nozzle technologies for future implementation. The N+2 program is aligned with NASA's Supersonic Project and is focused on providing system-level solutions capable of overcoming the environmental and performance/efficiency barriers to practical supersonic flight. NASA proposed "Initial Environmental Targets and Performance Goals for Future Supersonic Civil Aircraft." The LM N+2 studies build upon LM's prior N+3 100-passenger design studies. The LM N+2 program addresses low boom design and methodology validations with wind tunnel testing, performance and efficiency goals with system-level analysis, and low noise validations with two nozzle (GE and RRLW) acoustic tests.

  12. A Surrogate Approach to the Experimental Optimization of Multielement Airfoils

    NASA Technical Reports Server (NTRS)

    Otto, John C.; Landman, Drew; Patera, Anthony T.

    1996-01-01

    The incorporation of experimental test data into the optimization process is accomplished through the use of Bayesian-validated surrogates. In the surrogate approach, a surrogate for the experiment (e.g., a response surface) serves in the optimization process. The validation step of the framework provides a qualitative assessment of the surrogate quality and bounds the surrogate-for-experiment error on designs "near" surrogate-predicted optimal designs. The utility of the framework is demonstrated through its application to the experimental selection of the trailing edge flap position to achieve a design lift coefficient for a three-element airfoil.
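
    The surrogate idea can be illustrated with a toy one-dimensional version: fit a cheap model (here an exact quadratic through three experimental samples, a minimal response surface) and optimize the surrogate instead of rerunning the experiment. The sample values are hypothetical, not the airfoil data, and the Bayesian validation step is omitted:

```python
def quadratic_surrogate(x, y):
    """Fit y = a*x^2 + b*x + c exactly through three (x, y) samples
    using divided differences; returns the coefficients (a, b, c)."""
    (x0, x1, x2), (y0, y1, y2) = x, y
    d1 = (y1 - y0) / (x1 - x0)
    d2 = (y2 - y1) / (x2 - x1)
    a = (d2 - d1) / (x2 - x0)
    b = d1 - a * (x0 + x1)
    c = y0 - a * x0 ** 2 - b * x0
    return a, b, c

def surrogate_optimum(a, b):
    # Vertex of the fitted parabola: the surrogate-predicted optimal design
    return -b / (2 * a)

# Hypothetical samples drawn from y = -(x - 2)^2 + 5
a, b, c = quadratic_surrogate((0.0, 1.0, 3.0), (1.0, 4.0, 4.0))
print(surrogate_optimum(a, b))  # 2.0
```

    The real framework then checks, near this predicted optimum, how far the surrogate can deviate from the experiment before trusting the design.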

  13. A Valid and Reliable Instrument for Cognitive Complexity Rating Assignment of Chemistry Exam Items

    ERIC Educational Resources Information Center

    Knaus, Karen; Murphy, Kristen; Blecking, Anja; Holme, Thomas

    2011-01-01

    The design and use of a valid and reliable instrument for the assignment of cognitive complexity ratings to chemistry exam items is described in this paper. Use of such an instrument provides a simple method to quantify the cognitive demands of chemistry exam items. Instrument validity was established in two different ways: statistically…

  14. A Proposal on the Validation Model of Equivalence between PBLT and CBLT

    ERIC Educational Resources Information Center

    Chen, Huilin

    2014-01-01

    The validity of the computer-based language test is possibly affected by three factors: computer familiarity, audio-visual cognitive competence, and other discrepancies in construct. Therefore, validating the equivalence between the paper-and-pencil language test and the computer-based language test is a key step in the procedure of designing a…

  15. Validation Study of a Gatekeeping Attitude Index for Social Work Education

    ERIC Educational Resources Information Center

    Tam, Dora M. Y.; Coleman, Heather

    2011-01-01

    This article reports on a study designed to validate the Gatekeeping Attitude Index, a 14-item Likert scaling index. The authors collected data from a convenience sample of social work field instructors (N = 188) with a response rate of 74.0%. Construct validation by exploratory factor analysis identified a 2-factor solution on the index after…

  16. Validating the CDIO Syllabus for Engineering Education Using the Taxonomy of Engineering Competencies

    ERIC Educational Resources Information Center

    Woollacott, L. C.

    2009-01-01

    The CDIO (Conceive-Design-Implement-Operate) syllabus is the most detailed statement on the goals of engineering education currently found in the literature. This paper presents an in-depth validation exercise of the CDIO syllabus using the taxonomy of engineering competencies as a validating instrument. The study explains the attributes that make…

  17. An Evaluation of the Validity and Reliability of a Food Behavior Checklist Modified for Children

    ERIC Educational Resources Information Center

    Branscum, Paul; Sharma, Manoj; Kaye, Gail; Succop, Paul

    2010-01-01

    Objective: The objective of this study was to report the construct validity and internal consistency reliability of the Food Behavior Checklist modified for children (FBC-MC), with low-income, Youth Expanded Food and Nutrition Education Program (EFNEP)-eligible children. Methods: Using a cross-sectional research design, construct validity was…

  18. The Third Round of the Czech Validation of the Motivated Strategies for Learning Questionnaire (MSLQ)

    ERIC Educational Resources Information Center

    Vaculíková, Jitka

    2016-01-01

    The authors present findings on the third round of the Czech validation of the Motivated Strategies for Learning Questionnaire (MSLQ), originally developed by Pintrich et al. (1991). The validation covered only the area designed to assess motivation in self-regulated learning. Data was collected from a sample of university students in regular…

  19. The Contribution of Rubrics to the Validity of Performance Assessment: A Study of the Conservation-Restoration and Design Undergraduate Degrees

    ERIC Educational Resources Information Center

    Menéndez-Varela, José-Luis; Gregori-Giralt, Eva

    2016-01-01

    Rubrics have attained considerable importance in the authentic and sustainable assessment paradigm; nevertheless, few studies have examined their contribution to validity, especially outside the domain of educational studies. This empirical study used a quantitative approach to analyse the validity of a rubrics-based performance assessment. Raters…

  20. Validation of the Electromagnetic Code FACETS for Numerical Simulation of Radar Target Images

    DTIC Science & Technology

    2009-12-01

    Validation of the electromagnetic code FACETS for numerical simulation of radar target images, S. Wong, DRDC Ottawa. Validation of the code for simulating radar images of a target is obtained through direct simulation-to-measurement comparisons. A 3-dimensional computer-aided design…

  1. Validation, Edits, and Application Processing System Report: Phase I.

    ERIC Educational Resources Information Center

    Gray, Susan; And Others

    Findings of phase 1 of a study of the 1979-1980 Basic Educational Opportunity Grants validation, edits, and application processing system are presented. The study was designed to: assess the impact of the validation effort and processing system edits on the correct award of Basic Grants; and assess the characteristics of students most likely to…

  2. The Cerebral Palsy Quality of Life for Children (CP QOL-Child): Evidence of Construct Validity

    ERIC Educational Resources Information Center

    Chen, Kuan-Lin; Wang, Hui-Yi; Tseng, Mei-Hui; Shieh, Jeng-Yi; Lu, Lu; Yao, Kai-Ping Grace; Huang, Chien-Yu

    2013-01-01

    The Cerebral Palsy Quality of Life for Children (CP QOL-Child) is the first health condition-specific questionnaire designed for measuring QOL in children with cerebral palsy (CP). However, its construct validity has not yet been confirmed by confirmatory factor analysis (CFA). Hence, this study assessed the construct validity of the caregiver…

  3. Exploring the Reliability and Validity of the Social-Moral Awareness Test

    ERIC Educational Resources Information Center

    Livesey, Alexandra; Dodd, Karen; Pote, Helen; Marlow, Elizabeth

    2012-01-01

    Background: The aim of the study was to explore the validity of the social-moral awareness test (SMAT) a measure designed for assessing socio-moral rule knowledge and reasoning in people with learning disabilities. Comparisons between Theory of Mind and socio-moral reasoning allowed the exploration of construct validity of the tool. Factor…

  4. Evidence of Construct Validity in Published Achievement Tests.

    ERIC Educational Resources Information Center

    Nolet, Victor; Tindal, Gerald

    Valid interpretation of test scores is the shared responsibility of the test designer and the test user. Test publishers must provide evidence of the validity of the decisions their tests are intended to support, while test users are responsible for analyzing this evidence and subsequently using the test in the manner indicated by the publisher.…

  5. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation.

    PubMed

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-04-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67x3 (67 clusters of three observations) and a 33x6 (33 clusters of six observations) sampling scheme, for assessing the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67x3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can dramatically impact the classification error associated with LQAS analysis.
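
    The classification error these simulations examine can be sketched for the simple-random-sampling baseline with a short Monte Carlo loop. The sample size n, decision rule d, and prevalence values below are illustrative only; the paper's cluster designs and intracluster correlation are not modelled here:

```python
import random

def lqas_high(cases, d):
    # LQAS decision rule: flag "high GAM prevalence" if observed cases reach d
    return cases >= d

def misclassification_rate(p, truly_high, n=201, d=25, sims=10_000, seed=42):
    """Monte Carlo estimate of the LQAS rule's error rate when the true
    prevalence is p and the area's true status is `truly_high`."""
    rng = random.Random(seed)
    errors = 0
    for _ in range(sims):
        cases = sum(rng.random() < p for _ in range(n))
        if lqas_high(cases, d) != truly_high:
            errors += 1
    return errors / sims
```

    With prevalence well below d/n in a truly low-prevalence area (or well above it in a truly high one) the rule almost never misclassifies; prevalences near the threshold are the hardest to classify, which is where cluster correlation hurts most.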

  6. Experimental validation of an integrated controls-structures design methodology for a class of flexible space structures

    NASA Technical Reports Server (NTRS)

    Maghami, Peiman G.; Gupta, Sandeep; Elliott, Kenny B.; Joshi, Suresh M.; Walz, Joseph E.

    1994-01-01

    This paper describes the first experimental validation of an optimization-based integrated controls-structures design methodology for a class of flexible space structures. The Controls-Structures-Interaction (CSI) Evolutionary Model, a laboratory test bed at Langley, is redesigned based on the integrated design methodology with two different dissipative control strategies. The redesigned structure is fabricated, assembled in the laboratory, and experimentally compared with the original test structure. Design guides are proposed and used in the integrated design process to ensure that the resulting structure can be fabricated. Experimental results indicate that the integrated design requires greater than 60 percent less average control power (by thruster actuators) than the conventional control-optimized design while maintaining the required line-of-sight performance, thereby confirming the analytical findings about the superiority of the integrated design methodology. Amenability of the integrated design structure to other control strategies is considered and evaluated analytically and experimentally. This work also demonstrates the capabilities of the Langley-developed design tool CSI DESIGN which provides a unified environment for structural and control design.

  7. Estimation of AUC or Partial AUC under Test-Result-Dependent Sampling.

    PubMed

    Wang, Xiaofei; Ma, Junling; George, Stephen; Zhou, Haibo

    2012-01-01

    The area under the ROC curve (AUC) and partial area under the ROC curve (pAUC) are summary measures used to assess the accuracy of a biomarker in discriminating true disease status. The standard sampling approach used in biomarker validation studies is often inefficient and costly, especially when ascertaining the true disease status is costly and invasive. To improve efficiency and reduce the cost of biomarker validation studies, we consider a test-result-dependent sampling (TDS) scheme, in which subject selection for determining the disease state is dependent on the result of a biomarker assay. We first estimate the test-result distribution using data arising from the TDS design. With the estimated empirical test-result distribution, we propose consistent nonparametric estimators for AUC and pAUC and establish the asymptotic properties of the proposed estimators. Simulation studies show that the proposed estimators have good finite sample properties and that the TDS design yields more efficient AUC and pAUC estimates than a simple random sampling (SRS) design. A data example based on an ongoing cancer clinical trial is provided to illustrate the TDS design and the proposed estimators. This work can find broad applications in design and analysis of biomarker validation studies.
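
    The nonparametric AUC at the core of such estimators is the Mann-Whitney statistic: the probability that a randomly chosen diseased subject's marker value exceeds a non-diseased subject's, with ties counted as half. A minimal sketch on toy data (the TDS weighting the paper develops is omitted):

```python
def empirical_auc(diseased, healthy):
    """Mann-Whitney estimate of AUC: P(X_d > X_h) + 0.5 * P(X_d == X_h),
    averaged over all (diseased, healthy) marker-value pairs."""
    wins = 0.0
    for x in diseased:
        for y in healthy:
            if x > y:
                wins += 1.0
            elif x == y:
                wins += 0.5
    return wins / (len(diseased) * len(healthy))

print(empirical_auc([3, 4], [1, 2]))  # 1.0  (perfect separation)
print(empirical_auc([1, 3], [2, 2]))  # 0.5  (no discrimination)
```

    Under test-result-dependent sampling the two empirical distributions must be reweighted by the estimated test-result distribution before this comparison is valid.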

  8. Big Data in Designing Clinical Trials: Opportunities and Challenges

    PubMed Central

    Mayo, Charles S.; Matuszak, Martha M.; Schipper, Matthew J.; Jolly, Shruti; Hayman, James A.; Ten Haken, Randall K.

    2017-01-01

    Emergence of big data analytics resource systems (BDARSs) as a part of routine practice in Radiation Oncology is on the horizon. Gradually, individual researchers, vendors, and professional societies are leading initiatives to create and demonstrate use of automated systems. What are the implications for design of clinical trials, as these systems emerge? Gold standard, randomized controlled trials (RCTs) have high internal validity for the patients and settings fitting constraints of the trial, but also have limitations including: reproducibility, generalizability to routine practice, infrequent external validation, selection bias, characterization of confounding factors, ethics, and use for rare events. BDARSs present opportunities to augment and extend RCTs. Preliminary modeling using single- and multi-institutional BDARSs may lead to better design and less cost. Standardizations in data elements, clinical processes, and nomenclatures used to decrease variability and increase veracity needed for automation and multi-institutional data pooling in BDARSs also support the ability to add clinical validation phases to clinical trial design and increase participation. However, volume and variety in BDARSs present other technical, policy, and conceptual challenges, including applicable statistical concepts and cloud-based technologies. In this summary, we will examine both the opportunities and the challenges for use of big data in design of clinical trials. PMID:28913177

  9. Big Data in Designing Clinical Trials: Opportunities and Challenges.

    PubMed

    Mayo, Charles S; Matuszak, Martha M; Schipper, Matthew J; Jolly, Shruti; Hayman, James A; Ten Haken, Randall K

    2017-01-01

    Emergence of big data analytics resource systems (BDARSs) as a part of routine practice in Radiation Oncology is on the horizon. Gradually, individual researchers, vendors, and professional societies are leading initiatives to create and demonstrate use of automated systems. What are the implications for design of clinical trials, as these systems emerge? Gold standard, randomized controlled trials (RCTs) have high internal validity for the patients and settings fitting constraints of the trial, but also have limitations including: reproducibility, generalizability to routine practice, infrequent external validation, selection bias, characterization of confounding factors, ethics, and use for rare events. BDARSs present opportunities to augment and extend RCTs. Preliminary modeling using single- and multi-institutional BDARSs may lead to better design and less cost. Standardizations in data elements, clinical processes, and nomenclatures used to decrease variability and increase veracity needed for automation and multi-institutional data pooling in BDARSs also support the ability to add clinical validation phases to clinical trial design and increase participation. However, volume and variety in BDARSs present other technical, policy, and conceptual challenges, including applicable statistical concepts and cloud-based technologies. In this summary, we will examine both the opportunities and the challenges for use of big data in design of clinical trials.

  10. On Internal Validity in Multiple Baseline Designs

    ERIC Educational Resources Information Center

    Pustejovsky, James E.

    2014-01-01

    Single-case designs are a class of research designs for evaluating intervention effects on individual cases. The designs are widely applied in certain fields, including special education, school psychology, clinical psychology, social work, and applied behavior analysis. The multiple baseline design (MBD) is the most frequently used single-case…

  11. A business rules design framework for a pharmaceutical validation and alert system.

    PubMed

    Boussadi, A; Bousquet, C; Sabatier, B; Caruba, T; Durieux, P; Degoulet, P

    2011-01-01

    Several alert systems have been developed to improve the patient safety aspects of clinical information systems (CIS). Most studies have focused on the evaluation of these systems, with little information provided about the methodology leading to system implementation. We propose here an 'agile' business rule design framework (BRDF) supporting both the design of alerts for the validation of drug prescriptions and the incorporation of the end user into the design process. We analyzed the unified process (UP) design life cycle and defined the activities, subactivities, actors and UML artifacts that could be used to enhance the agility of the proposed framework. We then applied the proposed framework to two different sets of data in the context of the Georges Pompidou University Hospital (HEGP) CIS. We introduced two new subactivities into UP: the business rule specification activity and the business rule instantiation activity. The pharmacist made an effective contribution to five of the eight BRDF design activities. Validation of the two new subactivities was carried out in the context of drug dosage adaptation to the patients' clinical and biological contexts. A pilot experiment showed that business rules modeled with BRDF and implemented as an alert system triggered an alert for 5824 of the 71,413 prescriptions considered (8.16%). A business rule design framework approach meets one of the strategic objectives for decision support design by taking into account three important criteria posing a particular challenge to system designers: 1) business processes, 2) knowledge modeling of the context of application, and 3) the agility of the various design steps.

  12. Establishment and validation for the theoretical model of the vehicle airbag

    NASA Astrophysics Data System (ADS)

    Zhang, Junyuan; Jin, Yang; Xie, Lizhe; Chen, Chao

    2015-05-01

    The current design and optimization of the occupant restraint system (ORS) are based on numerous physical tests and mathematical simulations. Although quite effective and accurate, these two methods are overly time-consuming and complex for the concept design phase of the ORS, so a fast, direct design and optimization method is needed in that phase. Since the airbag system is a crucial part of the ORS, in this paper a theoretical model of the vehicle airbag is established in order to clarify the interaction between occupants and airbags, and a fast design and optimization method for airbags in the concept design phase is then built on the proposed model. First, a theoretical expression of the simplified mechanical relationship between the airbag's design parameters and the occupant response is developed based on classical mechanics; the momentum theorem and the ideal gas state equation are then adopted to describe that relationship. Using MATLAB, an iterative algorithm and discrete variables are applied to solve the proposed theoretical model for random inputs within a given range. Validations in MADYMO confirm the validity and accuracy of the theoretical model for two principal design parameters, the inflated gas mass and the vent diameter, within a typical range. This research contributes to a deeper understanding of the relation between occupants and airbags, provides a fast design and optimization method for the airbag's principal parameters in the concept design phase, and supplies the range of the airbag's initial design parameters for the subsequent CAE simulations and physical tests.
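
    The two physical ingredients named in the abstract, the ideal gas state equation and the momentum theorem, can be combined into a deliberately crude sketch: the gas law gives the bag pressure, and the pressure difference over a contact area decelerates the occupant. All numbers, and the constant-pressure assumption, are illustrative and not the paper's model:

```python
R = 8.314  # J/(mol*K), universal gas constant

def bag_pressure(n_mol, temperature, volume):
    # Ideal gas state equation: P = nRT / V
    return n_mol * R * temperature / volume

def occupant_velocity(v0, mass, area, p_bag, p_atm=101_325.0,
                      dt=0.001, steps=20):
    """Momentum theorem, m*dv = -F*dt, integrated with fixed Euler steps,
    where F = (P_bag - P_atm) * contact area (held constant here)."""
    v = v0
    for _ in range(steps):
        force = (p_bag - p_atm) * area
        v -= force / mass * dt
    return v

# Illustrative inputs: 3 mol of gas at 600 K in a 60 L bag, 75 kg occupant
p = bag_pressure(3.0, 600.0, 0.06)
print(p)                                     # ~2.5e5 Pa
print(occupant_velocity(15.0, 75.0, 0.3, p)) # velocity after 20 ms, reduced
```

    The paper's model additionally couples the vent-diameter outflow back into the gas mass and volume at each step, which is what makes an iterative solver necessary.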

  13. Participatory Design of an Integrated Information System Design to Support Public Health Nurses and Nurse Managers

    PubMed Central

    Reeder, Blaine; Hills, Rebecca A.; Turner, Anne M.; Demiris, George

    2014-01-01

    Objectives The objectives of the study were to use persona-driven and scenario-based design methods to create a conceptual information system design to support public health nursing. Design and Sample We enrolled 19 participants from two local health departments to conduct an information needs assessment, create a conceptual design, and conduct a preliminary design validation. Measures Interviews and thematic analysis were used to characterize information needs and solicit design recommendations from participants. Personas were constructed from participant background information, and scenario-based design was used to create a conceptual information system design. Two focus groups were conducted as a first iteration validation of information needs, personas, and scenarios. Results Eighty-nine information needs were identified. Two personas and 89 scenarios were created. Public health nurses and nurse managers confirmed the accuracy of information needs, personas, scenarios, and the perceived usefulness of proposed features of the conceptual design. Design artifacts were modified based on focus group results. Conclusion Persona-driven design and scenario-based design are feasible methods to design for common work activities in different local health departments. Public health nurses and nurse managers should be engaged in the design of systems that support their work. PMID:24117760

  14. A Case Study for Probabilistic Methods Validation (MSFC Center Director's Discretionary Fund, Project No. 94-26)

    NASA Technical Reports Server (NTRS)

    Price J. M.; Ortega, R.

    1998-01-01

    Probabilistic methods are not a universally accepted approach for the design and analysis of aerospace structures. The validity of this approach must be demonstrated to encourage its acceptance as a viable design and analysis tool for estimating structural reliability. The objective of this study is to develop a well-characterized finite population of similar aerospace structures that can be used to (1) validate probabilistic codes, (2) demonstrate the basic principles behind probabilistic methods, (3) formulate general guidelines for characterization of material drivers (such as elastic modulus) when limited data are available, and (4) investigate how the drivers affect the results of sensitivity analysis at the component/failure-mode level.

  15. Intent inferencing by an intelligent operator's associate - A validation study

    NASA Technical Reports Server (NTRS)

    Jones, Patricia M.

    1988-01-01

    In the supervisory control of a complex, dynamic system, one potential form of aiding for the human operator is a computer-based operator's associate. The design philosophy of the operator's associate is that of 'amplifying' rather than automating human skills. In particular, the associate possesses understanding and control properties. Understanding allows it to infer operator intentions and thus form the basis for context-dependent advice and reminders; control properties allow the human operator to dynamically delegate individual tasks or subfunctions to the associate. This paper focuses on the design, implementation, and validation of the intent inferencing function. Two validation studies are described which empirically demonstrate the viability of the proposed approach to intent inferencing.

  16. Extending the concept of social validity: behavior analysis for disease prevention and health promotion.

    PubMed

    Winett, R A; Moore, J F; Anderson, E S

    1991-01-01

    A broader definition of social validity is proposed wherein a socially valid behavior-change intervention is directed to a problem of verifiable importance, the intervention is valued and used appropriately by designated target groups, and the intervention as used has sufficient behavioral impact to substantially reduce the probability of the problem's occurrence in target populations. The verifiable importance of a problem is based on epidemiological data, and the value and appropriate use of an intervention are enhanced through the use of conceptual frameworks for social marketing and behavior change and considerable formative and pilot research. Behavioral impact is assessed through efficacy and effectiveness studies. Thus, the social validity of a behavior-change intervention is established through a number of interactive, a priori steps. This approach to defining social validity is related to critical analysis and intervention issues including individual and population perspectives and "top-down" and "bottom-up" approaches to intervention design. This broader definition of social validity is illustrated by a project to reduce the risk of HIV infection among adolescents. Although the various steps involved in creating socially valid interventions can be complicated, time-consuming, and expensive, following all the steps can result in interventions capable of improving a nation's health.

  17. Managing design excellence tools during the development of new orthopaedic implants.

    PubMed

    Défossez, Henri J P; Serhan, Hassan

    2013-11-01

    Design excellence (DEX) tools have been widely used for years in some industries for their potential to facilitate new product development. The medical sector, targeted by cost pressures, has therefore started adopting them. Numerous tools are available; however, only appropriate deployment during the new product development stages can optimize the overall process. The primary study objectives were to describe generic tools, illustrate their implementation and management during the development of new orthopaedic implants, and compile a reference package. Secondary objectives were to present the DEX tool investment costs and savings, since the method can require significant resources for which companies must carefully plan. The publicly available DEX method "Define Measure Analyze Design Verify Validate" was adopted and implemented during the development of a new spinal implant. Several tools proved most successful at developing the correct product, addressing clinical needs, and increasing market penetration potential, while reducing design iterations and manufacturing validations. Cost analysis and the Pugh Matrix, coupled with multi-generation planning, enabled the development of a strong rationale to activate the project and set the vision and goals. Improved risk management and a product usage map established a robust technical verification-validation program. Design of experiments and process quantification facilitated design for manufacturing of critical features as early as the concept phase. Biomechanical testing with analysis of variance provided a validation model with a recognized statistical performance baseline. Within those tools, only certain ones required minimal resources (i.e., business case, multi-generational plan, project value proposition, Pugh Matrix, critical-to-quality process validation techniques), while others required significant investments (i.e., voice of customer, product usage map, improved risk management, design of experiments, biomechanical testing techniques). All techniques used provided savings exceeding investment costs. Some other tools were considered and found less relevant. A matrix summarized the investment costs and the estimated savings generated. Globally, all companies can benefit from using DEX by smartly selecting and estimating those tools with the best return on investment at the start of the project. For this, a good understanding of the available company resources, background and development strategy is needed. In conclusion, it was possible to illustrate that appropriate management of design excellence tools can greatly facilitate the development of new orthopaedic implant systems.

  18. Clinical audit project in undergraduate medical education curriculum: an assessment validation study

    PubMed Central

    Steketee, Carole; Mak, Donna

    2016-01-01

    Objectives To evaluate the merit of the Clinical Audit Project (CAP) in an assessment program for undergraduate medical education using a systematic assessment validation framework. Methods A cross-sectional assessment validation study at one medical school in Western Australia, with retrospective qualitative analysis of the design, development, implementation and outcomes of the CAP, and quantitative analysis of assessment data from four cohorts of medical students (2011-2014). Results The CAP is fit for purpose, with clear external and internal alignment to expected medical graduate outcomes. Substantive validity in students’ and examiners’ response processes is ensured through relevant methodological and cognitive processes. Multiple validity features are built into the design, planning and implementation of the CAP. There is evidence of high internal consistency reliability of CAP scores (Cronbach’s alpha > 0.8) and inter-examiner consistency reliability (intra-class correlation > 0.7). Aggregation of CAP scores is psychometrically sound, with high internal consistency indicating one common underlying construct. Significant but moderate correlations between CAP scores and scores from other assessment modalities indicate validity of extrapolation and alignment between the CAP and the overall target outcomes for medical graduates. Standard setting, score equating and fair decision rules justify the consequential validity of CAP score interpretation and use. Conclusions This study provides evidence demonstrating that the CAP is a meaningful and valid component of the assessment program. This systematic validation framework can be adopted for all levels of assessment in medical education, from an individual assessment modality to the validation of an assessment program as a whole. PMID:27716612

  19. Clinical audit project in undergraduate medical education curriculum: an assessment validation study.

    PubMed

    Tor, Elina; Steketee, Carole; Mak, Donna

    2016-09-24

    To evaluate the merit of the Clinical Audit Project (CAP) in an assessment program for undergraduate medical education using a systematic assessment validation framework. A cross-sectional assessment validation study at one medical school in Western Australia, with retrospective qualitative analysis of the design, development, implementation and outcomes of the CAP, and quantitative analysis of assessment data from four cohorts of medical students (2011-2014). The CAP is fit for purpose, with clear external and internal alignment to expected medical graduate outcomes. Substantive validity in students' and examiners' response processes is ensured through relevant methodological and cognitive processes. Multiple validity features are built into the design, planning and implementation of the CAP. There is evidence of high internal consistency reliability of CAP scores (Cronbach's alpha > 0.8) and inter-examiner consistency reliability (intra-class correlation > 0.7). Aggregation of CAP scores is psychometrically sound, with high internal consistency indicating one common underlying construct. Significant but moderate correlations between CAP scores and scores from other assessment modalities indicate validity of extrapolation and alignment between the CAP and the overall target outcomes for medical graduates. Standard setting, score equating and fair decision rules justify the consequential validity of CAP score interpretation and use. This study provides evidence demonstrating that the CAP is a meaningful and valid component of the assessment program. This systematic validation framework can be adopted for all levels of assessment in medical education, from an individual assessment modality to the validation of an assessment program as a whole.
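    The two reliability statistics reported in this record, Cronbach's alpha for internal consistency and the intra-class correlation (ICC) for inter-examiner consistency, are both straightforward to compute. A minimal sketch on illustrative dummy data (not the study's data), using the standard alpha formula and the one-way ICC(1,1) form; other ICC variants exist and the study does not say which was used:

```python
import numpy as np

def cronbach_alpha(scores):
    """Cronbach's alpha. scores: (n_subjects, k_items) array.

    alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)),
    with sample variances (ddof=1).
    """
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1).sum()
    total_var = scores.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

def icc_oneway(ratings):
    """One-way random-effects ICC(1,1). ratings: (n_targets, k_raters).

    ICC = (MSB - MSW) / (MSB + (k-1)*MSW), from a one-way ANOVA where
    targets (e.g. student projects) are rows and raters are columns.
    """
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)
    msw = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Sanity check: three perfectly agreeing raters/items give alpha = ICC = 1.
base = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
scores = np.column_stack([base, base, base])
```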

  20. Participatory design of an integrated information system design to support public health nurses and nurse managers.

    PubMed

    Reeder, Blaine; Hills, Rebecca A; Turner, Anne M; Demiris, George

    2014-01-01

    The objectives of the study were to use persona-driven and scenario-based design methods to create a conceptual information system design to support public health nursing. We enrolled 19 participants from two local health departments to conduct an information needs assessment, create a conceptual design, and conduct a preliminary design validation. Interviews and thematic analysis were used to characterize information needs and solicit design recommendations from participants. Personas were constructed from participant background information, and scenario-based design was used to create a conceptual information system design. Two focus groups were conducted as a first iteration validation of information needs, personas, and scenarios. Eighty-nine information needs were identified. Two personas and 89 scenarios were created. Public health nurses and nurse managers confirmed the accuracy of information needs, personas, scenarios, and the perceived usefulness of proposed features of the conceptual design. Design artifacts were modified based on focus group results. Persona-driven design and scenario-based design are feasible methods to design for common work activities in different local health departments. Public health nurses and nurse managers should be engaged in the design of systems that support their work.

  1. Construction and Validation of a Scale to Measure Maslow's Concept of Self-Actualization

    ERIC Educational Resources Information Center

    Jones, Kenneth Melvin; Randolph, Daniel Lee

    1978-01-01

    Designed to measure self-actualization as defined by Abraham Maslow, the Jones Self Actualizing Scale, as assessed in this study, possesses content validity, reliability, and a number of other positive characteristics. (JC)

  2. MISR UAE2 Aerosol Versioning

    Atmospheric Science Data Center

    2013-03-21

    ... The "Beta" designation means particle microphysical property validation is in progress, uncertainty envelopes on particle size distribution, ... UAE-2 campaign activities are part of the validation process, so two versions of the MISR aerosol products are included in this ...

  3. Design of psychosocial factors questionnaires: a systematic measurement approach

    PubMed Central

    Vargas, Angélica; Felknor, Sarah A

    2012-01-01

    Background Evaluation of psychosocial factors requires instruments that measure dynamic complexities. This study explains the design of a set of questionnaires to evaluate work and non-work psychosocial risk factors for stress-related illnesses. Methods The measurement model was based on a review of the literature. Content validity was assessed by experts and cognitive interviews. Pilot testing was carried out with a convenience sample of 132 workers. Cronbach’s alpha evaluated internal consistency, and concurrent validity was estimated by Spearman correlation coefficients. Results Three questionnaires were constructed to evaluate exposure to work and non-work risk factors. Content validity improved the questionnaires’ coherence with the measurement model. Internal consistency was adequate (α=0.85–0.95). Concurrent validity resulted in moderate correlations of psychosocial factors with stress symptoms. Conclusions The questionnaires’ content reflected a wide spectrum of psychosocial factor sources. Cognitive interviews improved understanding of questions and dimensions. The structure of the measurement model was confirmed. PMID:22628068
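    The concurrent validity above is estimated with Spearman rank correlations, i.e. Pearson correlation computed on average ranks. A dependency-free sketch on illustrative data (the study itself would have used a statistics package):

```python
def ranks(xs):
    """1-based average ranks, ties resolved by averaging their ranks."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        # extend j over any run of tied values
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman's rho: Pearson correlation of the rank vectors."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx)
           * sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den
```

Any strictly monotone relationship, even a non-linear one, yields rho = 1, which is why rank correlation suits ordinal questionnaire scores.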

  4. Continuous coaxial cable sensors for monitoring of RC structures with electrical time domain reflectometry

    NASA Astrophysics Data System (ADS)

    Chen, Genda; Mu, Huimin; Pommerenke, David; Drewniak, James L.

    2003-08-01

    This study was aimed at developing and validating a new type of coaxial cable sensor that can be used to detect cracks or measure strains in reinforced concrete (RC) structures. The new sensors were designed based on the change in outer conductor configuration under strain effects, in contrast to the geometry-based design of conventional coaxial cable sensors. Both numerical simulations and calibration tests with strain gauges of a specific design of the proposed cables were conducted to study the cables' sensitivity. Four designs of the proposed type of sensor were then mounted near the surface of six 3-foot-long RC beams, which were tested in bending to further validate the cables' sensitivity in concrete members. The calibration test results generally agree with the numerical simulations; they showed that the proposed sensors are 10 to 50 times more sensitive than conventional cable sensors. The test results of the beams not only validate the sensitivity of the new sensors but also indicate a good correlation with the measured crack width.

  5. Validation of an Instructional Observation Instrument for Teaching English as a Foreign Language in Spain

    ERIC Educational Resources Information Center

    Gomez-Garcia, Maria

    2011-01-01

    The design and validation of a classroom observation instrument to provide formative feedback for teachers of EFL in Spain is the overarching purpose of this study. This study proposes that a valid and reliable classroom observation instrument, based on effective practice in teaching EFL, can be developed and used in Spain to enable teachers to…

  6. The Math Essential Skills Screener--Upper Elementary Version (MESS-U): Studies of Reliability and Validity

    ERIC Educational Resources Information Center

    Erford, Bradley T.; Biddison, Amanda R.

    2006-01-01

    The Math Essential Skills Screener--Upper Elementary Version (MESS-U) is part of a series of screening tests designed to help identify students ages 9-11 who are at risk for mathematics failure. Internal consistency, test-retest reliability, item analysis, decision efficiency, convergent validity and factorial validity of the MESS-U were studied…

  7. Cross-Validation of easyCBM Reading Cut Scores in Washington: 2009-2010. Technical Report #1109

    ERIC Educational Resources Information Center

    Irvin, P. Shawn; Park, Bitnara Jasmine; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2011-01-01

    This technical report presents results from a cross-validation study designed to identify optimal cut scores when using easyCBM[R] reading tests in Washington state. The cross-validation study analyzes data from the 2009-2010 academic year for easyCBM[R] reading measures. A sample of approximately 900 students per grade, randomly split into two…

  8. IEEE Validation of the Continuing Education Achievement of Engineers Registry System. Procedures for Use with a CPT 8000 Word Processor and Communications Package.

    ERIC Educational Resources Information Center

    Institute of Electrical and Electronics Engineers, Inc., New York, NY.

    The Institute of Electrical and Electronics Engineers (IEEE) validation program is designed to motivate persons practicing in electrical and electronics engineering to pursue quality technical continuing education courses offered by any responsible sponsor. The rapid acceptance of the validation program necessitated the additional development of a…

  9. Survey Instrument Validity Part II: Validation of a Survey Instrument Examining Athletic Trainers' Knowledge and Practice Beliefs Regarding Exertional Heat Stroke

    ERIC Educational Resources Information Center

    Burton, Laura J.; Mazerolle, Stephanie M.

    2011-01-01

    Objective: The purpose of this article is to discuss the process of developing and validating an instrument to investigate an athletic trainer's attitudes and behaviors regarding the recognition and treatment of exertional heat stroke. Background: Following up from our initial paper, which discussed the process of survey instrument design and…

  10. Establishing the Validity of the Personality Assessment Inventory Drug and Alcohol Scales in a Corrections Sample

    ERIC Educational Resources Information Center

    Patry, Marc W.; Magaletta, Philip R.; Diamond, Pamela M.; Weinman, Beth A.

    2011-01-01

    Although not originally designed for implementation in correctional settings, researchers and clinicians have begun to use the Personality Assessment Inventory (PAI) to assess offenders. A relatively small number of studies have made attempts to validate the alcohol and drug abuse scales of the PAI, and only a very few studies have validated those…

  11. Comparison of airborne passive and active L-band System (PALS) brightness temperature measurements to SMOS observations during the SMAP validation experiment 2012 (SMAPVEX12)

    USDA-ARS?s Scientific Manuscript database

    The purpose of SMAP (Soil Moisture Active Passive) Validation Experiment 2012 (SMAPVEX12) campaign was to collect data for the pre-launch development and validation of SMAP soil moisture algorithms. SMAP is a National Aeronautics and Space Administration’s (NASA) satellite mission designed for the m...

  12. A Validity Agenda for Growth Models: One Size Doesn't Fit All!

    ERIC Educational Resources Information Center

    Patelis, Thanos

    2012-01-01

    This is a keynote presentation given at AERA on developing a validity agenda for growth models in a large scale (e.g., state) setting. The emphasis of this presentation was to indicate that growth models and the validity agenda designed to provide evidence in supporting the claims to be made need to be personalized to meet the local or…

  13. Validity of histopathological grading of articular cartilage from osteoarthritic knee joints

    PubMed Central

    Ostergaard, K.; Andersen, C.; Petersen, J.; Bendtzen, K.; Salter, D.

    1999-01-01

    OBJECTIVES—To determine the validity of the histological-histochemical grading system (HHGS) for osteoarthritic (OA) articular cartilage.
METHODS—Human articular cartilage was obtained from macroscopically normal (n = 13) and OA (n = 21) knee joints. Sections of central and peripheral regions of normal samples were produced. Sections of regions containing severe, moderate, and mild OA changes were produced from each OA sample. A total of 89 sections were graded by means of the HHGS (0-14) twice by three observers.
RESULTS—Average scores for regions designated severe (8.64) and moderate (5.83) OA were lower than expected (10-14 and 6-9, respectively) according to the HHGS, whereas average scores for the region designated mild OA (5.29) and for the central and peripheral regions (2.19) of normal cartilage were higher than expected (2-5 and 0-1, respectively). The HHGS was capable of differentiating between articular cartilage from macroscopically normal and OA joints and between the region designated severe OA and the other regions. However, the HHGS did not adequately differentiate between the regions designated mild and moderate OA. Values for sensitivity, specificity, and efficiency varied considerably across regions.
CONCLUSION—The HHGS is valid for normal and severe OA cartilage, but does not permit distinction between mild and moderate OA changes in articular cartilage.

 Keywords: histopathology; osteoarthritis; reliability; validity PMID:10364898
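    The sensitivity, specificity, and efficiency figures discussed in this record all derive from a 2x2 classification table of graded versus true status. A minimal sketch with illustrative counts (not the study's data), where "efficiency" is taken in its usual sense of overall accuracy:

```python
def diagnostic_indices(tp, fp, fn, tn):
    """Sensitivity, specificity and efficiency from a 2x2 table.

    tp/fp/fn/tn: true/false positive/negative counts.
    sensitivity = TP / (TP + FN)   (true-positive rate)
    specificity = TN / (TN + FP)   (true-negative rate)
    efficiency  = (TP + TN) / N    (overall accuracy)
    """
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    efficiency = (tp + tn) / (tp + fp + fn + tn)
    return sensitivity, specificity, efficiency

# Hypothetical example: 8 of 10 OA sections graded OA, 8 of 10 normal
# sections graded normal.
sens, spec, eff = diagnostic_indices(tp=8, fp=2, fn=2, tn=8)
```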

  14. Box-ticking and Olympic high jumping - Physicians' perceptions and acceptance of national physician validation systems.

    PubMed

    Sehlbach, Carolin; Govaerts, Marjan J B; Mitchell, Sharon; Rohde, Gernot G U; Smeenk, Frank W J M; Driessen, Erik W

    2018-05-24

    National physician validation systems aim to ensure lifelong learning through periodic appraisals of physicians' competence. Their effectiveness is determined by physicians' acceptance of and commitment to the system. This study, therefore, sought to explore physicians' perceptions and self-reported acceptance of validation across three different physician validation systems in Europe. Using a constructivist grounded-theory approach, we conducted semi-structured interviews with 32 respiratory specialists from three countries with markedly different validation systems: Germany, which has a mandatory, credit-based system oriented to continuing professional development; Denmark, with mandatory annual dialogs and ensuing, non-compulsory activities; and the UK, with a mandatory, portfolio-based revalidation system. We analyzed interview data with a view to identifying factors influencing physicians' perceptions and acceptance. Factors that influenced acceptance were the assessment's authenticity and alignment of its requirements with clinical practice, physicians' beliefs about learning, perceived autonomy, and organizational support. Users' acceptance levels determine any system's effectiveness. To support lifelong learning effectively, national physician validation systems must be carefully designed and integrated into daily practice. Involving physicians in their design may render systems more authentic and improve alignment between individual ambitions and the systems' goals, thereby promoting acceptance.

  15. The city of hope-quality of life-ostomy questionnaire: persian translation and validation.

    PubMed

    Anaraki, F; Vafaie, M; Behboo, R; Esmaeilpour, S; Maghsoodi, N; Safaee, A; Grant, M

    2014-07-01

    Because there is no disease-specific instrument in Persian for measuring quality of life (QOL) in ostomy patients, this study was designed to translate and evaluate the validity and reliability of the City of Hope Quality of Life-Ostomy questionnaire (COH-QOL-Ostomy). The study had a cross-sectional design. Reliability of the subscales and the summary scores was demonstrated by intra-class correlation coefficients. Pearson's correlations of an item with its own scale and with other scales were calculated to evaluate convergent and discriminant validity. Clinical validity was also evaluated by known-groups comparisons. Cronbach's alpha coefficient for all subscales was about 0.70 or higher. Results of the interscale correlations were satisfactory, and each subscale measured only a single, specified trait. All subscales met the standards of convergent and discriminant validity. Known-groups comparison analysis showed significant differences in social and spiritual well-being. The findings confirmed the reliability and validity of the Persian version of the COH-QOL-Ostomy questionnaire. The instrument was also well received by the Iranian patients. It can be considered a valuable instrument for assessing the different aspects of health-related quality of life in ostomy patients and used in clinical research in the future.

  16. Sexual behavioral abstinence HIV/AIDS questionnaire: Validation study of an Iranian questionnaire.

    PubMed

    Najarkolaei, Fatemeh Rahmati; Niknami, Shamsaddin; Shokravi, Farkhondeh Amin; Tavafian, Sedigheh Sadat; Fesharaki, Mohammad Gholami; Jafari, Mohammad Reza

    2014-01-01

    This study was designed to assess the validity and reliability of the sexual behavioral abstinence and avoidance of high-risk situations questionnaire (SBAHAQ), with the aim of constructing an appropriate development tool for the Iranian population. A descriptive-analytic study was conducted among female undergraduate students of Tehran University, selected through cluster random sampling. After reviewing the questionnaires and investigating face and content validity, the internal consistency of the questionnaire was assessed by Cronbach's alpha. Exploratory and confirmatory factor analyses were conducted using SPSS and AMOS 16 software, respectively. The sample consisted of 348 female university students with a mean age of 20.69 ± 1.63 years. The content validity ratio (CVR) coefficient was 0.85, and the reliability of each section of the questionnaire was as follows: perceived benefit (PB; 0.87), behavioral intention (BI; 0.77), and self-efficacy (SE; 0.85) (overall Cronbach's alpha 0.83). Exploratory factor analysis showed three factors, comprising SE, PB, and BI, with a total explained variance of 61% and a Kaiser-Meyer-Olkin (KMO) index of 88%. These factors were also confirmed by confirmatory factor analysis [adjusted goodness of fit index (AGFI) = 0.939, root mean square error of approximation (RMSEA) = 0.039]. This study showed that the designed questionnaire provided adequate construct validity and reliability and could be used to measure sexual abstinence and avoidance of high-risk situations among female students.
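    The content validity ratio (CVR) reported above is conventionally computed with Lawshe's formula: CVR = (n_e - N/2) / (N/2), where n_e is the number of panelists rating an item "essential" and N is the panel size. A minimal sketch, assuming the standard Lawshe form (the record does not state the exact variant used):

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's CVR for one item; ranges from -1 to +1.

    +1 means every panelist rated the item essential; 0 means exactly
    half did; negative values mean fewer than half did.
    """
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical panel: 9 of 10 experts rate an item essential.
cvr = content_validity_ratio(n_essential=9, n_panelists=10)  # 0.8
```

Per-item CVRs are usually compared against a minimum threshold that depends on panel size, and the retained items are averaged into a content validity index.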

  17. Development and testing of the 'Culture of Care Barometer' (CoCB) in healthcare organisations: a mixed methods study.

    PubMed

    Rafferty, Anne Marie; Philippou, Julia; Fitzpatrick, Joanne M; Pike, Geoff; Ball, Jane

    2017-08-18

    Concerns about care quality have prompted calls to create workplace cultures conducive to high-quality, safe and compassionate care and to provide a supportive environment in which staff can operate effectively. How healthcare organisations assess their culture of care is an important first step in creating such cultures. This article reports on the development and validation of a tool, the Culture of Care Barometer, designed to assess perceptions of a caring culture among healthcare workers preliminary to culture change. An exploratory mixed methods study designed to develop and test the validity of a tool to measure 'culture of care' through focus groups and questionnaires. Questionnaire development was facilitated through: a literature review, experts generating items of interest and focus group discussions with healthcare staff across specialities, roles and seniority within three types of public healthcare organisations in the UK. The tool was designed to be multiprofessional and pilot tested with a sample of 467 nurses and healthcare support workers in acute care and then validated with a sample of 1698 staff working across acute, mental health and community services in England. Exploratory factor analysis was used to identify dimensions underlying the Barometer. Psychometric testing resulted in the development of a 30-item questionnaire linked to four domains with retained items loading to four factors: organisational values (α=0.93, valid n=1568, M=3.7), team support (α=0.93, valid n=1557, M=3.2), relationships with colleagues (α=0.84, valid n=1617, M=4.0) and job constraints (α=0.70, valid n=1616, M=3.3). The study developed a valid and reliable instrument with which to gauge the different attributes of care culture perceived by healthcare staff with potential for organisational benchmarking.

  18. Implementation and validation of a CubeSat laser transmitter

    NASA Astrophysics Data System (ADS)

    Kingsbury, R. W.; Caplan, D. O.; Cahoy, K. L.

    2016-03-01

    The paper presents implementation and validation results for a CubeSat-scale laser transmitter. The master oscillator power amplifier (MOPA) design produces a 1550 nm, 200 mW average power optical signal through the use of a directly modulated laser diode and a commercial fiber amplifier. The prototype design produces high-fidelity M-ary pulse position modulated (PPM) waveforms (M=8 to 128), targeting data rates > 10 Mbit/s while meeting a constraining 8 W power allocation. We also present the implementation of an avalanche photodiode (APD) receiver with measured transmitter-to-receiver performance within 3 dB of theory. Via loopback, the compact receiver design can provide built-in self-test and calibration capabilities, and supports incremental on-orbit testing of the design.
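    The data-rate/power trade in M-ary PPM follows from each symbol carrying log2(M) bits spread over M time slots. A simplified sketch that ignores guard slots and coding overhead; the slot rate used in the example is an illustrative assumption, not a figure from the paper:

```python
import math

def ppm_bit_rate(M, slot_rate_hz):
    """Uncoded bit rate of M-ary PPM.

    Each symbol occupies M slots and carries log2(M) bits, so
    bit rate = log2(M) * slot_rate / M (guard time ignored).
    """
    return math.log2(M) * slot_rate_hz / M

# Hypothetical 200 MHz slot clock: rate falls as M grows, which is the
# usual PPM trade of throughput for peak-power (photon) efficiency.
rate_m8 = ppm_bit_rate(8, 200e6)     # 75 Mbit/s
rate_m128 = ppm_bit_rate(128, 200e6)  # ~10.9 Mbit/s
```

This is why a transmitter supporting M=8 to 128 at a fixed slot clock spans roughly an order of magnitude in data rate.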

  19. Design and experimental validation of a flutter suppression controller for the active flexible wing

    NASA Technical Reports Server (NTRS)

    Waszak, Martin R.; Srinathkumar, S.

    1992-01-01

    The synthesis and experimental validation of an active flutter suppression controller for the Active Flexible Wing wind tunnel model is presented. The design is accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and extensive simulation based analysis. The design approach uses a fundamental understanding of the flutter mechanism to formulate a simple controller structure to meet stringent design specifications. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite modeling errors in predicted flutter dynamic pressure and flutter frequency. The flutter suppression controller was also successfully operated in combination with another controller to perform flutter suppression during rapid rolling maneuvers.

  20. Updated Design Standards and Guidance from the What Works Clearinghouse: Regression Discontinuity Designs and Cluster Designs

    ERIC Educational Resources Information Center

    Cole, Russell; Deke, John; Seftor, Neil

    2016-01-01

    The What Works Clearinghouse (WWC) maintains design standards to identify rigorous, internally valid education research. As education researchers advance new methodologies, the WWC must revise its standards to include an assessment of the new designs. Recently, the WWC has revised standards for two emerging study designs: regression discontinuity…

  1. 10 CFR 52.147 - Duration of design approval.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... 10 Energy 2 2011-01-01 2011-01-01 false Duration of design approval. 52.147 Section 52.147 Energy... Standard Design Approvals § 52.147 Duration of design approval. A standard design approval issued under this subpart is valid for 15 years from the date of issuance and may not be renewed. A design approval...

  2. Verifying Stability of Dynamic Soft-Computing Systems

    NASA Technical Reports Server (NTRS)

    Wen, Wu; Napolitano, Marcello; Callahan, John

    1997-01-01

    Soft computing is a general term for algorithms that learn from human knowledge and mimic human skills. Examples of such algorithms are fuzzy inference systems and neural networks. Many applications, especially in control engineering, have demonstrated their appropriateness for building intelligent systems that are flexible and robust. Although recent research has shown that a certain class of neuro-fuzzy controllers can be proven bounded and stable, these results are implementation-dependent and difficult to apply to the design and validation process. Many practitioners adopt a trial-and-error approach to system validation or resort to exhaustive testing using prototypes. In this paper, we describe our ongoing research towards establishing the necessary theoretical foundation as well as building practical tools for the verification and validation of soft-computing systems. A unified model for general neuro-fuzzy systems is adopted. Classic non-linear control theory and recent results from its application to neuro-fuzzy systems are incorporated and applied to the unified model. It is hoped that general tools can be developed to help the designer visualize and manipulate the regions of stability and boundedness, much the same way Bode plots and root locus plots have helped conventional control design and validation.

  3. Analytical method development of nifedipine and its degradants binary mixture using high performance liquid chromatography through a quality by design approach

    NASA Astrophysics Data System (ADS)

    Choiri, S.; Ainurofiq, A.; Ratri, R.; Zulmi, M. U.

    2018-03-01

    Nifedipine (NIF) is a photo-labile drug that degrades easily when exposed to sunlight. This research aimed to develop an analytical method using high-performance liquid chromatography, implementing a quality-by-design approach to obtain an effective, efficient, and validated analytical method for NIF and its degradants. A 2² full factorial design with a curvature (center point) was applied to optimize the analytical conditions for NIF and its degradants. Mobile phase composition (MPC) and flow rate (FR) were the factors evaluated against the system suitability parameters. The selected condition was validated by cross-validation using a leave-one-out technique. Altering the MPC significantly affected retention time. Furthermore, an increase in FR reduced the tailing factor. In addition, the interaction of both factors increased the theoretical plates and the resolution of NIF and its degradants. The selected analytical condition for NIF and its degradants was validated over the range 1-16 µg/mL and showed good linearity, precision, and accuracy, with an efficient analysis time within 10 min.
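    The 2² full factorial design with a center point described above has a small, fully enumerable run structure: each factor is coded at -1/+1, every combination is run, and an all-zeros center run probes curvature. A minimal sketch of generating that coded design matrix (factor names follow the record; the code itself is illustrative):

```python
from itertools import product

def two_level_factorial(factors, center=True):
    """Coded runs for a 2^k full factorial (-1/+1 per factor).

    Optionally appends a center point (all factors at 0), as used to
    detect curvature. `factors` is a list of factor names; returns a
    list of {factor: level} run dicts.
    """
    runs = [dict(zip(factors, levels))
            for levels in product((-1, 1), repeat=len(factors))]
    if center:
        runs.append({f: 0 for f in factors})
    return runs

# The study's two factors: mobile phase composition and flow rate.
design = two_level_factorial(["MPC", "FR"])  # 4 corner runs + 1 center
```

Main effects and the MPC x FR interaction are then estimated from the corner runs, while a corner-vs-center comparison flags curvature in the response.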

  4. [Design and validation of a satisfaction survey with pharmaceutical care received in hospital pharmacy consultation].

    PubMed

    Monje-Agudo, Patricia; Borrego-Izquierdo, Yolanda; Robustillo-Cortés, Ma de Las Aguas; Jiménez-Galán, Rocio; Almeida-González, Carmen V; Morillo-Verdugo, Ramón A

    2015-05-01

    To design and validate a questionnaire assessing satisfaction with the pharmaceutical care (PC) received at hospital pharmacy consultations. Multicentre study in five Andalusian hospitals in January 2013. A bibliographic search was performed in PubMed using the MeSH terms pharmaceutical services, patient satisfaction, and questionnaire. The questionnaire was then produced by the Delphi methodology, with ten items covering the following variables: demographic, social, pharmacological, and clinical; patients were asked about the consequences of PC for their treatment and illness and about their acceptance of the service received. Patients could answer from one (very insufficient) to five (excellent). Before the validation phase, a pilot phase was carried out. Descriptive analysis, Cronbach's alpha coefficient and the intraclass correlation coefficient (ICC) were computed in both phases. Data analysis was conducted using the SPSS statistical software package, release 20.0. Twenty-one questionnaires were included in the pilot phase and 154 in the validation phase (response rate 100%). In the latter phase, 62% (N=96) of patients were men. More than 50% of patients answered "excellent" on all items of the questionnaire in both phases. Cronbach's alpha coefficient and the ICC were 0.921 and 0.915 (95% CI: 0.847-0.961) in the pilot phase and 0.916 and 0.910 (95% CI: 0.886-0.931) in the validation phase, respectively. A highly reliable instrument was designed and validated to evaluate patient satisfaction with the PC received at the hospital pharmacy.

  5. Design and Development Computer-Based E-Learning Teaching Material for Improving Mathematical Understanding Ability and Spatial Sense of Junior High School Students

    NASA Astrophysics Data System (ADS)

    Nurjanah; Dahlan, J. A.; Wibisono, Y.

    2017-02-01

    This paper describes the design and development of computer-based e-learning teaching material for improving the mathematical understanding ability and spatial sense of junior high school students. The particular aims are (1) to produce a teaching material design, an evaluation model, and instruments to measure the mathematical understanding ability and spatial sense of junior high school students; (2) to conduct trials of the computer-based e-learning teaching material model, assessment, and instruments; (3) to complete the teaching material models of computer-based e-learning and assessment; and (4) to deliver the resulting research product: computer-based e-learning teaching materials in the form of an interactive learning disc. The research method used in this study is developmental research, conducted by thought experiment and instruction experiment. The results showed that the teaching materials could be used very well, based on validation of the computer-based e-learning teaching materials by five multimedia experts. The five validators gave consistent judgements of the face and content validity of each test item for mathematical understanding ability and spatial sense. The reliability coefficients of the mathematical understanding ability and spatial sense tests are 0.929 and 0.939, which is very high, while the validity of both tests meets high and very-high criteria.

  6. Marshall Space Flight Center's Virtual Reality Applications Program 1993

    NASA Technical Reports Server (NTRS)

    Hale, Joseph P., II

    1993-01-01

    A Virtual Reality (VR) applications program has been under development at the Marshall Space Flight Center (MSFC) since 1989. Other NASA Centers, most notably Ames Research Center (ARC), have contributed to the development of the VR enabling technologies and VR systems. This VR technology development has now reached a level of maturity where specific applications of VR as a tool can be considered. The objectives of the MSFC VR Applications Program are to develop, validate, and utilize VR as a Human Factors design and operations analysis tool and to assess and evaluate VR as a tool in other applications (e.g., training, operations development, mission support, teleoperations planning, etc.). The long-term goals of this technology program are to enable specialized Human Factors analyses earlier in the hardware and operations development process and to develop more effective training and mission support systems. The capability to perform specialized Human Factors analyses earlier in the hardware and operations development process is required to better refine and validate requirements during the requirements definition phase. This leads to a more efficient design process in which perturbations caused by late-occurring requirements changes are minimized. A validated set of VR analytical tools must be developed to enable a more efficient process for the design and development of space systems and operations. Similarly, training and mission support systems must exploit state-of-the-art computer-based technologies to maximize training effectiveness and enhance mission support. The approach of the VR Applications Program is to develop and validate appropriate virtual environments and associated object kinematic and behavior attributes for specific classes of applications. These application-specific environments and associated simulations will be validated, where possible, through empirical comparisons with existing, accepted tools and methodologies. These validated VR analytical tools will then be available for use in the design and development of space systems and operations and in training and mission support systems.

  7. 77 FR 18229 - Applications for New Awards; Investing in Innovation Fund, Validation Grants

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-03-27

    ... the appearance of a conflict of interest. Interrupted time series design means a type of quasi... single case design is an adaptation of an interrupted time series design that relies on the comparison of... notice), interrupted time series designs (as defined in this notice), or regression discontinuity designs...

  8. A More Rigorous Quasi-Experimental Alternative to the One-Group Pretest-Posttest Design.

    ERIC Educational Resources Information Center

    Johnson, Craig W.

    1986-01-01

    A simple quasi-experimental design is described which may have utility in a variety of applied and laboratory research settings where ordinarily the one-group pretest-posttest pre-experimental design might otherwise be the procedure of choice. The design approaches the internal validity of true experimental designs while optimizing external…

  9. Topology optimization based design of unilateral NMR for generating a remote homogeneous field.

    PubMed

    Wang, Qi; Gao, Renjing; Liu, Shutian

    2017-06-01

    This paper presents a topology optimization based design method for unilateral nuclear magnetic resonance (NMR), with which a remote homogeneous field can be obtained. The topology optimization is realized by seeking the optimal layout of ferromagnetic materials within a given design domain. The design objective is defined as generating a sensitive magnetic field with optimal homogeneity and maximal field strength within a required region of interest (ROI). The sensitivity of the objective function with respect to the design variables is derived and the method for solving the optimization problem is presented. A design example is provided to illustrate the utility of the design method, specifically the ability to improve the quality of the magnetic field over the required ROI by determining the optimal structural topology for the ferromagnetic poles. In both simulations and experiments, the sensitive region of the magnetic field is about 2 times larger than that of the reference design, validating the feasibility of the design method. Copyright © 2017. Published by Elsevier Inc.

  10. Formal specification and design techniques for wireless sensor and actuator networks.

    PubMed

    Martínez, Diego; González, Apolinar; Blanes, Francisco; Aquino, Raúl; Simo, José; Crespo, Alfons

    2011-01-01

    A current trend in the development and implementation of industrial applications is to use wireless networks to interconnect the system nodes, mainly to increase application flexibility, reliability and portability, as well as to reduce the implementation cost. However, the nondeterministic and concurrent behavior of distributed systems makes their analysis and design complex, often resulting in less than satisfactory performance in simulation and test bed scenarios, which is caused by using imprecise models to analyze, validate and design these systems. Moreover, some simulation platforms do not support these models. This paper presents a design and validation method for Wireless Sensor and Actuator Networks (WSAN) which is supported by a minimal set of wireless components represented in Colored Petri Nets (CPN). In summary, the model presented allows users to verify the design properties and structural behavior of the system.

  11. The reliability and validity of a designed setup for the assessment of static back extensor force and endurance in older women with and without hyperkyphosis.

    PubMed

    Roghani, Taybeh; Khalkhali Zavieh, Minoo; Rahimi, Abbas; Talebian, Saeed; Manshadi, Farideh Dehghan; Akbarzadeh Baghban, Alireza; King, Nicole; Katzman, Wendy

    2018-01-25

    The purpose of this study was to investigate the intra-rater reliability and validity of a designed load cell setup for the measurement of back extensor muscle force and endurance. The study sample included 19 older women with hyperkyphosis, mean age 67.0 ± 5.0 years, and 14 older women without hyperkyphosis, mean age 63.0 ± 6.0 years. Maximum back extensor force and endurance were measured in a sitting position with a designed load cell setup. Tests were performed by the same examiner on two separate days within a 72-hour interval. The intra-rater reliability of the measurements was analyzed using intraclass correlation coefficient (ICC), standard errors of measurement (SEM), and minimal detectable change (MDC). The validity of the setup was determined using Pearson correlation analysis and independent t-test. Using our designed load cell, the values of ICC indicated very high reliability of force measurement (hyperkyphosis group: 0.96, normal group: 0.97) and high reliability of endurance measurement (hyperkyphosis group: 0.82, normal group: 0.89). For all tests, the values of SEM and MDC were low in both groups. A significant correlation between two documented forces (load cell force and target force) and significant differences in the muscle force and endurance among the two groups were found. The measurements of static back muscle force and endurance are reliable and valid with our designed setup in older women with and without hyperkyphosis.
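
    The SEM and MDC reported above follow from the reliability coefficient by standard formulas (SEM = SD·√(1−ICC), MDC95 = 1.96·√2·SEM). The sketch below uses the ICC of 0.96 reported for force in the hyperkyphosis group together with an assumed standard deviation, since the study's raw values are not reproduced here.

```python
import math

# Standard psychometric formulas; the SD value is assumed for illustration.

def sem(sd, icc):
    # Standard error of measurement
    return sd * math.sqrt(1 - icc)

def mdc95(sem_value):
    # Minimal detectable change at the 95% confidence level
    return 1.96 * math.sqrt(2) * sem_value

sd_force = 50.0   # assumed SD of back extensor force measurements (N)
icc_force = 0.96  # ICC reported for the hyperkyphosis group

s = sem(sd_force, icc_force)
print(round(s, 2), round(mdc95(s), 2))  # → 10.0 27.72
```

A change smaller than the MDC cannot be distinguished from measurement error, which is why the abstract reports low SEM and MDC values as evidence of a usable setup.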

  12. Teaching neurophysiology, neuropharmacology, and experimental design using animal models of psychiatric and neurological disorders.

    PubMed

    Morsink, Maarten C; Dukers, Danny F

    2009-03-01

    Animal models have been widely used for studying the physiology and pharmacology of psychiatric and neurological diseases. The concepts of face, construct, and predictive validity are used as indicators to estimate the extent to which the animal model mimics the disease. Here, we used these three concepts to design a theoretical assignment to integrate the teaching of neurophysiology, neuropharmacology, and experimental design. For this purpose, seven case studies were developed in which animal models for several psychiatric and neurological diseases were described and in which neuroactive drugs used to treat or study these diseases were introduced. Groups of undergraduate students were assigned to one of these case studies and asked to give a classroom presentation in which 1) the disease and underlying pathophysiology are described, 2) face and construct validity of the animal model are discussed, and 3) a pharmacological experiment with the associated neuroactive drug to assess predictive validity is presented. After evaluation of the presentations, we found that the students had gained considerable insight into disease phenomenology, its underlying neurophysiology, and the mechanism of action of the neuroactive drug. Moreover, the assignment was very useful in the teaching of experimental design, allowing an in-depth discussion of experimental control groups and the prediction of outcomes in these groups if the animal model were to display predictive validity. Finally, the highly positive responses in the student evaluation forms indicated that the assignment was of great interest to the students. Hence, the case studies developed here constitute a very useful tool for teaching neurophysiology, neuropharmacology, and experimental design.

  13. Validation of the measure automobile emissions model : a statistical analysis

    DOT National Transportation Integrated Search

    2000-09-01

    The Mobile Emissions Assessment System for Urban and Regional Evaluation (MEASURE) model provides an external validation capability for hot stabilized option; the model is one of several new modal emissions models designed to predict hot stabilized e...

  14. User-Centered Design (UCD) Process Description

    DTIC Science & Technology

    2014-12-01

    where critical tasks and decision points are identified. From here, paper wireframe storyboards are sketched and then validated with cognitive walk-throughs. Low-fidelity prototypes are then created and checked

  15. Advanced Concept Modeling

    NASA Technical Reports Server (NTRS)

    Chaput, Armand; Johns, Zachary; Hodges, Todd; Selfridge, Justin; Bevirt, Joeben; Ahuja, Vivek

    2015-01-01

    Advanced Concepts Modeling software validation, analysis, and design. This was a National Institute of Aerospace contract comprising many efforts, ranging from software development and validation for structures and aerodynamics, through flight control development and aeropropulsive analysis, to UAV piloting services.

  16. Adolescent Populations Research Needs - NCS Dietary Assessment Literature Review

    Cancer.gov

    As with school-age children, it is difficult to draw conclusions about the validity of available dietary assessment instruments for adolescents because of the differences in instruments, research designs, reference methods, and populations in the validation literature.

  17. QSAR-Based Models for Designing Quinazoline/Imidazothiazoles/Pyrazolopyrimidines Based Inhibitors against Wild and Mutant EGFR

    PubMed Central

    Chauhan, Jagat Singh; Dhanda, Sandeep Kumar; Singla, Deepak; Agarwal, Subhash M.; Raghava, Gajendra P. S.

    2014-01-01

    Overexpression of EGFR is responsible for causing a number of cancers, including lung cancer, as it activates various downstream signaling pathways. Thus, it is important to control EGFR function in order to treat cancer patients. It is well established that inhibiting ATP binding within the EGFR kinase domain regulates its function. The existing quinazoline-derivative-based drugs used for treating lung cancer inhibit the wild type of EGFR. In this study, we have made a systematic attempt to develop QSAR models for designing quinazoline derivatives that could inhibit wild EGFR and imidazothiazole/pyrazolopyrimidine derivatives against mutant EGFR. Three types of prediction methods have been developed to design inhibitors against EGFR (wild, mutant and both). First, we developed models for predicting inhibitors against wild-type EGFR by training and testing on a dataset containing 128 quinazoline-based inhibitors. This dataset was divided into two subsets, wild_train and wild_valid, containing 103 and 25 inhibitors respectively. The models were trained and tested on the wild_train dataset while performance was evaluated on wild_valid, the validation dataset. We achieved a maximum correlation between predicted and experimentally determined inhibition (IC50) of 0.90 on the validation dataset. Secondly, we developed models for predicting inhibitors against mutant EGFR (L858R) on the mutant_train and mutant_valid datasets and achieved maximum correlations between 0.834 and 0.850 on these datasets. Finally, an integrated hybrid model was developed on a dataset containing wild and mutant inhibitors, achieving maximum correlations between 0.761 and 0.850 on the different datasets. In order to promote open source drug discovery, we developed a webserver for designing inhibitors against wild and mutant EGFR along with providing standalone (http://osddlinux.osdd.net/) and Galaxy (http://osddlinux.osdd.net:8001) versions of the software. 
We hope our webserver (http://crdd.osdd.net/oscadd/ntegfr/) will play a vital role in designing new anticancer drugs. PMID:24992720
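
    The performance figures reported above are correlations between predicted and experimentally determined inhibition values. A minimal sketch of the Pearson correlation computation, on fabricated pIC50 pairs rather than the paper's data, is:

```python
import math

# Pearson r between two paired sequences; the values are invented examples.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

predicted = [6.1, 7.0, 5.5, 8.2, 6.8]   # hypothetical predicted pIC50 values
observed  = [6.0, 7.3, 5.2, 8.0, 7.1]   # hypothetical measured pIC50 values
print(round(pearson_r(predicted, observed), 3))
```

A correlation near 0.90, as reported for the wild-type validation set, indicates that predicted rankings of inhibitor potency track the experimental measurements closely.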

  18. Flight control optimization from design to assessment application on the Cessna Citation X business aircraft

    NASA Astrophysics Data System (ADS)

    Boughari, Yamina

    New methodologies have been developed to optimize the integration, testing and certification of flight control systems, an expensive process in the aerospace industry. This thesis investigates the stability of the Cessna Citation X aircraft without control, and then optimizes two different flight controllers from design to validation. The aircraft's model was obtained from the data provided by the Research Aircraft Flight Simulator (RAFS) of the Cessna Citation business aircraft. To increase the stability and control of aircraft systems, optimizations of two different flight control designs were performed: 1) the Linear Quadratic Regulation and the Proportional Integral controllers were optimized using the Differential Evolution algorithm and the level 1 handling qualities as the objective function. The results were validated for the linear and nonlinear aircraft models, and some of the clearance criteria were investigated; and 2) the Hinfinity control method was applied on the stability and control augmentation systems. To minimize the time required for flight control design and its validation, an optimization of the controllers design was performed using the Differential Evolution (DE), and the Genetic algorithms (GA). The DE algorithm proved to be more efficient than the GA. New tools for visualization of the linear validation process were also developed to reduce the time required for the flight controller assessment. Matlab software was used to validate the different optimization algorithms' results. Research platforms of the aircraft's linear and nonlinear models were developed, and compared with the results of flight tests performed on the Research Aircraft Flight Simulator. Some of the clearance criteria of the optimized H-infinity flight controller were evaluated, including its linear stability, eigenvalues, and handling qualities criteria. 
Nonlinear simulations of the maneuvers criteria were also investigated during this research to assess the Cessna Citation X's flight controller clearance, and therefore, for its anticipated certification.

  19. Verifying and Validating Proposed Models for FSW Process Optimization

    NASA Technical Reports Server (NTRS)

    Schneider, Judith

    2008-01-01

    This slide presentation reviews Friction Stir Welding (FSW) and the attempts to model the process in order to optimize and improve the process. The studies are ongoing to validate and refine the model of metal flow in the FSW process. There are slides showing the conventional FSW process, a couple of weld tool designs and how the design interacts with the metal flow path. The two basic components of the weld tool are shown, along with geometries of the shoulder design. Modeling of the FSW process is reviewed. Other topics include (1) Microstructure features, (2) Flow Streamlines, (3) Steady-state Nature, and (4) Grain Refinement Mechanisms

  20. Structurally compliant rocket engine combustion chamber: Experimental and analytical validation

    NASA Technical Reports Server (NTRS)

    Jankovsky, Robert S.; Arya, Vinod K.; Kazaroff, John M.; Halford, Gary R.

    1994-01-01

    A new, structurally compliant rocket engine combustion chamber design has been validated through analysis and experiment. Subscale, tubular channel chambers have been cyclically tested and analytically evaluated. Cyclic lives were determined to have a potential for 1000 percent increase over those of rectangular channel designs, the current state of the art. Greater structural compliance in the circumferential direction gave rise to lower thermal strains during hot firing, resulting in lower thermal strain ratcheting and longer predicted fatigue lives. Thermal, structural, and durability analyses of the combustion chamber design, involving cyclic temperatures, strains, and low-cycle fatigue lives, have corroborated the experimental observations.

  1. Design of a CO2 Twin Rotary Compressor for a Heat Pump Water Heater

    NASA Astrophysics Data System (ADS)

    Ahn, Jong Min; Kim, Woo Young; Kim, Hyun Jin; Cho, Sung Oug; Seo, Jong Cheun

    2010-06-01

    For a CO2 heat pump water heater, a one-stage twin rotary compressor has been designed. As a design tool, a computer simulation program for the compressor performance was developed. Validation of the simulation program was carried out for a bench-model compressor in a compressor calorimeter. Cooling capacity and compressor input power agreed reasonably well between the simulation and the calorimeter test. Good agreement on the P-V diagram between the simulation and the test was also obtained. With this validated compressor simulation program, a parametric study was performed to arrive at optimum dimensions for the compression chamber.

  2. Automatic Detection of Whole Night Snoring Events Using Non-Contact Microphone

    PubMed Central

    Dafna, Eliran; Tarasiuk, Ariel; Zigel, Yaniv

    2013-01-01

    Objective Although awareness of sleep disorders is increasing, limited information is available on whole night detection of snoring. Our study aimed to develop and validate a robust, high performance, and sensitive whole-night snore detector based on non-contact technology. Design Sounds during polysomnography (PSG) were recorded using a directional condenser microphone placed 1 m above the bed. An AdaBoost classifier was trained and validated on manually labeled snoring and non-snoring acoustic events. Patients Sixty-seven subjects (age 52.5±13.5 years, BMI 30.8±4.7 kg/m2, m/f 40/27) referred for PSG for obstructive sleep apnea diagnoses were prospectively and consecutively recruited. Twenty-five subjects were used for the design study; the validation study was blindly performed on the remaining forty-two subjects. Measurements and Results To train the proposed sound detector, >76,600 acoustic episodes collected in the design study were manually classified by three scorers into snore and non-snore episodes (e.g., bedding noise, coughing, environmental). A feature selection process was applied to select the most discriminative features extracted from time and spectral domains. The average snore/non-snore detection rate (accuracy) for the design group was 98.4% based on a ten-fold cross-validation technique. When tested on the validation group, the average detection rate was 98.2% with sensitivity of 98.0% (snore as a snore) and specificity of 98.3% (noise as noise). Conclusions Audio-based features extracted from time and spectral domains can accurately discriminate between snore and non-snore acoustic events. This audio analysis approach enables detection and analysis of snoring sounds from a full night in order to produce quantified measures for objective follow-up of patients. PMID:24391903
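
    The ten-fold cross-validation scheme behind the design-group accuracy estimate can be sketched as follows. The labels are fabricated, and a trivial majority-class predictor stands in for the paper's AdaBoost classifier, whose features and training data are not available here.

```python
# k-fold cross-validation from first principles (illustrative stand-in model).

def k_fold_indices(n, k):
    """Yield (train, test) index lists for k roughly equal, contiguous folds."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        test = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, test
        start += size

def majority_class(labels):
    # Constant predictor: always output the most frequent training label
    return max(set(labels), key=labels.count)

labels = [1] * 70 + [0] * 30   # fabricated snore(1)/non-snore(0) labels
accuracies = []
for train, test in k_fold_indices(len(labels), 10):
    pred = majority_class([labels[i] for i in train])
    acc = sum(labels[i] == pred for i in test) / len(test)
    accuracies.append(acc)
print(round(sum(accuracies) / len(accuracies), 2))  # → 0.7
```

In the study itself, each fold's model is retrained on the other nine folds' acoustic features and the ten per-fold accuracies are averaged, which is what the reported 98.4% figure summarizes.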

  3. Work design and management in the manufacturing sector: development and validation of the Work Organisation Assessment Questionnaire.

    PubMed

    Griffiths, A; Cox, T; Karanika, M; Khan, S; Tomás, J M

    2006-10-01

    To examine the factor structure, reliability, and validity of a new context-specific questionnaire for the assessment of work and organisational factors. The Work Organisation Assessment Questionnaire (WOAQ) was developed as part of a risk assessment and risk reduction methodology for hazards inherent in the design and management of work in the manufacturing sector. Two studies were conducted. Data were collected from 524 white- and blue-collar employees from a range of manufacturing companies. Exploratory factor analysis was carried out on 28 items that described the most commonly reported failures of work design and management in companies in the manufacturing sector. Concurrent validity data were also collected. A reliability study was conducted with a further 156 employees. Principal component analysis, with varimax rotation, revealed a strong 28-item, five factor structure. The factors were named: quality of relationships with management, reward and recognition, workload, quality of relationships with colleagues, and quality of physical environment. Analyses also revealed a more general summative factor. Results indicated that the questionnaire has good internal consistency and test-retest reliability and validity. Being associated with poor employee health and changes in health related behaviour, the WOAQ factors are possible hazards. It is argued that the strength of those associations offers some estimation of risk. Feedback from the organisations involved indicated that the WOAQ was easy to use and meaningful for them as part of their risk assessment procedures. The studies reported here describe a model of the hazards to employee health and health related behaviour inherent in the design and management of work in the manufacturing sector. It offers an instrument for their assessment. The scales derived which form the WOAQ were shown to be reliable, valid, and meaningful to the user population.

  4. Clinical trial designs for testing biomarker-based personalized therapies

    PubMed Central

    Lai, Tze Leung; Lavori, Philip W; Shih, Mei-Chiung I; Sikic, Branimir I

    2014-01-01

    Background Advances in molecular therapeutics in the past decade have opened up new possibilities for treating cancer patients with personalized therapies, using biomarkers to determine which treatments are most likely to benefit them, but there are difficulties and unresolved issues in the development and validation of biomarker-based personalized therapies. We develop a new clinical trial design to address some of these issues. The goal is to capture the strengths of the frequentist and Bayesian approaches to address this problem in the recent literature and to circumvent their limitations. Methods We use generalized likelihood ratio tests of the intersection null and enriched strategy null hypotheses to derive a novel clinical trial design for the problem of advancing promising biomarker-guided strategies toward eventual validation. We also investigate the usefulness of adaptive randomization (AR) and futility stopping proposed in the recent literature. Results Simulation studies demonstrate the advantages of testing both the narrowly focused enriched strategy null hypothesis related to validating a proposed strategy and the intersection null hypothesis that can accommodate a potentially successful strategy. AR and early termination of ineffective treatments offer increased probability of receiving the preferred treatment and better response rates for patients in the trial, at the expense of more complicated inference under small-to-moderate total sample sizes and some reduction in power. Limitations The binary response used in the development phase may not be a reliable indicator of treatment benefit on long-term clinical outcomes. In the proposed design, the biomarker-guided strategy (BGS) is not compared to ‘standard of care’, such as physician’s choice that may be informed by patient characteristics. Therefore, a positive result does not imply superiority of the BGS to ‘standard of care’. The proposed design and tests are valid asymptotically. 
Simulations are used to examine small-to-moderate sample properties. Conclusion Innovative clinical trial designs are needed to address the difficulties and issues in the development and validation of biomarker-based personalized therapies. The article shows the advantages of using likelihood inference and interim analysis to meet the challenges in the sample size needed and in the constantly evolving biomarker landscape and genomic and proteomic technologies. PMID:22397801

  5. Design of a Syntax Validation Tool for Requirements Analysis Using Structured Analysis and Design Technique (SADT)

    DTIC Science & Technology

    1988-09-01

    analysis phase of the software life cycle (16:1-1). While editing a SADT diagram, the tool should be able to check whether or not structured analysis... diagrams are valid for the SADT syntax, produce error messages, perform error recovery, and offer editing suggestions. Thus, this tool must have the... directed editors are editors which use the syntax of the programming language while editing a program. While text editors treat programs as text, syntax

  6. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kandler A; Santhanagopalan, Shriram; Yang, Chuanbo

    Computer models are helping to accelerate the design and validation of next generation batteries and provide valuable insights not possible through experimental testing alone. Validated 3-D physics-based models exist for predicting electrochemical performance, thermal and mechanical response of cells and packs under normal and abuse scenarios. The talk describes present efforts to make the models better suited for engineering design, including improving their computation speed, developing faster processes for model parameter identification including under aging, and predicting the performance of a proposed electrode material recipe a priori using microstructure models.

  7. Release Fixed Heel Point (FHP) Accommodation Model Verification and Validation (V and V) Plan - Rev A

    DTIC Science & Technology

    2017-01-23

    occupant work space, central 90% of the Soldier population, encumbrance, posture and position, verification and validation, computer aided design... factors engineers could benefit by working with vehicle designers to perform virtual assessments in CAD when there is not enough time and/or funding to

  8. Clinical Design Sciences: A View from Sister Design Efforts.

    ERIC Educational Resources Information Center

    Zaritsky, Raul; Kelly, Anthony E.; Flowers, Woodie; Rogers, Everett; O'Neill, Patrick

    2003-01-01

    Asserts that the social sciences are clinical-like endeavors, and the way that "sister" fields discover and validate their results may inform research practice in education. Describes three fields of design that confront similar societal demands for improvement (engineering product design, research on the diffusion of innovations, and…

  9. Research, Development, and Validation of a School Leader's Resource Guide for the Facilitation of Social Media Use by School Staff

    ERIC Educational Resources Information Center

    Gooch, Deanna L.

    2012-01-01

    Many school leaders do not understand their rights and responsibilities to facilitate social media use by their staff in P-12 education. This dissertation was designed to research, develop, and validate a resource guide school leaders can use to facilitate social media use by school staff. "Research, Development, and Validation of a School…

  10. Assessing the Efficacy of the Measure of Understanding of Macroevolution as a Valid Tool for Undergraduate Non-Science Majors

    ERIC Educational Resources Information Center

    Romine, William Lee; Walter, Emily Marie

    2014-01-01

    Efficacy of the Measure of Understanding of Macroevolution (MUM) as a measurement tool has been a point of contention among scholars needing a valid measure for knowledge of macroevolution. We explored the structure and construct validity of the MUM using Rasch methodologies in the context of a general education biology course designed with an…

  11. Cross-Validation of easyCBM Reading Cut Scores in Oregon: 2009-2010. Technical Report #1108

    ERIC Educational Resources Information Center

    Park, Bitnara Jasmine; Irvin, P. Shawn; Anderson, Daniel; Alonzo, Julie; Tindal, Gerald

    2011-01-01

    This technical report presents results from a cross-validation study designed to identify optimal cut scores when using easyCBM[R] reading tests in Oregon. The cross-validation study analyzes data from the 2009-2010 academic year for easyCBM[R] reading measures. A sample of approximately 2,000 students per grade, randomly split into two groups of…

  12. The Predictive Validity of the Tilburg Frailty Indicator: Disability, Health Care Utilization, and Quality of Life in a Population at Risk

    ERIC Educational Resources Information Center

    Gobbens, Robbert J. J.; van Assen, Marcel A. L. M.; Luijkx, Katrien G.; Schols, Jos M. G. A.

    2012-01-01

    Purpose: To assess the predictive validity of frailty and its domains (physical, psychological, and social), as measured by the Tilburg Frailty Indicator (TFI), for the adverse outcomes disability, health care utilization, and quality of life. Design and Methods: The predictive validity of the TFI was tested in a representative sample of 484…

  13. Phase 1 Validation Testing and Simulation for the WEC-Sim Open Source Code

    NASA Astrophysics Data System (ADS)

    Ruehl, K.; Michelen, C.; Gunawan, B.; Bosma, B.; Simmons, A.; Lomonaco, P.

    2015-12-01

    WEC-Sim is an open source code to model wave energy converter performance in operational waves, developed by Sandia and NREL and funded by the US DOE. The code is a time-domain modeling tool developed in MATLAB/SIMULINK using the multibody dynamics solver SimMechanics, and solves the WEC's governing equations of motion using the Cummins time-domain impulse response formulation in 6 degrees of freedom. The WEC-Sim code has undergone verification through code-to-code comparisons; however, validation of the code has been limited to publicly available experimental data sets. While these data sets provide preliminary code validation, the experimental tests were not explicitly designed for code validation, and as a result are limited in their ability to validate the full functionality of the WEC-Sim code. Therefore, dedicated physical model tests for WEC-Sim validation have been performed. This presentation provides an overview of the WEC-Sim validation experimental wave tank tests performed at Oregon State University's Directional Wave Basin at Hinsdale Wave Research Laboratory. Phase 1 of experimental testing was focused on device characterization and completed in Fall 2015. Phase 2 is focused on WEC performance and scheduled for Winter 2015/2016. These experimental tests were designed explicitly to validate the performance of the WEC-Sim code and its new feature additions. Upon completion, the WEC-Sim validation data set will be made publicly available to the wave energy community. For the physical model test, a controllable model of a floating wave energy converter has been designed and constructed. The instrumentation includes state-of-the-art devices to measure pressure fields, motions in 6 DOF, multi-axial load cells, torque transducers, position transducers, and encoders. The model also incorporates a fully programmable Power-Take-Off system which can be used to generate or absorb wave energy. 
Numerical simulations of the experiments using WEC-Sim will be presented. These simulations highlight the code features included in the latest release of WEC-Sim (v1.2), including: wave directionality, nonlinear hydrostatics and hydrodynamics, user-defined wave elevation time series, state-space radiation, and WEC-Sim compatibility with BEMIO (an open-source AQWA/WAMIT/NEMOH coefficient parser).
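For reference, the Cummins impulse-response formulation that the abstract says WEC-Sim solves can be written, per body, as the following standard statement (WEC-Sim's actual force decomposition may include additional terms, e.g. mooring loads, beyond those shown here):

$$ (M + A_\infty)\,\ddot{X}(t) + \int_0^t K(t-\tau)\,\dot{X}(\tau)\,d\tau + C\,X(t) = F_{exc}(t) + F_{PTO}(t) + F_{v}(t) $$

where $M$ is the mass/inertia matrix, $A_\infty$ the added mass at infinite frequency, $K$ the radiation impulse-response kernel, $C$ the hydrostatic stiffness, $F_{exc}$ the wave excitation force, and $F_{PTO}$ and $F_v$ the power-take-off and viscous damping forces, all in 6 degrees of freedom.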

  14. Modification and Validation of Conceptual Design Aerodynamic Prediction Method HASC95 With VTXCHN

    NASA Technical Reports Server (NTRS)

    Albright, Alan E.; Dixon, Charles J.; Hegedus, Martin C.

    1996-01-01

    A conceptual/preliminary design level subsonic aerodynamic prediction code HASC (High Angle of Attack Stability and Control) has been improved in several areas, validated, and documented. The improved code incorporates revised methodologies for increased accuracy and robustness, as well as simplified input/output files. An engineering method called VTXCHN (Vortex Chine) for predicting nose vortex shedding from circular and non-circular forebodies with sharp chine edges has been improved and integrated into the HASC code. This report contains a summary of modifications, a description of the code, a user's guide, and validation of HASC. Appendices include discussion of a new HASC utility code, listings of sample input and output files, and a discussion of the application of HASC to buffet analysis.

  15. Employing Design and Development Research (DDR): Approaches in the Design and Development of Online Arabic Vocabulary Learning Games Prototype

    ERIC Educational Resources Information Center

    Sahrir, Muhammad Sabri; Alias, Nor Aziah; Ismail, Zawawi; Osman, Nurulhuda

    2012-01-01

    The design and development research, first proposed by Brown and Collins in the 1990s, is currently among the well-known methods in educational research to test theory and validate its practicality. The method is also known as developmental research, design research, design-based research, formative research and design-cased and possesses…

  16. A guided search genetic algorithm using mined rules for optimal affective product design

    NASA Astrophysics Data System (ADS)

    Fung, Chris K. Y.; Kwong, C. K.; Chan, Kit Yan; Jiang, H.

    2014-08-01

    Affective design is an important aspect of new product development, especially for consumer products, to achieve a competitive edge in the marketplace. It can help companies to develop new products that can better satisfy the emotional needs of customers. However, product designers usually encounter difficulties in determining the optimal settings of the design attributes for affective design. In this article, a novel guided search genetic algorithm (GA) approach is proposed to determine the optimal design attribute settings for affective design. The optimization model formulated based on the proposed approach applied constraints and guided search operators, which were formulated based on mined rules, to guide the GA search and to achieve desirable solutions. A case study on the affective design of mobile phones was conducted to illustrate the proposed approach and validate its effectiveness. Validation tests were conducted, and the results show that the guided search GA approach outperforms the GA approach without the guided search strategy in terms of GA convergence and computational time. In addition, the guided search optimization model is capable of improving GA to generate good solutions for affective design.

  17. Block 2 Solid Rocket Motor (SRM) conceptual design study. Volume 1: Appendices

    NASA Technical Reports Server (NTRS)

    1986-01-01

    The design studies task implements the primary objective of developing a Block II Solid Rocket Motor (SRM) design offering improved flight safety and reliability. The SRM literature was reviewed. The Preliminary Development and Validation Plan is presented.

  18. Validation and implementation of bridge design specifications for barge impact loading.

    DOT National Transportation Integrated Search

    2014-07-01

    Since 1991 in the United States, the design of highway bridges to resist collisions by errant waterway vessels has been carried out in accordance with design provisions published by AASHTO. These provisions have remained largely unchanged for more ...

  19. True Experimental Design.

    ERIC Educational Resources Information Center

    Huck, Schuyler W.

    1991-01-01

    This poem, with stanzas in limerick form, refers humorously to the many threats to validity posed by problems in research design, including problems of sample selection, data collection, and data analysis. (SLD)

  20. 24 CFR 598.425 - Validation of designation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... URBAN DEVELOPMENT COMMUNITY FACILITIES URBAN EMPOWERMENT ZONES: ROUND TWO AND THREE DESIGNATIONS Post... of any Empowerment Zone. (b) HUD may approve an Empowerment Zone's request for boundary modification...

  1. Validity of Cognitive Load Measures in Simulation-Based Training: A Systematic Review.

    PubMed

    Naismith, Laura M; Cavalcanti, Rodrigo B

    2015-11-01

    Cognitive load theory (CLT) provides a rich framework to inform instructional design. Despite the applicability of CLT to simulation-based medical training, findings from multimedia learning have not been consistently replicated in this context. This lack of transferability may be related to issues in measuring cognitive load (CL) during simulation. The authors conducted a review of CLT studies across simulation training contexts to assess the validity evidence for different CL measures. PRISMA standards were followed. For 48 studies selected from a search of MEDLINE, EMBASE, PsycInfo, CINAHL, and ERIC databases, information was extracted about study aims, methods, validity evidence of measures, and findings. Studies were categorized on the basis of findings and prevalence of validity evidence collected, and statistical comparisons between measurement types and research domains were pursued. CL during simulation training has been measured in diverse populations including medical trainees, pilots, and university students. Most studies (71%; 34) used self-report measures; others included secondary task performance, physiological indices, and observer ratings. Correlations between CL and learning varied from positive to negative. Overall validity evidence for CL measures was low (mean score 1.55/5). Studies reporting greater validity evidence were more likely to report that high CL impaired learning. The authors found evidence that inconsistent correlations between CL and learning may be related to issues of validity in CL measures. Further research would benefit from rigorous documentation of validity and from triangulating measures of CL. This can better inform CLT instructional design for simulation-based medical training.

  2. Test-retest reliability and cross validation of the functioning everyday with a wheelchair instrument.

    PubMed

    Mills, Tamara L; Holm, Margo B; Schmeler, Mark

    2007-01-01

    The purpose of this study was to establish the test-retest reliability and content validity of an outcomes tool designed to measure the effectiveness of seating-mobility interventions on the functional performance of individuals who use wheelchairs or scooters as their primary seating-mobility device. The instrument, Functioning Everyday With a Wheelchair (FEW), is a questionnaire designed to measure perceived user function related to wheelchair/scooter use. Using consumer-generated items, FEW Beta Version 1.0 was developed and test-retest reliability was established. Cross-validation of FEW Beta Version 1.0 was then carried out with five samples of seating-mobility users to establish content validity. Based on the content validity study, FEW Version 2.0 was developed and administered to seating-mobility consumers to examine its test-retest reliability. FEW Beta Version 1.0 yielded an intraclass correlation coefficient (ICC) Model (3,k) of .92, p < .001, and the content validity results revealed that FEW Beta Version 1.0 captured 55% of seating-mobility goals reported by consumers across five samples. FEW Version 2.0 yielded ICC(3,k) = .86, p < .001, and captured 98.5% of consumers' seating-mobility goals. The cross-validation study identified new categories of seating-mobility goals for inclusion in FEW Version 2.0, and the content validity of FEW Version 2.0 was confirmed. FEW Beta Version 1.0 and FEW Version 2.0 were highly stable in their measurement of participants' seating-mobility goals over a 1-week interval.
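The ICC(3,k) statistic reported above comes from a two-way mixed-effects ANOVA decomposition (consistency, average of k measurements). A minimal sketch of that computation; the function name and the toy test-retest data are illustrative assumptions, not values from the study:

```python
def icc_3k(ratings):
    """ICC(3,k): two-way mixed effects, consistency, average of k measurements.

    ratings: list of [subject][occasion] scores, e.g. test vs. retest.
    """
    n = len(ratings)          # subjects
    k = len(ratings[0])       # occasions (administrations/raters)
    grand = sum(sum(row) for row in ratings) / (n * k)
    row_means = [sum(row) / k for row in ratings]
    col_means = [sum(ratings[i][j] for i in range(n)) / n for j in range(k)]
    ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # between subjects
    ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # between occasions
    ss_total = sum((x - grand) ** 2 for row in ratings for x in row)
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / ms_rows

# Perfectly consistent test-retest scores give ICC = 1.0;
# small retest noise pulls the coefficient just below 1.
perfect = [[1, 1], [2, 2], [3, 3]]
noisy = [[1, 1.2], [2, 1.9], [3, 3.1], [4, 3.8], [5, 5.2]]
print(icc_3k(perfect), round(icc_3k(noisy), 3))
```

High ICC values such as the .92 and .86 reported for the FEW versions indicate that between-subject variance dominates the residual test-retest variance.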

  3. Thermo-mechanical evaluation of carbon-carbon primary structure for SSTO vehicles

    NASA Astrophysics Data System (ADS)

    Croop, Harold C.; Lowndes, Holland B.; Hahn, Steven E.; Barthel, Chris A.

    1998-01-01

    An advanced development program to demonstrate carbon-carbon composite structure for use as primary load carrying structure has entered the experimental validation phase. The component being evaluated is a wing torque box section for a single-stage-to-orbit (SSTO) vehicle. The validation or demonstration component features an advanced carbon-carbon design incorporating 3D woven graphite preforms, integral spars, oxidation inhibited matrix, chemical vapor deposited (CVD) oxidation protection coating, and ceramic matrix composite fasteners. The validation component represents the culmination of a four phase design and fabrication development effort. Extensive developmental testing was performed to verify material properties and integrity of basic design features before committing to fabrication of the full scale box. The wing box component is now being set up for testing in the Air Force Research Laboratory Structural Test Facility at Wright-Patterson Air Force Base, Ohio. One of the important developmental tests performed in support of the design and planned testing of the full scale box was the fabrication and test of a skin/spar trial subcomponent. The trial subcomponent incorporated critical features of the full scale wing box design. This paper discusses the results of the trial subcomponent test which served as a pathfinder for the upcoming full scale box test.

  4. BioNetCAD: design, simulation and experimental validation of synthetic biochemical networks

    PubMed Central

    Rialle, Stéphanie; Felicori, Liza; Dias-Lopes, Camila; Pérès, Sabine; El Atia, Sanaâ; Thierry, Alain R.; Amar, Patrick; Molina, Franck

    2010-01-01

    Motivation: Synthetic biology studies how to design and construct biological systems with functions that do not exist in nature. Biochemical networks, although easier to control, have been used less frequently than genetic networks as a base on which to build a synthetic system. To date, no clear engineering principles exist for designing such cell-free biochemical networks. Results: We describe a methodology for the construction of synthetic biochemical networks based on three main steps: design, simulation and experimental validation. We developed BioNetCAD to help users go through these steps. BioNetCAD allows designing abstract networks that can be implemented thanks to CompuBioTicDB, a database of parts for synthetic biology. BioNetCAD also enables simulations with the HSim software and with classical ordinary differential equations (ODEs). We demonstrate with a case study that BioNetCAD can rationalize and reduce further experimental validation during the construction of a biochemical network. Availability and implementation: BioNetCAD is freely available at http://www.sysdiag.cnrs.fr/BioNetCAD. It is implemented in Java and supported on MS Windows. CompuBioTicDB is freely accessible at http://compubiotic.sysdiag.cnrs.fr/ Contact: stephanie.rialle@sysdiag.cnrs.fr; franck.molina@sysdiag.cnrs.fr Supplementary information: Supplementary data are available at Bioinformatics online. PMID:20628073

  5. Development and testing of a double length pets for the CLIC experimental area

    NASA Astrophysics Data System (ADS)

    Sánchez, L.; Carrillo, D.; Gavela, D.; Lara, A.; Rodríguez, E.; Gutiérrez, J. L.; Calero, J.; Toral, F.; Samoshkin, A.; Gudkov, D.; Riddone, G.

    2014-05-01

    CLIC (Compact Linear Collider) is a future e+e- collider based on normal-conducting technology, currently under study at CERN. Its design is based on a novel two-beam acceleration scheme: the main beam gets RF power extracted from a drive beam through power extraction and transfer structures (PETS). The technical feasibility of CLIC is currently being proved at the third CLIC Test Facility (CTF3), which includes the CLIC experimental area (CLEX). Two double-length CLIC PETS will be installed in CLEX to validate their performance with beam. This paper focuses on the engineering design, fabrication and validation of this first PETS prototype. The design consists of eight identical bars, separated by radial slots in which damping material is located to absorb transverse wakefields, and two compact couplers placed at both ends of the bars to extract the generated power. The PETS bars are housed inside a vacuum tank designed to make the PETS as compact as possible. Several joining techniques, such as vacuum brazing, electron-beam welding and arc welding, were used to complete the assembly. Finally, several tests, such as dimensional control and leak testing, were carried out to validate the design and fabrication methods. In addition, RF measurements at low power were made to study frequency tuning.

  6. User-Centered Iterative Design of a Collaborative Virtual Environment

    DTIC Science & Technology

    2001-03-01

    cognitive task analysis methods to study land navigators. This study was intended to validate the use of user-centered design methodologies for the design of...have explored the cognitive aspects of collaborative human wayfinding and design for collaborative virtual environments. Further investigation of design paradigms should include cognitive task analysis and behavioral task analysis.

  7. An Analytic Creativity Assessment Scale for Digital Game Story Design: Construct Validity, Internal Consistency and Interrater Reliability

    ERIC Educational Resources Information Center

    Chuang, Tsung-Yen; Huang, Yun-Hsuan

    2015-01-01

    Mobile technology has rapidly made digital games a popular entertainment to this digital generation, and thus digital game design received considerable attention in both the game industry and design education. Digital game design involves diverse dimensions in which digital game story design (DGSD) particularly attracts our interest, as the…

  8. Assessing Reflective Thinking in Solving Design Problems: The Development of a Questionnaire

    ERIC Educational Resources Information Center

    Hong, Yi-Chun; Choi, Ikseon

    2015-01-01

    Reflection is a critical factor in solving design problems. Using good methods to observe designers' reflection is essential to inform the design of the learning environments that support the development of design problem-solving skills. In this study, we have developed and validated a novel self-reporting questionnaire as an efficient instrument…

  9. Applied Virtual Reality Research and Applications at NASA/Marshall Space Flight Center

    NASA Technical Reports Server (NTRS)

    Hale, Joseph P.

    1995-01-01

    A Virtual Reality (VR) applications program has been under development at NASA/Marshall Space Flight Center (MSFC) since 1989. The objectives of the MSFC VR Applications Program are to develop, assess, validate, and utilize VR in hardware development, operations development and support, mission operations training and science training. Before this technology can be utilized with confidence in these applications, it must be validated for each particular class of application. That is, the precision and reliability with which it maps onto real settings and scenarios, representative of a class, must be calculated and assessed. The approach of the MSFC VR Applications Program is to develop and validate appropriate virtual environments and associated object kinematic and behavior attributes for specific classes of applications. These application-specific environments and associated simulations will be validated, where possible, through empirical comparisons with existing, accepted tools and methodologies. These validated VR analytical tools will then be available for use in the design and development of space systems and operations and in training and mission support systems. Specific validation studies for selected classes of applications have been completed or are currently underway. These include macro-ergonomic "control-room class" design analysis, Spacelab stowage reconfiguration training, a full-body micro-gravity functional reach simulator, and a gross anatomy teaching simulator. This paper describes the MSFC VR Applications Program and the validation studies.

  10. CFD Validation with Experiment and Verification with Physics of a Propellant Damping Device

    NASA Technical Reports Server (NTRS)

    Yang, H. Q.; Peugeot, John

    2011-01-01

    This paper will document our effort in validating a coupled fluid-structure interaction CFD tool for predicting the performance of a damping device under laboratory conditions. Consistently good comparisons of "blind" CFD predictions against experimental data under various operating conditions, design parameters, and cryogenic environments will be presented. The power of the coupled CFD-structure interaction code in explaining some unexpected phenomena of the device observed during technology development will be illustrated. The evolution of the damper device design inside the LOX tank will be used to demonstrate the contribution of the tool to the understanding, optimization and implementation of the LOX damper in the Ares I vehicle. It is due to the present validation effort that the LOX damper technology has matured to TRL 5. The present effort has also contributed to the transition of the technology from an early conceptual observation to the baseline design for thrust oscillation mitigation for the Ares I within a 10-month period.

  11. A practical guide to surveys and questionnaires.

    PubMed

    Slattery, Eric L; Voelker, Courtney C J; Nussenbaum, Brian; Rich, Jason T; Paniello, Randal C; Neely, J Gail

    2011-06-01

    Surveys with questionnaires play a vital role in decision and policy making in society. Within medicine, including otolaryngology, surveys with questionnaires may be the only method for gathering data on rare or unusual events. In addition, questionnaires can be developed and validated to be used as outcome measures in clinical trials and other clinical research architecture. Consequently, it is fundamentally important that such tools be properly developed and validated. Just asking questions that have not gone through rigorous design and development may be misleading and unfair at best; at worst, they can result in under- or overtreatment and unnecessary expense. Furthermore, it is important that consumers of the data produced by these instruments understand the principles of questionnaire design to interpret results in an optimal and meaningful way. This article presents a practical guide for understanding the methodologies of survey and questionnaire design, including the concepts of validity and reliability, how surveys are administered and implemented, and, finally, biases and pitfalls of surveys.

  12. Assessing college-level learning difficulties and "at riskness" for learning disabilities and ADHD: development and validation of the learning difficulties assessment.

    PubMed

    Kane, Steven T; Walker, John H; Schmidt, George R

    2011-01-01

    This article describes the development and validation of the Learning Difficulties Assessment (LDA), a normed and web-based survey that assesses perceived difficulties with reading, writing, spelling, mathematics, listening, concentration, memory, organizational skills, sense of control, and anxiety in college students. The LDA is designed to (a) map individual learning strengths and weaknesses, (b) provide users with a comparative sense of their academic skills, (c) integrate research in user-interface design to assist those with reading and learning challenges, and (d) identify individuals who may be at risk for learning disabilities and attention-deficit/hyperactivity disorder (ADHD) and who should thus be further assessed. Data from a large-scale 5-year study describing the instrument's validity as a screening tool for learning disabilities and ADHD are presented. This article also describes unique characteristics of the LDA including its user-interface design, normative characteristics, and use as a no-cost screening tool for identifying college students at risk for learning disorders and ADHD.

  13. Cluster designs to assess the prevalence of acute malnutrition by lot quality assurance sampling: a validation study by computer simulation

    PubMed Central

    Olives, Casey; Pagano, Marcello; Deitchler, Megan; Hedt, Bethany L; Egge, Kari; Valadez, Joseph J

    2009-01-01

    Traditional lot quality assurance sampling (LQAS) methods require simple random sampling to guarantee valid results. However, cluster sampling has been proposed to reduce the number of random starting points. This study uses simulations to examine the classification error of two such designs, a 67×3 (67 clusters of three observations) and a 33×6 (33 clusters of six observations) sampling scheme to assess the prevalence of global acute malnutrition (GAM). Further, we explore the use of a 67×3 sequential sampling scheme for LQAS classification of GAM prevalence. Results indicate that, for independent clusters with moderate intracluster correlation for the GAM outcome, the three sampling designs maintain approximate validity for LQAS analysis. Sequential sampling can substantially reduce the average sample size that is required for data collection. The presence of intercluster correlation can impact dramatically the classification error that is associated with LQAS analysis. PMID:20011037
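The classification error that the study estimates by simulation can be illustrated with a crude sketch of an LQAS cluster survey. All concrete numbers below (decision threshold, prevalence values, the Gaussian cluster-effect model) are hypothetical assumptions for illustration, not the paper's actual design parameters:

```python
import random

def lqas_classification_rate(p_true, n_clusters, per_cluster, threshold,
                             cluster_sd=0.0, trials=5000, seed=7):
    """Fraction of simulated surveys classified as 'high GAM prevalence'.

    A site is flagged when the number of malnourished children in the
    sample meets or exceeds the decision threshold. cluster_sd crudely
    models intracluster correlation by perturbing each cluster's
    prevalence around the site-wide value.
    """
    rng = random.Random(seed)
    flagged = 0
    for _ in range(trials):
        cases = 0
        for _ in range(n_clusters):
            p_c = min(1.0, max(0.0, rng.gauss(p_true, cluster_sd)))
            cases += sum(rng.random() < p_c for _ in range(per_cluster))
        if cases >= threshold:
            flagged += 1
    return flagged / trials

# 67x3 design (n = 201) with an illustrative decision threshold of 25 cases:
# sites well above / below the threshold prevalence are classified reliably.
print(lqas_classification_rate(0.20, 67, 3, 25),   # high-prevalence site
      lqas_classification_rate(0.05, 67, 3, 25))   # low-prevalence site
```

Raising `cluster_sd` in this sketch mimics the paper's finding that intracluster correlation widens the "grey zone" of prevalence values where misclassification becomes likely.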

  14. Validation of Finite-Element Models of Persistent-Current Effects in Nb3Sn Accelerator Magnets

    DOE PAGES

    Wang, X.; Ambrosio, G.; Chlachidze, G.; ...

    2015-01-06

    Persistent magnetization currents are induced in superconducting filaments during current ramping in magnets. The resulting perturbation to the design magnetic field leads to field-quality degradation, in particular at low field, where the effect is stronger relative to the main field. The effects observed in NbTi accelerator magnets were reproduced well with the critical-state model. However, this approach becomes less accurate for the calculation of the persistent-current effects observed in Nb3Sn accelerator magnets. Here a finite-element method based on the measured strand magnetization is validated against three state-of-the-art Nb3Sn accelerator magnets featuring different subelement diameters, critical currents, magnet designs and measurement temperatures. The temperature dependence of the persistent-current effects is reproduced. Based on the validated model, the impact of conductor design on the persistent-current effects is discussed. The performance, limitations and possible improvements of the approach are also discussed.

  15. The Development of Statistics Textbook Supported with ICT and Portfolio-Based Assessment

    NASA Astrophysics Data System (ADS)

    Hendikawati, Putriaji; Yuni Arini, Florentina

    2016-02-01

    This research was development research that aimed to produce a Statistics textbook model supported by information and communication technology (ICT) and Portfolio-Based Assessment. The book was designed for college mathematics students, to improve their ability in mathematical connection and communication. The research comprised three stages: define, design, and develop. The textbook consists of 10 chapters, each containing an introduction and core material with examples and exercises. The development phase began with an early design of the book (draft 1), which was then validated by experts. Revision of draft 1 produced draft 2, which underwent a limited readability test. Revision of draft 2 produced draft 3, which was trialed on a small sample to produce a valid textbook model. The data were analysed with descriptive statistics. The analysis showed that the Statistics textbook model supported by ICT and Portfolio-Based Assessment was valid and met the criteria of practicality.

  16. Optimal cooperative control synthesis of active displays

    NASA Technical Reports Server (NTRS)

    Garg, S.; Schmidt, D. K.

    1985-01-01

    A technique is developed that is intended to provide a systematic approach to synthesizing display augmentation for optimal manual control in complex, closed-loop tasks. A cooperative control synthesis technique, previously developed to design pilot-optimal control augmentation for the plant, is extended to incorporate the simultaneous design of performance-enhancing displays. The technique utilizes an optimal control model of the man in the loop. It is applied to the design of a quickening control law for a display and a simple K/s^2 plant, and then to an F-15 type aircraft in a multi-channel task. Utilizing the closed-loop modeling and analysis procedures, the results from the display design algorithm are evaluated and an analytical validation is performed. Experimental validation is recommended for future efforts.

  17. Flutter suppression for the Active Flexible Wing - Control system design and experimental validation

    NASA Technical Reports Server (NTRS)

    Waszak, M. R.; Srinathkumar, S.

    1992-01-01

    The synthesis and experimental validation of a control law for an active flutter suppression system for the Active Flexible Wing wind-tunnel model is presented. The design was accomplished with traditional root locus and Nyquist methods using interactive computer graphics tools and with extensive use of simulation-based analysis. The design approach relied on a fundamental understanding of the flutter mechanism to formulate a simple control law structure. Experimentally, the flutter suppression controller succeeded in simultaneous suppression of two flutter modes, significantly increasing the flutter dynamic pressure despite errors in the design model. The flutter suppression controller was also successfully operated in combination with a rolling maneuver controller to perform flutter suppression during rapid rolling maneuvers.

  18. A methodology for the validated design space exploration of fuel cell powered unmanned aerial vehicles

    NASA Astrophysics Data System (ADS)

    Moffitt, Blake Almy

    Unmanned Aerial Vehicles (UAVs) are the most dynamic growth sector of the aerospace industry today. The need to provide persistent intelligence, surveillance, and reconnaissance for military operations is driving the planned acquisition of over 5,000 UAVs over the next five years. The most pressing need is for quiet, small UAVs with endurance beyond what is capable with advanced batteries or small internal combustion propulsion systems. Fuel cell systems demonstrate high efficiency, high specific energy, low noise, low temperature operation, modularity, and rapid refuelability, making them a promising enabler of the small, quiet, and persistent UAVs that military planners are seeking. Despite the perceived benefits, the actual near-term performance of fuel cell powered UAVs is unknown. Until the auto industry began spending billions of dollars in research, fuel cell systems were too heavy for useful flight applications. However, the last decade has seen rapid development, with fuel cell gravimetric and volumetric power density nearly doubling every 2-3 years. As a result, a few design studies and demonstrator aircraft have appeared, but overall the design methodology and vehicles are still in their infancy. The design of fuel cell aircraft poses many challenges. Fuel cells differ fundamentally from combustion based propulsion in how they generate power and interact with other aircraft subsystems. As a result, traditional multidisciplinary analysis (MDA) codes are inappropriate. Building new MDAs is difficult since fuel cells are rapidly changing in design, and various competitive architectures exist for balance of plant, hydrogen storage, and all electric aircraft subsystems. In addition, fuel cell design and performance data are closely protected, which makes validation difficult and uncertainty significant. 
Finally, low specific power and high volumes compared to traditional combustion based propulsion result in more highly constrained design spaces that are problematic for design space exploration. To begin addressing the current gaps in fuel cell aircraft development, a methodology has been developed to explore and characterize the near-term performance of fuel cell powered UAVs. The first step of the methodology is the development of a valid MDA. This is accomplished by using propagated uncertainty estimates to guide the decomposition of a MDA into key contributing analyses (CAs) that can be individually refined and validated to increase the overall accuracy of the MDA. To assist in MDA development, a flexible framework for simultaneously solving the CAs is specified. This enables the MDA to be easily adapted to changes in technology and the changes in data that occur throughout a design process. Various CAs that model a polymer electrolyte membrane fuel cell (PEMFC) UAV are developed, validated, and shown to be in agreement with hardware-in-the-loop simulations of a fully developed fuel cell propulsion system. After creating a valid MDA, the final step of the methodology is the synthesis of the MDA with an uncertainty propagation analysis, an optimization routine, and a chance constrained problem formulation. This synthesis allows an efficient calculation of the probabilistic constraint boundaries and Pareto frontiers that will govern the design space and influence design decisions relating to optimization and uncertainty mitigation. A key element of the methodology is uncertainty propagation. The methodology uses Systems Sensitivity Analysis (SSA) to estimate the uncertainty of key performance metrics due to uncertainties in design variables and uncertainties in the accuracy of the CAs. A summary of SSA is provided and key rules for properly decomposing a MDA for use with SSA are provided. 
Verification of SSA uncertainty estimates via Monte Carlo simulations is provided for both an example problem as well as a detailed MDA of a fuel cell UAV. Implementation of the methodology was performed on a small fuel cell UAV designed to carry a 2.2 kg payload with 24 hours of endurance. Uncertainty distributions for both design variables and the CAs were estimated based on experimental results and were found to dominate the design space. To reduce uncertainty and test the flexibility of the MDA framework, CAs were replaced with either empirical or semi-empirical relationships during the optimization process. The final design was validated via a hardware-in-the-loop simulation. Finally, the fuel cell UAV probabilistic design space was studied. A graphical representation of the design space was generated and the optima due to deterministic and probabilistic constraints were identified. The methodology was used to identify Pareto frontiers of the design space, which were shown on contour plots of the design space. Unanticipated discontinuities of the Pareto fronts were observed as different constraints became active, providing useful information on which to base design and development decisions.
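The core pattern in the abstract, first-order sensitivity-based uncertainty propagation checked against Monte Carlo, can be sketched compactly. The performance function, nominal values, and uncertainties below are hypothetical stand-ins, not the dissertation's actual contributing analyses:

```python
import math
import random

def endurance_hours(m_fuel_kg, eta):
    # Hypothetical performance metric for illustration only: endurance
    # taken as proportional to fuel mass and net system efficiency.
    return 24.0 * (m_fuel_kg / 1.0) * (eta / 0.45)

x0 = [1.0, 0.45]        # nominal design point (assumed values)
sigma_x = [0.05, 0.02]  # 1-sigma uncertainties on each input

# First-order (SSA-style) propagation:
#   sigma_y^2 = sum_i (dy/dx_i)^2 * sigma_i^2
h = 1e-6
grads = []
for i in range(len(x0)):
    xp = list(x0)
    xp[i] += h
    grads.append((endurance_hours(*xp) - endurance_hours(*x0)) / h)
sigma_lin = math.sqrt(sum((g * s) ** 2 for g, s in zip(grads, sigma_x)))

# Monte Carlo verification of the linear estimate
rng = random.Random(0)
samples = [endurance_hours(rng.gauss(x0[0], sigma_x[0]),
                           rng.gauss(x0[1], sigma_x[1]))
           for _ in range(20000)]
mean = sum(samples) / len(samples)
sigma_mc = math.sqrt(sum((s - mean) ** 2 for s in samples)
                     / (len(samples) - 1))
print(round(sigma_lin, 3), round(sigma_mc, 3))
```

For nearly linear performance functions with small input uncertainties the two estimates agree closely; strong nonlinearity or large uncertainties is where the Monte Carlo check becomes essential, which is the role it plays in the methodology above.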

  19. SLS Navigation Model-Based Design Approach

    NASA Technical Reports Server (NTRS)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center, where design documentation alone conveyed the information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs), which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Control (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms, which are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper discusses the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, it takes advantage of the overlap between the requirements development and management process and the design and analysis process by efficiently combining the control mechanism (i.e., the requirement) and the design mechanism. The design mechanism is the representation of the component behavior and performance in design and analysis tools.
The focus in the early design process shifts from the development and management of design requirements to the development of usable models, model requirements, and model verification and validation efforts. The models themselves are represented in C/C++ code and accompanying data files. Under the idealized process, potential ambiguity in specification is reduced because the model must be implementable, whereas a requirement is not necessarily subject to this constraint. Further, the models are shown to emulate the hardware during validation. For models developed by the Navigation Team, a common interface/standalone environment was developed. The common environment allows for easy implementation in design and analysis tools. Mechanisms such as unit test cases ensure the implementation behaves as the developer intended. The model verification and validation process provides a very high level of insight into component design. The origin and implementation of the SLS variant of Model-based Design is described from the perspective of the SLS Navigation Team. The format of the models and the requirements is described. The Model-based Design approach has many benefits but is not without potential complications. Key lessons learned from implementing the Model-based Design approach and process, from infancy through verification and certification, are discussed.

  20. Design of an S band narrow-band bandpass BAW filter

    NASA Astrophysics Data System (ADS)

    Gao, Yang; Zhao, Kun-li; Han, Chao

    2017-11-01

An S-band narrowband bandpass BAW filter, with a center frequency of 2.460 GHz, a bandwidth of 41 MHz, an in-band insertion loss of -1.154 dB, a passband ripple of 0.9 dB, and out-of-band rejection of about -42.5 dB at 2.385 GHz and -45.5 dB at 2.506 GHz, was designed for potential UAV measurement and control applications. According to the design specifications, the design proceeded as follows: the stack of each FBAR in the BAW filter was designed using the Mason model; the shape of each FBAR was designed with the apodized-electrode method; and the layout of the BAW filter was then generated. An acoustic-electromagnetic co-simulation model was built to validate the performance of the designed BAW filter. The presented design procedure is a general one with two notable characteristics: 1) an acoustic and electromagnetic co-simulation method is used for final performance validation at the design stage, which ensures that over-optimistic designs produced by the bare 1D Mason model are found and rejected in time; 2) an in-house auto-layout method is used to obtain a compact BAW filter layout, which simplifies iterative trial-and-error work and outputs the in-plane geometry information required by the co-simulation model.

  1. Facilitated Communication: An Experimental Evaluation.

    ERIC Educational Resources Information Center

    Regal, Robert A.; And Others

    1994-01-01

    Nineteen adults with developmental disabilities, judged competent in facilitated communication, participated in a validation study using an information passing design requiring short-term recall of stimulus cards with shapes, colors, and numbers. Results failed to validate facilitated communication for the group as a whole, any individual…

  2. Generalizability and Validity of a Mathematics Performance Assessment.

    ERIC Educational Resources Information Center

    Lane, Suzanne; And Others

    1996-01-01

    Evidence from test results of 3,604 sixth and seventh graders is provided for the generalizability and validity of the Quantitative Understanding: Amplifying Student Achievement and Reasoning (QUASAR) Cognitive Assessment Instrument, which is designed to measure program outcomes and growth in mathematics. (SLD)

  3. Formal Specification and Design Techniques for Wireless Sensor and Actuator Networks

    PubMed Central

    Martínez, Diego; González, Apolinar; Blanes, Francisco; Aquino, Raúl; Simo, José; Crespo, Alfons

    2011-01-01

A current trend in the development and implementation of industrial applications is to use wireless networks to connect the system nodes, mainly to increase application flexibility, reliability, and portability, as well as to reduce the implementation cost. However, the nondeterministic and concurrent behavior of distributed systems makes their analysis and design complex, often resulting in less than satisfactory performance in simulation and test bed scenarios; this is caused by using imprecise models to analyze, validate, and design these systems. Moreover, some simulation platforms do not support these models. This paper presents a design and validation method for Wireless Sensor and Actuator Networks (WSAN) which is based on a minimal set of wireless components represented in Colored Petri Nets (CPN). In summary, the model presented allows users to verify the design properties and structural behavior of the system. PMID:22344203

  4. A design procedure for the handling qualities optimization of the X-29A aircraft

    NASA Technical Reports Server (NTRS)

    Bosworth, John T.; Cox, Timothy H.

    1989-01-01

A design technique for handling qualities improvement was developed for the X-29A aircraft. As with any new aircraft, the X-29A control law designers were presented with a relatively high degree of uncertainty in their mathematical models. The presence of uncertainties and the high level of static instability of the X-29A caused the control law designers to stress stability and robustness over handling qualities. During flight test, the mathematical models of the vehicle were validated or corrected to match the vehicle dynamic behavior. The updated models were then used to fine-tune the control system to provide fighter-like handling characteristics. A design methodology was developed which works within the existing control system architecture to provide improved handling qualities and acceptable stability with a minimum of cost in both implementation and software verification and validation.

  5. Invited Commentary: Beware the Test-Negative Design.

    PubMed

    Westreich, Daniel; Hudgens, Michael G

    2016-09-01

    In this issue of the Journal, Sullivan et al. (Am J Epidemiol. 2016;184(5):345-353) carefully examine the theoretical justification for use of the test-negative design, a common observational study design, in assessing the effectiveness of influenza vaccination. Using modern causal inference methods (in particular, directed acyclic graphs), they describe different threats to the validity of inferences drawn about the effect of vaccination from test-negative design studies. These threats include confounding, selection bias, and measurement error in either the exposure or the outcome. While confounding and measurement error are common in observational studies, the potential for selection bias inherent in the test-negative design brings into question the validity of inferences drawn from such studies. © The Author 2016. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  6. Merlin: Computer-Aided Oligonucleotide Design for Large Scale Genome Engineering with MAGE.

    PubMed

    Quintin, Michael; Ma, Natalie J; Ahmed, Samir; Bhatia, Swapnil; Lewis, Aaron; Isaacs, Farren J; Densmore, Douglas

    2016-06-17

    Genome engineering technologies now enable precise manipulation of organism genotype, but can be limited in scalability by their design requirements. Here we describe Merlin ( http://merlincad.org ), an open-source web-based tool to assist biologists in designing experiments using multiplex automated genome engineering (MAGE). Merlin provides methods to generate pools of single-stranded DNA oligonucleotides (oligos) for MAGE experiments by performing free energy calculation and BLAST scoring on a sliding window spanning the targeted site. These oligos are designed not only to improve recombination efficiency, but also to minimize off-target interactions. The application further assists experiment planning by reporting predicted allelic replacement rates after multiple MAGE cycles, and enables rapid result validation by generating primer sequences for multiplexed allele-specific colony PCR. Here we describe the Merlin oligo and primer design procedures and validate their functionality compared to OptMAGE by eliminating seven AvrII restriction sites from the Escherichia coli genome.

  7. Structural Test Laboratory | Water Power | NREL

    Science.gov Websites

Structural Test Laboratory: NREL engineers design and configure structural tests of water power components. Structural testing can validate models, demonstrate system reliability, inform design margins, and assess physical properties, including mass and center of gravity, to ensure compliance with design goals, and supports dynamic characterization of test articles.

  8. 10 CFR 830.122 - Quality assurance criteria.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... control design interfaces. (4) Verify or validate the adequacy of design products using individuals or... quality problems. (2) Identify, control, and correct items, services, and processes that do not meet..., specify requirements, or establish design. (2) Specify, prepare, review, approve, and maintain records. (e...

  9. 10 CFR 830.122 - Quality assurance criteria.

    Code of Federal Regulations, 2013 CFR

    2013-01-01

    ... control design interfaces. (4) Verify or validate the adequacy of design products using individuals or... quality problems. (2) Identify, control, and correct items, services, and processes that do not meet..., specify requirements, or establish design. (2) Specify, prepare, review, approve, and maintain records. (e...

  10. 10 CFR 830.122 - Quality assurance criteria.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... control design interfaces. (4) Verify or validate the adequacy of design products using individuals or... quality problems. (2) Identify, control, and correct items, services, and processes that do not meet..., specify requirements, or establish design. (2) Specify, prepare, review, approve, and maintain records. (e...

  11. 10 CFR 830.122 - Quality assurance criteria.

    Code of Federal Regulations, 2012 CFR

    2012-01-01

    ... control design interfaces. (4) Verify or validate the adequacy of design products using individuals or... quality problems. (2) Identify, control, and correct items, services, and processes that do not meet..., specify requirements, or establish design. (2) Specify, prepare, review, approve, and maintain records. (e...

  12. 10 CFR 830.122 - Quality assurance criteria.

    Code of Federal Regulations, 2011 CFR

    2011-01-01

    ... control design interfaces. (4) Verify or validate the adequacy of design products using individuals or... quality problems. (2) Identify, control, and correct items, services, and processes that do not meet..., specify requirements, or establish design. (2) Specify, prepare, review, approve, and maintain records. (e...

  13. [Design and validation of a questionnaire to assess the level of general knowledge on eating disorders in students of Health Sciences].

    PubMed

    Sánchez Socarrás, Violeida; Aguilar Martínez, Alicia; Vaqué Crusellas, Cristina; Milá Villarroel, Raimon; González Rivas, Fabián

    2016-01-01

To design and validate a questionnaire to assess the level of knowledge regarding eating disorders in college students. Observational, prospective, and longitudinal study, with the design of the questionnaire based on a conceptual review and validation by a cognitive pre-test and a pilot test-retest, with analysis of the psychometric properties in each application. Setting: University Foundation of Bages, Barcelona, in the framework of community care. A total of 140 students from Health Sciences; 53 women and 87 men with a mean age of 21.87 years; 28 participated in the pre-test and 112 in the test-retest, and 110 students completed the study. Validity and stability were examined using Cronbach's α and the Pearson product-moment correlation coefficient; the relationship of knowledge with sex and type of study was tested with the non-parametric Mann-Whitney and Kruskal-Wallis tests. For demographic variables, absolute and percentage frequencies were calculated, along with the mean as a measure of central tendency and the standard deviation as a measure of dispersion. The statistical significance level was set at 95% confidence. The final questionnaire comprised 10 questions divided into four dimensions (classification, demographic characteristics of patients, risk factors, and clinical manifestations of eating disorders). The scale showed good internal consistency in its final version (Cronbach's α = 0.724) and adequate stability (Pearson correlation 0.749). The designed tool can be accurately used to assess Health Sciences students' knowledge of eating disorders. Copyright © 2015 Elsevier España, S.L.U. All rights reserved.
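The internal-consistency statistic reported above, Cronbach's α, can be computed directly from an item-response matrix. A sketch with simulated responses (the data below are illustrative, not the study's; only the 110-respondent, 10-item shape mirrors the abstract):

```python
import numpy as np

# Cronbach's alpha: alpha = k/(k-1) * (1 - sum(item variances) / variance
# of the total score), for a matrix of scores with rows = respondents and
# columns = items.
def cronbach_alpha(scores):
    scores = np.asarray(scores, dtype=float)
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # per-item sample variance
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total score
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Simulated responses: 110 respondents, 10 items sharing a common latent
# trait, so the items are positively correlated and alpha is high.
rng = np.random.default_rng(1)
latent = rng.normal(size=(110, 1))
items = latent + rng.normal(scale=0.8, size=(110, 10))
print(round(cronbach_alpha(items), 3))
```

Because the formula penalizes item variance not shared with the total score, uncorrelated items drive α toward zero, which is why the statistic is read as internal consistency.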

  14. Design and Validation of an Infrared Badal Optometer for Laser Speckle (IBOLS)

    PubMed Central

    Teel, Danielle F. W.; Copland, R. James; Jacobs, Robert J.; Wells, Thad; Neal, Daniel R.; Thibos, Larry N.

    2009-01-01

    Purpose To validate the design of an infrared wavefront aberrometer with a Badal optometer employing the principle of laser speckle generated by a spinning disk and infrared light. The instrument was designed for subjective meridional refraction in infrared light by human patients. Methods Validation employed a model eye with known refractive error determined with an objective infrared wavefront aberrometer. The model eye was used to produce a speckle pattern on an artificial retina with controlled amounts of ametropia introduced with auxiliary ophthalmic lenses. A human observer performed the psychophysical task of observing the speckle pattern (with the aid of a video camera sensitive to infrared radiation) formed on the artificial retina. Refraction was performed by adjusting the vergence of incident light with the Badal optometer to nullify the motion of laser speckle. Validation of the method was performed for different levels of spherical ametropia and for various configurations of an astigmatic model eye. Results Subjective measurements of meridional refractive error over the range −4 D to +4 D agreed with astigmatic refractive errors predicted by the power of the model eye in the meridian of motion of the spinning disk. Conclusions Use of a Badal optometer to control laser speckle is a valid method for determining subjective refractive error at infrared wavelengths. Such an instrument will be useful for comparing objective measures of refractive error obtained for the human eye with autorefractors and wavefront aberrometers that employ infrared radiation. PMID:18772719

  15. Implementing Lumberjacks and Black Swans Into Model-Based Tools to Support Human-Automation Interaction.

    PubMed

    Sebok, Angelia; Wickens, Christopher D

    2017-03-01

    The objectives were to (a) implement theoretical perspectives regarding human-automation interaction (HAI) into model-based tools to assist designers in developing systems that support effective performance and (b) conduct validations to assess the ability of the models to predict operator performance. Two key concepts in HAI, the lumberjack analogy and black swan events, have been studied extensively. The lumberjack analogy describes the effects of imperfect automation on operator performance. In routine operations, an increased degree of automation supports performance, but in failure conditions, increased automation results in more significantly impaired performance. Black swans are the rare and unexpected failures of imperfect automation. The lumberjack analogy and black swan concepts have been implemented into three model-based tools that predict operator performance in different systems. These tools include a flight management system, a remotely controlled robotic arm, and an environmental process control system. Each modeling effort included a corresponding validation. In one validation, the software tool was used to compare three flight management system designs, which were ranked in the same order as predicted by subject matter experts. The second validation compared model-predicted operator complacency with empirical performance in the same conditions. The third validation compared model-predicted and empirically determined time to detect and repair faults in four automation conditions. The three model-based tools offer useful ways to predict operator performance in complex systems. The three tools offer ways to predict the effects of different automation designs on operator performance.

  16. Sexual behavioral abstinence HIV/AIDS questionnaire: Validation study of an Iranian questionnaire

    PubMed Central

    Najarkolaei, Fatemeh Rahmati; Niknami, Shamsaddin; Shokravi, Farkhondeh Amin; Tavafian, Sedigheh Sadat; Fesharaki, Mohammad Gholami; Jafari, Mohammad Reza

    2014-01-01

    Background: This study was designed to assess the validity and reliability of the designed sexual, behavioral abstinence, and avoidance of high-risk situation questionnaire (SBAHAQ), with an aim to construct an appropriate development tool in the Iranian population. Materials and Methods: A descriptive–analytic study was conducted among female undergraduate students of Tehran University, who were selected through cluster random sampling. After reviewing the questionnaires and investigating face and content validity, internal consistency of the questionnaire was assessed by Cronbach's alpha. Exploratory and confirmatory factor analyses were conducted using SPSS and AMOS 16 software, respectively. Results: The sample consisted of 348 female university students with a mean age of 20.69 ± 1.63 years. The content validity ratio (CVR) coefficient was 0.85 and the reliability of each section of the questionnaire was as follows: Perceived benefit (PB; 0.87), behavioral intention (BI; 0.77), and self-efficacy (SE; 0.85) (overall Cronbach's alpha 0.83). Exploratory factor analysis showed three factors, including SE, PB, and BI, with a total variance of 61% and a Kaiser–Meyer–Olkin (KMO) index of 88%. These factors were also confirmed by confirmatory factor analysis [adjusted goodness of fit index (AGFI) = 0.939, root mean square error of approximation (RMSEA) = 0.039]. Conclusion: This study showed that the designed questionnaire provided adequate construct validity and reliability, and could be appropriately used to measure sexual abstinence and avoidance of high-risk situations among female students. PMID:24741650
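The content validity ratio reported above is conventionally computed with Lawshe's formula, CVR = (n_e − N/2)/(N/2), where n_e of N panelists rate an item "essential". The abstract does not state which formula was used, so this sketch assumes that convention, with illustrative panel numbers:

```python
# Lawshe's content validity ratio (CVR). Ranges from -1 (no expert rates
# the item essential) through 0 (exactly half do) to +1 (all do).
def cvr(n_essential, n_panelists):
    half = n_panelists / 2
    return (n_essential - half) / half

print(cvr(14, 15))  # 14 of 15 experts -> (14 - 7.5) / 7.5, about 0.867
print(cvr(10, 20))  # exactly half -> 0.0
```

An item-level CVR is typically compared against a critical value that depends on panel size; a questionnaire-level value such as the 0.85 reported above is the mean of the retained items' CVRs.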

  17. The COA360: a tool for assessing the cultural competency of healthcare organizations.

    PubMed

    LaVeist, Thomas A; Relosa, Rachel; Sawaya, Nadia

    2008-01-01

    The U.S. Census Bureau projects that by 2050, non-Hispanic whites will be in the numerical minority. This rapid diversification requires healthcare organizations to pay closer attention to cross-cultural issues if they are to meet the healthcare needs of the nation and continue to maintain a high standard of care. Although scorecards and benchmarking are widely used to gauge healthcare organizations' performance in various areas, these tools have been underused in relation to cultural preparedness or initiatives. The likely reason for this is the lack of a validated tool specifically designed to examine cultural competency. Existing validated cultural competency instruments evaluate individuals, not organizations. In this article, we discuss a study to validate the Cultural Competency Organizational Assessment--360, or the COA360, an instrument designed to appraise a healthcare organization's cultural competence. The Office of Minority Health and the Joint Commission have each developed standards for measuring the cultural competency of organizations. The COA360 is designed to assess adherence to both of these sets of standards. For this validation study, we enlisted a panel of national experts. The panel rated each dimension of the COA360, and the combination of items for each of the scale's 14 dimensions was rated above 4.13 (on a 5-point scale). Our conclusion points to the validity of the COA360. As such, it is a valuable tool not only for assessing a healthcare organization's cultural readiness but also for benchmarking its progress in addressing cultural and diversity issues.

  18. The impact of underreporting and overreporting on the validity of the Personality Inventory for DSM-5 (PID-5): A simulation analog design investigation.

    PubMed

    Dhillon, Sonya; Bagby, R Michael; Kushner, Shauna C; Burchett, Danielle

    2017-04-01

    The Personality Inventory for DSM-5 (PID-5) is a 220-item self-report instrument that assesses the alternative model of personality psychopathology in Section III (Emerging Measures and Models) of the DSM-5. Despite its relatively recent introduction, the PID-5 has generated an impressive accumulation of studies examining its psychometric properties, and the instrument is already widely and frequently used in research studies. Although the PID-5 is psychometrically sound overall, reviews of this instrument express concern that it does not possess validity scales to detect invalidating levels of response bias, such as underreporting and overreporting. McGee Ng et al. (2016), using a "known-groups" (partial) criterion design, demonstrated that both underreporting and overreporting grossly affect mean scores on PID-5 scales. In the current investigation, we replicate these findings using an analog simulation design. An important extension to this replication study was the finding that the construct validity of the PID-5 was also significantly compromised by response bias, with statistically significant attenuation noted in validity coefficients of the PID-5 domain scales with scales from other instruments measuring congruent constructs. This attenuation was found for both underreporting and overreporting bias. We believe there is a need to develop validity scales to screen for data-distorting response bias in research contexts and in clinical assessments where response bias is likely or otherwise suspected. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  19. What to Do With "Moderate" Reliability and Validity Coefficients?

    PubMed

    Post, Marcel W

    2016-07-01

    Clinimetric studies may use criteria for test-retest reliability and convergent validity such that correlation coefficients as low as .40 are supportive of reliability and validity. It can be argued that moderate (.40-.60) correlations should not be interpreted in this way and that reliability coefficients <.70 should be considered as indicative of unreliability. Convergent validity coefficients in the .40 to .60 or .40 to .70 range should be considered as indications of validity problems, or as inconclusive at best. Studies on reliability and convergent validity should be designed in such a way that it is realistic to expect high reliability and validity coefficients. Multitrait-multimethod approaches are preferred for studying construct (convergent-divergent) validity. Copyright © 2016 American Congress of Rehabilitation Medicine. Published by Elsevier Inc. All rights reserved.

  20. Teachers' Perceptions of Fairness, Well-Being and Burnout: A Contribution to the Validation of the Organizational Justice Index by Hoy and Tarter

    ERIC Educational Resources Information Center

    Capone, Vincenza; Petrillo, Giovanna

    2016-01-01

    Purpose: The purpose of this paper is to contribute to the validation of the Organizational Justice Index (OJI) by Hoy and Tarter (2004), a self-report questionnaire for teachers' perceptions of fairness in the operation and administration of schools. Design/methodology/approach: In two studies the authors validated the Italian version of the OJI.…

  1. Design process and preliminary psychometric study of a video game to detect cognitive impairment in senior adults.

    PubMed

    Valladares-Rodriguez, Sonia; Perez-Rodriguez, Roberto; Facal, David; Fernandez-Iglesias, Manuel J; Anido-Rifon, Luis; Mouriño-Garcia, Marcos

    2017-01-01

    Assessment of episodic memory has been traditionally used to evaluate potential cognitive impairments in senior adults. Typically, episodic memory evaluation is based on personal interviews and pen-and-paper tests. This article presents the design, development and a preliminary validation of a novel digital game to assess episodic memory, intended to overcome the limitations of traditional methods, such as the cost of administration, their intrusive character, the lack of early detection capabilities, the lack of ecological validity, the learning effect and the existence of confounding factors. Our proposal is based on the gamification of the California Verbal Learning Test (CVLT) and it has been designed to comply with the psychometric characteristics of reliability and validity. Two qualitative focus groups and a first pilot experiment were carried out to validate the proposal. A more ecological, non-intrusive and more easily administered tool to perform cognitive assessment was developed. Initial evidence from the focus groups and pilot experiment confirmed the developed game's usability and offered promising results insofar as its psychometric validity is concerned. Moreover, the potential of this game for the cognitive classification of senior adults was confirmed, and administration time is dramatically reduced with respect to pen-and-paper tests. Additional research is needed to improve the resolution of the game for the identification of specific cognitive impairments, as well as to achieve a complete validation of the psychometric properties of the digital game. Initial evidence shows that serious games can be used as an instrument to assess the cognitive status of senior adults, and even to predict the onset of mild cognitive impairments or Alzheimer's disease.

  2. Design process and preliminary psychometric study of a video game to detect cognitive impairment in senior adults

    PubMed Central

    Perez-Rodriguez, Roberto; Facal, David; Fernandez-Iglesias, Manuel J.; Anido-Rifon, Luis; Mouriño-Garcia, Marcos

    2017-01-01

    Introduction Assessment of episodic memory has been traditionally used to evaluate potential cognitive impairments in senior adults. Typically, episodic memory evaluation is based on personal interviews and pen-and-paper tests. This article presents the design, development and a preliminary validation of a novel digital game to assess episodic memory, intended to overcome the limitations of traditional methods, such as the cost of administration, their intrusive character, the lack of early detection capabilities, the lack of ecological validity, the learning effect and the existence of confounding factors. Materials and Methods Our proposal is based on the gamification of the California Verbal Learning Test (CVLT) and it has been designed to comply with the psychometric characteristics of reliability and validity. Two qualitative focus groups and a first pilot experiment were carried out to validate the proposal. Results A more ecological, non-intrusive and more easily administered tool to perform cognitive assessment was developed. Initial evidence from the focus groups and pilot experiment confirmed the developed game's usability and offered promising results insofar as its psychometric validity is concerned. Moreover, the potential of this game for the cognitive classification of senior adults was confirmed, and administration time is dramatically reduced with respect to pen-and-paper tests. Limitations Additional research is needed to improve the resolution of the game for the identification of specific cognitive impairments, as well as to achieve a complete validation of the psychometric properties of the digital game. Conclusion Initial evidence shows that serious games can be used as an instrument to assess the cognitive status of senior adults, and even to predict the onset of mild cognitive impairments or Alzheimer's disease. PMID:28674661

  3. Sleep-Wake Evaluation from Whole-Night Non-Contact Audio Recordings of Breathing Sounds

    PubMed Central

    Dafna, Eliran; Tarasiuk, Ariel; Zigel, Yaniv

    2015-01-01

    Study Objectives To develop and validate a novel non-contact system for whole-night sleep evaluation using breathing sounds analysis (BSA). Design Whole-night breathing sounds (using an ambient microphone) and polysomnography (PSG) were simultaneously collected at a sleep laboratory (mean recording time 7.1 hours). A set of acoustic features quantifying breathing pattern were developed to distinguish between sleep and wake epochs (30 sec segments). Epochs (n = 59,108 design study and n = 68,560 validation study) were classified using an AdaBoost classifier and validated epoch-by-epoch for sensitivity, specificity, positive and negative predictive values, accuracy, and Cohen's kappa. Sleep quality parameters were calculated based on the sleep/wake classifications and compared with PSG for validity. Setting University-affiliated sleep-wake disorder center and biomedical signal processing laboratory. Patients One hundred and fifty patients (age 54.0±14.8 years, BMI 31.6±5.5 kg/m², m/f 97/53) referred for PSG were prospectively and consecutively recruited. The system was trained (design study) on 80 subjects; the validation study was blindly performed on the additional 70 subjects. Measurements and Results The epoch-by-epoch accuracy rate for the validation study was 83.3%, with sensitivity of 92.2% (sleep as sleep), specificity of 56.6% (awake as awake), and Cohen's kappa of 0.508. Comparing sleep quality parameters of BSA and PSG demonstrated average errors for sleep latency, total sleep time, wake after sleep onset, and sleep efficiency of 16.6 min, 35.8 min, 29.6 min, and 8%, respectively. Conclusions This study provides evidence that sleep-wake activity and sleep quality parameters can be reliably estimated solely using breathing sound analysis. This study highlights the potential of this innovative approach to measure sleep in research and clinical circumstances. PMID:25710495
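The epoch-by-epoch validation metrics reported above (sensitivity, specificity, accuracy, and Cohen's kappa) follow directly from the binary sleep/wake confusion counts. A minimal sketch with illustrative labels, not the study's data:

```python
# Epoch-by-epoch agreement metrics for binary sleep/wake labels, where
# 1 = sleep epoch and 0 = wake epoch.
def epoch_metrics(truth, pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(truth, pred))  # sleep as sleep
    tn = sum(t == 0 and p == 0 for t, p in zip(truth, pred))  # wake as wake
    fp = sum(t == 0 and p == 1 for t, p in zip(truth, pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(truth, pred))
    n = tp + tn + fp + fn
    po = (tp + tn) / n  # observed agreement = accuracy
    # Expected chance agreement from the marginal label frequencies,
    # as used in Cohen's kappa: kappa = (po - pe) / (1 - pe).
    pe = ((tp + fn) * (tp + fp) + (tn + fp) * (tn + fn)) / n ** 2
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "accuracy": po,
        "kappa": (po - pe) / (1 - pe),
    }

truth = [1, 1, 1, 1, 1, 1, 0, 0, 0, 0]  # hypothetical PSG scoring
pred  = [1, 1, 1, 1, 1, 0, 0, 0, 1, 1]  # hypothetical classifier output
print(epoch_metrics(truth, pred))
```

Kappa corrects accuracy for chance agreement, which matters here because sleep epochs heavily outnumber wake epochs in whole-night recordings; a high raw accuracy can coexist with a modest kappa, as in the 83.3% accuracy / 0.508 kappa reported above.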

  4. The Cambridge Otology Quality of Life Questionnaire: an otology-specific patient-recorded outcome measure. A paper describing the instrument design and a report of preliminary reliability and validity.

    PubMed

    Martin, T P C; Moualed, D; Paul, A; Ronan, N; Tysome, J R; Donnelly, N P; Cook, R; Axon, P R

    2015-04-01

    The Cambridge Otology Quality of Life Questionnaire (COQOL) is a patient-recorded outcome measure (PROM) designed to quantify the quality of life of patients attending otology clinics. Item-reduction model. A systematically designed long-form version (74 items) was tested with patient focus groups before being presented to adult otology patients (n = 137). Preliminary item analysis tested reliability, reducing the COQOL to 24 questions. This was then presented in conjunction with the SF-36 (V1) questionnaire to a total of 203 patients. Subsequently, these were re-presented at T + 3 months, and patients recorded whether they felt their condition had improved, deteriorated or remained the same. Non-responders were contacted by post. The correlation between COQOL scores and patient perception of change was examined to analyse content validity. Teaching hospital and university psychology department. Adult patients attending otology clinics with a wide range of otological conditions. Item reliability was measured by item–total correlation, internal consistency and test–retest reliability. Validity was measured by the correlation between COQOL scores and patient-reported symptom change. Reliability: the COQOL showed excellent internal consistency at both initial presentation (α = 0.90) and 3 months later (α = 0.93). Validity: one-way analysis of variance showed a significant difference between groups reporting change and those reporting no change in quality of life (F(2, 80) = 5.866, P < 0.01). The COQOL is the first otology-specific PROM. Initial studies demonstrate excellent reliability and encouraging preliminary criterion validity; further studies will allow a deeper validation of the instrument.
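
    The internal-consistency values reported above (α = 0.90 and 0.93) are Cronbach's alpha statistics. As a rough illustration of how such a coefficient is computed from item scores (toy data, not COQOL responses):

```python
def cronbach_alpha(items):
    """Cronbach's alpha for a list of item-score columns:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total score)."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance (the ratio makes the choice immaterial)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    # total score per respondent, summed across items
    totals = [sum(col[i] for col in items) for i in range(n)]
    return k / (k - 1) * (1 - sum(var(col) for col in items) / var(totals))

# Toy example: 3 items, each scored by the same 4 respondents
items = [
    [2, 4, 3, 5],
    [3, 5, 3, 4],
    [2, 5, 4, 5],
]
alpha = cronbach_alpha(items)
```

    High alpha indicates that the items co-vary strongly, i.e. they appear to measure a single underlying construct.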

  5. The City of Hope-Quality of Life-Ostomy Questionnaire: Persian Translation and Validation

    PubMed Central

    Anaraki, F; Vafaie, M; Behboo, R; Esmaeilpour, S; Maghsoodi, N; Safaee, A; Grant, M

    2014-01-01

    Background: There is no disease-specific instrument for measuring quality of life (QOL) in ostomy patients in the Persian language. Aim: This study was designed to translate and evaluate the validity and reliability of the City of Hope-Quality of Life-Ostomy questionnaire (COH-QOL-Ostomy questionnaire). Subjects and Methods: This was a cross-sectional study. Reliability of the subscales and the summary scores was demonstrated by intra-class correlation coefficients. Pearson's correlations of an item with its own scale and with other scales were calculated to evaluate convergent and discriminant validity. Clinical validity was also evaluated by known-group comparisons. Results: Cronbach's alpha coefficients for all subscales were about 0.70 or higher. Results of the interscale correlations were satisfactory, and each subscale measured only a single, specified trait. All subscales met the standards of convergent and discriminant validity. Known-group comparison analysis showed significant differences in social and spiritual well-being. Conclusion: The findings confirmed the reliability and validity of the Persian version of the COH-QOL-Ostomy questionnaire. The instrument was also well received by the Iranian patients. It can be considered a valuable instrument for assessing the different aspects of health-related quality of life in ostomy patients and can be used in clinical research in the future. PMID:25221719

  6. Scaling Studies for Advanced High Temperature Reactor Concepts, Final Technical Report: October 2014—December 2017

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Woods, Brian; Gutowska, Izabela; Chiger, Howard

    Computer simulations of nuclear reactor thermal-hydraulic phenomena are often used in the design and licensing of nuclear reactor systems. In order to assess the accuracy of these computer simulations, computer codes and methods are often validated against experimental data. This experimental data must be of sufficiently high quality in order to conduct a robust validation exercise. In addition, this experimental data is generally collected at experimental facilities that are of a smaller scale than the reactor systems being simulated, due to cost considerations. Therefore, smaller scale test facilities must be designed and constructed in such a fashion as to ensure that the prototypical behavior of a particular nuclear reactor system is preserved. The work completed through this project has resulted in scaling analyses and conceptual design development for a test facility capable of collecting code validation data for the following high temperature gas reactor systems and events: (1) a passive natural circulation core cooling system, (2) a pebble bed gas reactor concept, (3) the General Atomics Energy Multiplier Module reactor, and (4) a prismatic block design steam-water ingress event. In the event that code validation data for these systems or events is needed in the future, significant progress in the design of an appropriate integral-type test facility has already been completed as a result of this project. Where applicable, the next step would be to begin the detailed design development and material procurement. As part of this project, applicable scaling analyses were completed and test facility design requirements developed. Conceptual designs were developed for the implementation of these design requirements at the Oregon State University (OSU) High Temperature Test Facility (HTTF).
    The original HTTF is based on a ¼-scale model of a high temperature gas reactor concept with the capability for both forced and natural circulation flow through a prismatic core with an electrical heat source. The peak core region temperature capability is 1400°C. As part of this project, an inventory of test facilities that could be used for these experimental programs was completed. Several of these facilities showed some promise; however, upon further investigation it became clear that only the OSU HTTF had the power and/or peak temperature limits that would allow for the experimental programs envisioned herein. Thus the conceptual design and feasibility study development focused on examining the feasibility of configuring the current HTTF to collect validation data for these experimental programs. In addition to the scaling analyses and conceptual design development, a test plan was developed for the envisioned modified test facility. This test plan included a discussion of an appropriate shakedown test program as well as the specific matrix tests. Finally, a feasibility study was completed to determine the cost and schedule considerations that would be important to any test program developed to investigate these designs and events.

  7. Development and validation of a nutritional education pamphlet for low literacy pediatric oncology caregivers in Central America.

    PubMed

    Garcia, Melissa; Chismark, Elisabeth A; Mosby, Terezie; Day, Sara W

    2010-12-01

    A culturally appropriate nutrition education pamphlet was developed and validated for low-literacy caregivers in Honduras, El Salvador, and Guatemala. The pamphlet was developed after a preliminary survey of pediatric oncology nurses in the 3 countries to assess the need for education materials, caregiver literacy levels, and local eating habits. Experts in nutrition and low-literacy patient education and pediatric oncology nurses validated the pamphlet's content and design. The pamphlet was validated positively and has been circulated to pediatric oncology caregivers in Central America.

  8. Designing, Evaluating, and Deploying Automated Scoring Systems with Validity in Mind: Methodological Design Decisions

    ERIC Educational Resources Information Center

    Rupp, André A.

    2018-01-01

    This article discusses critical methodological design decisions for collecting, interpreting, and synthesizing empirical evidence during the design, deployment, and operational quality-control phases for automated scoring systems. The discussion is inspired by work on operational large-scale systems for automated essay scoring but many of the…

  9. The Importance of Longitudinal Pretest-Posttest Designs in Estimating College Impact

    ERIC Educational Resources Information Center

    Seifert, Tricia A.; Pascarella, Ernest T.; Erkel, Sherri I.; Goodman, Kathleen M.

    2010-01-01

    In this chapter, the authors discuss the issue of research design in conducting inquiry on college impact and demonstrate the importance of longitudinal pretest-posttest designs in maximizing the internal validity of findings. They begin by discussing the strengths and weaknesses of different types of research design in the college impact…

  10. A Primer on Experimental and Quasi-experimental Design.

    ERIC Educational Resources Information Center

    Dawson, Thomas E.

    Counseling psychology is a relatively new field that is gaining autonomy and respect. Unfortunately, research efforts in the field may lack an appropriate research design. This paper considers some of the more common types of research design and the associated threats to their validity. An example of each design type is drawn from the counseling…

  11. Designing research: ex post facto designs.

    PubMed

    Giuffre, M

    1997-06-01

    The research design is the overall plan or structure of the study. The goal of a good research design is to ensure internal validity and answer the question being asked. The only clear rule in selecting a design is that the question dictates the design. Over the next few issues, this column will cover types of research designs and their inherent strengths and weaknesses. This article discusses ex post facto research.

  12. The educational game design on relation and function materials

    NASA Astrophysics Data System (ADS)

    Pramuditya, S. A.; Noto, M. S.; Syaefullah, D.

    2018-05-01

    Information technology development is certainly very helpful and important for life, especially for education. Media is always associated with technology. Media is considered important as a tool in the learning process, both inside and outside the classroom, and can also be used for communication and interaction between teachers and students during learning. Smartphone technology is currently growing very rapidly, especially on the Android platform. Games are one of the entertainment media chosen to relieve boredom or simply to pass the time. Educational games are specifically designed to teach users particular content, developing concepts and understanding, guiding them in training their abilities, and motivating them to play. A mathematics education game is a game that embeds mathematics learning content. This article discusses development research on the design of an educational game. The purpose of this research was to produce a valid and practical educational game on relations and functions. The research adapts the ADDIE development model, restricted to the analysis, design, and development stages. Data were collected using validation and practicality sheets and then analysed descriptively. Based on the results of the data analysis, our educational game was valid and practical.

  13. Preparation, validation and user-testing of pictogram-based patient information leaflets for hemodialysis patients.

    PubMed

    Mateti, Uday Venkat; Nagappa, Anantha Naik; Attur, Ravindra Prabhu; Bairy, Manohar; Nagaraju, Shankar Prasad; Mallayasamy, Surulivelrajan; Vilakkathala, Rajesh; Guddattu, Vasudev; Balkrishnan, Rajesh

    2015-11-01

    Patient information leaflets are universally accepted resources for educating patients about their medications, disease and lifestyle modification. The objective of the study was to prepare, validate and perform user-testing of pictogram-based patient information leaflets (P-PILs) among hemodialysis (HD) patients. The P-PILs were prepared by referring to primary, secondary and tertiary resources. The content and pictograms of the leaflet were validated by an expert committee consisting of three nephrologists and two academic pharmacists. The Baker Able Leaflet Design criteria were applied to develop the layout and design of the P-PILs. A quasi-experimental pre- and post-test design without a control group was conducted on 81 HD patients for user-testing of the P-PILs. The mean Baker Able Leaflet Design assessment score was 28 for the English version of the leaflet and 26 for the Kannada version. The overall user-testing knowledge assessment mean score improved significantly, from 44.25 to 69.62 (p < 0.001). The overall user opinion of the content and legibility of the leaflets was good. Pictogram-based patient information leaflets can be considered an effective educational tool for HD patients.
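
    The pre/post knowledge comparison above is the kind of analysis a paired t-test supports; a minimal sketch with invented scores (not the study's data), computing the test statistic only:

```python
import math

def paired_t(pre, post):
    """Paired t-test statistic for a pre/post design (no control group).
    Returns (t, degrees of freedom); the p-value lookup would need a
    t-distribution table or scipy.stats."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean_d = sum(diffs) / n
    # sample variance of the differences (n - 1 denominator)
    var_d = sum((d - mean_d) ** 2 for d in diffs) / (n - 1)
    t = mean_d / math.sqrt(var_d / n)
    return t, n - 1

# Toy knowledge scores for 5 participants (illustrative only)
pre  = [40, 45, 50, 42, 48]
post = [65, 70, 72, 66, 71]
t, df = paired_t(pre, post)
```

    Because each participant serves as their own control, the test operates on the per-person score differences rather than on the two group means.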

  14. Tone Noise Predictions for a Spacecraft Cabin Ventilation Fan Ingesting Distorted Inflow and the Challenges of Validation

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle; Shook, Tony D.; Astler, Douglas T.; Bittinger, Samantha A.

    2011-01-01

    A fan tone noise prediction code has been developed at NASA Glenn Research Center that is capable of estimating duct mode sound power levels for a fan ingesting distorted inflow. This code was used to predict the circumferential and radial mode sound power levels in the inlet and exhaust duct of an axial spacecraft cabin ventilation fan. Noise predictions at fan design rotational speed were generated. Three fan inflow conditions were studied: an undistorted inflow, a circumferentially symmetric inflow distortion pattern (cylindrical rods inserted radially into the flowpath at 15°, 135°, and 255°), and a circumferentially asymmetric inflow distortion pattern (rods located at 15°, 52°, and 173°). Noise predictions indicate that tones are produced for the distorted inflow cases that are not present when the fan operates with an undistorted inflow. Experimental data are needed to validate these acoustic predictions, as well as the aerodynamic performance predictions. Given the aerodynamic design of the spacecraft cabin ventilation fan, a mechanical and electrical conceptual design study was conducted. Design features of a fan suitable for obtaining detailed acoustic and aerodynamic measurements needed to validate predictions are discussed.

  15. Tone Noise Predictions for a Spacecraft Cabin Ventilation Fan Ingesting Distorted Inflow and the Challenges of Validation

    NASA Technical Reports Server (NTRS)

    Koch, L. Danielle; Shook, Tony D.; Astler, Douglas T.; Bittinger, Samantha A.

    2012-01-01

    A fan tone noise prediction code has been developed at NASA Glenn Research Center that is capable of estimating duct mode sound power levels for a fan ingesting distorted inflow. This code was used to predict the circumferential and radial mode sound power levels in the inlet and exhaust duct of an axial spacecraft cabin ventilation fan. Noise predictions at fan design rotational speed were generated. Three fan inflow conditions were studied: an undistorted inflow, a circumferentially symmetric inflow distortion pattern (cylindrical rods inserted radially into the flowpath at 15°, 135°, and 255°), and a circumferentially asymmetric inflow distortion pattern (rods located at 15°, 52°, and 173°). Noise predictions indicate that tones are produced for the distorted inflow cases that are not present when the fan operates with an undistorted inflow. Experimental data are needed to validate these acoustic predictions, as well as the aerodynamic performance predictions. Given the aerodynamic design of the spacecraft cabin ventilation fan, a mechanical and electrical conceptual design study was conducted. Design features of a fan suitable for obtaining detailed acoustic and aerodynamic measurements needed to validate predictions are discussed.

  16. Development and Validation of HPLC-DAD and UHPLC-DAD Methods for the Simultaneous Determination of Guanylhydrazone Derivatives Employing a Factorial Design.

    PubMed

    Azevedo de Brito, Wanessa; Gomes Dantas, Monique; Andrade Nogueira, Fernando Henrique; Ferreira da Silva-Júnior, Edeildo; Xavier de Araújo-Júnior, João; Aquino, Thiago Mendonça de; Adélia Nogueira Ribeiro, Êurica; da Silva Solon, Lilian Grace; Soares Aragão, Cícero Flávio; Barreto Gomes, Ana Paula

    2017-08-30

    Guanylhydrazones are molecules with great pharmacological potential in various therapeutic areas, including antitumoral activity. Factorial design is an excellent tool for optimizing a chromatographic method, because factors such as temperature, mobile phase composition, mobile phase pH and column length, among others, can be changed quickly to establish the optimal conditions of analysis. The aim of the present work was to develop and validate HPLC and UHPLC methods for the simultaneous determination of guanylhydrazones with anticancer activity, employing experimental design. Precise, accurate, linear and robust HPLC and UHPLC methods were developed and validated for the simultaneous quantification of the guanylhydrazones LQM10, LQM14, and LQM17. The UHPLC method was more economical, with four times lower solvent consumption and a 20 times smaller injection volume, which allowed better column performance. Comparing the empirical approach employed in the HPLC method development to the DoE approach employed in the UHPLC method development, we can conclude that the factorial design made the method development faster, more practical and more rational. This resulted in methods that can be employed in the analysis, evaluation and quality control of these new synthetic guanylhydrazones.
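
    A full factorial design of the kind used for method optimization simply enumerates every combination of factor levels; a short sketch with hypothetical two-level chromatographic factors (the factor names and levels are illustrative, not the paper's):

```python
from itertools import product

# Hypothetical two-level chromatographic factors (illustrative only)
factors = {
    "column_temp_C": [25, 40],
    "acn_fraction": [0.30, 0.45],
    "mobile_phase_pH": [3.0, 4.5],
}

def full_factorial(factors):
    """Enumerate every run of a full factorial design (2^k runs here)."""
    names = list(factors)
    return [dict(zip(names, combo)) for combo in product(*factors.values())]

runs = full_factorial(factors)
```

    With three two-level factors this yields 2³ = 8 runs; each chromatographic response (resolution, retention time, etc.) measured over these runs can then be fitted to estimate factor effects and interactions.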

  17. Application of computational methods for the design of BACE-1 inhibitors: validation of in silico modelling.

    PubMed

    Bajda, Marek; Jończyk, Jakub; Malawska, Barbara; Filipek, Sławomir

    2014-03-24

    β-Secretase (BACE-1) constitutes an important target for search of anti-Alzheimer's drugs. The first inhibitors of this enzyme were peptidic compounds with high molecular weight and low bioavailability. Therefore, the search for new efficient non-peptidic inhibitors has been undertaken by many scientific groups. We started our work from the development of in silico methodology for the design of novel BACE-1 ligands. It was validated on the basis of crystal structures of complexes with inhibitors, redocking, cross-docking and training/test sets of reference ligands. The presented procedure of assessment of the novel compounds as β-secretase inhibitors could be widely used in the design process.

  18. Statistical Methods for Rapid Aerothermal Analysis and Design Technology: Validation

    NASA Technical Reports Server (NTRS)

    DePriest, Douglas; Morgan, Carolyn

    2003-01-01

    The cost and safety goals for NASA's next generation of reusable launch vehicles (RLV) will require that rapid high-fidelity aerothermodynamic design tools be used early in the design cycle. To meet these requirements, it is desirable to identify adequate statistical models that quantify and improve the accuracy, extend the applicability, and enable combined analyses using existing prediction tools. The initial research work focused on establishing suitable candidate models for these purposes. The second phase focused on assessing the performance of these models in accurately predicting the heat rate for a given candidate data set. This validation work compared models and methods that may be useful in predicting the heat rate.

  19. Design and Validation of High Data Rate Ka-Band Software Defined Radio for Small Satellite

    NASA Technical Reports Server (NTRS)

    Xia, Tian

    2016-01-01

    The Design and Validation of High Data Rate Ka-Band Software Defined Radio for Small Satellite project will develop a novel Ka-band software defined radio (SDR) that is capable of establishing high data rate inter-satellite links with a throughput of 500 megabits per second (Mb/s) and providing millimeter ranging precision. The system will be designed to operate with high performance and reliability that is robust against various interference effects and network anomalies. The Ka-band radio resulting from this work will improve upon state-of-the-art Ka-band radios in terms of dimensional size, mass and power dissipation, which limit their use in small satellites.

  20. Development and validation of an online interactive, multimedia wound care algorithms program.

    PubMed

    Beitz, Janice M; van Rijswijk, Lia

    2012-01-01

    To provide education based on evidence-based and validated wound care algorithms, we designed and implemented an interactive, Web-based learning program for teaching wound care. A mixed-methods quantitative pilot study design with qualitative components was used to test and ascertain the ease of use, validity, and reliability of the online program. A convenience sample of 56 RN wound experts (formally educated, certified in wound care, or both) participated. The interactive, online program consists of a user introduction, an interactive assessment of 15 acute and chronic wound photos, user feedback about the percentage of correct, partially correct, or incorrect algorithm and dressing choices, and a user survey. After giving consent, participants accessed the online program, provided answers to the demographic survey, and completed the assessment module and photographic test, along with a posttest survey. The construct validity of the online interactive program was strong: 85% of algorithm choices and 87% of dressing choices were fully correct, even though some programming design issues were identified. Online study results were consistently better than the results of a previously conducted, comparable paper-and-pencil study. Using a 5-point Likert-type scale, participants rated the program's value and ease of use as 3.88 (valuable to very valuable) and 3.97 (easy to very easy), respectively. Similarly, the research process was described qualitatively as "enjoyable" and "exciting." This digital program was well received, indicating its "perceived benefits" for nonexpert users, which may help reduce barriers to implementing safe, evidence-based care. Ongoing research using larger sample sizes may help refine the program or algorithms while identifying clinician educational needs.
Initial design imperfections and programming problems identified also underscored the importance of testing all paper and Web-based programs designed to educate health care professionals or guide patient care.

  1. 10 CFR 712.32 - Designated Physician.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... designation, of a physical, mental/personality disorder, or health condition that might affect his or her... school of medicine or osteopathy; (2) Have a valid, unrestricted state license to practice medicine in...

  2. Validation of an Instrument and Testing Protocol for Measuring the Combinatorial Analysis Schema.

    ERIC Educational Resources Information Center

    Staver, John R.; Harty, Harold

    1979-01-01

    Designs a testing situation to examine the presence of combinatorial analysis, to establish construct validity in the use of an instrument, Combinatorial Analysis Behavior Observation Scheme (CABOS), and to investigate the presence of the schema in young adolescents. (Author/GA)

  3. The Validation of a Software Evaluation Instrument.

    ERIC Educational Resources Information Center

    Schmitt, Dorren Rafael

    This study, conducted at six southern universities, analyzed the validity and reliability of a researcher developed instrument designed to evaluate educational software in secondary mathematics. The instrument called the Instrument for Software Evaluation for Educators uses measurement scales, presents a summary section of the evaluation, and…

  4. Polychlorinated Biphenyl (PCB) Aroclor Data Validation, SOP No. HW-37A Revision 0; SOM02.2

    EPA Pesticide Factsheets

    This document is designed to offer the data reviewer guidance in determining the validity of analytical data generated through the USEPA Contract Laboratory Program (CLP) Statement of Work (SOW) for Multi-Media, Multi-Concentration Organics Analysis

  5. Validation of the Federal Aviation Administration Air Traffic Control Specialist Pre-Training Screen.

    DOT National Transportation Integrated Search

    1994-02-01

    Two formal validation studies of the Air Traffic Control Specialist Pre Training Screen (ATCS/PTS), a 5 day computer administered test battery, are described. The ATCS/PTS was designed to replace the 9 week US Federal Aviation Administration (FAA) Ac...

  6. Development and Validation of the Personality Assessment Questionnaire: Test Manual.

    ERIC Educational Resources Information Center

    Rohner, Ronald P.; And Others

    Data are presented evaluating the validity and reliability of the Personality Assessment Questionnaire (PAQ), a self-report questionnaire designed to elicit respondents' perceptions of themselves with respect to seven personality and behavioral dispositions: hostility and aggression, dependence, self-esteem, self-adequacy, emotional…

  7. Validation of newly developed and redesigned key indicator methods for assessment of different working conditions with physical workloads based on mixed-methods design: a study protocol

    PubMed Central

    Liebers, Falk; Brandstädt, Felix; Schust, Marianne; Serafin, Patrick; Schäfer, Andreas; Gebhardt, Hansjürgen; Hartmann, Bernd; Steinberg, Ulf

    2017-01-01

    Introduction The impact of work-related musculoskeletal disorders is considerable. The assessment of work tasks with physical workloads is crucial to estimate the work-related health risks of exposed employees. Three key indicator methods (KIMs) are available for risk assessment regarding manual lifting, holding and carrying of loads; manual pulling and pushing of loads; and manual handling operations. Three further KIMs for risk assessment regarding whole-body forces, awkward body postures and body movement have been developed de novo. In addition, the development of a newly drafted combined method for mixed exposures is planned. All methods will be validated regarding face validity, reliability, convergent validity, criterion validity and further aspects of utility under practical conditions. Methods and analysis As part of the joint project MEGAPHYS (multilevel risk assessment of physical workloads), a mixed-methods study is being designed for the validation of KIMs and conducted in companies of different sizes and sectors in Germany. Workplaces are documented and analysed by observations, applying KIMs, interviews and assessment of environmental conditions. Furthermore, a survey among the employees at the respective workplaces takes place with standardised questionnaires, interviews and physical examinations. It is intended to include 1200 employees at 120 different workplaces. For the analysis of the quality criteria, the recommendations of the COSMIN checklist (COnsensus-based Standards for the selection of health Measurement INstruments) will be taken into account. Ethics and dissemination The study was planned and conducted in accordance with the German Medical Professional Code and the Declaration of Helsinki as well as the German Federal Data Protection Act. The design of the study was approved by ethics committees. We intend to publish the validated KIMs in 2018.
Results will be published in peer-reviewed journals, presented at international meetings and disseminated to actual users for practical application. PMID:28827239

  8. Validation of newly developed and redesigned key indicator methods for assessment of different working conditions with physical workloads based on mixed-methods design: a study protocol.

    PubMed

    Klussmann, Andre; Liebers, Falk; Brandstädt, Felix; Schust, Marianne; Serafin, Patrick; Schäfer, Andreas; Gebhardt, Hansjürgen; Hartmann, Bernd; Steinberg, Ulf

    2017-08-21

    The impact of work-related musculoskeletal disorders is considerable. The assessment of work tasks with physical workloads is crucial to estimate the work-related health risks of exposed employees. Three key indicator methods (KIMs) are available for risk assessment regarding manual lifting, holding and carrying of loads; manual pulling and pushing of loads; and manual handling operations. Three further KIMs for risk assessment regarding whole-body forces, awkward body postures and body movement have been developed de novo. In addition, the development of a newly drafted combined method for mixed exposures is planned. All methods will be validated regarding face validity, reliability, convergent validity, criterion validity and further aspects of utility under practical conditions. As part of the joint project MEGAPHYS (multilevel risk assessment of physical workloads), a mixed-methods study is being designed for the validation of KIMs and conducted in companies of different sizes and sectors in Germany. Workplaces are documented and analysed by observations, applying KIMs, interviews and assessment of environmental conditions. Furthermore, a survey among the employees at the respective workplaces takes place with standardised questionnaires, interviews and physical examinations. It is intended to include 1200 employees at 120 different workplaces. For the analysis of the quality criteria, the recommendations of the COSMIN checklist (COnsensus-based Standards for the selection of health Measurement INstruments) will be taken into account. The study was planned and conducted in accordance with the German Medical Professional Code and the Declaration of Helsinki as well as the German Federal Data Protection Act. The design of the study was approved by ethics committees. We intend to publish the validated KIMs in 2018. Results will be published in peer-reviewed journals, presented at international meetings and disseminated to actual users for practical application.

  9. Evaluation of written medicine information: validation of the Consumer Information Rating Form.

    PubMed

    Koo, Michelle M; Krass, Ines; Aslani, Parisa

    2007-06-01

    The Consumer Information Rating Form (CIRF) was developed as a direct method for measuring consumers' perceptions of the comprehensibility, utility, and design quality of written medicine information. The validity and reliability of the CIRF were evaluated in a small convenience consumer sample in the US. Its validity and reliability have yet to be established in a larger sample of consumers who are on chronic therapy in different settings. To determine the validity and reliability of the CIRF in Australian consumers on chronic therapy. Consumers read and subsequently evaluated a Consumer Medicine Information (CMI) leaflet for one of their own medications, using an adapted version of the CIRF. The construct validity and internal reliability of the adapted version of the CIRF were tested using principal components analysis (PCA) and Cronbach's alpha, respectively. The adapted CIRF was completed by 282 consumers (aged 19-90 y; median 66; interquartile range 53-75 y; 60.3% females). Most respondents spoke primarily English at home (85.5%), had attained at least secondary education (84%), and had adequate health literacy levels (88.2%). Consumers rated CMI easy to read, understand, and navigate, but less easy to remember and keep. Most also found it to be useful and to contain the right amount of information. The design aspects also scored favorably, although CMI did score relatively poorly in terms of its attractiveness and tone (whether alarming or not). PCA yielded 3 factors (explaining 59.3% of the total variance) identical to those in the original CIRF: comprehensibility, utility, and design quality. All factors demonstrated good reliability (Cronbach's alpha 0.74, 0.92, and 0.75, respectively). The CIRF appears to be a robust instrument for assessing consumers' perceptions of written medicine information. However, validity always needs to be reestablished when using a previously validated measure in a different population.
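
    The principal components analysis used above extracts factors from the covariance structure of the item scores. A minimal two-variable sketch (closed-form eigenvalues of a 2×2 covariance matrix; toy ratings, not CIRF data) showing how the variance explained by the first component is obtained:

```python
import math

def pca_2d(xs, ys):
    """Principal components of two variables via the closed-form
    eigenvalues of their 2x2 covariance matrix; returns the fraction
    of total variance explained by the first component."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxx = sum((x - mx) ** 2 for x in xs) / (n - 1)
    syy = sum((y - my) ** 2 for y in ys) / (n - 1)
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / (n - 1)
    # eigenvalues of [[sxx, sxy], [sxy, syy]] from trace and determinant
    tr, det = sxx + syy, sxx * syy - sxy ** 2
    root = math.sqrt(tr ** 2 / 4 - det)
    lam1, lam2 = tr / 2 + root, tr / 2 - root
    return lam1 / (lam1 + lam2)

# Toy ratings for two strongly correlated questionnaire items
xs = [1, 2, 3, 4, 5]
ys = [1.1, 2.0, 2.9, 4.2, 5.0]
explained = pca_2d(xs, ys)
```

    With many items, as in the CIRF, the same idea applies to the full covariance matrix: components with large eigenvalues are retained as factors, and the three-factor solution explaining 59.3% of the variance is an instance of this.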

  10. Challenges of NDE Simulation Tool

    NASA Technical Reports Server (NTRS)

    Leckey, Cara A. C.; Juarez, Peter D.; Seebo, Jeffrey P.; Frank, Ashley L.

    2015-01-01

    Realistic nondestructive evaluation (NDE) simulation tools enable inspection optimization and predictions of inspectability for new aerospace materials and designs. NDE simulation tools may someday aid in the design and certification of advanced aerospace components; potentially shortening the time from material development to implementation by industry and government. Furthermore, modeling and simulation are expected to play a significant future role in validating the capabilities and limitations of guided wave based structural health monitoring (SHM) systems. The current state-of-the-art in ultrasonic NDE/SHM simulation cannot rapidly simulate damage detection techniques for large scale, complex geometry composite components/vehicles with realistic damage types. This paper discusses some of the challenges of model development and validation for composites, such as the level of realism and scale of simulation needed for NASA's applications. Ongoing model development work is described along with examples of model validation studies. The paper will also discuss examples of the use of simulation tools at NASA to develop new damage characterization methods, and associated challenges of validating those methods.

  11. Temperature field simulation and phantom validation of a Two-armed Spiral Antenna for microwave thermotherapy.

    PubMed

    Du, Yongxing; Zhang, Lingze; Sang, Lulu; Wu, Daocheng

    2016-04-29

    In this paper, an Archimedean planar spiral antenna for the application of thermotherapy was designed. This type of antenna was chosen for its compact structure, flexible application and wide heating area. The temperature field generated by the use of this Two-armed Spiral Antenna in a muscle-equivalent phantom was simulated and subsequently validated by experimentation. First, the specific absorption rate (SAR) of the field was calculated using the Finite Element Method (FEM) in Ansoft's High Frequency Structure Simulator (HFSS). Then, the temperature elevation in the phantom was simulated by an explicit finite difference approximation of the bioheat equation (BHE). The temperature distribution was then validated by a phantom heating experiment. The results showed that this antenna had a good heating ability and a wide heating area. A comparison between the calculation and the measurement showed fair agreement in the temperature elevation. The validated model could be applied for the analysis of electromagnetic-temperature distribution in phantoms during the process of antenna design or thermotherapy experimentation.
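    The second step of the workflow above, an explicit finite difference approximation of the bioheat equation, can be sketched in 1-D as follows. All tissue parameters and the exponentially decaying SAR profile are nominal muscle-phantom values assumed for illustration, not the paper's HFSS-computed field.

```python
import numpy as np

# 1-D explicit (forward-time, centred-space) solution of the Pennes
# bioheat equation: rho*c*dT/dt = k*d2T/dx2 + rho_b*c_b*w_b*(Ta - T) + rho*SAR
rho, c, k = 1050.0, 3600.0, 0.5          # tissue density, heat capacity, conductivity
wb, rho_b, cb = 0.5e-3, 1000.0, 4180.0   # blood perfusion rate, density, heat capacity
Ta = 37.0                                 # arterial/baseline temperature (degC)

nx, dx, dt, steps = 101, 1e-3, 0.05, 6000  # 10 cm domain, 300 s of heating
x = np.arange(nx) * dx
SAR = 50.0 * np.exp(-x / 0.02)             # assumed decaying SAR profile (W/kg)

T = np.full(nx, Ta)
for _ in range(steps):
    lap = np.zeros_like(T)
    lap[1:-1] = (T[2:] - 2 * T[1:-1] + T[:-2]) / dx**2   # spatial second derivative
    T = T + dt / (rho * c) * (k * lap + rho_b * cb * wb * (Ta - T) + rho * SAR)
    T[0], T[-1] = T[1], Ta                 # insulated surface, fixed deep boundary

print(f"peak temperature after 300 s: {T.max():.1f} degC")
```

    The explicit scheme is only stable when dt <= dx^2 * rho * c / (2k); the values chosen here satisfy that condition by a wide margin.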

  12. The water balance questionnaire: design, reliability and validity of a questionnaire to evaluate water balance in the general population.

    PubMed

    Malisova, Olga; Bountziouka, Vassiliki; Panagiotakos, Demosthenes B; Zampelas, Antonis; Kapsokefalou, Maria

    2012-03-01

    There is a need to develop a questionnaire as a research tool for the evaluation of water balance in the general population. The water balance questionnaire (WBQ) was designed to evaluate water intake from fluid and solid foods and drinking water, and water loss from urine, faeces and sweat at sedentary conditions and physical activity. For validation purposes, the WBQ was administrated in 40 apparently healthy participants aged 22-57 years (37.5% males). Hydration indices in urine (24 h volume, osmolality, specific gravity, pH, colour) were measured through established procedures. Furthermore, the questionnaire was administered twice to 175 subjects to evaluate its reliability. Kendall's τ-b and the Bland and Altman method were used to assess the questionnaire's validity and reliability. The proposed WBQ to assess water balance in healthy individuals was found to be valid and reliable, and it could thus be a useful tool in future projects that aim to evaluate water balance.

  13. Development of code evaluation criteria for assessing predictive capability and performance

    NASA Technical Reports Server (NTRS)

    Lin, Shyi-Jang; Barson, S. L.; Sindir, M. M.; Prueger, G. H.

    1993-01-01

    Computational Fluid Dynamics (CFD), because of its unique ability to predict complex three-dimensional flows, is being applied with increasing frequency in the aerospace industry. Currently, no consistent code validation procedure is applied within the industry. Such a procedure is needed to increase confidence in CFD and reduce risk in the use of these codes as a design and analysis tool. This final contract report defines classifications for three levels of code validation, directly relating the use of CFD codes to the engineering design cycle. Evaluation criteria by which codes are measured and classified are recommended and discussed. Criteria for selecting experimental data against which CFD results can be compared are outlined. A four phase CFD code validation procedure is described in detail. Finally, the code validation procedure is demonstrated through application of the REACT CFD code to a series of cases culminating in a code to data comparison on the Space Shuttle Main Engine High Pressure Fuel Turbopump Impeller.

  14. The design organization test: further demonstration of reliability and validity as a brief measure of visuospatial ability.

    PubMed

    Killgore, William D S; Gogel, Hannah

    2014-01-01

    Neuropsychological assessments are frequently time-consuming and fatiguing for patients. Brief screening evaluations may reduce test duration and allow more efficient use of time by permitting greater attention toward neuropsychological domains showing probable deficits. The Design Organization Test (DOT) was initially developed as a 2-min paper-and-pencil alternative for the Block Design (BD) subtest of the Wechsler scales. Although initially validated for clinical neurologic patients, we sought to further establish the reliability and validity of this test in a healthy, more diverse population. Two alternate versions of the DOT and the Wechsler Abbreviated Scale of Intelligence (WASI) were administered to 61 healthy adult participants. The DOT showed high alternate forms reliability (r = .90-.92), and the two versions yielded equivalent levels of performance. The DOT was highly correlated with BD (r = .76-.79) and was significantly correlated with all subscales of the WASI. The DOT proved useful when used in lieu of BD in the calculation of WASI IQ scores. Findings support the reliability and validity of the DOT as a measure of visuospatial ability and suggest its potential worth as an efficient estimate of intellectual functioning in situations where lengthier tests may be inappropriate or unfeasible.

  15. 2-D Circulation Control Airfoil Benchmark Experiments Intended for CFD Code Validation

    NASA Technical Reports Server (NTRS)

    Englar, Robert J.; Jones, Gregory S.; Allan, Brian G.; Lin, John C.

    2009-01-01

    A current NASA Research Announcement (NRA) project being conducted by Georgia Tech Research Institute (GTRI) personnel and NASA collaborators includes the development of Circulation Control (CC) blown airfoils to improve subsonic aircraft high-lift and cruise performance. The emphasis of this program is the development of CC active flow control concepts for high-lift augmentation, drag control, and cruise efficiency. The collaboration includes work by NASA research engineers; the CFD validation and flow physics experimental research are part of NASA's systematic approach to developing design and optimization tools for CC applications to fixed-wing aircraft. The design space for CESTOL-type aircraft is focusing on geometries that depend on advanced flow control technologies, including Circulation Control aerodynamics. The ability to consistently predict advanced aircraft performance requires improvements in design tools to include these advanced concepts. Validation of these tools will be based on experimental methods applied to complex flows that go beyond conventional aircraft modeling techniques. This paper focuses on recent and ongoing benchmark high-lift experiments and CFD efforts intended to provide 2-D CFD validation data sets related to NASA's Cruise Efficient Short Take Off and Landing (CESTOL) study. Both the experimental data and related CFD predictions are discussed.

  16. Adulteration of diesel/biodiesel blends by vegetable oil as determined by Fourier transform (FT) near infrared spectrometry and FT-Raman spectroscopy.

    PubMed

    Oliveira, Flavia C C; Brandão, Christian R R; Ramalho, Hugo F; da Costa, Leonardo A F; Suarez, Paulo A Z; Rubim, Joel C

    2007-03-28

    In this work it has been shown that the routine ASTM methods (ASTM 4052, ASTM D 445, ASTM D 4737, ASTM D 93, and ASTM D 86) recommended by the ANP (the Brazilian National Agency for Petroleum, Natural Gas and Biofuels) to determine the quality of diesel/biodiesel blends are not suitable to prevent the adulteration of B2 or B5 blends with vegetable oils. Considering the past and current problems with fuel adulteration in Brazil, we have investigated the application of vibrational spectroscopy (Fourier transform (FT) near infrared spectrometry and FT-Raman) to identify adulterations of B2 and B5 blends with vegetable oils. Partial least squares regression (PLS), principal component regression (PCR), and artificial neural network (ANN) calibration models were designed and their relative performances were evaluated by external validation using the F-test. The PCR, PLS, and ANN calibration models based on Fourier transform (FT) near infrared spectrometry and FT-Raman spectroscopy were designed using 120 samples. Another 62 samples were used in the validation and external validation, for a total of 182 samples. The results have shown that among the designed calibration models, the ANN/FT-Raman presented the best accuracy (0.028%, w/w) for samples used in the external validation.
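    The calibration/external-validation split described above (120 calibration samples, 62 held-out samples) can be sketched with a minimal principal component regression (PCR), one of the three model types compared. The "spectra" below are synthetic stand-ins generated for illustration, not real FT-Raman data.

```python
import numpy as np

rng = np.random.default_rng(42)
n_cal, n_val, n_chan = 120, 62, 200

conc = rng.uniform(0.0, 10.0, n_cal + n_val)      # hypothetical adulterant level (% w/w)
basis = rng.normal(size=(2, n_chan))               # latent spectral components
X = (np.outer(conc, basis[0])                      # concentration-dependent signal
     + rng.normal(size=(n_cal + n_val, 1)) * basis[1]  # confounding variation
     + 0.05 * rng.normal(size=(n_cal + n_val, n_chan)))  # channel noise

Xc, yc = X[:n_cal], conc[:n_cal]                   # calibration set
Xv, yv = X[n_cal:], conc[n_cal:]                   # external validation set

# PCA on mean-centred calibration spectra; keep 3 principal components
mu = Xc.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc - mu, full_matrices=False)
P = Vt[:3].T
coef, *_ = np.linalg.lstsq(np.c_[np.ones(n_cal), (Xc - mu) @ P], yc, rcond=None)

# Predict held-out samples and report root-mean-square error of prediction
pred = np.c_[np.ones(n_val), (Xv - mu) @ P] @ coef
rmsep = np.sqrt(np.mean((pred - yv) ** 2))
print(f"RMSEP on external validation set: {rmsep:.3f} % w/w")
```

    The same split lets competing models (PLS, ANN) be compared on identical held-out data, as the paper does via the F-test.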

  17. Overall uncertainty measurement for near infrared analysis of cryptotanshinone in tanshinone extract

    NASA Astrophysics Data System (ADS)

    Xue, Zhong; Xu, Bing; Shi, Xinyuan; Yang, Chan; Cui, Xianglong; Luo, Gan; Qiao, Yanjiang

    2017-01-01

    This study presented a new strategy for overall uncertainty measurement in near infrared (NIR) quantitative analysis of cryptotanshinone in tanshinone extract powders. The overall uncertainty of the NIR analysis was fully investigated and discussed, based on validation data from precision, trueness and robustness studies. Quality by design (QbD) elements, such as risk assessment and design of experiments (DOE), were utilized to organize the validation data. An "I × J × K" full factorial design (I series, J repetitions and K concentration levels) was used to calculate uncertainty from the precision and trueness data, and a 2^(7-4) Plackett-Burman matrix with four influence factors identified by failure mode and effect analysis (FMEA) was used for the robustness study. The overall uncertainty profile was introduced as a graphical decision-making tool to evaluate the validity of the NIR method over the predefined concentration range. In comparison with T. Saffaj's method (Analyst, 2013, 138, 4677) for overall uncertainty assessment, the proposed approach gave almost the same results, demonstrating that the proposed method was reasonable and valid. Moreover, the proposed method can help identify critical factors that influence the NIR prediction performance, which could be used for further optimization of the NIR analytical procedures in routine use.

  18. [The design and content validity of an infection control health education program for adolescents with cancer].

    PubMed

    Wang, Huey-Yuh; Chen, Yueh-Chih; Lin, Dong-Tsamn; Gau, Bih-Shya

    2005-06-01

    The purpose of this article is to describe the process of designing an Infection Control Health Education Program (ICP) for adolescents with cancer, to describe the content of that program, and to evaluate its validity. The program consisted of an audiovisual "Infection Control Health Education Program in Video Compact Disc (VCD)" and a "Self-Care Daily Checklist (SCDC)". The VCD was developed from systematic literature reviews and consultations with experts in pediatric oncology care. It addresses the main issues of infection control among adolescents. The content of the SCDC was designed to enhance adolescents' self-care capabilities by means of twice-daily self-recording. The response format for content validity of the VCD and SCDC was a 5-point Likert scale. The mean score for content validity was 4.72 for the VCD and 4.82 for the SCDC. The percentage of expert agreement was 99% for the VCD and 98% for the SCDC. In summary, the VCD was effective in improving adolescents' capacity for self-care, and the SCDC, which provided extensive reinforcement, was also shown to be useful. In a subsequent pilot study, the authors used this program to increase adolescent cancer patients' self-care knowledge and behavior and to decrease their levels of secondary infection.

  19. Portuguese community pharmacists' attitudes to and knowledge of antibiotic misuse: questionnaire development and reliability.

    PubMed

    Roque, Fátima; Soares, Sara; Breitenfeld, Luiza; Gonzalez-Gonzalez, Cristian; Figueiras, Adolfo; Herdeiro, Maria Teresa

    2014-01-01

    To develop and evaluate the reliability of a self-administered questionnaire designed to assess the attitudes and knowledge of community pharmacists in Portugal about microbial resistance and the antibiotic dispensing process. This study was divided into the following three stages: (1) design of the questionnaire, which included a literature review and a qualitative study with focus-group sessions; (2) assessment of face and content validity, using a panel of experts and a pre-test of community pharmacists; and (3) a pilot study and reliability analysis, which included a test-retest study covering fifty practising pharmacists based at community pharmacies in five districts in Northern Portugal. Questionnaire reproducibility was quantified using the intraclass correlation coefficient (ICC; 95% confidence interval) computed by means of one-way analysis of variance (ANOVA). Internal consistency was evaluated using Cronbach's alpha. The correlation coefficients were fair to good (ICC>0.4) for all statements (scale items) regarding knowledge of and attitudes to antibiotic resistance, and ranged from fair to good to excellent for statements about situations in which pharmacists acknowledged that antibiotics were sometimes dispensed without a medical prescription (ICC>0.8). Cronbach's alpha for this section was 0.716. The questionnaire designed in this study is valid and reliable in terms of content validity, face validity and reproducibility.
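    The test-retest reproducibility analysis described above (ICC from a one-way ANOVA) can be sketched as follows. The paired ratings are synthetic, generated only to illustrate the calculation; the study's actual item scores are not reproduced here.

```python
import numpy as np

def icc_oneway(ratings: np.ndarray) -> float:
    """ICC(1,1) from one-way ANOVA for an (n_subjects, k_ratings) matrix."""
    n, k = ratings.shape
    grand = ratings.mean()
    row_means = ratings.mean(axis=1)
    msb = k * ((row_means - grand) ** 2).sum() / (n - 1)               # between-subjects MS
    msw = ((ratings - row_means[:, None]) ** 2).sum() / (n * (k - 1))  # within-subjects MS
    return (msb - msw) / (msb + (k - 1) * msw)

rng = np.random.default_rng(1)
true_score = rng.normal(3.0, 1.0, size=50)       # 50 pharmacists' latent item scores
test = true_score + 0.4 * rng.normal(size=50)    # first administration
retest = true_score + 0.4 * rng.normal(size=50)  # second administration
icc = icc_oneway(np.column_stack([test, retest]))
print(f"ICC(1,1): {icc:.2f}")
```

    Under the thresholds used in the study, ICC values above 0.4 are read as fair to good and values above 0.8 as excellent reproducibility.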

  20. Design and content validation of a set of SMS to promote seeking of specialized mental health care within the Allillanchu Project.

    PubMed

    Toyama, M; Diez-Canseco, F; Busse, P; Del Mastro, I; Miranda, J J

    2018-01-01

    The aim of this study was to design and develop a set of short message service (SMS) messages to promote specialized mental health care seeking within the framework of the Allillanchu Project. The design phase consisted of 39 interviews with potential recipients of the SMS about their use of cellphones and their perceptions of and motivations towards seeking mental health care. After the data collection, the research team developed a set of seven SMS for validation. The content validation phase consisted of 24 interviews. The participants answered questions regarding their understanding of the SMS contents and rated its appeal. The seven SMS subjected to content validation were tailored to the recipient using their name. The reminder message included the working hours of the psychology service at the patient's health center. The motivational messages addressed perceived barriers and benefits when seeking mental health services. The average appeal score of the seven SMS was 9.0 (SD±0.4) out of 10 points. Participants did not make significant suggestions to change the wording of the messages. Five SMS were chosen to be used. This approach is likely to be applicable to other similar low-resource settings, and the methodology used can be adapted to develop SMS for other chronic conditions.
