Sample records for "validation process included"

  1. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    Title 21 (Food and Drugs), Part 1271 (Human Cells, Tissues, and Cellular and Tissue-Based Products), Subpart D (Current Good Tissue Practice), § 1271.230 Process validation: (a) where the results of processing cannot be fully verified by subsequent inspection and tests, the process must be validated and approved according to established procedures; all validation activities and results must be documented, including the date and signature of the individual(s) performing the validation.

  2. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    Title 21 (Food and Drugs), Part 1271 (Human Cells, Tissues, and Cellular and Tissue-Based Products), Subpart D (Current Good Tissue Practice), § 1271.230 Process validation: (a) where the results of processing cannot be fully verified by subsequent inspection and tests, the process must be validated and approved according to established procedures; all validation activities and results must be documented, including the date and signature of the individual(s) performing the validation.

  3. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    Title 21 (Food and Drugs), Part 1271 (Human Cells, Tissues, and Cellular and Tissue-Based Products), Subpart D (Current Good Tissue Practice), § 1271.230 Process validation: (a) where the results of processing cannot be fully verified by subsequent inspection and tests, the process must be validated and approved according to established procedures; all validation activities and results must be documented, including the date and signature of the individual(s) performing the validation.

  4. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Title 21 (Food and Drugs), Part 1271 (Human Cells, Tissues, and Cellular and Tissue-Based Products), Subpart D (Current Good Tissue Practice), § 1271.230 Process validation: (a) where the results of processing cannot be fully verified by subsequent inspection and tests, the process must be validated and approved according to established procedures; all validation activities and results must be documented, including the date and signature of the individual(s) performing the validation.

  5. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    Title 21 (Food and Drugs), Part 1271 (Human Cells, Tissues, and Cellular and Tissue-Based Products), Subpart D (Current Good Tissue Practice), § 1271.230 Process validation: (a) where the results of processing cannot be fully verified by subsequent inspection and tests, the process must be validated and approved according to established procedures; all validation activities and results must be documented, including the date and signature of the individual(s) performing the validation.

  6. Validation Process Methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lewis, John E.; English, Christine M.; Gesick, Joshua C.

    This report documents the validation process as applied to projects awarded through Funding Opportunity Announcements (FOAs) within the U.S. Department of Energy Bioenergy Technologies Office (DOE-BETO). It describes the procedures used to protect and verify project data, as well as the systematic framework used to evaluate and track performance metrics throughout the life of the project. This report also describes the procedures used to validate the proposed process design, cost data, analysis methodologies, and supporting documentation provided by the recipients.

  7. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    Title 21 (Food and Drugs), Part 820 (Quality System Regulation), Production and Process Controls, § 820.75 Process validation: (a) where the results of a process cannot be fully verified by subsequent inspection and test, the process shall be validated with a high degree of assurance and approved according to established procedures; the validation activities and results shall be documented.

  8. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    Title 21 (Food and Drugs), Part 820 (Quality System Regulation), Production and Process Controls, § 820.75 Process validation: (a) where the results of a process cannot be fully verified by subsequent inspection and test, the process shall be validated with a high degree of assurance and approved according to established procedures; the validation activities and results shall be documented.

  9. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    Title 21 (Food and Drugs), Part 820 (Quality System Regulation), Production and Process Controls, § 820.75 Process validation: (a) where the results of a process cannot be fully verified by subsequent inspection and test, the process shall be validated with a high degree of assurance and approved according to established procedures; the validation activities and results shall be documented.

  10. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    Title 21 (Food and Drugs), Part 820 (Quality System Regulation), Production and Process Controls, § 820.75 Process validation: (a) where the results of a process cannot be fully verified by subsequent inspection and test, the process shall be validated with a high degree of assurance and approved according to established procedures; the validation activities and results shall be documented.

  11. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    Title 21 (Food and Drugs), Part 820 (Quality System Regulation), Production and Process Controls, § 820.75 Process validation: (a) where the results of a process cannot be fully verified by subsequent inspection and test, the process shall be validated with a high degree of assurance and approved according to established procedures; the validation activities and results shall be documented.

  12. 76 FR 4360 - Guidance for Industry on Process Validation: General Principles and Practices; Availability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-25

    Announces the availability of an FDA guidance for industry outlining the elements of process validation for the manufacture of human and animal drug and biological products, including APIs. The guidance describes process validation activities in three stages: in Stage 1, Process Design, the commercial manufacturing process is defined based on knowledge gained through development and scale-up activities…

  13. Assessment Methodology for Process Validation Lifecycle Stage 3A.

    PubMed

    Sayeed-Desta, Naheed; Pazhayattil, Ajay Babu; Collins, Jordan; Chen, Shu; Ingram, Marzena; Spes, Jana

    2017-07-01

    The paper introduces evaluation methodologies and associated statistical approaches for process validation lifecycle Stage 3A. The assessment tools proposed can be applied to newly developed and launched small-molecule as well as bio-pharma products, where substantial process and product knowledge has been gathered. The following elements may be included in Stage 3A: determination of the number of Stage 3A batches; evaluation of critical material attributes, critical process parameters, and critical quality attributes; in vivo-in vitro correlation; estimation of inherent process variability (IPV) and the PaCS index; a process capability and quality dashboard (PCQd); and an enhanced control strategy. The US FDA guidance on Process Validation: General Principles and Practices (January 2011) encourages applying previous credible experience with suitably similar products and processes. A complete Stage 3A evaluation is a valuable resource for product development and future risk mitigation of similar products and processes. Elements of the 3A assessment were developed to address industry and regulatory guidance requirements. The conclusions provide sufficient information to make a scientific, risk-based decision on product robustness.

  14. Using process elicitation and validation to understand and improve chemotherapy ordering and delivery.

    PubMed

    Mertens, Wilson C; Christov, Stefan C; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Cassells, Lucinda J; Marquard, Jenna L

    2012-11-01

    Chemotherapy ordering and administration, in which errors have potentially severe consequences, was quantitatively and qualitatively evaluated by employing process formalism (or formal process definition), a technique derived from software engineering, to elicit and rigorously describe the process, after which validation techniques were applied to confirm the accuracy of the described process. The chemotherapy ordering and administration process, including exceptional situations and individuals' recognition of and responses to those situations, was elicited through informal, unstructured interviews with members of an interdisciplinary team. The process description (or process definition), written in a notation developed for software quality assessment purposes, guided process validation (which consisted of direct observations and semistructured interviews to confirm the elicited details for the treatment plan portion of the process). The overall process definition yielded 467 steps; 207 steps (44%) were dedicated to handling 59 exceptional situations. Validation yielded 82 unique process events (35 new expected but not yet described steps, 16 new exceptional situations, and 31 new steps in response to exceptional situations). Process participants actively altered the process as ambiguities and conflicts were discovered by the elicitation and validation components of the study. Chemotherapy error rates declined significantly during and after the project, which was conducted from October 2007 through August 2008. Each elicitation method and the subsequent validation discussions contributed uniquely to understanding the chemotherapy treatment plan review process, supporting rapid adoption of changes, improved communication regarding the process, and ensuing error reduction.

  15. Verifying and Validating Proposed Models for FSW Process Optimization

    NASA Technical Reports Server (NTRS)

    Schneider, Judith

    2008-01-01

    This slide presentation reviews Friction Stir Welding (FSW) and attempts to model the process in order to optimize and improve it. Studies are ongoing to validate and refine the model of metal flow in the FSW process. There are slides showing the conventional FSW process, a couple of weld tool designs, and how the design interacts with the metal flow path. The two basic components of the weld tool are shown, along with geometries of the shoulder design. Modeling of the FSW process is reviewed. Other topics include (1) microstructure features, (2) flow streamlines, (3) steady-state nature, and (4) grain refinement mechanisms.

  16. STR-validator: an open source platform for validation and process control.

    PubMed

    Hansson, Oskar; Gill, Peter; Egeland, Thore

    2014-11-01

    This paper addresses two problems faced when short tandem repeat (STR) systems are validated for forensic purposes: (1) validation is extremely time consuming and expensive, and (2) there is strong consensus about what to validate but not how. The first problem is solved by powerful data processing functions to automate calculations. Utilising an easy-to-use graphical user interface, strvalidator (hereafter referred to as STR-validator) can greatly increase the speed of validation. The second problem is exemplified by a series of analyses, and subsequent comparison with published material, highlighting the need for a common validation platform. If adopted by the forensic community, STR-validator has the potential to standardise the analysis of validation data. This would not only facilitate information exchange but also increase the pace at which laboratories are able to switch to new technology.

  17. Guidance on validation and qualification of processes and operations involving radiopharmaceuticals.

    PubMed

    Todde, S; Peitl, P Kolenc; Elsinga, P; Koziorowski, J; Ferrari, V; Ocak, E M; Hjelstuen, O; Patt, M; Mindt, T L; Behe, M

    2017-01-01

    Validation and qualification activities are nowadays an integral part of the day-to-day routine work in a radiopharmacy. This document is meant as an Appendix of Part B of the EANM "Guidelines on Good Radiopharmacy Practice (GRPP)" issued by the Radiopharmacy Committee of the EANM, covering the qualification and validation aspects related to the small-scale "in house" preparation of radiopharmaceuticals. The aim is to provide more detailed and practice-oriented guidance to those who are involved in the small-scale preparation of radiopharmaceuticals which are not intended for commercial purposes or distribution. The present guideline covers the validation and qualification activities following the well-known "validation chain", which begins with drafting the general Validation Master Plan document, includes all the required documentation (e.g., User Requirement Specification, qualification protocols, etc.), and leads to the qualification of the equipment used in the preparation and quality control of radiopharmaceuticals, up to the final step of process validation. Specific guidance on the qualification and validation activities addressed to small-scale hospital/academia radiopharmacies is provided here, together with additional information and practical examples.

  18. Validity in the hiring and evaluation process.

    PubMed

    Gregg, Robert E

    2006-01-01

    Validity means "based on sound principles." Hiring decisions, discharges, and layoffs are often challenged in court. Unfortunately the employer's defenses are too often found "invalid." The Americans With Disabilities Act requires the employer to show a "validated" hiring process. Defense of discharges or layoffs often focuses on validity of the employer's decision. This article explains the elements of validity needed for sound and defendable employment decisions.

  19. Risk-based Methodology for Validation of Pharmaceutical Batch Processes.

    PubMed

    Wiles, Frederick

    2013-01-01

    In January 2011, the U.S. Food and Drug Administration published new process validation guidance for pharmaceutical processes. The new guidance debunks the long-held industry notion that three consecutive validation batches or runs are all that are required to demonstrate that a process is operating in a validated state. Instead, the new guidance emphasizes that the level of monitoring and testing performed during process performance qualification (PPQ) studies must be sufficient to demonstrate statistical confidence both within and between batches. In some cases, three qualification runs may not be enough. Nearly two years after the guidance was first published, little has been written defining a statistical methodology for determining the number of samples and qualification runs required to satisfy Stage 2 requirements of the new guidance. This article proposes using a combination of risk assessment, control charting, and capability statistics to define the monitoring and testing scheme required to show that a pharmaceutical batch process is operating in a validated state. In this methodology, an assessment of process risk is performed through application of a process failure mode, effects, and criticality analysis (PFMECA). The output of PFMECA is used to select appropriate levels of statistical confidence and coverage which, in turn, are used in capability calculations to determine when significant Stage 2 (PPQ) milestones have been met. The achievement of Stage 2 milestones signals the release of batches for commercial distribution and the reduction of monitoring and testing to commercial production levels. Individuals, moving-range, and range/sigma charts are used in conjunction with capability statistics to demonstrate that the commercial process is operating in a state of statistical control.
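
    Where this abstract ties chosen confidence and coverage to capability decisions, the underlying computation is typically a normal tolerance interval checked against specification limits. The Python/SciPy sketch below is illustrative only: the assay values, specification limits, and the use of Howe's approximation for the two-sided tolerance factor are assumptions, not details from the paper.

        import numpy as np
        from scipy import stats

        # Fabricated PPQ assay results (% label claim) from one batch.
        x = np.array([99.1, 100.3, 99.8, 100.9, 99.5, 100.1,
                      99.7, 100.4, 99.9, 100.6, 99.4, 100.2])
        LSL, USL = 95.0, 105.0              # hypothetical specification limits
        coverage, confidence = 0.99, 0.95   # chosen via risk assessment (e.g., PFMECA)

        n, mean, sd = len(x), x.mean(), x.std(ddof=1)

        # Howe's approximation for the two-sided normal tolerance factor k.
        z = stats.norm.ppf((1.0 + coverage) / 2.0)
        chi2 = stats.chi2.ppf(1.0 - confidence, n - 1)
        k = np.sqrt((n - 1) * (1.0 + 1.0 / n) * z**2 / chi2)

        lo, hi = mean - k * sd, mean + k * sd
        verdict = "PASS" if (LSL <= lo and hi <= USL) else "FAIL"
        print(f"99%/95% tolerance interval: [{lo:.2f}, {hi:.2f}] -> {verdict}")

    If the interval sits inside the specification limits, at least 99% of units are covered with 95% confidence, which is one common way to quantify a PPQ milestone.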

  20. Using 'big data' to validate claims made in the pharmaceutical approval process.

    PubMed

    Wasser, Thomas; Haynes, Kevin; Barron, John; Cziraky, Mark

    2015-01-01

    Big Data in the healthcare setting refers to the storage, assimilation, and analysis of large quantities of information regarding patient care. These data can be collected and stored in a wide variety of ways, including electronic medical records collected at the patient bedside or medical records that are coded and passed to insurance companies for reimbursement. When these data are processed, it is possible to validate claims as a part of the regulatory review process regarding the anticipated performance of medications and devices. In order to properly analyze claims by manufacturers and others, there is a need to express claims in terms that are testable in a timeframe that is useful and meaningful to formulary committees. Claims for the comparative benefits and costs, including budget impact, of products and devices need to be expressed in measurable terms, ideally in the context of submission or validation protocols. Claims should be either consistent with accessible Big Data or able to support observational studies where Big Data identifies target populations. Protocols should identify, in disaggregated terms, key variables that would lead to direct or proxy validation. Once these variables are identified, Big Data can be used to query massive quantities of data in the validation process. Research can be passive or active in nature: passive, where the data are collected retrospectively; active, where the researcher prospectively looks for indicators of co-morbid conditions, side effects, or adverse events, testing these indicators to determine whether claims are within the desired ranges set forth by the manufacturer. Additionally, Big Data can be used to assess the effectiveness of therapy through health insurance records; this could indicate, for example, that disease or co-morbid conditions cease to be treated. Understanding the basic strengths and weaknesses of Big Data in the claim validation process provides a glimpse of the value that this research…

  21. Integrated Process Modeling-A Process Validation Life Cycle Companion.

    PubMed

    Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph

    2017-10-17

    During the regulatory requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is furthermore demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications are shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows anticipating out-of-specification (OOS) events, identifying critical process parameters, and taking risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
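
    At its core, the integrated process model amounts to propagating parameter variation through chained unit-operation models and counting how often the final CQA falls outside specification. The Python sketch below illustrates that logic only; the two unit operations, their transfer functions, and all numbers are invented for illustration and are not the paper's IPM.

        import numpy as np

        rng = np.random.default_rng(7)
        N = 100_000  # Monte Carlo runs

        # Hypothetical process parameters, sampled around their setpoints.
        ph   = rng.normal(7.0, 0.05, N)    # unit operation 1: pH
        temp = rng.normal(30.0, 0.5, N)    # unit operation 1: temperature [degC]
        load = rng.normal(25.0, 1.0, N)    # unit operation 2: column load [g/L]

        # Invented transfer functions: unit operation 1 produces an intermediate
        # attribute that feeds unit operation 2, so the steps interact.
        intermediate = 90.0 + 4.0 * (ph - 7.0) - 0.3 * (temp - 30.0) ** 2
        cqa = intermediate - 0.08 * (load - 25.0) ** 2 + rng.normal(0.0, 0.4, N)

        spec_lo = 88.0  # hypothetical lower specification limit for the CQA
        print(f"Predicted OOS rate: {np.mean(cqa < spec_lo):.4%}")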

  22. Validation of the Vanderbilt Holistic Face Processing Test.

    PubMed

    Wang, Chao-Chih; Ross, David A; Gauthier, Isabel; Richler, Jennifer J

    2016-01-01

    The Vanderbilt Holistic Face Processing Test (VHPT-F) is a new measure of holistic face processing with better psychometric properties relative to prior measures developed for group studies (Richler et al., 2014). In fields where psychologists study individual differences, validation studies are commonplace and the concurrent validity of a new measure is established by comparing it to an older measure with established validity. We follow this approach and test whether the VHPT-F measures the same construct as the composite task, which is a group-based measure at the center of the large literature on holistic face processing. In Experiment 1, we found a significant correlation between holistic processing measured in the VHPT-F and the composite task. Although this correlation was small, it was comparable to the correlation between holistic processing measured in the composite task with the same faces, but different target parts (top or bottom), which represents a reasonable upper limit for correlations between the composite task and another measure of holistic processing. These results confirm the validity of the VHPT-F by demonstrating shared variance with another measure of holistic processing based on the same operational definition. These results were replicated in Experiment 2, but only when the demographic profile of our sample matched that of Experiment 1.

  23. FDA 2011 process validation guidance: lifecycle compliance model.

    PubMed

    Campbell, Cliff

    2014-01-01

    This article has been written as a contribution to the industry's efforts in migrating from a document-driven to a data-driven compliance mindset. A combination of target product profile, control engineering, and general sum principle techniques is presented as the basis of a simple but scalable lifecycle compliance model in support of modernized process validation. Unit operations and significant variables occupy pole position within the model, documentation requirements being treated as a derivative or consequence of the modeling process. The quality system is repositioned as a subordinate of system quality, this being defined as the integral of related "system qualities". The article represents a structured interpretation of the U.S. Food and Drug Administration's 2011 Guidance for Industry on Process Validation and is based on the author's educational background and his manufacturing/consulting experience in the validation field. The U.S. Food and Drug Administration's Guidance for Industry on Process Validation (2011) provides a wide-ranging and rigorous outline of compliant drug manufacturing requirements relative to its 20th-century predecessor (1987). Its declared focus is patient safety, and it identifies three inter-related (and obvious) stages of the compliance lifecycle. Firstly, processes must be designed, both from a technical and quality perspective. Secondly, processes must be qualified, providing evidence that the manufacturing facility is fully "roadworthy" and fit for its intended purpose. Thirdly, processes must be verified, meaning that commercial batches must be monitored to ensure that processes remain in a state of control throughout their lifetime.

  24. A Process Analytical Technology (PAT) approach to control a new API manufacturing process: development, validation and implementation.

    PubMed

    Schaefer, Cédric; Clicq, David; Lecomte, Clémence; Merschaert, Alain; Norrant, Edith; Fotiadu, Frédéric

    2014-03-01

    Pharmaceutical companies are progressively adopting and introducing Process Analytical Technology (PAT) and Quality-by-Design (QbD) concepts promoted by the regulatory agencies, aiming to build quality directly into the product by combining thorough scientific understanding and quality risk management. An analytical method based on near-infrared (NIR) spectroscopy was developed as a PAT tool to control on-line an API (active pharmaceutical ingredient) manufacturing crystallization step, during which the API and residual solvent contents need to be precisely determined to reach the predefined seeding point. An original methodology based on the QbD principles was designed to conduct the development and validation of the NIR method and to ensure that it is fit for its intended use. On this basis, partial least squares (PLS) models were developed and optimized using chemometric methods. The method was fully validated according to the ICH Q2(R1) guideline and using the accuracy profile approach. The dosing ranges were established at 9.0-12.0% w/w for the API and 0.18-1.50% w/w for residual methanol. Because the variability of the sampling and reference methods is by nature included in the variability obtained for the NIR method during the validation phase, a real-time process monitoring exercise was performed to prove fitness for purpose. The implementation of this in-process control (IPC) method on the industrial plant from the launch of the new API synthesis process will enable automatic control of the final crystallization step to ensure a predefined quality level of the API. Several valuable benefits are also expected, including reduced process time and elimination of difficult sampling and tedious off-line analyses.
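
    The chemometric core of such an NIR method is a PLS calibration relating spectra to analyte content. The scikit-learn sketch below runs on synthetic spectra; the data, the three-component choice, and the cross-validated error check are illustrative assumptions, not the validated method from the paper.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        rng = np.random.default_rng(0)

        # Synthetic stand-in for NIR calibration data: 60 spectra x 200
        # wavelengths, with API content in the 9.0-12.0% w/w range.
        api = rng.uniform(9.0, 12.0, 60)
        band = np.exp(-0.5 * ((np.arange(200) - 80) / 12.0) ** 2)
        spectra = np.outer(api, band) + rng.normal(0.0, 0.05, (60, 200))

        pls = PLSRegression(n_components=3)
        pred = cross_val_predict(pls, spectra, api, cv=10).ravel()
        rmsecv = np.sqrt(np.mean((pred - api) ** 2))
        print(f"RMSECV: {rmsecv:.3f} % w/w")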

  25. Development and Validation of Pathogen Environmental Monitoring Programs for Small Cheese Processing Facilities.

    PubMed

    Beno, Sarah M; Stasiewicz, Matthew J; Andrus, Alexis D; Ralyea, Robert D; Kent, David J; Martin, Nicole H; Wiedmann, Martin; Boor, Kathryn J

    2016-12-01

    Pathogen environmental monitoring programs (EMPs) are essential for food processing facilities of all sizes that produce ready-to-eat food products exposed to the processing environment. We developed, implemented, and evaluated EMPs targeting Listeria spp. and Salmonella in nine small cheese processing facilities, including seven farmstead facilities. Individual EMPs with monthly sample collection protocols were designed specifically for each facility. Salmonella was detected in only one facility, with likely introduction from the adjacent farm indicated by pulsed-field gel electrophoresis data. Listeria spp. were isolated from all nine facilities during routine sampling. The overall prevalences of Listeria spp. (other than Listeria monocytogenes) and L. monocytogenes in the 4,430 environmental samples collected were 6.03% and 1.35%, respectively. Molecular characterization and subtyping data suggested persistence of a given Listeria spp. strain in seven facilities and persistence of L. monocytogenes in four facilities. To assess routine sampling plans, validation sampling for Listeria spp. was performed in seven facilities after at least 6 months of routine sampling. This validation sampling was performed by independent individuals and included collection of 50 to 150 samples per facility, based on statistical sample size calculations. Two of the facilities had a significantly higher frequency of detection of Listeria spp. during the validation sampling than during routine sampling, whereas two other facilities had significantly lower frequencies of detection. This study provides a model for a science- and statistics-based approach to developing and validating pathogen EMPs.
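
    The sample sizes quoted above (50 to 150 swabs per facility) can be motivated by the standard zero-failure detection calculation: how many samples are needed to see at least one positive with 95% confidence at an assumed prevalence. The Python sketch below is that textbook calculation, not necessarily the authors' exact method.

        import math

        # Smallest n with P(at least one positive) >= 0.95 at prevalence p:
        # 1 - (1 - p)**n >= 0.95  =>  n >= ln(0.05) / ln(1 - p)
        for p in (0.01, 0.02, 0.05, 0.10):
            n = math.ceil(math.log(0.05) / math.log(1.0 - p))
            print(f"prevalence {p:.0%}: n = {n} samples")

    Under these assumptions, 50 to 150 samples correspond roughly to detectable prevalences between about 2% and 6%.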

  26. Including hydrological self-regulating processes in peatland models: Effects on peatmoss drought projections.

    PubMed

    Nijp, Jelmer J; Metselaar, Klaas; Limpens, Juul; Teutschbein, Claudia; Peichl, Matthias; Nilsson, Mats B; Berendse, Frank; van der Zee, Sjoerd E A T M

    2017-02-15

    The water content of the topsoil is one of the key factors controlling biogeochemical processes, greenhouse gas emissions, and biosphere-atmosphere interactions in many ecosystems, particularly in northern peatlands. In these wetland ecosystems, the water content of the photosynthetically active peatmoss layer is crucial for ecosystem functioning and carbon sequestration, and is sensitive to future shifts in rainfall and drought characteristics. Current peatland models differ in the degree to which hydrological feedbacks are included, but how this affects peatmoss drought projections is unknown. The aim of this paper was to systematically test whether the level of hydrological detail in models could bias projections of water content and drought stress for peatmoss in northern peatlands, using downscaled projections for rainfall and potential evapotranspiration in the current (1991-2020) and future climate (2061-2090). We considered four model variants that either include or exclude moss (rain)water storage and peat volume change, as these are two central processes in the hydrological self-regulation of peatmoss carpets. Model performance was validated using field data of a peatland in northern Sweden. Including moss water storage as well as peat volume change resulted in a significant improvement of model performance, despite the extra parameters added. The best performance was achieved if both processes were included. Including moss water storage and peat volume change consistently reduced projected peatmoss drought frequency by >50%, relative to the model excluding both processes. Projected peatmoss drought frequency in the growing season was 17% smaller under future climate than current climate, but was unaffected by including the hydrological self-regulating processes. Our results suggest that ignoring these two fine-scale processes important in hydrological self-regulation of northern peatlands will have large consequences for projected climate change impact on…

  27. Validation, Edits, and Application Processing System Report: Phase I.

    ERIC Educational Resources Information Center

    Gray, Susan; And Others

    Findings of phase 1 of a study of the 1979-1980 Basic Educational Opportunity Grants validation, edits, and application processing system are presented. The study was designed to: assess the impact of the validation effort and processing system edits on the correct award of Basic Grants; and assess the characteristics of students most likely to…

  28. Examining the validity of self-reports on scales measuring students' strategic processing.

    PubMed

    Samuelstuen, Marit S; Bråten, Ivar

    2007-06-01

    Self-report inventories trying to measure strategic processing at a global level have been much used in both basic and applied research. However, the validity of global strategy scores is open to question because such inventories assess strategy perceptions outside the context of specific task performance. The primary aim was to examine the criterion-related and construct validity of the global strategy data obtained with the Cross-Curricular Competencies (CCC) scale. Additionally, we wanted to compare the validity of these data with the validity of data obtained with a task-specific self-report inventory focusing on the same types of strategies. The sample included 269 10th-grade students from 12 different junior high schools. Global strategy use as assessed with the CCC was compared with task-specific strategy use reported in three different reading situations. Moreover, relationships between scores on the CCC and scores on measures of text comprehension were examined and compared with relationships between scores on the task-specific strategy measure and the same comprehension measures. The comparison between the CCC strategy scores and the task-specific strategy scores suggested only modest criterion-related validity for the data obtained with the global strategy inventory. The CCC strategy scores were also not related to the text comprehension measures, indicating poor construct validity. In contrast, the task-specific strategy scores were positively related to the comprehension measures, indicating good construct validity. Attempts to measure strategic processing at a global level seem to have limited validity and utility.

  29. Risk perception and information processing: the development and validation of a questionnaire to assess self-reported information processing.

    PubMed

    Smerecnik, Chris M R; Mesters, Ilse; Candel, Math J J M; De Vries, Hein; De Vries, Nanne K

    2012-01-01

    The role of information processing in understanding people's responses to risk information has recently received substantial attention. One limitation of this research concerns the unavailability of a validated questionnaire measuring information processing. This article presents two studies in which we describe the development and validation of the Information-Processing Questionnaire to meet that need. Study 1 describes the development and initial validation of the questionnaire. Participants were randomized to either a systematic processing or a heuristic processing condition, after which they completed a manipulation check and the initial 15-item questionnaire, repeating the questionnaire two weeks later. The questionnaire was subjected to factor, reliability, and validity analyses at both measurement times for cross-validation of the results. A two-factor solution was observed, representing a systematic processing and a heuristic processing subscale. The resulting scale showed good reliability and validity, with the systematic condition scoring significantly higher on the systematic subscale and the heuristic processing condition significantly higher on the heuristic subscale. Study 2 sought to further validate the questionnaire in a field study. Results of the second study corresponded with those of Study 1 and provided further evidence of the validity of the Information-Processing Questionnaire. The availability of this information-processing scale will be a valuable asset for future research and may provide researchers with new research opportunities.
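
    Questionnaire-validation records like this one lean on internal-consistency estimates, which are compact enough to compute in full. A minimal Cronbach's alpha sketch in Python (NumPy) follows; the five-item response matrix is fabricated for illustration.

        import numpy as np

        def cronbach_alpha(items: np.ndarray) -> float:
            # items: respondents x items matrix of scale scores.
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_vars / total_var)

        # Fabricated Likert responses: 8 respondents x 5 items.
        responses = np.array([
            [4, 5, 4, 4, 5],
            [2, 2, 3, 2, 2],
            [5, 5, 5, 4, 5],
            [3, 3, 2, 3, 3],
            [4, 4, 5, 4, 4],
            [1, 2, 1, 2, 1],
            [3, 4, 3, 3, 4],
            [5, 4, 5, 5, 5],
        ])
        print(f"alpha = {cronbach_alpha(responses):.2f}")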

  30. Eye-Tracking as a Tool in Process-Oriented Reading Test Validation

    ERIC Educational Resources Information Center

    Solheim, Oddny Judith; Uppstad, Per Henning

    2011-01-01

    The present paper addresses the continuous need for methodological reflection on how to validate inferences made on the basis of test scores. Validation is a process that requires many lines of evidence. In this article we discuss the potential of eye tracking methodology in process-oriented reading test validation. Methodological considerations…

  31. Management of the General Process of Parenteral Nutrition Using mHealth Technologies: Evaluation and Validation Study

    PubMed Central

    Cervera Peris, Mercedes; Alonso Rorís, Víctor Manuel; Santos Gago, Juan Manuel; Álvarez Sabucedo, Luis; Wanden-Berghe, Carmina; Sanz-Valero, Javier

    2018-04-03

    Background: Any system applied to the control of parenteral nutrition (PN) ought to prove that the process meets the established requirements and include a repository of records to allow evaluation of the information about PN processes at any time. Objective: The goal of the research was to evaluate the mobile health (mHealth) app and validate its effectiveness in monitoring the management of the PN process. Methods: We studied the evaluation and validation of the general process of PN using an mHealth app. The units of analysis were the PN bags prepared and administered at the Son Espases University Hospital, Palma, Spain, from June 1 to September 6, 2016. For the evaluation of the app, we used the Poststudy System Usability Questionnaire and subsequent analysis with the Cronbach alpha coefficient. Validation was performed by checking the compliance of control for all operations on each of the stages (validation and transcription of the prescription, preparation, conservation, and administration) and by monitoring the operative control points and critical control points. Results: The results obtained from 387 bags were analyzed, with 30 interruptions of administration. The fulfillment of stages was 100%, including noncritical nonconformities in the storage control. The average deviation in the weight of the bags was less than 5%, and the infusion time did not present deviations greater than 1 hour. Conclusions: The developed app successfully passed the evaluation and validation tests and was implemented to perform the monitoring procedures for the overall PN process. A new mobile solution to manage the quality and traceability of sensitive medicines such as blood-derivative drugs and hazardous drugs derived from this project is currently being deployed. PMID:29615389

  32. How Many Batches Are Needed for Process Validation under the New FDA Guidance?

    PubMed

    Yang, Harry

    2013-01-01

    The newly updated FDA Guidance for Industry on Process Validation: General Principles and Practices ushers in a life cycle approach to process validation. While the guidance no longer considers the use of traditional three-batch validation appropriate, it does not prescribe the number of validation batches for a prospective validation protocol, nor does it provide specific methods to determine it. This potentially could leave manufacturers in a quandary. In this paper, I develop a Bayesian method to address the issue. By combining process knowledge gained from Stage 1 Process Design (PD) with expected outcomes of Stage 2 Process Performance Qualification (PPQ), the number of validation batches for PPQ is determined to provide a high level of assurance that the process will consistently produce future batches meeting quality standards. Several examples based on simulated data are presented to illustrate the use of the Bayesian method in helping manufacturers make risk-based decisions for Stage 2 PPQ, and they highlight the advantages of the method over traditional Frequentist approaches. The discussions in the paper lend support for a life cycle and risk-based approach to process validation recommended in the new FDA guidance.
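
    The Bayesian reasoning described above can be sketched with a Beta-Binomial model: encode Stage 1 (Process Design) knowledge as a Beta prior on the batch conformance rate, then find the smallest number of conforming PPQ batches that pushes posterior assurance past a target. The prior counts, target rate, and assurance level below are invented for illustration; this is not the paper's actual model.

        from scipy import stats

        # Hypothetical Stage 1 knowledge: 18 conforming batches out of 20
        # development/scale-up runs, on top of a uniform Beta(1, 1) prior.
        a0, b0 = 1 + 18, 1 + 2

        TARGET_RATE = 0.80  # required long-run conformance rate (assumed)
        ASSURANCE = 0.90    # required posterior probability (assumed)

        # If all n PPQ batches conform, the posterior is Beta(a0 + n, b0).
        # Find the smallest n that achieves the required assurance.
        for n in range(1, 21):
            assurance = stats.beta.sf(TARGET_RATE, a0 + n, b0)
            if assurance >= ASSURANCE:
                print(f"PPQ batches needed: {n} "
                      f"(P[rate > {TARGET_RATE}] = {assurance:.3f})")
                break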

  33. Validity of a questionnaire measuring motives for choosing foods including sustainable concerns.

    PubMed

    Sautron, Valérie; Péneau, Sandrine; Camilleri, Géraldine M; Muller, Laurent; Ruffieux, Bernard; Hercberg, Serge; Méjean, Caroline

    2015-04-01

    Since the 1990s, sustainability of diet has become an increasingly important concern for consumers. However, no validated multidimensional measure of food choice motives that includes sustainability concerns is currently available. In the present study, we developed a questionnaire that measures food choice motives during purchasing, and we tested its psychometric properties. The questionnaire included 104 items divided into four predefined dimensions (environmental, health and well-being, economic, and miscellaneous). It was administered to 1000 randomly selected subjects participating in the Nutrinet-Santé cohort study. Among 637 responders, one-third found the questionnaire complex or too long, while one-quarter found it difficult to fill in. Its underlying structure was determined by exploratory factor analysis and then internally validated by confirmatory factor analysis. Reliability was also assessed by internal consistency of selected dimensions and test-retest repeatability. After selecting the most relevant items, first-order analysis highlighted nine main dimensions, labeled: ethics and environment, local and traditional production, taste, price, environmental limitations, health, convenience, innovation, and absence of contaminants. The model demonstrated excellent internal validity (adjusted goodness-of-fit index = 0.97; standardized root mean square residuals = 0.07) and satisfactory reliability (internal consistency = 0.96; test-retest repeatability coefficients ranged between 0.31 and 0.68 over a mean 4-week period). This study enabled precise identification of the various dimensions in food choice motives and proposed an original, internally valid tool applicable to large populations for assessing consumer food motivation during purchasing, particularly in terms of sustainability.

  34. The Role of Structural Models in the Solar Sail Flight Validation Process

    NASA Technical Reports Server (NTRS)

    Johnston, John D.

    2004-01-01

    NASA is currently soliciting proposals via the New Millennium Program ST-9 opportunity for a potential Solar Sail Flight Validation (SSFV) experiment to develop and operate in space a deployable solar sail that can be steered and provides measurable acceleration. The approach planned for this experiment is to test and validate models and processes for solar sail design, fabrication, deployment, and flight. These models and processes would then be used to design, fabricate, and operate scalable solar sails for future space science missions. There are six validation objectives planned for the ST-9 SSFV experiment: 1) validate solar sail design tools and fabrication methods; 2) validate controlled deployment; 3) validate in-space structural characteristics (the focus of this poster); 4) validate solar sail attitude control; 5) validate solar sail thrust performance; 6) characterize the sail's electromagnetic interaction with the space environment. This poster presents a top-level assessment of the role of structural models in the validation process for in-space structural characteristics.

  35. Validation of a pulsed electric field process to pasteurize strawberry puree

    USDA-ARS?s Scientific Manuscript database

    An inexpensive data acquisition method was developed to validate the exact number and shape of the pulses applied during pulsed electric fields (PEF) processing. The novel validation method was evaluated in conjunction with developing a pasteurization PEF process for strawberry puree. Both buffered...

  36. Construct Validity in TOEFL iBT Speaking Tasks: Insights from Natural Language Processing

    ERIC Educational Resources Information Center

    Kyle, Kristopher; Crossley, Scott A.; McNamara, Danielle S.

    2016-01-01

    This study explores the construct validity of speaking tasks included in the TOEFL iBT (e.g., integrated and independent speaking tasks). Specifically, advanced natural language processing (NLP) tools, MANOVA difference statistics, and discriminant function analyses (DFA) are used to assess the degree to which and in what ways responses to these…

  37. Content validation using an expert panel: assessment process for assistive technology adopted by farmers with disabilities.

    PubMed

    Mathew, S N; Field, W E; French, B F

    2011-07-01

    This article reports the use of an expert panel to perform content validation of an experimental assessment process for the safety of assistive technology (AT) adopted by farmers with disabilities. The validation process was conducted by a panel of six experts experienced in the subject matter, i.e., design, use, and assessment of AT for farmers with disabilities. The exercise included an evaluation session and two focus group sessions. The evaluation session consisted of using the assessment process under consideration by the panel to evaluate a set of nine ATs fabricated by a farmer on his farm site. The expert panel also participated in the focus group sessions conducted immediately before and after the evaluation session. The resulting data were analyzed using discursive analysis, and the results were incorporated into the final assessment process. The method and the results are presented with recommendations for the use of expert panels in research projects and validation of assessment tools.

  38. Increased importance of the documented development stage in process validation.

    PubMed

    Mohammed-Ziegler, Ildikó; Medgyesi, Ildikó

    2012-07-01

    Current trends in pharmaceutical quality assurance shifted when the U.S. Food and Drug Administration (FDA) published its new guideline on process validation in 2011. This guidance introduced the lifecycle approach to process validation. In this short communication, some typical changes from the point of view of the practice of API production are addressed in the light of inspection experiences. Some details are compared with the European regulations.

  39. Comparing Thermal Process Validation Methods for Salmonella Inactivation on Almond Kernels.

    PubMed

    Jeong, Sanghyup; Marks, Bradley P; James, Michael K

    2017-01-01

    Ongoing regulatory changes are increasing the need for reliable process validation methods for pathogen reduction processes involving low-moisture products; however, the reliability of various validation methods has not been evaluated. Therefore, the objective was to quantify accuracy and repeatability of four validation methods (two biologically based and two based on time-temperature models) for thermal pasteurization of almonds. Almond kernels were inoculated with Salmonella Enteritidis phage type 30 or Enterococcus faecium (NRRL B-2354) at ~10^8 CFU/g, equilibrated to 0.24, 0.45, 0.58, or 0.78 water activity (a_w), and then heated in a pilot-scale, moist-air impingement oven (dry bulb 121, 149, or 177°C; dew point <33.0, 69.4, 81.6, or 90.6°C; v_air = 2.7 m/s) to a target lethality of ~4 log. Almond surface temperatures were measured in two ways, and those temperatures were used to calculate Salmonella inactivation using a traditional (D, z) model and a modified model accounting for process humidity. Among the process validation methods, both methods based on time-temperature models had better repeatability, with replication errors approximately half those of the surrogate (E. faecium). Additionally, the modified model yielded the lowest root mean squared error in predicting Salmonella inactivation (1.1 to 1.5 log CFU/g); in contrast, E. faecium yielded a root mean squared error of 1.2 to 1.6 log CFU/g, and the traditional model yielded an unacceptably high error (3.4 to 4.4 log CFU/g). Importantly, the surrogate and modified model both yielded lethality predictions that were statistically equivalent (α = 0.05) to actual Salmonella lethality. The results demonstrate the importance of methodology, a_w, and process humidity when validating thermal pasteurization processes for low-moisture foods, which should help processors select and interpret validation methods to ensure product safety.
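
    The traditional (D, z) lethality calculation referenced above integrates 10**((T - T_ref)/z) / D_ref over the product's time-temperature history. The Python sketch below shows the mechanics only; the (D, z) values and the temperature profile are invented and are not the paper's fitted parameters.

        import numpy as np

        # Invented (D, z) parameters: D_REF minutes per log reduction at T_REF.
        D_REF, Z, T_REF = 6.0, 15.0, 80.0   # [min], [degC], [degC]

        t = np.linspace(0.0, 8.0, 481)             # time [min]
        T = np.minimum(90.0, 25.0 + 15.0 * t)      # ramp to 90 degC, then hold

        # log reductions = integral of 10**((T - T_REF)/Z) / D_REF dt
        lethal_rate = 10.0 ** ((T - T_REF) / Z) / D_REF
        print(f"Predicted lethality: {np.trapz(lethal_rate, t):.1f} log")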

  40. A systematic review of validated methods for identifying anaphylaxis, including anaphylactic shock and angioneurotic edema, using administrative and claims data.

    PubMed

    Schneider, Gary; Kachroo, Sumesh; Jones, Natalie; Crean, Sheila; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest from administrative and claims data. This article summarizes the process and findings of the algorithm review of anaphylaxis. PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the anaphylaxis health outcome of interest. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify anaphylaxis and including validation estimates of the coding algorithms. Our search revealed limited literature focusing on anaphylaxis that provided administrative and claims data-based algorithms and validation estimates. Only four studies identified via literature searches provided validated algorithms; however, two additional studies were identified by Mini-Sentinel collaborators and were incorporated. The International Classification of Diseases, Ninth Revision, codes varied, as did the positive predictive value, depending on the cohort characteristics and the specific codes used to identify anaphylaxis. Research needs to be conducted on designing validation studies to test anaphylaxis algorithms and estimating their predictive power, sensitivity, and specificity.

  41. Funding for the 2nd IAEA Technical Meeting on Fusion Data Processing, Validation and Analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenwald, Martin

    The International Atomic Energy Agency (IAEA) will organize the second Technical Meeting on Fusion Data Processing, Validation and Analysis from 30 May to 02 June 2017, in Cambridge, MA, USA. The meeting will be hosted by the MIT Plasma Science and Fusion Center (PSFC). The objective of the meeting is to provide a platform where a set of topics relevant to fusion data processing, validation and analysis are discussed with a view to extrapolating needs to next-step fusion devices such as ITER. The validation and analysis of experimental data obtained from diagnostics used to characterize fusion plasmas are crucial for a knowledge-based understanding of the physical processes governing the dynamics of these plasmas. The meeting will aim at fostering, in particular, discussions of research and development results that set out or underline trends observed in the current major fusion confinement devices. General information on the IAEA, including its mission and organization, can be found at the IAEA website. Topics include: uncertainty quantification (UQ); model selection, validation, and verification (V&V); probability theory and statistical analysis; inverse problems and equilibrium reconstruction; integrated data analysis; real-time data analysis; machine learning; signal/image processing and pattern recognition; experimental design and synthetic diagnostics; and data management.

  42. A process improvement model for software verification and validation

    NASA Technical Reports Server (NTRS)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and space station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.

  43. From primed construct to motivated behavior: validation processes in goal pursuit.

    PubMed

    Demarree, Kenneth G; Loersch, Chris; Briñol, Pablo; Petty, Richard E; Payne, B Keith; Rucker, Derek D

    2012-12-01

    Past research has found that primes can automatically initiate unconscious goal striving. Recent models of priming have suggested that this effect can be moderated by validation processes. According to a goal-validation perspective, primes should cause changes in one's motivational state to the extent people have confidence in the prime-related mental content. Across three experiments, we provided the first direct empirical evidence for this goal-validation account. Using a variety of goal priming manipulations (cooperation vs. competition, achievement, and self-improvement vs. saving money) and validity inductions (power, ease, and writing about confidence), we demonstrated that the impact of goal primes on behavior occurs to a greater extent when conditions foster confidence (vs. doubt) in mental contents. Indeed, when conditions foster doubt, goal priming effects are eliminated or counter to the implications of the prime. The implications of these findings for research on goal priming and validation processes are discussed.

  7. Validation of the process criteria for assessment of a hospital nursing service.

    PubMed

    Feldman, Liliane Bauer; Cunha, Isabel Cristina Kowal Olm; D'Innocenzo, Maria

    2013-01-01

    To validate an instrument containing process criteria for assessment of a hospital nursing service, based on the National Accreditation Organization program. A descriptive, quantitative methodological study performed in stages. An instrument constructed with 69 process criteria was assessed by 49 nurses from accredited hospitals in 2009, according to a Likert scale, and validated by 16 judges through Delphi rounds in 2010. The original instrument of 69 process criteria assessed by the nurses was judged according to degree of importance and reduced to 39 criteria. In the first Delphi round, the 39 criteria reached consensus among the 19 judges, with medium reliability by Cronbach's alpha. In the second round, 40 converging criteria were validated by 16 judges, with high reliability. The criteria addressed management, costs, teaching, education, indicators, protocols, human resources and communication, among others. The 40 process criteria formed a validated instrument for assessing the hospital nursing service which, when measured, can better direct interventions by nurses in reaching and strengthening outcomes.

  8. In silico target prediction for elucidating the mode of action of herbicides including prospective validation.

    PubMed

    Chiddarwar, Rucha K; Rohrer, Sebastian G; Wolf, Antje; Tresch, Stefan; Wollenhaupt, Sabrina; Bender, Andreas

    2017-01-01

    The rapid emergence of pesticide resistance has given rise to a demand for herbicides with a new mode of action (MoA). In the agrochemical sector, with the availability of experimental high throughput screening (HTS) data, it is now possible to utilize in silico target prediction methods in the early discovery phase to suggest the MoA of a compound via data mining of bioactivity data. While this approach has been established in the pharmaceutical context, in the agrochemical area it poses rather different challenges, as we have found in this work, partially due to different chemistry, but even more so due to different (usually smaller) amounts of data and different ways of conducting HTS. With the aim of applying computational methods to facilitate herbicide target identification, 48,000 bioactivity data points against 16 herbicide targets were processed to train Laplacian-modified Naïve Bayesian (NB) classification models. The herbicide target prediction model ("HerbiMod") is an ensemble of 16 binary classification models which are evaluated by internal, external and prospective validation sets. In addition to the experimental inactives, 10,000 random agrochemical inactives were included in the training process, which was shown to improve the overall balanced accuracy of our models by up to 40%. For all the models, performance in terms of balanced accuracy of ≥80% was achieved in five-fold cross-validation. Ranking target predictions was addressed by means of z-scores, which improved predictivity over using raw scores alone. An external test set of 247 compounds from ChEMBL and a prospective test set of 394 compounds from BASF SE tested against five well-studied herbicide targets (ACC, ALS, HPPD, PDS and PROTOX) were used for further validation. Only 4% of the compounds in the external test set lay within the applicability domain and extrapolation (and correct prediction) was hence impossible, which on one hand was surprising, and on the other hand illustrated the utilization of

  9. Validation of the TTM processes of change measure for physical activity in an adult French sample.

    PubMed

    Bernard, Paquito; Romain, Ahmed-Jérôme; Trouillet, Raphael; Gernigon, Christophe; Nigg, Claudio; Ninot, Gregory

    2014-04-01

    Processes of change (POC) are constructs from the transtheoretical model that propose to examine how people engage in a behavior. However, there is no consensus about a leading model explaining POC, and there is no validated French POC scale in physical activity. This study aimed to compare the different existing models in order to validate a French POC scale. Three studies, with 748 subjects included, were carried out to translate the items and evaluate their clarity (study 1, n = 77), to assess the factorial validity (n = 200) and invariance/equivalence (study 2, n = 471), and to analyze the concurrent validity by stage × process analyses (study 3, n = 671). Two models displayed adequate fit to the data; however, based on the Akaike information criterion, the fully correlated five-factor model appeared the most appropriate to measure POC in physical activity. The invariance/equivalence was also confirmed across genders and student status. Four of the five existing factors discriminated pre-action and post-action stages. These data support the validation of the POC questionnaire in physical activity among a French sample. More research is needed to explore the longitudinal properties of this scale.

  10. Bayesian assurance and sample size determination in the process validation life-cycle.

    PubMed

    Faya, Paul; Seaman, John W; Stamey, James D

    2017-01-01

    Validation of pharmaceutical manufacturing processes is a regulatory requirement and plays a key role in the assurance of drug quality, safety, and efficacy. The FDA guidance on process validation recommends a life-cycle approach which involves process design, qualification, and verification. The European Medicines Agency makes similar recommendations. The main purpose of process validation is to establish scientific evidence that a process is capable of consistently delivering a quality product. A major challenge faced by manufacturers is the determination of the number of batches to be used for the qualification stage. In this article, we present a Bayesian assurance and sample size determination approach where prior process knowledge and data are used to determine the number of batches. An example is presented in which potency uniformity data is evaluated using a process capability metric. By using the posterior predictive distribution, we simulate qualification data and make a decision on the number of batches required for a desired level of assurance.
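
    The batch-count question in this abstract reduces to simulating qualification campaigns from a posterior predictive distribution and counting how often an acceptance criterion is met. The sketch below illustrates that arithmetic in Python under simple stand-in assumptions (a normal potency model, hypothetical prior parameters, and a Ppk >= 1.0 criterion); it is not the authors' actual model.

      # Sketch of Bayesian assurance versus number of qualification batches.
      # The normal model, prior parameters and Ppk criterion are illustrative.
      import numpy as np

      rng = np.random.default_rng(42)

      mu_prior_mean, mu_prior_sd = 100.0, 1.0  # prior on batch mean potency (% label claim)
      sigma = 2.0                              # assumed known batch-to-batch SD
      lsl, usl = 90.0, 110.0                   # specification limits

      def ppk(x):
          # Process capability estimated from a sample of batch results.
          m, s = x.mean(), x.std(ddof=1)
          return min(usl - m, m - lsl) / (3 * s)

      def assurance(n_batches, n_sims=20000, criterion=1.0):
          # Fraction of simulated campaigns whose Ppk meets the criterion.
          passes = 0
          for _ in range(n_sims):
              mu = rng.normal(mu_prior_mean, mu_prior_sd)      # plausible process mean
              batches = rng.normal(mu, sigma, size=n_batches)  # predictive batch data
              passes += ppk(batches) >= criterion
          return passes / n_sims

      for n in (3, 5, 10, 15):
          print(f"{n} batches: assurance = {assurance(n):.2f}")

    Plotting assurance against the batch count then gives the smallest number of batches that reaches the desired level of assurance.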

  11. Validation of Alternative In Vitro Methods to Animal Testing: Concepts, Challenges, Processes and Tools.

    PubMed

    Griesinger, Claudius; Desprez, Bertrand; Coecke, Sandra; Casey, Warren; Zuang, Valérie

    This chapter explores the concepts, processes, tools and challenges relating to the validation of alternative methods for toxicity and safety testing. In general terms, validation is the process of assessing the appropriateness and usefulness of a tool for its intended purpose. Validation is routinely used in various contexts in science, technology, and the manufacturing and services sectors. It serves to assess the fitness-for-purpose of devices, systems and software, up to entire methodologies. In the area of toxicity testing, validation plays an indispensable role: "alternative approaches" are increasingly replacing animal models as predictive tools, and it needs to be demonstrated that these novel methods are fit for purpose. Alternative approaches include in vitro test methods and non-testing approaches such as predictive computer models, up to entire testing and assessment strategies composed of method suites, data sources and decision-aiding tools. Data generated with alternative approaches are ultimately used for decision-making on public health and the protection of the environment. It is therefore essential that the underlying methods and methodologies are thoroughly characterised, assessed and transparently documented through validation studies involving impartial actors. Importantly, validation serves as a filter to ensure that only test methods able to produce data that help to address legislative requirements (e.g. the EU's REACH legislation) are accepted as official testing tools and, owing to the globalisation of markets, recognised at the international level (e.g. through inclusion in OECD test guidelines). Since validation creates a credible and transparent evidence base on test methods, it provides a quality stamp, supporting companies developing and marketing alternative methods and creating considerable business opportunities. Validation of alternative methods is conducted through scientific studies assessing two key hypotheses, reliability and relevance of the

  12. [Failure modes and effects analysis in the prescription, validation and dispensing process].

    PubMed

    Delgado Silveira, E; Alvarez Díaz, A; Pérez Menéndez-Conde, C; Serna Pérez, J; Rodríguez Sagrado, M A; Bermejo Vicedo, T

    2012-01-01

    To apply a failure modes and effects analysis to the prescription, validation and dispensing process for hospitalised patients. A work group analysed all of the stages included in the process from prescription to dispensing, identifying the most critical errors and establishing potential failure modes which could produce a mistake. The possible causes, their potential effects, and the existing control systems were analysed to try to stop them from developing. The Hazard Score was calculated, and failure modes scoring ≥ 8 were chosen; those with a Severity Index of 4 were selected independently of the Hazard Score value. Corrective measures and an implementation plan were proposed. A flow diagram describing the whole process was obtained. A risk analysis was conducted of the chosen critical points, indicating: failure mode, cause, effect, severity, probability, Hazard Score, suggested preventive measure and the strategy for achieving it. Failure modes chosen: prescription on the nurse's form; progress or treatment order (paper); prescription to incorrect patient; transcription error by nursing staff and pharmacist; error preparing the trolley. By applying a failure modes and effects analysis to the prescription, validation and dispensing process, we have been able to identify critical aspects, the stages in which errors may occur, and the causes. It has allowed us to analyse the effects on the safety of the process, and establish measures to prevent or reduce them. Copyright © 2010 SEFH. Published by Elsevier España. All rights reserved.
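
    As a concrete illustration of the selection rule described above (Hazard Score = severity × probability; keep modes scoring >= 8, plus any mode with Severity Index 4), here is a minimal Python sketch. The failure modes and scores are illustrative, not the study's data.

      # FMEA triage: keep failure modes with Hazard Score >= 8, plus any
      # with Severity = 4 regardless of score. Scores are illustrative.
      failure_modes = [
          # (description, severity 1-4, probability 1-4)
          ("Prescription written on the nurse's form", 3, 3),
          ("Prescription assigned to incorrect patient", 4, 1),
          ("Transcription error by nursing staff or pharmacist", 3, 2),
          ("Error preparing the dispensing trolley", 2, 3),
      ]

      def hazard_score(severity, probability):
          return severity * probability

      selected = [
          (desc, sev, prob, hazard_score(sev, prob))
          for desc, sev, prob in failure_modes
          if hazard_score(sev, prob) >= 8 or sev == 4
      ]

      for desc, sev, prob, hs in selected:
          print(f"HS={hs:>2} (S={sev}, P={prob}) {desc}")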

  13. Model-based verification and validation of the SMAP uplink processes

    NASA Astrophysics Data System (ADS)

    Khan, M. O.; Dubos, G. F.; Tirona, J.; Standley, S.

    Model-Based Systems Engineering (MBSE) is being used increasingly within the spacecraft design community because of its benefits when compared to document-based approaches. As the complexity of projects expands dramatically with continually increasing computational power and technology infusion, the time and effort needed for verification and validation (V&V) increases geometrically. Using simulation to perform design validation with system-level models earlier in the life cycle stands to bridge the gap between design of the system (based on system-level requirements) and verifying those requirements/validating the system as a whole. This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V&V processes by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process. Also discussed are the advantages and disadvantages of the methods employed and the lessons learned, which are intended to benefit future model-based and simulation-based development efforts.

  14. Validation of the manufacturing process used to produce long-acting recombinant factor IX Fc fusion protein

    PubMed Central

    McCue, J; Osborne, D; Dumont, J; Peters, R; Mei, B; Pierce, G F; Kobayashi, K; Euwart, D

    2014-01-01

    Recombinant factor IX Fc (rFIXFc) fusion protein is the first of a new class of bioengineered long-acting factors approved for the treatment and prevention of bleeding episodes in haemophilia B. The aim of this work was to describe the manufacturing process for rFIXFc, to assess product quality and to evaluate the capacity of the process to remove impurities and viruses. This manufacturing process utilized a transferable and scalable platform approach established for therapeutic antibody manufacturing and adapted for production of the rFIXFc molecule. rFIXFc was produced using a process free of human- and animal-derived raw materials and a host cell line derived from human embryonic kidney (HEK) 293H cells. The process employed multi-step purification and viral clearance processing, including use of a protein A affinity capture chromatography step, which binds to the Fc portion of the rFIXFc molecule with high affinity and specificity, and a 15 nm pore size virus removal nanofilter. Process validation studies were performed to evaluate identity, purity, activity and safety. The manufacturing process produced rFIXFc with consistent product quality and high purity. Impurity clearance validation studies demonstrated robust and reproducible removal of process-related impurities and adventitious viruses. The rFIXFc manufacturing process produces a highly pure product, free of non-human glycan structures. Validation studies demonstrate that this product is produced with consistent quality and purity. In addition, the scalability and transferability of this process are key attributes to ensure consistent and continuous supply of rFIXFc. PMID:24811361

  15. Validity, Reliability, and Equity Issues in an Observational Talent Assessment Process in the Performing Arts

    ERIC Educational Resources Information Center

    Oreck, Barry A.; Owen, Steven V.; Baum, Susan M.

    2003-01-01

    The lack of valid, research-based methods to identify potential artistic talent hampers the inclusion of the arts in programs for the gifted and talented. The Talent Assessment Process in Dance, Music, and Theater (D/M/T TAP) was designed to identify potential performing arts talent in diverse populations, including bilingual and special education…

  16. Adaptation and validation of indicators concerning the sterilization process of supplies in Primary Health Care services.

    PubMed

    Passos, Isis Pienta Batista Dias; Padoveze, Maria Clara; Roseira, Camila Eugênia; de Figueiredo, Rosely Moralez

    2015-01-01

    To adapt and validate, by expert consensus, a set of indicators used to assess the sterilization process of dental, medical and hospital supplies in PHC services. A qualitative methodological study performed in two stages. The first stage included a focal group composed of experts to adapt the indicators for use in PHC. In the second stage, the indicators were validated using a 4-point Likert scale completed by judges. A Content Validity Index of ≥ 0.75 was considered to show approval of the indicators. The adaptations implemented by the focal group mainly referred to the physical structure, inclusion of dental care professionals, inclusion of chemical disinfection, and replacement of the hot air and moist heat sterilization methods. The validation stage resulted in an overall index of 0.96, with component values ranging from 0.90 to 1.00. The judges considered the adapted indicators to be validated. Even though there may be differences among items processed around the world, there certainly are common characteristics, especially in countries with economic and cultural environments similar to Brazil. The inclusion of these indicators to assess the safety of healthcare supplies used in PHC services should be considered.

  17. Brazilian Center for the Validation of Alternative Methods (BraCVAM) and the process of validation in Brazil.

    PubMed

    Presgrave, Octavio; Moura, Wlamir; Caldeira, Cristiane; Pereira, Elisabete; Bôas, Maria H Villas; Eskes, Chantra

    2016-03-01

    The need for the creation of a Brazilian centre for the validation of alternative methods was recognised in 2008, and members of academia, industry and existing international validation centres immediately engaged with the idea. In 2012, co-operation between the Oswaldo Cruz Foundation (FIOCRUZ) and the Brazilian Health Surveillance Agency (ANVISA) instigated the establishment of the Brazilian Center for the Validation of Alternative Methods (BraCVAM), which was officially launched in 2013. The Brazilian validation process follows OECD Guidance Document No. 34, where BraCVAM functions as the focal point to identify and/or receive requests from parties interested in submitting tests for validation. BraCVAM then informs the Brazilian National Network on Alternative Methods (RENaMA) of promising assays, which helps with prioritisation and contributes to the validation studies of selected assays. A Validation Management Group supervises the validation study, and the results obtained are peer-reviewed by an ad hoc Scientific Review Committee, organised under the auspices of BraCVAM. Based on the peer-review outcome, BraCVAM will prepare recommendations on the validated test method, which will be sent to the National Council for the Control of Animal Experimentation (CONCEA). CONCEA is in charge of the regulatory adoption of all validated test methods in Brazil, following an open public consultation. 2016 FRAME.

  18. Refinement and Further Validation of the Decisional Process Inventory.

    ERIC Educational Resources Information Center

    Hartung, Paul J.; Marco, Cynthia D.

    1998-01-01

    The Decisional Process Inventory is a Gestalt theory-based measure of career decision-making and level of career indecision. Results from a sample of 183 undergraduates supported its content, construct, and concurrent validity. (SK)

  19. Method of validating measurement data of a process parameter from a plurality of individual sensor inputs

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1998-01-01

    A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from said scan are stored and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing each input, including a preset tolerance, against the initial average input. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs, and deviation checking the good inputs by comparing each good input, including a preset tolerance, against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor is flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation the inputs from each of all the sensors are compared against the last validated measurement and the value from the sensor input that deviates the least from the last valid measurement is displayed.
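
    The two-pass logic of the claim translates almost directly into code. A minimal Python sketch follows; the tolerance, the example scan and the fault-handling fallback are illustrative.

      # Two-pass sensor validation as described in the patent abstract.
      # Tolerance values and the fault fallback are illustrative.
      def validate_scan(inputs, tolerance, last_valid=None):
          # First pass: deviation-check every input against the initial average.
          initial_avg = sum(inputs) / len(inputs)
          good = [x for x in inputs if abs(x - initial_avg) <= tolerance]
          suspect = [x for x in inputs if abs(x - initial_avg) > tolerance]

          if len(good) >= 2:
              # Second pass: re-average the good inputs and re-check them.
              second_avg = sum(good) / len(good)
              if all(abs(x - second_avg) <= tolerance for x in good):
                  return second_avg, suspect  # suspect sensors are flagged as bad

          # Validation fault: report the input deviating least from the
          # last validated measurement (requires a previous valid value).
          if last_valid is not None:
              return min(inputs, key=lambda x: abs(x - last_valid)), suspect
          raise ValueError("validation fault and no prior validated measurement")

      value, bad = validate_scan([501.2, 499.8, 500.5, 532.0], tolerance=10.0)
      print(value, bad)  # -> 500.5 [532.0]: the outlying sensor is flagged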

  20. Validation of the manufacturing process used to produce long-acting recombinant factor IX Fc fusion protein.

    PubMed

    McCue, J; Osborne, D; Dumont, J; Peters, R; Mei, B; Pierce, G F; Kobayashi, K; Euwart, D

    2014-07-01

    Recombinant factor IX Fc (rFIXFc) fusion protein is the first of a new class of bioengineered long-acting factors approved for the treatment and prevention of bleeding episodes in haemophilia B. The aim of this work was to describe the manufacturing process for rFIXFc, to assess product quality and to evaluate the capacity of the process to remove impurities and viruses. This manufacturing process utilized a transferable and scalable platform approach established for therapeutic antibody manufacturing and adapted for production of the rFIXFc molecule. rFIXFc was produced using a process free of human- and animal-derived raw materials and a host cell line derived from human embryonic kidney (HEK) 293H cells. The process employed multi-step purification and viral clearance processing, including use of a protein A affinity capture chromatography step, which binds to the Fc portion of the rFIXFc molecule with high affinity and specificity, and a 15 nm pore size virus removal nanofilter. Process validation studies were performed to evaluate identity, purity, activity and safety. The manufacturing process produced rFIXFc with consistent product quality and high purity. Impurity clearance validation studies demonstrated robust and reproducible removal of process-related impurities and adventitious viruses. The rFIXFc manufacturing process produces a highly pure product, free of non-human glycan structures. Validation studies demonstrate that this product is produced with consistent quality and purity. In addition, the scalability and transferability of this process are key attributes to ensure consistent and continuous supply of rFIXFc. © 2014 The Authors. Haemophilia Published by John Wiley & Sons Ltd.

  1. A Supervised Learning Process to Validate Online Disease Reports for Use in Predictive Models.

    PubMed

    Patching, Helena M M; Hudson, Laurence M; Cooke, Warrick; Garcia, Andres J; Hay, Simon I; Roberts, Mark; Moyes, Catherine L

    2015-12-01

    Pathogen distribution models that predict spatial variation in disease occurrence require data from a large number of geographic locations to generate disease risk maps. Traditionally, this process has used data from public health reporting systems; however, using online reports of new infections could speed up the process dramatically. Data from both public health systems and online sources must be validated before they can be used, but no mechanisms exist to validate data from online media reports. We have developed a supervised learning process to validate geolocated disease outbreak data in a timely manner. The process uses three input features: the data source and two metrics derived from the location of each disease occurrence. The location provides the probability of disease occurrence at that point, based on environmental and socioeconomic factors, and the distance within or outside the currently known disease extent. The process also uses validation scores, generated by disease experts who review a subset of the data, to build a training data set. The aim of the supervised learning process is to generate validation scores that can be used as weights going into the pathogen distribution model. After analyzing the three input features and testing the performance of alternative processes, we selected a cascade of ensembles comprising logistic regressors. Parameter values for the training data subset size, number of predictors, and number of layers in the cascade were tested before the process was deployed. The final configuration was tested using data for two contrasting diseases (dengue and cholera), and 66%-79% of data points were assigned a validation score. The remaining data points are scored by the experts, and the results inform the training data set for the next set of predictors, as well as going to the pathogen distribution model. The new supervised learning process has been implemented within our live site and is
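
    A single logistic regressor over the three named input features already conveys the scoring idea; the paper's deployed system is a cascade of ensembles of such regressors. The sketch below uses synthetic feature values and hypothetical encodings, so it is a shape of the approach rather than the authors' pipeline.

      # One logistic regressor scoring disease reports from the three
      # features named in the abstract. Data and encodings are synthetic.
      import numpy as np
      from sklearn.linear_model import LogisticRegression

      rng = np.random.default_rng(0)
      n = 500

      # Per report: data-source indicator (0 = media scrape, 1 = curated feed),
      # modelled probability of occurrence at the location, and distance (km)
      # outside the known disease extent (0 if inside).
      X = np.column_stack([
          rng.integers(0, 2, n),
          rng.uniform(0, 1, n),
          rng.exponential(50, n),
      ])
      # Expert validation labels for the training subset (True = valid report).
      y = (0.5 * X[:, 0] + 2.0 * X[:, 1] - 0.01 * X[:, 2]
           + rng.normal(0, 0.5, n)) > 0.8

      model = LogisticRegression().fit(X, y)

      # Predicted probabilities act as validation scores; reports the model
      # is unsure about would be routed back to the experts.
      scores = model.predict_proba(X)[:, 1]
      confident = (scores < 0.2) | (scores > 0.8)
      print(f"auto-scored: {confident.mean():.0%}; referred: {(~confident).mean():.0%}")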

  2. Validation of column-based chromatography processes for the purification of proteins. Technical report No. 14.

    PubMed

    2008-01-01

    PDA Technical Report No. 14 has been written to provide current best practices, such as the application of risk-based decision making grounded in sound science, as a foundation for the validation of column-based chromatography processes, and to expand upon information provided in Technical Report No. 42, Process Validation of Protein Manufacturing. The intent of this technical report is to provide an integrated validation life-cycle approach that begins with the use of process development data to define operational parameters as a basis for validation, followed by confirmation and/or minor adjustment of these parameters at manufacturing scale during production of conformance batches, and maintenance of the validated state throughout the product's life cycle.

  3. The process of processing: exploring the validity of Neisser's perceptual cycle model with accounts from critical decision-making in the cockpit.

    PubMed

    Plant, Katherine L; Stanton, Neville A

    2015-01-01

    The perceptual cycle model (PCM) has been widely applied in ergonomics research in domains including road, rail and aviation. The PCM assumes that information processing occurs in a cyclical manner drawing on top-down and bottom-up influences to produce perceptual exploration and actions. However, the validity of the model has not been addressed. This paper explores the construct validity of the PCM in the context of aeronautical decision-making. The critical decision method was used to interview 20 helicopter pilots about critical decision-making. The data were qualitatively analysed using an established coding scheme, and composite PCMs for incident phases were constructed. It was found that the PCM provided a mutually exclusive and exhaustive classification of the information-processing cycles for dealing with critical incidents. However, a counter-cycle was also discovered which has been attributed to skill-based behaviour, characteristic of experts. The practical applications and future research questions are discussed. Practitioner Summary: This paper explores whether information processing, when dealing with critical incidents, occurs in the manner anticipated by the perceptual cycle model. In addition to the traditional processing cycle, a reciprocal counter-cycle was found. This research can be utilised by those who use the model as an accident analysis framework.

  4. Validation of a 30-year-old process for the manufacture of L-asparaginase from Erwinia chrysanthemi.

    PubMed

    Gervais, David; Allison, Nigel; Jennings, Alan; Jones, Shane; Marks, Trevor

    2013-04-01

    A 30-year-old manufacturing process for the biologic product L-asparaginase from the plant pathogen Erwinia chrysanthemi was rigorously qualified and validated, with a high level of agreement between validation data and the 6-year process database. L-Asparaginase exists in its native state as a tetrameric protein and is used as a chemotherapeutic agent in the treatment regimen for Acute Lymphoblastic Leukaemia (ALL). The manufacturing process involves fermentation of the production organism, extraction and purification of the L-asparaginase to make drug substance (DS), and finally formulation and lyophilisation to generate drug product (DP). The extensive manufacturing experience with the product was used to establish ranges for all process parameters and product quality attributes. The product and in-process intermediates were rigorously characterised, and new assays, such as size-exclusion and reversed-phase UPLC, were developed, validated, and used to analyse several pre-validation batches. Finally, three prospective process validation batches were manufactured and product quality data generated using both the existing and the new analytical methods. These data demonstrated the process to be robust, highly reproducible and consistent, and the validation was successful, contributing to the granting of an FDA product license in November, 2011.

  5. Post-processing of seismic parameter data based on valid seismic event determination

    DOEpatents

    McEvilly, Thomas V.

    1985-01-01

    An automated seismic processing system and method are disclosed, including an array of CMOS microprocessors for unattended battery-powered processing of a multi-station network. According to a characterizing feature of the invention, each channel of the network is independently operable to automatically detect, measure times and amplitudes, and compute and fit Fast Fourier transforms (FFTs) for both P- and S-waves on analog seismic data after it has been sampled at a given rate. The measured parameter data from each channel are then reviewed for event validity by a central controlling microprocessor and if determined by preset criteria to constitute a valid event, the parameter data are passed to an analysis computer for calculation of hypocenter location, running b-values, source parameters, event count, P-wave polarities, moment-tensor inversion, and Vp/Vs ratios. The in-field real-time analysis of data maximizes the efficiency of microearthquake surveys allowing flexibility in experimental procedures, with a minimum of traditional labor-intensive postprocessing. A unique consequence of the system is that none of the original data (i.e., the sensor analog output signals) are necessarily saved after computation, but rather, the numerical parameters generated by the automatic analysis are the sole output of the automated seismic processor.
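
    The central validity review can be pictured as a coincidence test across channels. A minimal sketch, assuming a simple "k detections within a time window" criterion; the patent's actual preset criteria are not spelled out in the abstract, so the rule and thresholds here are illustrative.

      # Event-validity check in the spirit of the patent: each channel reports
      # a detection time; an event is declared valid when at least
      # `min_channels` detections fall within a common window. Illustrative.
      def valid_event(detection_times, min_channels=4, window_s=2.0):
          times = sorted(detection_times)
          for i in range(len(times)):
              in_window = [t for t in times[i:] if t - times[i] <= window_s]
              if len(in_window) >= min_channels:
                  return True
          return False

      print(valid_event([10.1, 10.3, 10.4, 10.9, 35.0]))  # True: 4 channels within 2 s
      print(valid_event([10.1, 18.0, 26.5]))              # False: detections scattered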

  6. Clinical validation of the nursing diagnosis of dysfunctional family processes related to alcoholism.

    PubMed

    Mangueira, Suzana de Oliveira; Lopes, Marcos Venícios de Oliveira

    2016-10-01

    To evaluate the clinical validity indicators for the nursing diagnosis of dysfunctional family processes related to alcohol abuse. Alcoholism is a chronic disease that negatively affects family relationships. Studies on the nursing diagnosis of dysfunctional family processes are scarce in the literature. This diagnosis is currently composed of 115 defining characteristics, hindering their use in practice and highlighting the need for clinical validation. This was a diagnostic accuracy study. A sample of 110 alcoholics admitted to a reference centre for alcohol treatment was assessed during the second half of 2013 for the presence or absence of the defining characteristics of the diagnosis. Operational definitions were created for each defining characteristic based on concept analysis and experts evaluated the content of these definitions. Diagnostic accuracy measures were calculated from latent class models with random effects. All 89 clinical indicators were found in the sample and a set of 24 clinical indicators was identified as clinically valid for a diagnostic screening for family dysfunction from the report of alcoholics. Main clinical indicators with high specificity included sexual abuse, disturbance in academic performance in children and manipulation. The main indicators that showed high sensitivity values were distress, loss, anxiety, low self-esteem, confusion, embarrassment, insecurity, anger, loneliness, deterioration in family relationships and disturbance in family dynamics. Eighteen clinical indicators showed a high capacity for diagnostic screening for alcoholics (high sensitivity) and six indicators can be used for confirmatory diagnosis (high specificity). © 2016 John Wiley & Sons Ltd.
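
    For a single indicator with a known reference diagnosis, sensitivity and specificity reduce to the familiar two-by-two table; the study itself estimates them with latent class models because no gold standard exists. A minimal sketch with made-up data:

      # Sensitivity and specificity of one clinical indicator against a
      # reference diagnosis; illustrative data, not the study's.
      def sensitivity_specificity(indicator_present, diagnosis_present):
          pairs = list(zip(indicator_present, diagnosis_present))
          tp = sum(1 for i, d in pairs if i and d)
          fn = sum(1 for i, d in pairs if not i and d)
          tn = sum(1 for i, d in pairs if not i and not d)
          fp = sum(1 for i, d in pairs if i and not d)
          return tp / (tp + fn), tn / (tn + fp)

      # E.g., 'distress' flagged in 10 of 14 patients, diagnosis present in 8.
      indicator = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 0]
      diagnosis = [1, 1, 1, 1, 1, 1, 1, 0, 0, 0, 1, 0, 0, 0]
      se, sp = sensitivity_specificity(indicator, diagnosis)
      print(f"sensitivity={se:.2f} specificity={sp:.2f}")

    High-sensitivity indicators of this kind serve screening, while high-specificity indicators (such as the ones the study lists) serve confirmation.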

  7. Recommendations for elaboration, transcultural adaptation and validation process of tests in Speech, Hearing and Language Pathology.

    PubMed

    Pernambuco, Leandro; Espelt, Albert; Magalhães, Hipólito Virgílio; Lima, Kenio Costa de

    2017-06-08

    To present a guide with recommendations for the translation, adaptation, elaboration and validation of tests in Speech and Language Pathology. The recommendations were based on international guidelines with a focus on the elaboration, translation, cross-cultural adaptation and validation of tests. The recommendations were grouped into two charts, one with procedures for translation and transcultural adaptation and the other for obtaining evidence of validity, reliability and measures of accuracy of the tests. A guide with norms for the organization and systematization of the process of elaboration, translation, cross-cultural adaptation and validation of tests in Speech and Language Pathology was created.

  8. Validation of the Evidence-Based Practice Process Assessment Scale

    ERIC Educational Resources Information Center

    Rubin, Allen; Parrish, Danielle E.

    2011-01-01

    Objective: This report describes the reliability, validity, and sensitivity of a scale that assesses practitioners' perceived familiarity with, attitudes toward, and implementation of the evidence-based practice (EBP) process. Method: Social work practitioners and second-year master of social work (MSW) students (N = 511) were surveyed in four sites…

  9. NOAA Unique CrIS/ATMS Processing System (NUCAPS) Environmental Data Record and Validation

    NASA Astrophysics Data System (ADS)

    Liu, Q.; Nalli, N. R.; Gambacorta, A.; Iturbide, F.; Tan, C.; Zhang, K.; Wilson, M.; Reale, A.; Sun, B.; Mollner, A.

    2015-12-01

    This presentation introduces the NOAA sounding products to the AGU community. The NOAA Unique CrIS/ATMS Processing System (NUCAPS) operationally generates vertical profiles of atmospheric temperature (AVTP) and moisture (AVMP), carbon products (CO, CO2 and CH4) and other trace gases, as well as outgoing long-wave radiation (OLR). These products have been publicly released through NOAA CLASS from April 8, 2014 to the present. This paper presents the validation of these products. The AVTP and AVMP are validated by comparison against ECMWF analysis data and dedicated radiosondes. The dedicated radiosondes achieve higher quality and reach higher altitudes than conventional radiosondes; in addition, their launch times generally fit the Suomi NPP overpass times to within 1 hour. We also use ground-based lidar data provided by collaborators (The Aerospace Corporation) to validate the retrieved temperature profiles above 100 hPa up to 1 hPa. Both the NOAA VALAR and NPROVS validation systems are applied. The Suomi NPP FM5-Ed1A OLR from CERES prior to the end of May 2012 is now available for validating real-time CrIS OLR environmental data records (EDRs) for NOAA/CPC operational precipitation verification. However, the quality of the CrIS sensor data records (SDRs) for this time frame on CLASS is suboptimal and many granules (more than three-quarters) are invalid. Using the current offline ADL-reprocessed CrIS SDR data from NOAA/STAR AIT, which includes all CrIS SDR improvements to date, we have subsequently obtained a well-distributed OLR EDR. This paper will also discuss the validation of the CrIS infrared ozone profile.
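
    The core radiosonde comparison behind such sounding validation is a per-level bias and RMS computation over collocated matchups. A minimal sketch with synthetic profiles; levels, sample sizes and error magnitudes are illustrative, not NUCAPS statistics.

      # Layer bias and RMS of retrieved temperature against collocated
      # radiosondes, the basic statistic behind sounder validation.
      import numpy as np

      rng = np.random.default_rng(5)
      levels = np.array([1000, 850, 700, 500, 300, 100, 10])  # hPa
      n_matchups = 150

      sonde = 250 + rng.normal(0, 15, (n_matchups, levels.size))          # sonde T, K
      retrieved = sonde + 0.3 + rng.normal(0, 1.2, (n_matchups, levels.size))

      diff = retrieved - sonde
      bias = diff.mean(axis=0)
      rms = np.sqrt((diff**2).mean(axis=0))
      for p, b, r in zip(levels, bias, rms):
          print(f"{p:>5} hPa  bias {b:+.2f} K  rms {r:.2f} K")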

  10. The Multiple-Use of Accountability Assessments: Implications for the Process of Validation

    ERIC Educational Resources Information Center

    Koch, Martha J.

    2014-01-01

    Implications of the multiple-use of accountability assessments for the process of validation are examined. Multiple-use refers to the simultaneous use of results from a single administration of an assessment for its intended use and for one or more additional uses. A theoretical discussion of the issues for validation which emerge from…

  11. Validity of High School Physics Module With Character Values Using Process Skill Approach In STKIP PGRI West Sumatera

    NASA Astrophysics Data System (ADS)

    Anaperta, M.; Helendra, H.; Zulva, R.

    2018-04-01

    This study aims to describe the validity of a physics module with character-oriented values using a process skills approach for dynamic electricity material in high school (SMA/MA) and vocational school (SMK) physics. The type of research is development research. The module development model follows that proposed by Plomp, which consists of (1) the preliminary research phase, (2) the prototyping phase, and (3) the assessment phase. This research covers the initial investigation and design phases. Data on validity were collected through observation and questionnaires. In the initial investigation phase, curriculum analysis, student analysis, and concept analysis were conducted. The design and realization phase produced a module design for SMA/MA and SMK subjects on dynamic electricity material. This was followed by formative evaluation, including self-evaluation and prototyping (expert reviews, one-to-one, and small group evaluations), at which stage validation was performed. The research data were obtained through the module validation sheet, which resulted in a valid module.

  12. Validation of landsurface processes in the AMIP models

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Phillips, T J

    The Atmospheric Model Intercomparison Project (AMIP) is a commonly accepted protocol for testing the performance of the world's atmospheric general circulation models (AGCMs) under common specifications of radiative forcings (in solar constant and carbon dioxide concentration) and observed ocean boundary conditions (Gates 1992, Gates et al. 1999). From the standpoint of land-surface specialists, the AMIP affords an opportunity to investigate the behaviors of a wide variety of land-surface schemes (LSS) that are coupled to their "native" AGCMs (Phillips et al. 1995, Phillips 1999). In principle, therefore, the AMIP permits consideration of an overarching question: "To what extent does an AGCM's performance in simulating continental climate depend on the representations of land-surface processes by the embedded LSS?" There are, of course, some formidable obstacles to satisfactorily addressing this question. First, there is the dilemma of how to effectively validate simulation performance, given the present dearth of global land-surface data sets. Even if this data problem were to be alleviated, some inherent methodological difficulties would remain: in the context of the AMIP, it is not possible to validate a given LSS per se, since the associated land-surface climate simulation is a product of the coupled AGCM/LSS system. Moreover, aside from the intrinsic differences in LSS across the AMIP models, the varied representations of land-surface characteristics (e.g. vegetation properties, surface albedos and roughnesses, etc.) and related variations in land-surface forcings further complicate such an attribution process. Nevertheless, it may be possible to develop validation methodologies/statistics that are sufficiently penetrating to reveal "signatures" of particular LSS representations (e.g. "bucket" vs more complex parameterizations of hydrology) in the AMIP land-surface simulations.

  13. Earth Science Enterprise Scientific Data Purchase Project: Verification and Validation

    NASA Technical Reports Server (NTRS)

    Jenner, Jeff; Policelli, Fritz; Fletcher, Rosea; Holecamp, Kara; Owen, Carolyn; Nicholson, Lamar; Dartez, Deanna

    2000-01-01

    This paper presents viewgraphs on the Earth Science Enterprise Scientific Data Purchase Project's verification and validation process. The topics include: 1) What is Verification and Validation? 2) Why Verification and Validation? 3) Background; 4) ESE Data Purchase Validation Process; 5) Data Validation System and Ingest Queue; 6) Shipment Verification; 7) Tracking and Metrics; 8) Validation of Contract Specifications; 9) Earth Watch Data Validation; 10) Validation of Vertical Accuracy; and 11) Results of Vertical Accuracy Assessment.

  14. Validating and determining the weight of items used for evaluating clinical governance implementation based on analytic hierarchy process model.

    PubMed

    Hooshmand, Elaheh; Tourani, Sogand; Ravaghi, Hamid; Vafaee Najar, Ali; Meraji, Marziye; Ebrahimipour, Hossein

    2015-04-08

    The purpose of implementing a system such as Clinical Governance (CG) is to integrate, establish and globalize distinct policies in order to improve quality through increasing professional knowledge and the accountability of healthcare professionals toward providing clinical excellence. Since CG is related to change, and change requires money and time, CG implementation has to be focused on priority areas that are in the greatest need of change. The purpose of the present study was to validate and determine the significance of items used for evaluating CG implementation. The present study was descriptive-quantitative in method and design. Items used for evaluating CG implementation were first validated by the Delphi method and then compared with one another and ranked based on the Analytical Hierarchy Process (AHP) model. The items that were validated for evaluating CG implementation in Iran include performance evaluation, training and development, personnel motivation, clinical audit, clinical effectiveness, risk management, resource allocation, policies and strategies, external audit, information system management, research and development, CG structure, implementation prerequisites, the management of patients' non-medical needs, complaints and patients' participation in the treatment process. The most important items based on their degree of significance were training and development, performance evaluation, and risk management. The least important items included the management of patients' non-medical needs, patients' participation in the treatment process and research and development. The fundamental requirements of CG implementation included having an effective policy at the national level, avoiding perfectionism, using the expertise and potentials of the entire country and the coordination of this model with other models of quality improvement such as accreditation and patient safety. © 2015 by Kerman University of Medical Sciences.
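
    The AHP ranking step works by eliciting a pairwise comparison matrix and taking its principal eigenvector as the weight vector, with a consistency check. A minimal sketch for three of the items named above; the comparison values are illustrative, not the study's.

      # AHP priority weights via the principal eigenvector, with Saaty's
      # consistency ratio. The 3x3 comparisons below are illustrative
      # (training & development vs performance evaluation vs risk management).
      import numpy as np

      A = np.array([
          [1.0, 2.0, 3.0],
          [1/2, 1.0, 2.0],
          [1/3, 1/2, 1.0],
      ])

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      weights = np.abs(eigvecs[:, k].real)
      weights /= weights.sum()

      # Consistency: CI = (lambda_max - n) / (n - 1); random index RI = 0.58 for n = 3.
      n = A.shape[0]
      ci = (eigvals[k].real - n) / (n - 1)
      cr = ci / 0.58
      print("weights:", np.round(weights, 3), "CR:", round(cr, 3))

    A consistency ratio below about 0.1 is conventionally taken to mean the pairwise judgments are acceptably coherent.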

  15. IV&V Project Assessment Process Validation

    NASA Technical Reports Server (NTRS)

    Driskell, Stephen

    2012-01-01

    The Space Launch System (SLS) will launch NASA's Multi-Purpose Crew Vehicle (MPCV). This launch vehicle will provide American launch capability for human exploration and travel beyond Earth orbit. SLS is designed to be flexible for crew or cargo missions. The first test flight is scheduled for December 2017. The SLS SRR/SDR provided insight into the project development life cycle. NASA IV&V ran the standard Risk Based Assessment and Portfolio Based Risk Assessment to identify analysis tasking for the SLS program. This presentation examines the SLS System Requirements Review/System Definition Review (SRR/SDR) and correlates IV&V findings with the selected IV&V tasking and capabilities in order to validate the IV&V process. It also provides a reusable IEEE 1012 scorecard for programmatic completeness across the software development life cycle.

  16. Validation in the clinical process: four settings for objectification of the subjectivity of understanding.

    PubMed

    Beland, H

    1994-12-01

    Clinical material is presented for discussion with the aim of exemplifying the author's conceptions of validation in a number of sessions and in psychoanalytic research and of making them verifiable, susceptible to consensus and/or falsifiable. Since Freud's postscript to the Dora case, the first clinical validation in the history of psychoanalysis, validation has been group-related and society-related, that is to say, it combines the evidence of subjectivity with the consensus of the research community (the scientific community). Validation verifies the conformity of the unconscious transference meaning with the analyst's understanding. The deciding criterion is the patient's reaction to the interpretation. In terms of the theory of science, validation in the clinical process corresponds to experimental testing of truth in the sphere of inanimate nature. Four settings of validation can be distinguished: the analyst's self-supervision during the process of understanding, which goes from incomprehension to comprehension (container-contained, PS-->D, selected fact); the patient's reaction to the interpretation (insight) and the analyst's assessment of the reaction; supervision and second thoughts; and discussion in groups and publications leading to consensus. It is a peculiarity of psychoanalytic research that in the event of positive validation the three criteria of truth (evidence, consensus and utility) coincide.

  17. Cognitive Validity: Can Multiple-Choice Items Tap Historical Thinking Processes?

    ERIC Educational Resources Information Center

    Smith, Mark D.

    2017-01-01

    Cognitive validity examines the relationship between what an assessment aims to measure and what it actually elicits from test takers. The present study examined whether multiple-choice items from the National Assessment of Educational Progress (NAEP) grade 12 U.S. history exam elicited the historical thinking processes they were designed to…

  18. Benchmarking Multilayer-HySEA model for landslide generated tsunami. HTHMP validation process.

    NASA Astrophysics Data System (ADS)

    Macias, J.; Escalante, C.; Castro, M. J.

    2017-12-01

    Landslide tsunami hazard may be dominant along significant parts of the coastline around the world, in particular in the USA, as compared to hazards from other tsunamigenic sources. This fact motivated the NTHMP to benchmark models for landslide-generated tsunamis, following the same methodology already used for standard tsunami models with seismic sources. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks, for a total of seven benchmarks. The Multilayer-HySEA model, which includes non-hydrostatic effects, has been used to perform all the benchmark problems dealing with laboratory experiments proposed in the workshop organized at Texas A&M University - Galveston on January 9-11, 2017 by the NTHMP. The aim of this presentation is to show some of the latest numerical results obtained with the Multilayer-HySEA (non-hydrostatic) model in the framework of this validation effort. Acknowledgements. This research has been partially supported by the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and University of Malaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  19. Empiric validation of a process for behavior change.

    PubMed

    Elliot, Diane L; Goldberg, Linn; MacKinnon, David P; Ranby, Krista W; Kuehl, Kerry S; Moe, Esther L

    2016-09-01

    Most behavior change trials focus on outcomes rather than deconstructing how those outcomes relate to programmatic theoretical underpinnings and intervention components. In this report, the process of change is compared for three evidence-based programs that shared theories, intervention elements and potential mediating variables. Each investigation was a randomized trial that assessed pre- and post-intervention variables using survey constructs with established reliability. Each also used mediation analyses to define relationships. The findings were combined using a pattern matching approach. Surprisingly, knowledge was a significant mediator in each program (a and b path effects [p<0.01]). Norms, perceived control abilities, and self-monitoring were confirmed in at least two studies (p<0.01 for each). Replication of findings across studies with a common design but varied populations provides a robust validation of the theory and processes of an effective intervention. Combined findings also demonstrate a means to substantiate process aspects and theoretical models to advance understanding of behavior change.
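
    The "a and b path effects" reported above come from standard single-mediator regressions: the a-path regresses the mediator on program assignment, the b-path regresses the outcome on the mediator with program held fixed, and a*b is the indirect effect. A minimal sketch with synthetic data; the trials' actual mediation models were more elaborate.

      # Single-mediator sketch: a-path (program -> knowledge), b-path
      # (knowledge -> behavior, adjusting for program), indirect effect a*b.
      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(1)
      n = 400
      program = rng.integers(0, 2, n).astype(float)       # randomized assignment
      knowledge = 0.5 * program + rng.normal(0, 1, n)     # mediator
      behavior = 0.4 * knowledge + 0.1 * program + rng.normal(0, 1, n)

      a_model = sm.OLS(knowledge, sm.add_constant(program)).fit()
      b_model = sm.OLS(behavior,
                       sm.add_constant(np.column_stack([program, knowledge]))).fit()

      a = a_model.params[1]   # effect of program on the mediator
      b = b_model.params[2]   # effect of mediator on outcome, program held fixed
      print(f"a={a:.2f}, b={b:.2f}, indirect effect a*b={a*b:.2f}")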

  20. Model-Based Verification and Validation of the SMAP Uplink Processes

    NASA Technical Reports Server (NTRS)

    Khan, M. Omair; Dubos, Gregory F.; Tirona, Joseph; Standley, Shaun

    2013-01-01

    This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V&V processes by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process. Also discussed are the advantages and disadvantages of the methods employed and the lessons learned, which are intended to benefit future model-based and simulation-based V&V development efforts.

  1. Validity of segmental bioelectrical impedance analysis for estimating fat-free mass in children including overweight individuals.

    PubMed

    Ohta, Megumi; Midorikawa, Taishi; Hikihara, Yuki; Masuo, Yoshihisa; Sakamoto, Shizuo; Torii, Suguru; Kawakami, Yasuo; Fukunaga, Tetsuo; Kanehisa, Hiroaki

    2017-02-01

    This study examined the validity of segmental bioelectrical impedance (BI) analysis for predicting the fat-free masses (FFMs) of the whole body and body segments in children, including overweight individuals. The FFM and impedance (Z) values of arms, trunk, legs, and whole body were determined using dual-energy X-ray absorptiometry and segmental BI analyses, respectively, in 149 boys and girls aged 6 to 12 years, who were divided into model-development (n = 74), cross-validation (n = 35), and overweight (n = 40) groups. Simple regression analysis was applied to (length)^2/Z (BI index) for each of the whole body and 3 segments to develop prediction equations for the measured FFM of the related body part. In the model-development group, the BI index of each of the 3 segments and the whole body was significantly correlated with the measured FFM (R^2 = 0.867-0.932, standard error of estimation = 0.18-1.44 kg (5.9%-8.7%)). There was no significant difference between the measured and predicted FFM values, without systematic error. The application of each equation derived in the model-development group to the cross-validation and overweight groups did not produce significant differences between the measured and predicted FFM values or systematic errors, with the exception that the arm FFM in the overweight group was overestimated. Segmental bioelectrical impedance analysis is useful for predicting the FFM of the whole body and of body segments in children, including overweight individuals, although its application to estimating arm FFM in overweight individuals requires a certain modification.
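
    The prediction step is ordinary simple regression of DXA-measured FFM on the BI index (length^2/Z), followed by a check for systematic error in a held-out group. A minimal sketch with synthetic data mirroring the group sizes above; the coefficients and noise levels are invented.

      # Fit FFM = b0 + b1 * (length^2 / Z) for one body segment, then check
      # the fit on a held-out group, mirroring the model-development /
      # cross-validation design. Data are synthetic, not the study's.
      import numpy as np

      rng = np.random.default_rng(7)

      def synthetic_group(n):
          bi_index = rng.uniform(10, 40, n)                    # length^2/Z, cm^2/ohm
          ffm = 0.35 * bi_index + 1.0 + rng.normal(0, 0.8, n)  # DXA FFM, kg
          return bi_index, ffm

      x_dev, y_dev = synthetic_group(74)    # model-development group
      x_val, y_val = synthetic_group(35)    # cross-validation group

      b1, b0 = np.polyfit(x_dev, y_dev, 1)  # simple regression on the BI index
      pred = b0 + b1 * x_val
      see = np.sqrt(np.mean((y_val - pred) ** 2))
      bias = np.mean(pred - y_val)          # systematic-error check
      print(f"FFM = {b0:.2f} + {b1:.3f} * BI_index; SEE = {see:.2f} kg, bias = {bias:.2f} kg")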

  2. Performance of Landslide-HySEA tsunami model for NTHMP benchmarking validation process

    NASA Astrophysics Data System (ADS)

    Macias, Jorge

    2017-04-01

    In its FY2009 Strategic Plan, the NTHMP required that all numerical tsunami inundation models be verified as accurate and consistent through a model benchmarking process. This was completed in 2011, but only for seismic tsunami sources and in a limited manner for idealized solid underwater landslides. Recent work by various NTHMP states, however, has shown that landslide tsunami hazard may be dominant along significant parts of the US coastline, as compared to hazards from other tsunamigenic sources. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks. The Landslide-HySEA model participated in the workshop organized at Texas A&M University - Galveston on January 9-11, 2017. The aim of this presentation is to show some of the numerical results obtained with Landslide-HySEA in the framework of this benchmarking validation/verification effort. Acknowledgements. This research has been partially supported by the Junta de Andalucía research project TESELA (P11-RNM7069), the Spanish Government Research project SIMURISK (MTM2015-70490-C02-01-R) and Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  3. Applying the Mixed Methods Instrument Development and Construct Validation Process: the Transformative Experience Questionnaire

    ERIC Educational Resources Information Center

    Koskey, Kristin L. K.; Sondergeld, Toni A.; Stewart, Victoria C.; Pugh, Kevin J.

    2018-01-01

    Onwuegbuzie and colleagues proposed the Instrument Development and Construct Validation (IDCV) process as a mixed methods framework for creating and validating measures. Examples applying IDCV are lacking. We provide an illustrative case integrating the Rasch model and cognitive interviews applied to the development of the Transformative…

  4. Challenges in validating the sterilisation dose for processed human amniotic membranes

    NASA Astrophysics Data System (ADS)

    Yusof, Norimah; Hassan, Asnah; Firdaus Abd Rahman, M. N.; Hamid, Suzina A.

    2007-11-01

    Most of the tissue banks in the Asia Pacific region have been using ionising radiation at 25 kGy to sterilise human tissues for safe clinical use. Under the tissue banking quality system, any dose employed for sterilisation has to be validated, and the validation exercise has to be part of the quality documentation. Tissue grafts, unlike medical items, are not produced in large numbers per processing batch, and tissues have relatively different microbial populations. A Code of Practice established by the International Atomic Energy Agency (IAEA) in 2004 offers several validation methods using smaller numbers of samples compared to ISO 11137 (1995), which is meant for medical products. The methods emphasise bioburden determination, followed by a sterility test on samples after they were exposed to a verification dose for attaining a sterility assurance level (SAL) of 10^-1. This paper describes our experience in using the IAEA Code of Practice to conduct the validation exercise substantiating 25 kGy as the sterilisation dose for both air-dried amnion and amnion preserved in 99% glycerol.
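
    The arithmetic behind a verification-dose experiment at SAL 10^-1 is binomial: if each irradiated sample independently has at most a 0.1 probability of testing non-sterile, the expected number of positives among the tested samples follows directly. A minimal sketch of that calculation only; the sample size and acceptance rule are illustrative, not the IAEA procedure.

      # Binomial arithmetic for a verification-dose sterility test at
      # SAL 10^-1: each sample has (at most) p = 0.1 chance of a positive.
      from math import comb

      def p_at_most(k, n, p=0.1):
          # Probability of at most k positive sterility tests among n samples.
          return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k + 1))

      n = 10  # samples exposed to the verification dose
      print(f"P(0 positives)  = {p_at_most(0, n):.2f}")  # ~0.35
      print(f"P(<=2 positives) = {p_at_most(2, n):.2f}") # ~0.93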

  5. Media fill for validation of a good manufacturing practice-compliant cell production process.

    PubMed

    Serra, Marta; Roseti, Livia; Bassi, Alessandra

    2015-01-01

    According to the European Regulation EC 1394/2007, the clinical use of Advanced Therapy Medicinal Products, such as Human Bone Marrow Mesenchymal Stem Cells expanded for the regeneration of bone tissue or Chondrocytes for Autologous Implantation, requires the development of a process in compliance with the Good Manufacturing Practices. The Media Fill test, consisting of a simulation of the expansion process using a microbial growth medium instead of cells, is considered one of the most effective ways to validate a cell production process. Such a simulation allows the identification of any weakness in production that could lead to microbiological contamination of the final cell product, as well as the qualification of operators. Here, we report the critical aspects concerning the design of a Media Fill test to be used as a tool for the further validation of the sterility of a cell-based Good Manufacturing Practice-compliant production process.

  6. Improving the residency admissions process by integrating a professionalism assessment: a validity and feasibility study.

    PubMed

    Bajwa, Nadia M; Yudkowsky, Rachel; Belli, Dominique; Vu, Nu Viet; Park, Yoon Soo

    2017-03-01

    The purpose of this study was to provide validity and feasibility evidence in measuring professionalism using the Professionalism Mini-Evaluation Exercise (P-MEX) scores as part of a residency admissions process. In 2012 and 2013, three standardized-patient-based P-MEX encounters were administered to applicants invited for an interview at the University of Geneva Pediatrics Residency Program. Validity evidence was gathered for P-MEX content (item analysis); response process (qualitative feedback); internal structure (inter-rater reliability with intraclass correlation and Generalizability); relations to other variables (correlations); and consequences (logistic regression to predict admission). To improve reliability, Kane's formula was used to create an applicant composite score using P-MEX, structured letter of recommendation (SLR), and structured interview (SI) scores. Applicant rank lists using composite scores versus faculty global ratings were compared using the Wilcoxon signed-rank test. Seventy applicants were assessed. Moderate associations were found between pairwise correlations of P-MEX scores and SLR (r = 0.25, P = .036), SI (r = 0.34, P = .004), and global ratings (r = 0.48, P < .001). Generalizability of the P-MEX using three cases was moderate (G-coefficient = 0.45). P-MEX scores had the greatest correlation with acceptance (r = 0.56, P < .001), were the strongest predictor of acceptance (OR 4.37, P < .001), and increased pseudo R-squared by 0.20 points. Including P-MEX scores increased composite score reliability from 0.51 to 0.74. Rank lists of applicants using composite score versus global rating differed significantly (z = 5.41, P < .001). Validity evidence supports the use of P-MEX scores to improve the reliability of the residency admissions process by improving applicant composite score reliability.

  7. Filament winding cylinders. II - Validation of the process model

    NASA Technical Reports Server (NTRS)

    Calius, Emilio P.; Lee, Soo-Yong; Springer, George S.

    1990-01-01

    Analytical and experimental studies were performed to validate the model developed by Lee and Springer for simulating the manufacturing process of filament wound composite cylinders. First, results calculated by the Lee-Springer model were compared to results of the Calius-Springer thin cylinder model. Second, temperatures and strains calculated by the Lee-Springer model were compared to data. The data used in these comparisons were generated during the course of this investigation with cylinders made of Hercules IM-6G/HBRF-55 and Fiberite T-300/976 graphite-epoxy tows. Good agreement was found between the calculated and measured stresses and strains, indicating that the model is a useful representation of the winding and curing processes.

  8. Climatological Processing of Radar Data for the TRMM Ground Validation Program

    NASA Technical Reports Server (NTRS)

    Kulie, Mark; Marks, David; Robinson, Michael; Silberstein, David; Wolff, David; Ferrier, Brad; Amitai, Eyal; Fisher, Brad; Wang, Jian-Xin; Augustine, David

    2000-01-01

    The Tropical Rainfall Measuring Mission (TRMM) satellite was successfully launched in November, 1997. The main purpose of TRMM is to sample tropical rainfall using the first active spaceborne precipitation radar. To validate TRMM satellite observations, a comprehensive Ground Validation (GV) Program has been implemented. The primary goal of TRMM GV is to provide basic validation of satellite-derived precipitation measurements over monthly climatologies for the following primary sites: Melbourne, FL; Houston, TX; Darwin, Australia; and Kwajalein Atoll, RMI. As part of the TRMM GV effort, research analysts at NASA Goddard Space Flight Center (GSFC) generate standardized TRMM GV products using quality-controlled ground-based radar data from the four primary GV sites as input. This presentation will provide an overview of the TRMM GV climatological processing system. A description of the data flow between the primary GV sites, NASA GSFC, and the TRMM Science and Data Information System (TSDIS) will be presented. The radar quality control algorithm, which features eight adjustable height and reflectivity parameters, and its effect on monthly rainfall maps will be described. The methodology used to create monthly, gauge-adjusted rainfall products for each primary site will also be summarized. The standardized monthly rainfall products are developed in discrete, modular steps with distinct intermediate products. These developmental steps include: (1) extracting radar data over the locations of rain gauges, (2) merging rain gauge and radar data in time and space with user-defined options, (3) automated quality control of radar and gauge merged data by tracking accumulations from each instrument, and (4) deriving Z-R relationships from the quality-controlled merged data over monthly time scales. A summary of recently reprocessed official GV rainfall products available for TRMM science users will be presented. Updated basic standardized product results and trends involving
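
    Step (4) of the processing chain, deriving Z-R relationships, amounts to fitting the power law Z = aR^b to matched radar and gauge observations and inverting it for rain rate. A minimal sketch follows (illustrative names and data, not the TRMM GV production code):

        import numpy as np

        def fit_zr(z_linear, rain_rate):
            """Fit Z = a * R^b by least squares in log-log space.
            z_linear: reflectivity factor in mm^6 m^-3; rain_rate: mm/h."""
            b, loga = np.polyfit(np.log10(rain_rate), np.log10(z_linear), 1)
            return 10.0 ** loga, b

        def rain_from_dbz(dbz, a, b):
            """Invert Z = a * R^b for rain rate, recovering linear Z from dBZ."""
            return (10.0 ** (np.asarray(dbz) / 10.0) / a) ** (1.0 / b)

        # Example with the widely used Marshall-Palmer coefficients a=200, b=1.6:
        print(round(float(rain_from_dbz(40.0, 200.0, 1.6)), 1))  # about 11.5 mm/h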

  9. Screening Poststroke Fatigue; Feasibility and Validation of an Instrument for the Screening of Poststroke Fatigue throughout the Rehabilitation Process.

    PubMed

    Kruithof, Nena; Van Cleef, Melanie Hubertina Maria; Rasquin, Sascha Maria Cornelia; Bovend'Eerdt, Thamar Johannes Henricus

    2016-01-01

    Our objective is to investigate the feasibility and validity of a new instrument to screen for determinants of poststroke fatigue during the rehabilitation process. This prospective cohort study was conducted within the stroke department of a rehabilitation center. The participants in the study were postacute adult stroke patients. The Detection List Fatigue (DLF) was administered 2 weeks after the start of the rehabilitation program and again 6 weeks later. To determine the construct validity, the Hospital Anxiety and Depression Scale, the Checklist Individual Strength subscale fatigue, and the Fatigue Severity Scale (7-item version) were administered. A fatigue rating scale was used to measure the patients' fatigue experience. Frequency analyses of the number of patients reporting poststroke fatigue determinants according to the DLF were performed. One hundred seven patients (mean age 60 years) without severe communication difficulties were included in the study. The DLF was easy to understand and quick to administer. The DLF showed good internal consistency (Cronbach's alpha: .79 and .87), high convergent validity (rs = .85 and rs = .79), and good divergent validity (rs = .31 and rs = .45). The majority of the patients (88.4%-90.2%) experienced at least 2 poststroke fatigue (PSF) determinants, of which "sleeping problem" was most frequently reported. The DLF is a feasible and valid instrument for the screening of PSF determinants throughout the rehabilitation process in stroke patients. Future studies should investigate whether the use of the list in determining a treatment plan prevents the development of PSF.
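
    For reference, the internal-consistency figures quoted above are Cronbach's alpha, computed from the item variances and the variance of the total score. A minimal sketch with toy data (not the DLF dataset):

        import numpy as np

        def cronbach_alpha(items):
            """Cronbach's alpha for an (n_respondents x n_items) score matrix:
            alpha = k/(k-1) * (1 - sum(item variances) / var(total score))."""
            X = np.asarray(items, float)
            k = X.shape[1]
            item_vars = X.var(axis=0, ddof=1).sum()
            total_var = X.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1.0 - item_vars / total_var)

        # Toy data (4 patients x 3 items), invented for illustration:
        scores = [[1, 2, 2],
                  [3, 3, 4],
                  [2, 2, 3],
                  [4, 4, 4]]
        print(round(cronbach_alpha(scores), 2))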

  10. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.

  11. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE PAGES

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir; ...

    2017-08-19

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.
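
    In outline, the validation process reduces to checking each metric of a candidate case against a reference interval observed in actual systems. A minimal sketch of that check; the metric names and interval bounds are invented placeholders, not the paper's published criteria:

        REFERENCE_RANGES = {
            "generator_share":  (0.10, 0.30),   # fraction of buses with generation
            "mean_line_km":     (10.0, 120.0),  # average transmission line length
            "load_per_bus_mw":  (1.0, 60.0),
        }

        def validate_case(metrics):
            """Return {metric_name: bool} for each reference-range check;
            a missing metric maps to NaN and therefore fails the check."""
            return {name: lo <= metrics.get(name, float("nan")) <= hi
                    for name, (lo, hi) in REFERENCE_RANGES.items()}

        synthetic_case = {"generator_share": 0.22, "mean_line_km": 45.0,
                          "load_per_bus_mw": 12.5}
        results = validate_case(synthetic_case)
        print(results, "realistic" if all(results.values()) else "revise case")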

  12. Validation of a Communication Process Measure for Coding Control in Counseling.

    ERIC Educational Resources Information Center

    Heatherington, Laurie

    The increasingly popular view of the counseling process from an interactional perspective necessitates the development of new measurement instruments suitable for the study of reciprocal interaction between people. The validity of the Relational Communication Coding System, an instrument which operationalizes the constructs of…

  13. Precooking as a Control for Histamine Formation during the Processing of Tuna: An Industrial Process Validation.

    PubMed

    Adams, Farzana; Nolte, Fred; Colton, James; De Beer, John; Weddig, Lisa

    2018-02-23

    An experiment to validate the precooking of tuna as a control for histamine formation was carried out at a commercial tuna factory in Fiji. Albacore tuna (Thunnus alalunga) were brought on board long-line catcher vessels alive, immediately chilled but never frozen, and delivered to an on-shore facility within 3 to 13 days. These fish were then allowed to spoil at 25 to 30°C for 21 to 25 h to induce high levels of histamine (>50 ppm), as a simulation of "worst-case" postharvest conditions, and subsequently frozen. These spoiled fish later were thawed normally and then precooked at a commercial tuna processing facility to a target maximum core temperature of 60°C. These tuna were then held at ambient temperatures of 19 to 37°C for up to 30 h, and samples were collected every 6 h for histamine analysis. After precooking, no further histamine formation was observed for 12 to 18 h, indicating that a conservative minimum core temperature of 60°C pauses subsequent histamine formation for 12 to 18 h. Precooking to a maximum core temperature of 60°C thus provided a challenge study validating a recommended minimum core temperature of 60°C, and 12 to 18 h is sufficient time to convert precooked tuna into frozen loins or canned tuna. This industrial-scale process validation study provides support at a high confidence level for the preventive histamine control associated with precooking. This study was conducted with tuna deliberately allowed to spoil, to induce high concentrations of histamine and histamine-forming capacity and to fail standard organoleptic evaluations, and the critical limits for precooking were validated. Thus, these limits can be used in a hazard analysis critical control point plan in which precooking is identified as a critical control point.

  14. Method for Fabricating Composite Structures Including Continuous Press Forming and Pultrusion Processing

    NASA Technical Reports Server (NTRS)

    Farley, Gary L. (Inventor)

    1995-01-01

    A method for fabricating composite structures at low cost and moderate-to-high production rates is disclosed. A first embodiment of the method employs a continuous press forming fabrication process. A second embodiment employs a pultrusion process for obtaining composite structures. The methods include coating yarns with matrix material, weaving the yarn into fabric to produce a continuous fabric supply, and feeding multiple layers of net-shaped fabrics having optimally oriented fibers into a debulking tool to form an undebulked preform. The continuous press forming fabrication process includes partially debulking the preform, cutting the partially debulked preform, and debulking the partially debulked preform to form a net shape. An electron-beam or similar technique then cures the structure. The pultrusion process includes feeding the undebulked preform into a heated die and gradually debulking it. The undebulked preform in the heated die changes dimension until a desired cross-sectional dimension is achieved. This process further includes obtaining a net-shaped infiltrated uncured preform, cutting the uncured preform to a desired length, and curing the uncured preform by electron beam or a similar technique. These fabrication methods produce superior structures formed at higher production rates, resulting in lower cost and high structural performance.

  15. Bicarbonate of soda paint stripping process validation and material characterization

    NASA Technical Reports Server (NTRS)

    Haas, Michael N.

    1995-01-01

    The Aircraft Production Division at San Antonio Air Logistics Center has conducted extensive investigation into the replacement of hazardous chemicals in aircraft component cleaning, degreasing, and depainting. One of the most viable solutions is process substitution utilizing abrasive techniques. SA-ALC has incorporated the use of Bicarbonate of Soda Blasting as one such substitution. The previous use of methylene chloride-based chemical strippers and carbon removal agents has been replaced by a walk-in blast booth in which we remove carbon from engine nozzles and various gas turbine engine parts, depaint cowlings, and perform various other functions on a variety of parts. Prior to implementation of this new process, validation of the process was performed, and materials and waste stream characterization studies were conducted. These characterization studies examined the effects of the blasting process on the integrity of the thin-skinned aluminum substrates, the effects of the process on both air emissions and effluent disposal, and the effects on the personnel exposed to the process.

  16. Validating and Extending the Three Process Model of Alertness in Airline Operations

    PubMed Central

    Ingre, Michael; Van Leeuwen, Wessel; Klemets, Tomas; Ullvetter, Christer; Hough, Stephen; Kecklund, Göran; Karlsson, David; Åkerstedt, Torbjörn

    2014-01-01

    Sleepiness and fatigue are important risk factors in the transport sector, and bio-mathematical modeling of sleepiness, sleep and fatigue is increasingly becoming a valuable tool for assessing the safety of work schedules and rosters in Fatigue Risk Management Systems (FRMS). The present study sought to validate the inner workings of one such model, the Three Process Model (TPM), on aircrews, and to extend the model with functions to model jetlag and to directly assess the risk of any sleepiness level in any shift schedule or roster with and without knowledge of sleep timings. We collected sleep and sleepiness data from 136 aircrews in a real-life situation by means of an application running on a handheld touch screen device (iPhone, iPod or iPad) and used the TPM to predict sleepiness with varying levels of complexity of model equations and data. The results, based on multilevel linear and non-linear mixed effects models, showed that the TPM predictions correlated with observed ratings of sleepiness, but explorative analyses suggest that the default model can be improved and reduced to include only two processes (S+C), with adjusted phases of the circadian process based on a single question about circadian type. We also extended the model with a function to model jetlag acclimatization and with estimates of individual differences, including reference limits accounting for 50%, 75% and 90% of the population, as well as functions for predicting the probability of any level of sleepiness for ecological assessment of absolute and relative risk of sleepiness in shift systems for safety applications. PMID:25329575
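
    The reduced two-process (S+C) form mentioned above combines a homeostatic component that decays exponentially with time awake and a sinusoidal circadian component. A minimal sketch, with parameter values chosen for illustration in the spirit of published TPM defaults rather than the calibrated values from this study:

        import math

        def alertness(hours_awake, clock_time, phase=16.8, amplitude=2.5,
                      s_high=14.3, s_low=2.4, decay=0.0353):
            """Two-process (S + C) prediction; higher values mean more alert.
            S: exponential decline with time awake toward a lower asymptote.
            C: sinusoidal circadian component peaking at `phase` hours.
            All parameter values here are illustrative assumptions."""
            s = s_low + (s_high - s_low) * math.exp(-decay * hours_awake)
            c = amplitude * math.cos(2.0 * math.pi * (clock_time - phase) / 24.0)
            return s + c

        # Example: 16 h awake at 23:00 versus 2 h awake at 09:00.
        print(round(alertness(16, 23.0), 1))
        print(round(alertness(2, 9.0), 1))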

  17. Design for validation: An approach to systems validation

    NASA Technical Reports Server (NTRS)

    Carter, William C.; Dunham, Janet R.; Laprie, Jean-Claude; Williams, Thomas; Howden, William; Smith, Brian; Lewis, Carl M. (Editor)

    1989-01-01

    Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of ad hoc methods by many individuals. As the cost of changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in future digital-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformation are presented. Basic working definitions of two pivotal ideas (validation and system life-cycle) are provided, showing how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.

  18. Process, including PSA and membrane separation, for separating hydrogen from hydrocarbons

    DOEpatents

    Baker, Richard W.; Lokhandwala, Kaaeid A.; He, Zhenjie; Pinnau, Ingo

    2001-01-01

    An improved process for separating hydrogen from hydrocarbons. The process includes a pressure swing adsorption step, a compression/cooling step and a membrane separation step. The membrane step relies on achieving a methane/hydrogen selectivity of at least about 2.5 under the conditions of the process.

  19. An initial investigation into the validity of a computer-based auditory processing assessment (Feather Squadron).

    PubMed

    Barker, Matthew D; Purdy, Suzanne C

    2016-01-01

    This research investigates a novel tablet-based method for identifying school-aged children with poor auditory processing and measuring their performance. Feasibility and test-retest reliability are investigated by examining the percentage of Group 1 participants able to complete the tasks and developmental effects on performance. Concurrent validity was investigated against traditional tests of auditory processing using Group 2. There were 847 students aged 5 to 13 years in Group 1, and 46 aged 5 to 14 years in Group 2. Some tasks could not be completed by the youngest participants. Significant correlations were found between results of most auditory processing areas assessed by the Feather Squadron test and traditional auditory processing tests. Test-retest comparisons indicated good reliability for most of the Feather Squadron assessments and some of the traditional tests. The results indicate the Feather Squadron assessment is a time-efficient, feasible, concurrently valid, and reliable approach for measuring auditory processing in school-aged children. Clinically, this may be a useful option for audiologists when performing auditory processing assessments, as it is a relatively fast, engaging, and easy way to assess auditory processing abilities. Research is needed to investigate further the construct validity of this new assessment by examining the association between performance on Feather Squadron and objective evoked potential, lesion studies, and/or functional imaging measures of auditory function.

  20. Use of a Computer-Mediated Delphi Process to Validate a Mass Casualty Conceptual Model

    PubMed Central

    CULLEY, JOAN M.

    2012-01-01

    Since the original work on the Delphi technique, multiple versions have been developed and used in research and industry; however, very little empirical research has been conducted that evaluates the efficacy of using online computer, Internet, and e-mail applications to facilitate a Delphi method that can be used to validate theoretical models. The purpose of this research was to develop computer, Internet, and e-mail applications to facilitate a modified Delphi technique through which experts provide validation for a proposed conceptual model that describes the information needs for a mass-casualty continuum of care. Extant literature and existing theoretical models provided the basis for model development. Two rounds of the Delphi process were needed to satisfy the criteria for consensus and/or stability related to the constructs, relationships, and indicators in the model. The majority of experts rated the online processes favorably (mean of 6.1 on a seven-point scale). Using online Internet and computer applications to facilitate a modified Delphi process offers much promise for future research involving model building or validation. The online Delphi process provided an effective methodology for identifying and describing the complex series of events and contextual factors that influence the way we respond to disasters. PMID:21076283

  1. Use of a computer-mediated Delphi process to validate a mass casualty conceptual model.

    PubMed

    Culley, Joan M

    2011-05-01

    Since the original work on the Delphi technique, multiple versions have been developed and used in research and industry; however, very little empirical research has been conducted that evaluates the efficacy of using online computer, Internet, and e-mail applications to facilitate a Delphi method that can be used to validate theoretical models. The purpose of this research was to develop computer, Internet, and e-mail applications to facilitate a modified Delphi technique through which experts provide validation for a proposed conceptual model that describes the information needs for a mass-casualty continuum of care. Extant literature and existing theoretical models provided the basis for model development. Two rounds of the Delphi process were needed to satisfy the criteria for consensus and/or stability related to the constructs, relationships, and indicators in the model. The majority of experts rated the online processes favorably (mean of 6.1 on a seven-point scale). Using online Internet and computer applications to facilitate a modified Delphi process offers much promise for future research involving model building or validation. The online Delphi process provided an effective methodology for identifying and describing the complex series of events and contextual factors that influence the way we respond to disasters.
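
    The stopping rule described above, consensus and/or stability across rounds, can be illustrated with simple per-item checks. In the sketch below, the 75% agreement and 10% mean-shift thresholds are invented placeholders, not the study's criteria:

        def consensus(ratings, agree_min=5, threshold=0.75):
            """Consensus: proportion of experts rating >= agree_min on a
            7-point scale meets or exceeds the threshold."""
            share = sum(1 for r in ratings if r >= agree_min) / len(ratings)
            return share >= threshold

        def stable(round1, round2, max_shift=0.10):
            """Stability: the mean rating shifts by less than max_shift of the
            scale range (7 points) between successive rounds."""
            m1 = sum(round1) / len(round1)
            m2 = sum(round2) / len(round2)
            return abs(m2 - m1) / 7.0 <= max_shift

        # Toy ratings from six experts for one model construct, two rounds:
        r1, r2 = [5, 6, 4, 7, 6, 5], [6, 6, 5, 7, 6, 5]
        print(consensus(r2), stable(r1, r2))  # True True -> stop after round 2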

  2. Evaluation of the Effect of the Volume Throughput and Maximum Flux of Low-Surface-Tension Fluids on Bacterial Penetration of 0.2 Micron-Rated Filters during Process-Specific Filter Validation Testing.

    PubMed

    Folmsbee, Martha

    2015-01-01

    Approximately 97% of filter validation tests result in the demonstration of absolute retention of the test bacteria, and thus sterile filter validation failure is rare. However, while Brevundimonas diminuta (B. diminuta) penetration of sterilizing-grade filters is rarely detected, the observation that some fluids (such as vaccines and liposomal fluids) may lead to an increased incidence of bacterial penetration of sterilizing-grade filters by B. diminuta has been reported. The goal of the following analysis was to identify important drivers of filter validation failure in these rare cases. The identification of these drivers will hopefully serve the purpose of assisting in the design of commercial sterile filtration processes with a low risk of filter validation failure for vaccine, liposomal, and related fluids. Filter validation data for low-surface-tension fluids was collected and evaluated with regard to the effect of bacterial load (CFU/cm(2)), bacterial load rate (CFU/min/cm(2)), volume throughput (mL/cm(2)), and maximum filter flux (mL/min/cm(2)) on bacterial penetration. The data set (∼1162 individual filtrations) included all instances of process-specific filter validation failures performed at Pall Corporation, including those using other filter media, but did not include all successful retentive filter validation bacterial challenges. It was neither practical nor necessary to include all filter validation successes worldwide (Pall Corporation) to achieve the goals of this analysis. The percentage of failed filtration events for the selected total master data set was 27% (310/1162). Because it is heavily weighted with penetration events, this percentage is considerably higher than the actual rate of failed filter validations, but, as such, facilitated a close examination of the conditions that lead to filter validation failure. In agreement with our previous reports, two of the significant drivers of bacterial penetration identified were the total

  3. Three-Step Validation of Exercise Behavior Processes of Change in an Adolescent Sample

    ERIC Educational Resources Information Center

    Rhodes, Ryan E.; Berry, Tanya; Naylor, Patti-Jean; Higgins, S. Joan Wharf

    2004-01-01

    Though the processes of change are conceived as the core constructs of the transtheoretical model (TTM), few researchers have examined their construct validity in the physical activity domain. Further, only 1 study was designed to investigate the processes of change in an adolescent sample. The purpose of this study was to examine the exercise…

  4. Development and Validation of the Physics Anxiety Rating Scale

    ERIC Educational Resources Information Center

    Sahin, Mehmet; Caliskan, Serap; Dilek, Ufuk

    2015-01-01

    This study reports the development and validation process for an instrument to measure university students' anxiety in physics courses. The development of the Physics Anxiety Rating Scale (PARS) included the following steps: Generation of scale items, content validation, construct validation, and reliability calculation. The results of construct…

  5. EOS Terra Validation Program

    NASA Technical Reports Server (NTRS)

    Starr, David

    1999-01-01

    The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include ASTER, CERES, MISR, MODIS and MOPITT. In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high-quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities, including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low-altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. The intensive validation activities planned for the first year of the Terra mission will be described, with emphasis on derived geophysical parameters of most relevance to the atmospheric radiation community. Detailed information about the EOS Terra validation Program can be found on the EOS Validation program

  6. Developing a Science Process Skills Test for Secondary Students: Validity and Reliability Study

    ERIC Educational Resources Information Center

    Feyzioglu, Burak; Demirdag, Baris; Akyildiz, Murat; Altun, Eralp

    2012-01-01

    Science process skills are claimed to enable an individual to improve their own life visions and give a scientific view/literacy as a standard of their understanding about the nature of science. The main purpose of this study was to develop a test for measuring a valid, reliable and practical test for Science Process Skills (SPS) in secondary…

  7. Experimental validation of thermo-chemical algorithm for a simulation of pultrusion processes

    NASA Astrophysics Data System (ADS)

    Barkanov, E.; Akishin, P.; Miazza, N. L.; Galvez, S.; Pantelelis, N.

    2018-04-01

    To provide better understanding of pultrusion processes with or without temperature control, and to support pultrusion tooling design, an algorithm based on a mixed time integration scheme and the nodal control volumes method has been developed. In the present study, its experimental validation is carried out using newly developed cure sensors that measure electrical resistivity and temperature on the profile surface. Through this verification process, the set of initial data used to simulate the pultrusion of a rod profile was successfully corrected and finally defined.
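
    The coupled thermo-chemical problem being validated pairs transient heat conduction with an exothermic cure-kinetics source term. The sketch below uses a plain explicit finite-difference scheme and first-order Arrhenius kinetics with invented material constants; it is a conceptual stand-in, not the paper's mixed time-integration scheme with nodal control volumes:

        import numpy as np

        L, n, dt, steps = 0.01, 21, 0.01, 20000        # 10 mm section, 200 s total
        k, rho, cp = 0.4, 1900.0, 1100.0               # W/(m K), kg/m3, J/(kg K)
        H, A, E, R = 3.2e5, 1.5e5, 6.0e4, 8.314        # J/kg, 1/s, J/mol, J/(mol K)
        dx = L / (n - 1)
        T = np.full(n, 300.0)                          # initial temperature, K
        a = np.zeros(n)                                # degree of cure

        for _ in range(steps):
            T[0] = T[-1] = 450.0                       # heated die wall (assumed)
            # first-order Arrhenius cure increment, clipped to [0, 1]
            da = np.clip(a + dt * A * np.exp(-E / (R * T)) * (1.0 - a), 0.0, 1.0) - a
            a += da
            lap = np.zeros(n)
            lap[1:-1] = (T[2:] - 2.0 * T[1:-1] + T[:-2]) / dx ** 2
            T = T + dt * k * lap / (rho * cp) + H * da / cp  # conduction + exotherm

        print(round(float(T[n // 2]), 1), round(float(a[n // 2]), 3))  # centerline state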

  8. Initial Retrieval Validation from the Joint Airborne IASI Validation Experiment (JAIVEx)

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Liu, Xu; Smith, WIlliam L.; Larar, Allen M.; Taylor, Jonathan P.; Revercomb, Henry E.; Mango, Stephen A.; Schluessel, Peter; Calbet, Xavier

    2007-01-01

    The Joint Airborne IASI Validation Experiment (JAIVEx) was conducted during April 2007 mainly for validation of the Infrared Atmospheric Sounding Interferometer (IASI) on the MetOp satellite, but also included a strong component focusing on validation of the Atmospheric InfraRed Sounder (AIRS) aboard the AQUA satellite. The cross validation of IASI and AIRS is important for the joint use of their data in the global Numerical Weather Prediction process. Initial inter-comparisons of geophysical products have been conducted from different aspects, such as using different measurements from airborne ultraspectral Fourier transform spectrometers (specifically, the NPOESS Airborne Sounder Testbed Interferometer (NAST-I) and the Scanning-High resolution Interferometer Sounder (S-HIS) aboard the NASA WB-57 aircraft), UK Facility for Airborne Atmospheric Measurements (FAAM) BAe146-301 aircraft insitu instruments, dedicated dropsondes, radiosondes, and ground based Raman Lidar. An overview of the JAIVEx retrieval validation plan and some initial results of this field campaign are presented.

  9. Donabedian's structure-process-outcome quality of care model: Validation in an integrated trauma system.

    PubMed

    Moore, Lynne; Lavoie, André; Bourgeois, Gilles; Lapointe, Jean

    2015-06-01

    According to Donabedian's health care quality model, improvements in the structure of care should lead to improvements in clinical processes that should in turn improve patient outcome. This model has been widely adopted by the trauma community but has not yet been validated in a trauma system. The objective of this study was to assess the performance of an integrated trauma system in terms of structure, process, and outcome and to evaluate the correlation between quality domains. Quality of care was evaluated for patients treated in a Canadian provincial trauma system (2005-2010; 57 centers, n = 63,971) using quality indicators (QIs) developed and validated previously. Structural performance was measured by transposing on-site accreditation visit reports onto an evaluation grid according to American College of Surgeons criteria. The composite process QI was calculated as the average sum of proportions of conformity to 15 process QIs derived from literature review and expert opinion. Outcome performance was measured using risk-adjusted rates of mortality, complications, and readmission as well as hospital length of stay (LOS). Correlation was assessed with Pearson's correlation coefficients. Statistically significant correlations were observed between structure and process QIs (r = 0.33), and between process and outcome QIs (r = -0.33 for readmission, r = -0.27 for LOS). Significant positive correlations were also observed between outcome QIs (r = 0.37 for mortality-readmission; r = 0.39 for mortality-LOS and readmission-LOS; r = 0.45 for mortality-complications; r = 0.34 for readmission-complications; 0.63 for complications-LOS). Significant correlations between quality domains observed in this study suggest that Donabedian's structure-process-outcome model is a valid model for evaluating trauma care. Trauma centers that perform well in terms of structure also tend to perform well in terms of clinical processes, which in turn have a favorable influence on patient outcomes.

  10. Test validity and performance validity: considerations in providing a framework for development of an ability-focused neuropsychological test battery.

    PubMed

    Larrabee, Glenn J

    2014-11-01

    Literature on test validity and performance validity is reviewed to propose a framework for specification of an ability-focused battery (AFB). Factor analysis supports six domains of ability: verbal symbolic; visuoperceptual and visuospatial judgment and problem solving; sensorimotor skills; attention/working memory; processing speed; and learning and memory (which can be divided into verbal and visual subdomains). The AFB should include at least three measures for each of the six domains, selected based on various criteria for validity including sensitivity to presence of disorder, sensitivity to severity of disorder, correlation with important activities of daily living, and containing embedded/derived measures of performance validity. Criterion groups should include moderate and severe traumatic brain injury, and Alzheimer's disease. Validation groups should also include patients with left and right hemisphere stroke, to determine measures sensitive to lateralized cognitive impairment and so that the moderating effects of auditory comprehension impairment and neglect can be analyzed on AFB measures. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Preparing for the Validation Visit--Guidelines for Optimizing the Experience.

    ERIC Educational Resources Information Center

    Osborn, Hazel A.

    2003-01-01

    Urges child care programs to seek accreditation from NAEYC's National Academy of Early Childhood Programs to increase program quality and provides information on the validation process. Includes information on the validation visit and the validator's role and background. Offers suggestions for preparing the director, staff, children, and families…

  12. Instrument validation process: a case study using the Paediatric Pain Knowledge and Attitudes Questionnaire.

    PubMed

    Peirce, Deborah; Brown, Janie; Corkish, Victoria; Lane, Marguerite; Wilson, Sally

    2016-06-01

    To compare two methods of calculating interrater agreement while determining the content validity of the Paediatric Pain Knowledge and Attitudes Questionnaire for use with Australian nurses. Paediatric pain assessment and management documentation was found to be suboptimal, revealing a need to assess paediatric nurses' knowledge of and attitudes to pain. The Paediatric Pain Knowledge and Attitudes Questionnaire was selected as it had been reported as valid and reliable in the United Kingdom with student nurses. The questionnaire required content validity determination prior to use in the Australian context. A two-phase process of expert review was used. In phase one, ten paediatric nurses completed a relevancy rating of all 68 questionnaire items. In phase two, five pain experts reviewed the items of the questionnaire that scored an unacceptable item-level content validity. Item- and scale-level content validity indices and intraclass correlation coefficients were calculated. In phase one, 31 items received an item-level content validity index <0.78 and the scale-level content validity index average was 0.80, below the levels required for acceptable validity. The intraclass correlation coefficient was 0.47. In phase two, 10 items were amended and four items deleted. The revised questionnaire produced a scale-level content validity index average >0.90 and an intraclass correlation coefficient of 0.94, demonstrating excellent agreement between raters and therefore acceptable content validity. Equivalent outcomes were achieved using the content validity index and the intraclass correlation coefficient. To assess content validity, the content validity index has the advantage of providing an item-level score and being a simple calculation. The intraclass correlation coefficient requires statistical knowledge, or support, and has the advantage of accounting for the possibility of chance agreement. © 2016 John Wiley & Sons Ltd.
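
    The item- and scale-level indices used above follow the standard content validity index calculations: I-CVI is the proportion of experts rating an item 3 or 4 on a 4-point relevance scale, S-CVI/Ave is the mean of the I-CVIs, and 0.78 is the customary item-level cut-off. A sketch with a toy expert panel, not the study's ratings:

        import numpy as np

        def item_cvi(ratings):
            """I-CVI: proportion of experts rating the item 3 or 4 on a
            4-point relevance scale (>= 0.78 is the usual cut-off)."""
            return float((np.asarray(ratings) >= 3).mean())

        def scale_cvi_ave(matrix):
            """S-CVI/Ave: mean of the item-level CVIs across all items."""
            return float(np.mean([item_cvi(col) for col in np.asarray(matrix).T]))

        # Toy panel: 10 experts (rows) x 3 items (columns), invented data.
        ratings = np.array([[4, 3, 2], [3, 4, 2], [4, 4, 3], [3, 3, 2], [4, 3, 1],
                            [4, 4, 3], [3, 4, 2], [4, 3, 2], [3, 4, 3], [4, 4, 2]])
        print([item_cvi(col) for col in ratings.T])  # per-item I-CVI: [1.0, 1.0, 0.3]
        print(scale_cvi_ave(ratings))                # scale-level average, about 0.77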

  13. Examining the Validity of Self-Reports on Scales Measuring Students' Strategic Processing

    ERIC Educational Resources Information Center

    Samuelstuen, Marit S.; Braten, Ivar

    2007-01-01

    Background: Self-report inventories trying to measure strategic processing at a global level have been much used in both basic and applied research. However, the validity of global strategy scores is open to question because such inventories assess strategy perceptions outside the context of specific task performance. Aims: The primary aim was to…

  14. Validation of learning assessments: A primer.

    PubMed

    Peeters, Michael J; Martin, Beth A

    2017-09-01

    The Accreditation Council for Pharmacy Education's Standards 2016 has placed greater emphasis on validating educational assessments. In this paper, we describe validity, reliability, and validation principles, drawing attention to the conceptual change that highlights one validity with multiple evidence sources; to this end, we recommend abandoning historical (confusing) terminology associated with the term validity. Further, we describe and apply Kane's framework (scoring, generalization, extrapolation, and implications) for the process of validation, with its inferences and conclusions from varied uses of assessment instruments by different colleges and schools of pharmacy. We then offer five practical recommendations that can improve reporting of validation evidence in pharmacy education literature. We describe application of these recommendations, including examples of validation evidence in the context of pharmacy education. After reading this article, the reader should be able to understand the current concept of validation, and use a framework as they validate and communicate their own institution's learning assessments. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. Safety of the Surrogate Microorganism Enterococcus faecium NRRL B-2354 for Use in Thermal Process Validation

    PubMed Central

    Kopit, Lauren M.; Kim, Eun Bae; Siezen, Roland J.; Harris, Linda J.

    2014-01-01

    Enterococcus faecium NRRL B-2354 is a surrogate microorganism used in place of pathogens for validation of thermal processing technologies and systems. We evaluated the safety of strain NRRL B-2354 based on its genomic and functional characteristics. The genome of E. faecium NRRL B-2354 was sequenced and found to comprise a 2,635,572-bp chromosome and a 214,319-bp megaplasmid. A total of 2,639 coding sequences were identified, including 45 genes unique to this strain. Hierarchical clustering of the NRRL B-2354 genome with 126 other E. faecium genomes as well as pbp5 locus comparisons and multilocus sequence typing (MLST) showed that the genotype of this strain is most similar to commensal, or community-associated, strains of this species. E. faecium NRRL B-2354 lacks antibiotic resistance genes, and both NRRL B-2354 and its clonal relative ATCC 8459 are sensitive to clinically relevant antibiotics. This organism also lacks, or contains nonfunctional copies of, enterococcal virulence genes including acm, cyl, the ebp operon, esp, gelE, hyl, IS16, and associated phenotypes. It does contain scm, sagA, efaA, and pilA, although either these genes were not expressed or their roles in enterococcal virulence are not well understood. Compared with the clinical strains TX0082 and 1,231,502, E. faecium NRRL B-2354 was more resistant to acidic conditions (pH 2.4) and high temperatures (60°C) and was able to grow in 8% ethanol. These findings support the continued use of E. faecium NRRL B-2354 in thermal process validation of food products. PMID:24413604

  16. Data Validation Package - April and July 2015 Groundwater and Surface Water Sampling at the Gunnison, Colorado, Processing Site

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linard, Joshua; Campbell, Sam

    This event included annual sampling of groundwater and surface water locations at the Gunnison, Colorado, Processing Site. Sampling and analyses were conducted as specified in the Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites. Samples were collected from 28 monitoring wells, three domestic wells, and six surface locations in April at the processing site as specified in the 2010 Ground Water Compliance Action Plan for the Gunnison, Colorado, Processing Site. Domestic wells 0476 and 0477 were sampled in July because the homes were unoccupied in April and the wells were not in use. Duplicate samples were collected from locations 0113, 0248, and 0477. One equipment blank was collected during this sampling event. Water levels were measured at all monitoring wells that were sampled. No issues requiring additional action or follow-up were identified during the data validation process.

  17. Excavator Design Validation

    NASA Technical Reports Server (NTRS)

    Pholsiri, Chalongrath; English, James; Seberino, Charles; Lim, Yi-Je

    2010-01-01

    The Excavator Design Validation tool verifies excavator designs by automatically generating control systems and modeling their performance in an accurate simulation of their expected environment. Part of this software design includes interfacing with human operators, who can be included in simulation-based studies and validation. This is essential for assessing productivity, versatility, and reliability. This software combines automatic control system generation from CAD (computer-aided design) models, rapid validation of complex mechanism designs, and detailed models of the environment, including soil, dust, temperature, remote supervision, and communication latency, to create a system of high value. Unique algorithms have been created for controlling and simulating complex robotic mechanisms automatically from just a CAD description. These algorithms are implemented as a commercial cross-platform C++ software toolkit that is configurable using the Extensible Markup Language (XML). The algorithms work with virtually any mobile robotic mechanism using module descriptions that adhere to the XML standard. In addition, high-fidelity, real-time physics-based simulation algorithms have also been developed that include models of internal forces and the forces produced when a mechanism interacts with the outside world. This capability is combined with an innovative organization for simulation algorithms, new regolith simulation methods, and a unique control and study architecture to make powerful tools with the potential to transform the way NASA verifies and compares excavator designs. Energid's Actin software has been leveraged for this design validation. The architecture includes parametric and Monte Carlo studies tailored for validation of excavator designs and their control by remote human operators. It also includes the ability to interface with third-party software and human-input devices. Two types of simulation models have been adapted: high-fidelity discrete

  18. USING CFD TO ANALYZE NUCLEAR SYSTEMS BEHAVIOR: DEFINING THE VALIDATION REQUIREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard Schultz

    2012-09-01

    A recommended protocol for formulating numeric tool specifications and validation needs in concert with practices accepted by regulatory agencies for advanced reactors is described. The protocol is based on the plant type and perceived transient and accident envelopes, which translate to boundary conditions for a process that gives: (a) the key phenomena and figures-of-merit which must be analyzed to ensure that the advanced plant can be licensed, (b) the specification of the numeric tool capabilities necessary to perform the required analyses, including bounding calculational uncertainties, and (c) the specification of the validation matrices and experiments, including the desired validation data. Applying the process enables a complete program, including costs, to be defined for creating and benchmarking transient and accident analysis methods for advanced reactors. By following a process that is in concert with regulatory agency licensing requirements from start to finish, based on historical acceptance of past licensing submittals, the methods derived and validated have a high probability of regulatory agency acceptance.

  19. An Overview of NASA's IM&S Verification and Validation Process Plan and Specification for Space Exploration

    NASA Technical Reports Server (NTRS)

    Gravitz, Robert M.; Hale, Joseph

    2006-01-01

    NASA's Exploration Systems Mission Directorate (ESMD) is implementing a management approach for modeling and simulation (M&S) that will provide decision-makers information on a model's fidelity, credibility, and quality. This information will allow the decision-maker to understand the risks involved in using a model's results in the decision-making process. This presentation will discuss NASA's approach for verification and validation (V&V) of its models and simulations supporting space exploration. It will describe NASA's V&V process and the associated M&S V&V activities required to support the decision-making process. The M&S V&V Plan and V&V Report templates for ESMD will also be illustrated.

  20. Danish translation and linguistic validation of the BODY-Q: a description of the process.

    PubMed

    Poulsen, Lotte; Rose, Michael; Klassen, Anne; Roessler, Kirsten K; Sørensen, Jens Ahm

    2017-01-01

    Patient-reported outcome (PRO) instruments are increasingly being included in research and clinical practice to assess the patient point of view. Bariatric and body contouring surgery has the potential to improve or restore a patient's body image and health-related quality of life (HR-QOL). A new PRO instrument, called the BODY-Q, has recently been developed specifically for this patient group. The aim of the current study was to translate and perform a linguistic validation of the BODY-Q for use in Danish bariatric and body contouring patients. The translation was performed in accordance with the International Society for Pharmacoeconomics and Outcomes Research (ISPOR) and World Health Organization (WHO) recommendations. Main steps taken included forward and backward translations, an expert panel meeting, and cognitive patient interviews. All translators aimed to conduct a conceptual rather than literal translation and used a simple and clear formulation to create a translation understandable to all patients. The linguistic translation process led to a conceptually equivalent Danish version of the BODY-Q. The comparison between the back translation of the first Danish version and the original English version of the BODY-Q identified 18 items or instructions requiring re-translation. The expert panel helped to identify and resolve inadequate expressions and concepts of the translation. The panel identified 31 items or instructions that needed to be changed, while the cognitive interviews led to seven major revisions. Assessing the impact of weight loss methods such as bariatric surgery and body contouring surgery on patients' HR-QOL would benefit from input from the patient perspective. A thorough translation and linguistic validation must be considered an essential step when implementing a PRO instrument in another language and/or culture. A combination of the ISPOR and WHO guidelines contributed to a straightforward and thorough translation methodology.

  1. Predictors of parental discretionary choice provision using the health action process approach framework: Development and validation of a self-reported questionnaire for parents of 4-7-year-olds.

    PubMed

    Johnson, Brittany J; Zarnowiecki, Dorota; Hendrie, Gilly A; Golley, Rebecca K

    2018-02-21

    Children's intake of discretionary choices is excessive. This study aimed to develop a questionnaire measuring parents' attitudes and beliefs towards limiting provision of discretionary choices, using the Health Action Process Approach model. The questionnaire items were informed by the Health Action Process Approach model, which extends the Theory of Planned Behaviour to include both motivational (intention) and volitional (post-intention) factors that influence behaviour change. The questionnaire was piloted for content and face validity (expert panel, n = 5; parents, n = 4). Construct and predictive validity were examined in a sample of 178 parents of 4-7-year-old children who completed the questionnaire online. Statistical analyses included exploratory factor analyses, Cronbach's alpha and multiple linear regression. Pilot testing supported content and face validity. Principal component analyses identified constructs that aligned with the eight constructs of the Health Action Process Approach model. Internal consistencies were high for all subscales, in both the motivational (Cronbach's alpha 0.77-0.88) and volitional phase (Cronbach's alpha 0.85-0.92). Initial results from validation tests support the development of a new questionnaire for measuring parent attitudes and beliefs regarding provision of discretionary choices to their 4-7-year-old children within the home. This new questionnaire can be used to gain greater insight into the parental attitudes and beliefs that influence the ability to limit the provision of discretionary choices to children. Further research to expand understanding of the questionnaire's psychometric properties would be valuable, including confirmatory factor analysis and reproducibility. © 2018 Dietitians Association of Australia.

  2. "La Clave Profesional": Validation of a Vocational Guidance Instrument

    ERIC Educational Resources Information Center

    Mudarra, Maria J.; Lázaro Martínez, Ángel

    2014-01-01

    Introduction: The current study demonstrates the empirical and cultural validity of "La Clave Profesional" (a Spanish adaptation of the Career Key, Jones's test based on Holland's RIASEC model). The process of providing validity evidence also includes a reflection on personal and career development and examines the relationships between RIASEC…

  3. Development and validation of a ten-item questionnaire with explanatory illustrations to assess upper extremity disorders: favorable effect of illustrations in the item reduction process.

    PubMed

    Kurimoto, Shigeru; Suzuki, Mikako; Yamamoto, Michiro; Okui, Nobuyuki; Imaeda, Toshihiko; Hirata, Hitoshi

    2011-11-01

    The purpose of this study is to develop a short and valid measure for upper extremity disorders and to assess the effect of attached illustrations in the item reduction of a self-administered disability questionnaire while retaining psychometric properties. A validated questionnaire used to assess upper extremity disorders, the Hand20, was reduced to ten items using two item-reduction techniques. The psychometric properties of the abbreviated form, the Hand10, were evaluated on a sample independent of that used for the shortening process. Validity, reliability, and responsiveness of the Hand10 were retained in the item reduction process. It is possible that the explanatory illustrations attached to the Hand10 helped with its reproducibility. The illustrations for the Hand10 promoted text comprehension and motivation to answer the items. These changes resulted in high acceptability; more than 99.3% of patients, including 98.5% of elderly patients, could complete the Hand10 properly. The illustrations had favorable effects on the item reduction process and made it possible to retain the precision of the instrument. The Hand10 is a reliable and valid instrument for individual-level applications, with the advantage of being compact and broadly applicable, even in elderly individuals.

  4. Development and validation of the social information processing application: a Web-based measure of social information processing patterns in elementary school-age boys.

    PubMed

    Kupersmidt, Janis B; Stelter, Rebecca; Dodge, Kenneth A

    2011-12-01

    The purpose of this study was to evaluate the psychometric properties of an audio computer-assisted self-interviewing Web-based software application called the Social Information Processing Application (SIP-AP) that was designed to assess social information processing skills in boys in 3rd through 5th grades. This study included a racially and ethnically diverse sample of 244 boys ages 8 through 12 (M = 9.4) from public elementary schools in 3 states. The SIP-AP includes 8 videotaped vignettes, filmed from the first-person perspective, that depict common misunderstandings among boys. Each vignette shows a negative outcome for the victim and ambiguous intent on the part of the perpetrator. Boys responded to 16 Web-based questions representing the 5 social information processing mechanisms, after viewing each vignette. Parents and teachers completed measures assessing boys' antisocial behavior. Confirmatory factor analyses revealed that a model positing the original 5 cognitive mechanisms fit the data well when the items representing prosocial cognitions were included on their own factor, creating a 6th factor. The internal consistencies for each of the 16 individual cognitions as well as for the 6 cognitive mechanism scales were excellent. Boys with elevated scores on 5 of the 6 cognitive mechanisms exhibited more antisocial behavior than boys whose scores were not elevated. These findings highlight the need for further research on the measurement of prosocial cognitions or cognitive strengths in boys in addition to assessing cognitive deficits. Findings suggest that the SIP-AP is a reliable and valid tool for use in future research of social information processing skills in boys.

  5. Development and Validation of the Social Information Processing Application: A Web-Based Measure of Social Information Processing Patterns in Elementary School-Age Boys

    PubMed Central

    Kupersmidt, Janis B.; Stelter, Rebecca; Dodge, Kenneth A.

    2013-01-01

    The purpose of this study was to evaluate the psychometric properties of an audio computer-assisted self-interviewing Web-based software application called the Social Information Processing Application (SIP-AP) that was designed to assess social information processing skills in boys in 3rd through 5th grades. This study included a racially and ethnically diverse sample of 244 boys ages 8 through 12 (M = 9.4) from public elementary schools in 3 states. The SIP-AP includes 8 videotaped vignettes, filmed from the first-person perspective, that depict common misunderstandings among boys. Each vignette shows a negative outcome for the victim and ambiguous intent on the part of the perpetrator. Boys responded to 16 Web-based questions representing the 5 social information processing mechanisms, after viewing each vignette. Parents and teachers completed measures assessing boys’ antisocial behavior. Confirmatory factor analyses revealed that a model positing the original 5 cognitive mechanisms fit the data well when the items representing prosocial cognitions were included on their own factor, creating a 6th factor. The internal consistencies for each of the 16 individual cognitions as well as for the 6 cognitive mechanism scales were excellent. Boys with elevated scores on 5 of the 6 cognitive mechanisms exhibited more antisocial behavior than boys whose scores were not elevated. These findings highlight the need for further research on the measurement of prosocial cognitions or cognitive strengths in boys in addition to assessing cognitive deficits. Findings suggest that the SIP-AP is a reliable and valid tool for use in future research of social information processing skills in boys. PMID:21534693

  6. Validation of a DNA IQ-based extraction method for TECAN robotic liquid handling workstations for processing casework.

    PubMed

    Frégeau, Chantal J; Lett, C Marc; Fourney, Ron M

    2010-10-01

    A semi-automated DNA extraction process for casework samples based on the Promega DNA IQ™ system was optimized and validated on TECAN Genesis 150/8 and Freedom EVO robotic liquid handling stations configured with fixed tips and a TECAN TE-Shake™ unit. The use of an orbital shaker during the extraction process promoted efficiency with respect to DNA capture, magnetic bead/DNA complex washes, and DNA elution. Validation studies determined the reliability and limitations of this shaker-based process. Reproducibility with regard to DNA yields for the tested robotic workstations proved to be excellent and not significantly different from that offered by manual phenol/chloroform extraction. DNA extraction of animal:human blood mixtures contaminated with soil demonstrated that a human profile was detectable even in the presence of abundant animal blood. For exhibits containing small amounts of biological material, concordance studies confirmed that DNA yields for this shaker-based extraction process are equivalent or greater to those observed with phenol/chloroform extraction as well as our original validated automated magnetic bead percolation-based extraction process. Our data further support the increasing use of robotics for the processing of casework samples. Crown Copyright © 2009. Published by Elsevier Ireland Ltd. All rights reserved.

  7. Validation of the Technological Process of the Preparation "Milk by Vidal".

    PubMed

    Savchenko, L P; Mishchenko, V A; Georgiyants, V A

    2017-01-01

    Validation was performed on the technological process of the compounded preparation "Milk by Vidal" in accordance with the requirements of the regulatory framework of Ukraine. Critical stages of formulation which can affect the quality of the finished preparation were considered during the research. The obtained results indicated that the quality of the finished preparation met the requirements of the State Pharmacopoeia of Ukraine. Copyright© by International Journal of Pharmaceutical Compounding, Inc.

  8. The Reliability and Validity of a Performance Task for Evaluating Science Process Skills.

    ERIC Educational Resources Information Center

    Adams, Cheryll M.; Callahan, Carolyn M.

    1995-01-01

    The Diet Cola Test was designed as a process assessment of science aptitude in intermediate grade students. Investigations of the instrument's reliability and validity indicated that data did not support use of the instrument for identifying individual students' aptitude. However, results suggested the test's appropriateness for evaluating…

  9. 25 CFR 42.7 - What does due process in a formal disciplinary proceeding include?

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 25 Indians 1 2010-04-01 2010-04-01 false What does due process in a formal disciplinary proceeding... RIGHTS § 42.7 What does due process in a formal disciplinary proceeding include? Due process must include... rendering a disciplinary decision. (b) The school must hold a fair and impartial hearing before imposing...

  10. Development and Validation of the Evidence-Based Practice Process Assessment Scale: Preliminary Findings

    ERIC Educational Resources Information Center

    Rubin, Allen; Parrish, Danielle E.

    2010-01-01

    Objective: This report describes the development and preliminary findings regarding the reliability, validity, and sensitivity of a scale that has been developed to assess practitioners' perceived familiarity with, attitudes about, and implementation of the phases of the evidence-based practice (EBP) process. Method: After a panel of national…

  11. PIV Data Validation Software Package

    NASA Technical Reports Server (NTRS)

    Blackshire, James L.

    1997-01-01

    A PIV data validation and post-processing software package was developed to provide semi-automated data validation and data reduction capabilities for Particle Image Velocimetry data sets. The software provides three primary capabilities including (1) removal of spurious vector data, (2) filtering, smoothing, and interpolating of PIV data, and (3) calculations of out-of-plane vorticity, ensemble statistics, and turbulence statistics information. The software runs on an IBM PC/AT host computer working either under Microsoft Windows 3.1 or Windows 95 operating systems.
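
    The first capability, removal of spurious vectors, is commonly implemented as a local median test. Since the abstract does not specify the package's exact algorithm, the sketch below is a hedged illustration using the normalized median test of Westerweel and Scarano on one velocity component; the field, threshold, and epsilon values are assumptions.

    ```python
    # Generic normalized median test for spurious PIV vectors; a sketch,
    # not the package's documented algorithm.
    import numpy as np

    def normalized_median_test(u, eps=0.1, threshold=2.0):
        """Flag spurious vectors in one velocity component u (2D array)."""
        ny, nx = u.shape
        flags = np.zeros_like(u, dtype=bool)
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                neigh = np.delete(u[j-1:j+2, i-1:i+2].ravel(), 4)  # 8 neighbours
                med = np.median(neigh)
                r_med = np.median(np.abs(neigh - med))  # neighbourhood residual scale
                flags[j, i] = abs(u[j, i] - med) / (r_med + eps) > threshold
        return flags

    u = np.random.default_rng(0).normal(size=(32, 32))
    u[10, 10] = 25.0                            # plant one outlier vector
    print(normalized_median_test(u)[10, 10])    # True: flagged for removal
    ```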

  12. Modification and Validation of an Automotive Data Processing Unit, Compressed Video System, and Communications Equipment

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Carter, R.J.

    1997-04-01

    The primary purpose of the "modification and validation of an automotive data processing unit (DPU), compressed video system, and communications equipment" cooperative research and development agreement (CRADA) was to modify and validate both hardware and software, developed by Scientific Atlanta, Incorporated (S-A) for defense applications (e.g., rotary-wing aircraft), for the commercial sector surface transportation domain (i.e., automobiles and trucks). S-A also furnished a state-of-the-art compressed video digital storage and retrieval system (CVDSRS), and off-the-shelf data storage and transmission equipment to support the data acquisition system for crash avoidance research (DASCAR) project conducted by Oak Ridge National Laboratory (ORNL). In turn, S-A received access to hardware and technology related to DASCAR. DASCAR was subsequently removed completely and installation was repeated a number of times to gain an accurate idea of complete installation, operation, and removal of DASCAR. Upon satisfactory completion of the DASCAR construction and preliminary shakedown, ORNL provided NHTSA with an operational demonstration of DASCAR at their East Liberty, OH test facility. The demonstration included an on-the-road demonstration of the entire data acquisition system using NHTSA's test track. In addition, the demonstration also consisted of a briefing containing the following: ORNL generated a plan for validating the prototype data acquisition system with regard to: removal of DASCAR from an existing vehicle, and installation and calibration in other vehicles; reliability of the sensors and systems; data collection and transmission process (data integrity); impact on the drivability of the vehicle and obtrusiveness of the system to the driver; data analysis procedures; conspicuousness of the vehicle to other drivers; and DASCAR installation and removal training and documentation. In order to identify any operational problems not captured by the systems

  13. Validation of the Chinese Expanded Euthanasia Attitude Scale

    ERIC Educational Resources Information Center

    Chong, Alice Ming-Lin; Fok, Shiu-Yeu

    2013-01-01

    This article reports the validation of the Chinese version of an expanded 31-item Euthanasia Attitude Scale. A 4-stage validation process included a pilot survey of 119 college students and a randomized household survey with 618 adults in Hong Kong. Confirmatory factor analysis confirmed a 4-factor structure of the scale, which can therefore be…

  14. Verification and Validation Process for Progressive Damage and Failure Analysis Methods in the NASA Advanced Composites Consortium

    NASA Technical Reports Server (NTRS)

    Wanthal, Steven; Schaefer, Joseph; Justusson, Brian; Hyder, Imran; Engelstad, Stephen; Rose, Cheryl

    2017-01-01

    The Advanced Composites Consortium is a US Government/Industry partnership supporting technologies to enable timeline and cost reduction in the development of certified composite aerospace structures. A key component of the consortium's approach is the development and validation of improved progressive damage and failure analysis methods for composite structures. These methods will enable increased use of simulations in design trade studies and detailed design development, and thereby enable more targeted physical test programs to validate designs. To accomplish this goal with confidence, a rigorous verification and validation process was developed. The process was used to evaluate analysis methods and associated implementation requirements to ensure calculation accuracy and to gauge predictability for composite failure modes of interest. This paper introduces the verification and validation process developed by the consortium during the Phase I effort of the Advanced Composites Project. Specific structural failure modes of interest are first identified, and a subset of standard composite test articles is proposed to interrogate a progressive damage analysis method's ability to predict each failure mode of interest. Test articles are designed to capture the underlying composite material constitutive response as well as the interaction of failure modes representing typical failure patterns observed in aerospace structures.

  15. [Support of the nursing process through electronic nursing documentation systems (UEPD) – Initial validation of an instrument].

    PubMed

    Hediger, Hannele; Müller-Staub, Maria; Petry, Heidi

    2016-01-01

    Electronic nursing documentation systems, with standardized nursing terminology, are IT-based systems for recording the nursing process. These systems have the potential to improve the documentation of the nursing process and to support nurses in care delivery. This article describes the development and initial validation of an instrument (known by its German acronym UEPD) to measure the subjectively perceived benefits of an electronic nursing documentation system in care delivery. The validity of the UEPD was examined by means of an evaluation study carried out in an acute care hospital (n = 94 nurses) in German-speaking Switzerland. Construct validity was analyzed by principal components analysis. Initial evidence of the validity of the UEPD could be verified. The analysis showed a stable four-factor model (FS = 0.89) comprising 25 items. All factors loaded ≥ 0.50 and the scales demonstrated high internal consistency (Cronbach's α = 0.73 – 0.90). Principal component analysis revealed four dimensions of support: establishing nursing diagnoses and goals; recording a case history/an assessment and documenting the nursing process; implementation and evaluation; and information exchange. Further testing with larger control samples and with different electronic documentation systems is needed. Another potential direction would be to employ the UEPD in a comparison of various electronic documentation systems.
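
    The reliabilities reported above (Cronbach's α = 0.73 – 0.90) follow from the standard alpha formula applied to an item-score matrix. Below is a minimal sketch of that computation, with simulated ratings standing in for the UEPD data, which are not reproduced here.

    ```python
    # Cronbach's alpha from a respondents-by-items score matrix.
    # The item scores below are simulated, not the UEPD study data.
    import numpy as np

    def cronbach_alpha(scores):
        """scores: rows = respondents, columns = items."""
        scores = np.asarray(scores, dtype=float)
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    rng = np.random.default_rng(1)
    latent = rng.normal(size=(94, 1))                    # n = 94 nurses, as in the study
    items = latent + rng.normal(scale=0.8, size=(94, 6))
    print(round(cronbach_alpha(items), 2))
    ```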

  16. The Music Therapy Session Assessment Scale (MT-SAS): Validation of a new tool for music therapy process evaluation.

    PubMed

    Raglio, Alfredo; Gnesi, Marco; Monti, Maria Cristina; Oasi, Osmano; Gianotti, Marta; Attardo, Lapo; Gontero, Giulia; Morotti, Lara; Boffelli, Sara; Imbriani, Chiara; Montomoli, Cristina; Imbriani, Marcello

    2017-11-01

    Music therapy (MT) interventions are aimed at creating and developing a relationship between patient and therapist. However, there is a lack of validated observational instruments to consistently evaluate the MT process. The purpose of this study was the validation of the Music Therapy Session Assessment Scale (MT-SAS), designed to assess the relationship between therapist and patient during active MT sessions. Videotapes of a single 30-min session per patient were considered. A pilot study on the videotapes of 10 patients was carried out to help refine the items, define the scoring system and improve inter-rater reliability among the five raters. Then, a validation study on 100 patients with different clinical conditions was carried out. The Italian MT-SAS was used throughout the process, although we also provide an English translation. The final scale consisted of 7 binary items accounting for eye contact, countenance, and nonverbal and sound-music communication. In the pilot study, raters were found to share an acceptable level of agreement in their assessments. Exploratory factor analysis disclosed a single homogeneous factor including 6 items (thus supporting an ordinal total score), with only the item about eye contact being unrelated to the others. Moreover, the existence of 2 different archetypal profiles of attuned and disattuned behaviours was highlighted through multiple correspondence analysis. As suggested by the consistent results of 2 different analyses, MT-SAS is a reliable tool that globally evaluates sonorous-musical and nonverbal behaviours related to emotional attunement and empathetic relationship between patient and therapist during active MT sessions. Copyright © 2017 John Wiley & Sons, Ltd.

  17. Screening tool for oropharyngeal dysphagia in stroke - Part I: evidence of validity based on the content and response processes.

    PubMed

    Almeida, Tatiana Magalhães de; Cola, Paula Cristina; Pernambuco, Leandro de Araújo; Magalhães, Hipólito Virgílio; Magnoni, Carlos Daniel; Silva, Roberta Gonçalves da

    2017-08-17

    The aim of the present study was to identify the evidence of validity based on the content and response processes of the Rastreamento de Disfagia Orofaríngea no Acidente Vascular Encefálico (RADAVE; "Screening Tool for Oropharyngeal Dysphagia in Stroke"). The criteria used to elaborate the questions were based on a literature review. A group of judges consisting of 19 different health professionals evaluated the relevance and representativeness of the questions, and the results were analyzed using the Content Validity Index. To gather evidence of validity based on the response processes, 23 health professionals administered the screening tool and analyzed the questions using a structured scale and cognitive interviews. The RADAVE is structured to be applied in two stages. The first version consisted of 18 questions in stage I and 11 questions in stage II. Eight questions in stage I and four in stage II did not reach the minimum Content Validity Index and required reformulation by the authors. The cognitive interviews revealed some misconceptions. New adjustments were made and the final version comprised 12 questions in stage I and six questions in stage II. It was possible to develop a screening tool for dysphagia in stroke with adequate evidence of validity based on content and response processes. Both sources of validity evidence obtained so far allowed the screening tool to be adjusted in relation to its construct. The next studies will analyze the other evidences of validity and the measures of accuracy.
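
    The item-level Content Validity Index used above is simply the proportion of judges rating an item as relevant (e.g., 3 or 4 on a 4-point scale). A minimal sketch follows, with hypothetical ratings rather than the RADAVE panel's actual scores.

    ```python
    # Item-level Content Validity Index (I-CVI); ratings are hypothetical.
    import numpy as np

    def item_cvi(ratings, relevant_min=3):
        """Proportion of judges rating the item at or above relevant_min."""
        return (np.asarray(ratings) >= relevant_min).mean()

    judge_ratings = [4, 3, 4, 2, 4, 3, 3, 4, 4, 3, 4, 4, 3, 2, 4, 4, 3, 4, 4]  # 19 judges
    print(round(item_cvi(judge_ratings), 2))  # items below a chosen cutoff get reformulated
    ```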

  18. Numerical Validation of Chemical Compositional Model for Wettability Alteration Processes

    NASA Astrophysics Data System (ADS)

    Bekbauov, Bakhbergen; Berdyshev, Abdumauvlen; Baishemirov, Zharasbek; Bau, Domenico

    2017-12-01

    Chemical compositional simulation of enhanced oil recovery and surfactant enhanced aquifer remediation processes is a complex task that involves solving dozens of equations for all grid blocks representing a reservoir. In the present work, we perform a numerical validation of the newly developed mathematical formulation which satisfies the conservation laws of mass and energy and allows applying a sequential solution approach to solve the governing equations separately and implicitly. Through its application to the numerical experiment using a wettability alteration model and comparisons with existing chemical compositional model's numerical results, the new model has proven to be practical, reliable and stable.

  19. Obtaining Valid Safety Data for Software Safety Measurement and Process Improvement

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Zelkowitz, Marvin V.; Layman, Lucas; Dangle, Kathleen; Diep, Madeline

    2010-01-01

    We report on a preliminary case study to examine software safety risk in the early design phase of the NASA Constellation spaceflight program. Our goal is to provide NASA quality assurance managers with information regarding the ongoing state of software safety across the program. We examined 154 hazard reports created during the preliminary design phase of three major flight hardware systems within the Constellation program. Our purpose was two-fold: 1) to quantify the relative importance of software with respect to system safety; and 2) to identify potential risks due to incorrect application of the safety process, deficiencies in the safety process, or the lack of a defined process. One early outcome of this work was to show that there are structural deficiencies in collecting valid safety data that make software safety different from hardware safety. In our conclusions we present some of these deficiencies.

  20. EOS Terra Validation Program

    NASA Technical Reports Server (NTRS)

    Starr, David

    2000-01-01

    The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Clouds and Earth's Radiant Energy System (CERES), Multi-Angle Imaging Spectroradiometer (MISR), Moderate Resolution Imaging Spectroradiometer (MODIS) and Measurements of Pollution in the Troposphere (MOPITT). In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. The intensive validation activities planned for the first year of the Terra

  1. Automated identification of wound information in clinical notes of patients with heart diseases: Developing and validating a natural language processing application.

    PubMed

    Topaz, Maxim; Lai, Kenneth; Dowding, Dawn; Lei, Victor J; Zisberg, Anna; Bowles, Kathryn H; Zhou, Li

    2016-12-01

    Electronic health records are being increasingly used by nurses with up to 80% of the health data recorded as free text. However, only a few studies have developed nursing-relevant tools that help busy clinicians to identify information they need at the point of care. This study developed and validated one of the first automated natural language processing applications to extract wound information (wound type, pressure ulcer stage, wound size, anatomic location, and wound treatment) from free text clinical notes. First, two human annotators manually reviewed a purposeful training sample (n=360) and random test sample (n=1100) of clinical notes (including 50% discharge summaries and 50% outpatient notes), identified wound cases, and created a gold standard dataset. We then trained and tested our natural language processing system (known as MTERMS) to process the wound information. Finally, we assessed our automated approach by comparing system-generated findings against the gold standard. We also compared the prevalence of wound cases identified from free-text data with coded diagnoses in the structured data. The testing dataset included 101 notes (9.2%) with wound information. The overall system performance was good (F-measure = 92.7%, a composite measure of the system's accuracy), with best results for wound treatment (F-measure = 95.7%) and poorest results for wound size (F-measure = 81.9%). Only 46.5% of wound notes had a structured code for a wound diagnosis. The natural language processing system achieved good performance on a subset of randomly selected discharge summaries and outpatient notes. In more than half of the wound notes, there were no coded wound diagnoses, which highlights the significance of using natural language processing to enrich clinical decision making. Our future steps will include expansion of the application's information coverage to other relevant wound factors and validation of the model with external data. Copyright © 2016 Elsevier Ltd. All rights reserved.
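
    The F-measure cited above is the harmonic mean of precision and recall computed against the gold-standard annotations. A minimal sketch, with counts invented for illustration rather than taken from the study:

    ```python
    # F-measure from true positives, false positives, and false negatives.
    # The counts are hypothetical, not the study's tallies.
    def f_measure(tp, fp, fn):
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)
        return 2 * precision * recall / (precision + recall)

    print(round(f_measure(tp=90, fp=8, fn=6), 3))  # 0.928, i.e. ~92.8%
    ```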

  2. TES Validation Reports

    Atmospheric Science Data Center

    2014-06-30

    TES Validation Reports by TES data version:
    TES Validation Report Version 6.0 (PDF): R13 processing version; F07_10 file versions
    TES Validation Report Version 5.0 (PDF): R12 processing version; F06_08, F06_09 file ...

  3. The Hyper-X Flight Systems Validation Program

    NASA Technical Reports Server (NTRS)

    Redifer, Matthew; Lin, Yohan; Bessent, Courtney Amos; Barklow, Carole

    2007-01-01

    For the Hyper-X/X-43A program, the development of a comprehensive validation test plan played an integral part in the success of the mission. The goal was to demonstrate hypersonic propulsion technologies by flight testing an airframe-integrated scramjet engine. Preparation for flight involved both verification and validation testing. By definition, verification is the process of assuring that the product meets design requirements; whereas validation is the process of assuring that the design meets mission requirements for the intended environment. This report presents an overview of the program with emphasis on the validation efforts. It includes topics such as hardware-in-the-loop, failure modes and effects, aircraft-in-the-loop, plugs-out, power characterization, antenna pattern, integration, combined systems, captive carry, and flight testing. Where applicable, test results are also discussed. The report provides a brief description of the flight systems onboard the X-43A research vehicle and an introduction to the ground support equipment required to execute the validation plan. The intent is to provide validation concepts that are applicable to current, follow-on, and next generation vehicles that share the hybrid spacecraft and aircraft characteristics of the Hyper-X vehicle.

  4. An integrated assessment instrument: Developing and validating instrument for facilitating critical thinking abilities and science process skills on electrolyte and nonelectrolyte solution matter

    NASA Astrophysics Data System (ADS)

    Astuti, Sri Rejeki Dwi; Suyanta, LFX, Endang Widjajanti; Rohaeti, Eli

    2017-05-01

    The demands placed on assessment in the learning process have been affected by policy changes. Nowadays, assessment emphasizes not only knowledge, but also skills and attitudes. In reality, however, there are many obstacles to measuring them. This paper aims to describe how an integrated assessment instrument was developed and to verify the instrument's validity, including content validity and construct validity. The instrument was developed using the test development model by McIntire. Development process data were acquired at each step of test development. The initial product was reviewed by three peer reviewers and six expert judges (two subject matter experts, two evaluation experts and two chemistry teachers) to establish content validity. This research involved 376 first-grade students from two senior high schools in Bantul Regency to establish construct validity. Content validity was analyzed using Aiken's formula. Construct validity was verified by exploratory factor analysis using SPSS ver. 16.0. The results show that all constructs in the integrated assessment instrument are valid according to both content validity and construct validity. Therefore, the integrated assessment instrument is suitable for measuring critical thinking abilities and science process skills of senior high school students on electrolyte solution matter.

  5. Development and Validation of a Theory Based Screening Process for Suicide Risk

    DTIC Science & Technology

    2015-09-01

    … not be delayed until all data have been collected. This is with particular respect to our data confirming that soldiers under-report suicide ideation, and that, while they say they would inform loved ones about suicidal thoughts, over 50% of soldiers who endorse ideation have not told anyone. Award Number: W81XWH-11-1-0588.

  6. Power Plant Model Validation Tool

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    The PPMV is used to validate generator models using disturbance recordings. The PPMV tool contains a collection of power plant models and model validation studies, as well as disturbance recordings from a number of historic grid events. The user can import data from a new disturbance into the database, which converts PMU and SCADA data into GE PSLF format, and then run the tool to validate (or invalidate) the model for a specific power plant against its actual performance. The PNNL PPMV tool enables the automation of the power plant model validation process using disturbance recordings. The tool uses PMU and SCADA measurements as input information, automatically adjusts all required EPCL scripts, and interacts with GE PSLF in batch mode. The main tool features include: interaction with GE PSLF; use of the GE PSLF Play-In function for generator model validation; a database of projects (model validation studies); a database of historic events; a database of power plants; advanced visualization capabilities; and automatic report generation.

  7. Validity of an Integrative Method for Processing Physical Activity Data.

    PubMed

    Ellingson, Laura D; Schwabacher, Isaac J; Kim, Youngwon; Welk, Gregory J; Cook, Dane B

    2016-08-01

    Accurate assessments of both physical activity and sedentary behaviors are crucial to understand the health consequences of movement patterns and to track changes over time and in response to interventions. The study evaluates the validity of an integrative, machine learning method for processing activity monitor data in relation to a portable metabolic analyzer (Oxycon mobile [OM]) and direct observation (DO). Forty-nine adults (age 18-40 yr) each completed 5-min bouts of 15 activities ranging from sedentary to vigorous intensity in a laboratory setting while wearing ActiGraph (AG) on the hip, activPAL on the thigh, and OM. Estimates of energy expenditure (EE) and categorization of activity intensity were obtained from the AG processed with Lyden's sojourn (SOJ) method and from our new sojourns including posture (SIP) method, which integrates output from the AG and activPAL. Classification accuracy and estimates of EE were then compared with criterion measures (OM and DO) using confusion matrices and comparisons of the mean absolute error of log-transformed data (MAE ln Q). The SIP method had a higher overall classification agreement (79%, 95% CI = 75%-82%) than the SOJ (56%, 95% CI = 52%-59%) based on DO. Compared with OM, estimates of EE from SIP had lower mean absolute error of log-transformed data than SOJ for light-intensity (0.21 vs 0.27), moderate-intensity (0.33 vs 0.42), and vigorous-intensity (0.16 vs 0.35) activities. The SIP method was superior to SOJ for distinguishing between sedentary and light activities as well as estimating EE at higher intensities. Thus, SIP is recommended for research in which accuracy of measurement across the full range of activity intensities is of interest.
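
    The comparison metric above, mean absolute error of log-transformed data, has a very simple form; a sketch with invented energy-expenditure values:

    ```python
    # Mean absolute error of log-transformed estimates vs. criterion (MAE ln Q).
    # EE values below are invented for illustration.
    import numpy as np

    def mae_ln(estimates, criterion):
        return float(np.mean(np.abs(np.log(estimates) - np.log(criterion))))

    om = np.array([3.2, 4.1, 5.0, 6.3])    # criterion EE from the metabolic analyzer
    sip = np.array([3.0, 4.4, 4.7, 6.8])   # hypothetical SIP estimates
    print(round(mae_ln(sip, om), 3))
    ```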

  8. Update: Validation, Edits, and Application Processing. Phase II and Error-Prone Model Report.

    ERIC Educational Resources Information Center

    Gray, Susan; And Others

    An update to the Validation, Edits, and Application Processing and Error-Prone Model Report (Section 1, July 3, 1980) is presented. The objective is to present the most current data obtained from the June 1980 Basic Educational Opportunity Grant applicant and recipient files and to determine whether the findings reported in Section 1 of the July…

  9. Validation of a model for the cast-film process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chambon, F.; Ohlsson, S.; Silagy, D.

    1996-12-31

    We have developed a model of the cast-film process and compared theoretical predictions against experiments on a pilot line. Three polyethylenes with a markedly different level of melt elasticity were used in this evaluation; namely, a high pressure low density polyethylene, LDPE, and two linear low density polyethylenes, LLDPE-1 and LLDPE-2. The final film dimensions of the LDPE were found to be in good agreement with 1-D viscoelastic stationary predictions. Flow field visualization experiments indicate, however, a 2-D velocity field in the airgap between the extrusion die and the chill roll. Taking this observation into account, evolutions of the free surface of the web along the airgap were recorded with LLDPE-2, our least elastic melt. An excellent agreement is found between these measurements and predictions of neck-in and edge bead with 2-D Newtonian stationary simulations. The time-dependent solution, which is based on a linear stability analysis, allows identification of a zone of draw resonance within the working space of the process, defined by the draw ratio, the Deborah number, and the web aspect ratio. It is predicted that increasing this latter parameter stabilizes the process until an optimum value is reached. Experiments with LLDPE-1 are shown to validate this unique theoretical result, thus allowing the draw ratio to be increased by about 75%.

  10. Validity of Scientific Based Chemistry Android Module to Empower Science Process Skills (SPS) in Solubility Equilibrium

    NASA Astrophysics Data System (ADS)

    Antrakusuma, B.; Masykuri, M.; Ulfa, M.

    2018-04-01

    Advances in Android technology can be applied to chemistry learning; one complex chemistry concept is solubility equilibrium, which requires science process skills (SPS). This study aims to: 1) characterize a scientific-based chemistry Android module for empowering SPS, and 2) establish the validity of the module based on content validity and a feasibility test. This research uses a Research and Development (RnD) approach. Research subjects were 135 students and three teachers at three high schools in Boyolali, Central Java. Content validity of the module was tested by seven experts using Aiken's V technique, and the module feasibility was tested with students and teachers in each school. The chemistry module can be accessed on Android devices. Validation of the module contents yielded V = 0.89 (valid), and the feasibility test obtained 81.63% (from students) and 73.98% (from teachers), indicating that the module meets the criteria for good quality.
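
    Aiken's V for a single item is V = Σ(r_i − lo) / (n(c − 1)), where r_i are the expert ratings, lo is the lowest rating category, c the number of categories, and n the number of raters. A sketch mirroring the study's seven raters, though with hypothetical scores:

    ```python
    # Aiken's V content-validity coefficient; expert ratings are hypothetical.
    def aikens_v(ratings, lo=1, c=5):
        s = sum(r - lo for r in ratings)
        return s / (len(ratings) * (c - 1))

    print(round(aikens_v([5, 4, 5, 4, 5, 5, 4]), 2))  # seven raters -> 0.89
    ```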

  11. Finite element analysis of dental implants with validation: to what extent can we expect the model to predict biological phenomena? A literature review and proposal for classification of a validation process.

    PubMed

    Chang, Yuanhan; Tambe, Abhijit Anil; Maeda, Yoshinobu; Wada, Masahiro; Gonda, Tomoya

    2018-03-08

    A literature review of finite element analysis (FEA) studies of dental implants with their model validation process was performed to establish the criteria for evaluating validation methods with respect to their similarity to biological behavior. An electronic literature search of PubMed was conducted up to January 2017 using the Medical Subject Headings "dental implants" and "finite element analysis." After accessing the full texts, the context of each article was searched using the words "valid" and "validation" and articles in which these words appeared were read to determine whether they met the inclusion criteria for the review. Of 601 articles published from 1997 to 2016, 48 that met the eligibility criteria were selected. The articles were categorized according to their validation method as follows: in vivo experiments in humans (n = 1) and other animals (n = 3), model experiments (n = 32), others' clinical data and past literature (n = 9), and other software (n = 2). Validation techniques with a high level of sufficiency and efficiency are still rare in FEA studies of dental implants. High-level validation, especially using in vivo experiments tied to an accurate finite element method, needs to become an established part of FEA studies. The recognition of a validation process should be considered when judging the practicality of an FEA study.

  12. Developing a primary care patient measure of safety (PC PMOS): a modified Delphi process and face validity testing.

    PubMed

    Hernan, Andrea L; Giles, Sally J; O'Hara, Jane K; Fuller, Jeffrey; Johnson, Julie K; Dunbar, James A

    2016-04-01

    Patients are a valuable source of information about ways to prevent harm in primary care and are in a unique position to provide feedback about the factors that contribute to safety incidents. Unlike in the hospital setting, there are currently no tools that allow the systematic capture of this information from patients. The aim of this study was to develop a quantitative primary care patient measure of safety (PC PMOS). A two-stage approach was undertaken to develop questionnaire domains and items. Stage 1 involved a modified Delphi process. An expert panel reached consensus on domains and items based on three sources of information (validated hospital PMOS, previous research conducted by our study team and literature on threats to patient safety). Stage 2 involved testing the face validity of the questionnaire developed during stage 1 with patients and primary care staff using the 'think aloud' method. Following this process, the questionnaire was revised accordingly. The PC PMOS was received positively by both patients and staff during face validity testing. Barriers to completion included the length, relevance and clarity of questions. The final PC PMOS consisted of 50 items across 15 domains. The contributory factors to safety incidents centred on communication, access to care, patient-related factors, organisation and care planning, task performance and information flow. This is the first tool specifically designed for primary care settings, which allows patients to provide feedback about factors contributing to potential safety incidents. The PC PMOS provides a way for primary care organisations to learn about safety from the patient perspective and make service improvements with the aim of reducing harm in this setting. Future research will explore the reliability and construct validity of the PC PMOS. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/

  13. WE-H-BRA-03: Development of a Model to Include the Evolution of Resistant Tumor Subpopulations Into the Treatment Optimization Process for Schedules Involving Targeted Agents in Chemoradiation Therapy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grassberger, C; Paganetti, H

    Purpose: To develop a model that includes the process of resistance development into the treatment optimization process for schedules that include targeted therapies. Further, to validate the approach using clinical data and to apply the model to assess the optimal induction period with targeted agents before curative treatment with chemo-radiation in stage III lung cancer. Methods: Growth of the tumor and its subpopulations is modeled by Gompertzian growth dynamics, resistance induction as a stochastic process. Chemotherapy induced cell kill is modeled by log-cell kill dynamics, targeted agents similarly but restricted to the sensitive population. Radiation induced cell kill is assumed to follow the linear-quadratic model. The validation patient data consist of a cohort of lung cancer patients treated with tyrosine kinase inhibitors that had longitudinal imaging data available. Results: The resistance induction model was successfully validated using clinical trial data from 49 patients treated with targeted agents. The observed recurrence kinetics, with tumors progressing from 1.4–63 months, result in tumor growth equaling a median volume doubling time of 92 days [34–248] and a median fraction of pre-existing resistance of 0.035 [0–0.22], in agreement with previous clinical studies. The model revealed widely varying optimal time points for the use of curative therapy, reaching from ∼1m to >6m depending on the patient’s growth rate and amount of pre-existing resistance. This demonstrates the importance of patient-specific treatment schedules when targeted agents are incorporated into the treatment. Conclusion: We developed a model including evolutionary dynamics of resistant sub-populations with traditional chemotherapy and radiation cell kill models. Fitting to clinical data yielded patient specific growth rates and resistant fraction in agreement with previous studies. Further application of the model demonstrated how proper timing of chemo
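
    To make the named model ingredients concrete, the sketch below combines the exact Gompertz growth solution with linear-quadratic (LQ) radiation cell kill; all parameter values are illustrative assumptions, not the paper's fits, and the stochastic resistance-induction layer is omitted.

    ```python
    # Gompertzian regrowth plus LQ radiation cell kill; parameters are assumed.
    import math

    def gompertz(n0, t, a=0.005, n_max=1e12):
        """Exact Gompertz solution: N(t) = N_max * (N0 / N_max) ** exp(-a * t)."""
        return n_max * (n0 / n_max) ** math.exp(-a * t)

    def lq_survival(dose_gy, alpha=0.3, beta=0.03):
        """Linear-quadratic model: surviving fraction after a single dose."""
        return math.exp(-alpha * dose_gy - beta * dose_gy**2)

    n = 1e9                                  # initial tumour burden (cells)
    for day in range(1, 31):
        n = gompertz(n, 1.0)                 # one day of regrowth
        if day % 7 == 0:
            n *= lq_survival(2.0)            # weekly 2 Gy fraction (illustrative)
    print(f"day-30 burden: {n:.3e} cells")
    ```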

  14. Validation of New Process Models for Large Injection-Molded Long-Fiber Thermoplastic Composite Structures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Nguyen, Ba Nghiep; Jin, Xiaoshi; Wang, Jin

    2012-02-23

    This report describes the work conducted under the CRADA Nr. PNNL/304 between Battelle PNNL and Autodesk whose objective is to validate the new process models developed under the previous CRADA for large injection-molded LFT composite structures. To this end, the ARD-RSC and fiber length attrition models implemented in the 2013 research version of Moldflow were used to simulate the injection molding of 600-mm x 600-mm x 3-mm plaques from 40% glass/polypropylene (Dow Chemical DLGF9411.00) and 40% glass/polyamide 6,6 (DuPont Zytel 75LG40HSL BK031) materials. The injection molding was performed by Injection Technologies, Inc. at Windsor, Ontario (under a subcontract by Oak Ridge National Laboratory, ORNL) using the mold offered by the Automotive Composite Consortium (ACC). Two fill speeds under the same back pressure were used to produce plaques under slow-fill and fast-fill conditions. Also, two gating options were used to achieve the following desired flow patterns: flows in edge-gated plaques and in center-gated plaques. After molding, ORNL performed measurements of fiber orientation and length distributions for process model validations. The structure of this report is as follows. After the Introduction (Section 1), Section 2 provides a summary of the ARD-RSC and fiber length attrition models. A summary of model implementations in the latest research version of Moldflow is given in Section 3. Section 4 provides the key processing conditions and parameters for molding of the ACC plaques. The validations of the ARD-RSC and fiber length attrition models are presented and discussed in Section 5. The conclusions are drawn in Section 6.

  15. Groundwater Model Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures with the results indicating they are appropriate measures for evaluating model realizations. The use of

  16. Climatological Processing and Product Development for the TRMM Ground Validation Program

    NASA Technical Reports Server (NTRS)

    Marks, D. A.; Kulie, M. S.; Robinson, M.; Silberstein, D. S.; Wolff, D. B.; Ferrier, B. S.; Amitai, E.; Fisher, B.; Wang, J.; Augustine, D.

    2000-01-01

    The Tropical Rainfall Measuring Mission (TRMM) satellite was successfully launched in November 1997. The main purpose of TRMM is to sample tropical rainfall using the first active spaceborne precipitation radar. To validate TRMM satellite observations, a comprehensive Ground Validation (GV) Program has been implemented. The primary goal of TRMM GV is to provide basic validation of satellite-derived precipitation measurements over monthly climatologies for the following primary sites: Melbourne, FL; Houston, TX; Darwin, Australia; and Kwajalein Atoll, RMI. As part of the TRMM GV effort, research analysts at NASA Goddard Space Flight Center (GSFC) generate standardized rainfall products using quality-controlled ground-based radar data from the four primary GV sites. This presentation will provide an overview of TRMM GV climatological processing and product generation. A description of the data flow between the primary GV sites, NASA GSFC, and the TRMM Science and Data Information System (TSDIS) will be presented. The radar quality control algorithm, which features eight adjustable height and reflectivity parameters, and its effect on monthly rainfall maps, will be described. The methodology used to create monthly, gauge-adjusted rainfall products for each primary site will also be summarized. The standardized monthly rainfall products are developed in discrete, modular steps with distinct intermediate products. A summary of recently reprocessed official GV rainfall products available for TRMM science users will be presented. Updated basic standardized product results involving monthly accumulation, Z-R relationship, and gauge statistics for each primary GV site will also be displayed.

  17. Development process and initial validation of the Ethical Conflict in Nursing Questionnaire-Critical Care Version.

    PubMed

    Falcó-Pegueroles, Anna; Lluch-Canut, Teresa; Guàrdia-Olmos, Joan

    2013-06-01

    Ethical conflicts are arising as a result of the growing complexity of clinical care, coupled with technological advances. Most studies that have developed instruments for measuring ethical conflict base their measures on the variables 'frequency' and 'degree of conflict'. In our view, however, these variables are insufficient for explaining the root of ethical conflicts. Consequently, the present study formulates a conceptual model that also includes the variable 'exposure to conflict', as well as considering six 'types of ethical conflict'. An instrument was then designed to measure the ethical conflicts experienced by nurses who work with critical care patients. The paper describes the development process and validation of this instrument, the Ethical Conflict in Nursing Questionnaire Critical Care Version (ECNQ-CCV). The sample comprised 205 nursing professionals from the critical care units of two hospitals in Barcelona (Spain). The ECNQ-CCV presents 19 nursing scenarios with the potential to produce ethical conflict in the critical care setting. Exposure to ethical conflict was assessed by means of the Index of Exposure to Ethical Conflict (IEEC), a specific index developed to provide a reference value for each respondent by combining the intensity and frequency of occurrence of each scenario featured in the ECNQ-CCV. After content validity was established, construct validity was assessed by means of Exploratory Factor Analysis (EFA), while Cronbach's alpha was used to evaluate the instrument's reliability. All analyses were performed using the statistical software PASW v19. Cronbach's alpha for the ECNQ-CCV as a whole was 0.882, which is higher than the values reported for certain other related instruments. The EFA suggested a unidimensional structure, with one component accounting for 33.41% of the explained variance. The ECNQ-CCV is shown to be a valid and reliable instrument for use in critical care units. Its structure is such that the four variables on which our model

  18. Standing wave design and experimental validation of a tandem simulated moving bed process for insulin purification.

    PubMed

    Xie, Yi; Mun, Sungyong; Kim, Jinhyun; Wang, Nien-Hwa Linda

    2002-01-01

    A tandem simulated moving bed (SMB) process for insulin purification has been proposed and validated experimentally. The mixture to be separated consists of insulin, high molecular weight proteins, and zinc chloride. A systematic approach based on the standing wave design, rate model simulations, and experiments was used to develop this multicomponent separation process. The standing wave design was applied to specify the SMB operating conditions of a lab-scale unit with 10 columns. The design was validated with rate model simulations prior to experiments. The experimental results show 99.9% purity and 99% yield, which closely agree with the model predictions and the standing wave design targets. The agreement proves that the standing wave design can ensure high purity and high yield for the tandem SMB process. Compared to a conventional batch SEC process, the tandem SMB has 10% higher yield, 400% higher throughput, and 72% lower eluant consumption. In contrast, a design that ignores the effects of mass transfer and nonideal flow cannot meet the purity requirement and gives less than 96% yield.

  19. Does the decision in a validation process of a surrogate endpoint change with level of significance of treatment effect? A proposal on validation of surrogate endpoints.

    PubMed

    Sertdemir, Y; Burgut, R

    2009-01-01

    In recent years the use of surrogate endpoints (S) has become an interesting issue. In clinical trials, it is important to obtain treatment outcomes as early as possible. For this reason there is a need for surrogate endpoints (S) that are measured earlier than the true endpoint (T). However, before a surrogate endpoint can be used it must be validated. For a candidate surrogate endpoint, for example time to recurrence, the validation result may change dramatically between clinical trials. The aim of this study is to show how the validation criterion (R(2)(trial)) proposed by Buyse et al. is influenced by the magnitude of the treatment effect, with an application using real data. The criterion R(2)(trial) proposed by Buyse et al. (2000) is applied to four data sets from colon cancer clinical trials (C-01, C-02, C-03 and C-04). Each clinical trial is analyzed separately for the treatment effect on survival (true endpoint) and recurrence-free survival (surrogate endpoint), and this analysis is also done for each center in each trial. The results are used for a standard validation analysis. The centers were grouped by the Wald statistic into 3 equal groups. The validation criterion R(2)(trial) was 0.641 95% CI (0.432-0.782), 0.223 95% CI (0.008-0.503), 0.761 95% CI (0.550-0.872) and 0.560 95% CI (0.404-0.687) for C-01, C-02, C-03 and C-04 respectively. The R(2)(trial) criterion changed with the Wald statistics observed for the centers used in the validation process: the higher the Wald statistic group, the higher the observed R(2)(trial) values. Recurrence-free survival is a poor surrogate for overall survival in clinical trials with non-significant treatment effects and only a moderate surrogate when treatment effects are significant. This shows that the level of significance of the treatment effect should be taken into account in the validation process of surrogate endpoints.
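
    The trial-level criterion can be approximated by regressing per-center treatment effects on the true endpoint against those on the surrogate and taking the R² of that regression; the full Buyse et al. approach uses a bivariate mixed model. A sketch with simulated per-center effects:

    ```python
    # Approximate R^2_trial from simulated per-center treatment effects.
    import numpy as np

    rng = np.random.default_rng(3)
    beta_s = rng.normal(0.4, 0.2, size=30)            # effects on the surrogate S
    beta_t = 0.8 * beta_s + rng.normal(0, 0.1, 30)    # effects on the true endpoint T
    r2_trial = np.corrcoef(beta_s, beta_t)[0, 1] ** 2
    print(round(float(r2_trial), 2))
    ```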

  20. Workflow for Criticality Assessment Applied in Biopharmaceutical Process Validation Stage 1.

    PubMed

    Zahel, Thomas; Marschall, Lukas; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Mueller, Eric M; Murphy, Patrick; Natschläger, Thomas; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph

    2017-10-12

    Identification of critical process parameters that impact product quality is a central task during regulatory requested process validation. Commonly, this is done via design of experiments and identification of parameters significantly impacting product quality (rejection of the null hypothesis that the effect equals 0). However, parameters that show a large uncertainty and might push product quality beyond a limit critical to the product may be missed. This can occur during the evaluation of experiments when the residual/un-modelled variance in the experiments is larger than expected a priori. Estimating this risk is the task of the presented novel retrospective power analysis permutation test. The approach is evaluated using a data set for two unit operations established during characterization of a biopharmaceutical process in industry. The results show that, for one unit operation, the observed variance in the experiments is much larger than expected a priori, resulting in low power levels for all non-significant parameters. Moreover, we present a workflow for mitigating the risk associated with overlooked parameter effects. This enables a statistically sound identification of critical process parameters. The developed workflow will substantially support industry in delivering constant product quality, reducing process variance, and increasing patient safety.
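
    The abstract does not spell out the permutation procedure, so the following is only a generic two-group permutation test of a parameter effect on a quality attribute, with simulated data; the retrospective power layer of the published workflow is not reproduced.

    ```python
    # Generic permutation test for a parameter effect; data are simulated.
    import numpy as np

    def permutation_p_value(x, y, n_perm=10_000, seed=0):
        """Two-sided permutation test for a difference in group means."""
        rng = np.random.default_rng(seed)
        pooled = np.concatenate([x, y])
        observed = abs(x.mean() - y.mean())
        count = 0
        for _ in range(n_perm):
            rng.shuffle(pooled)
            count += abs(pooled[:len(x)].mean() - pooled[len(x):].mean()) >= observed
        return count / n_perm

    rng = np.random.default_rng(1)
    low = rng.normal(100.0, 3.0, size=8)    # quality attribute, low parameter setting
    high = rng.normal(103.0, 3.0, size=8)   # high setting; true effect is hypothetical
    print(permutation_p_value(low, high))
    ```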

  1. The construct and criterion validity of the multi-source feedback process to assess physician performance: a meta-analysis

    PubMed Central

    Al Ansari, Ahmed; Donnon, Tyrone; Al Khalifa, Khalid; Darwish, Abdulla; Violato, Claudio

    2014-01-01

    Background The purpose of this study was to conduct a meta-analysis on the construct and criterion validity of multi-source feedback (MSF) to assess physicians and surgeons in practice. Methods In this study, we followed the guidelines for the reporting of observational studies included in a meta-analysis. In addition to PubMed and MEDLINE databases, the CINAHL, EMBASE, and PsycINFO databases were searched from January 1975 to November 2012. All articles listed in the references of the MSF studies were reviewed to ensure that all relevant publications were identified. All 35 articles were independently coded by two authors (AA, TD), and any discrepancies (eg, effect size calculations) were reviewed by the other authors (KA, AD, CV). Results Physician/surgeon performance measures from 35 studies were identified. A random-effects model of weighted mean effect size differences (d) resulted in: construct validity coefficients for the MSF system on physician/surgeon performance across different levels in practice ranged from d=0.14 (95% confidence interval [CI] 0.40–0.69) to d=1.78 (95% CI 1.20–2.30); construct validity coefficients for the MSF on physician/surgeon performance on two different occasions ranged from d=0.23 (95% CI 0.13–0.33) to d=0.90 (95% CI 0.74–1.10); concurrent validity coefficients for the MSF based on differences in assessor group ratings ranged from d=0.50 (95% CI 0.47–0.52) to d=0.57 (95% CI 0.55–0.60); and predictive validity coefficients for the MSF on physician/surgeon performance across different standardized measures ranged from d=1.28 (95% CI 1.16–1.41) to d=1.43 (95% CI 0.87–2.00). Conclusion The construct and criterion validity of the MSF system is supported by small to large effect size differences based on the MSF process and physician/surgeon performance across different clinical and nonclinical domain measures. PMID:24600300
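
    Pooled weighted mean effect sizes with confidence intervals, as reported above, are conventionally obtained from an inverse-variance random-effects model. The sketch below uses the DerSimonian-Laird estimator on invented study-level values, not the 35 coded studies:

    ```python
    # DerSimonian-Laird random-effects pooling; inputs are invented.
    import numpy as np

    def dersimonian_laird(d, v):
        """d: per-study effect sizes; v: their within-study variances."""
        d, v = np.asarray(d, float), np.asarray(v, float)
        w = 1 / v
        d_fixed = (w * d).sum() / w.sum()
        q = (w * (d - d_fixed) ** 2).sum()              # Cochran's Q
        tau2 = max(0.0, (q - (len(d) - 1)) / (w.sum() - (w**2).sum() / w.sum()))
        w_re = 1 / (v + tau2)                           # random-effects weights
        d_re = (w_re * d).sum() / w_re.sum()
        se = w_re.sum() ** -0.5
        return d_re, (d_re - 1.96 * se, d_re + 1.96 * se)

    d_pooled, ci = dersimonian_laird([0.5, 0.9, 1.2, 0.7], [0.04, 0.06, 0.05, 0.03])
    print(round(d_pooled, 2), tuple(round(c, 2) for c in ci))
    ```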

  2. TU-FG-209-11: Validation of a Channelized Hotelling Observer to Optimize Chest Radiography Image Processing for Nodule Detection: A Human Observer Study

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sanchez, A; Little, K; Chung, J

    Purpose: To validate the use of a Channelized Hotelling Observer (CHO) model for guiding image processing parameter selection and enable improved nodule detection in digital chest radiography. Methods: In a previous study, an anthropomorphic chest phantom was imaged with and without PMMA simulated nodules using a GE Discovery XR656 digital radiography system. The impact of image processing parameters was then explored using a CHO with 10 Laguerre-Gauss channels. In this work, we validate the CHO’s trend in nodule detectability as a function of two processing parameters by conducting a signal-known-exactly, multi-reader-multi-case (MRMC) ROC observer study. Five naive readers scored confidence of nodule visualization in 384 images with 50% nodule prevalence. The image backgrounds were regions-of-interest extracted from 6 normal patient scans, and the digitally inserted simulated nodules were obtained from phantom data in previous work. Each patient image was processed with both a near-optimal and a worst-case parameter combination, as determined by the CHO for nodule detection. The same 192 ROIs were used for each image processing method, with 32 randomly selected lung ROIs per patient image. Finally, the MRMC data was analyzed using the freely available iMRMC software of Gallas et al. Results: The image processing parameters which were optimized for the CHO led to a statistically significant improvement (p=0.049) in human observer AUC from 0.78 to 0.86, relative to the image processing implementation which produced the lowest CHO performance. Conclusion: Differences in user-selectable image processing methods on a commercially available digital radiography system were shown to have a marked impact on performance of human observers in the task of lung nodule detection. Further, the effect of processing on humans was similar to the effect on CHO performance. Future work will expand this study to include a wider range of detection/classification tasks and
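
    A channelized Hotelling observer reduces each image to a handful of channel outputs and applies a prewhitened (Hotelling) template in that low-dimensional space. The sketch below is a simplified stand-in with Laguerre-Gauss channels, white-noise backgrounds, and an assumed Gaussian nodule, not the study's anthropomorphic phantom data:

    ```python
    # Simplified CHO with Laguerre-Gauss channels; images are simulated.
    import numpy as np

    def lg_channels(n_channels, size, a=15.0):
        """Laguerre-Gauss channel templates, flattened to (pixels, channels)."""
        y, x = np.mgrid[:size, :size] - (size - 1) / 2.0
        r2 = 2.0 * np.pi * (x**2 + y**2) / a**2
        gauss = np.exp(-r2 / 2.0)
        L = [np.ones_like(r2), 1.0 - r2]        # Laguerre polynomials L0, L1
        for j in range(1, n_channels):          # recurrence for L_{j+1}
            L.append(((2 * j + 1 - r2) * L[j] - j * L[j - 1]) / (j + 1))
        return np.stack([(gauss * L[j]).ravel() for j in range(n_channels)], axis=1)

    rng = np.random.default_rng(0)
    size, n_ch = 64, 10
    U = lg_channels(n_ch, size)

    yy, xx = np.mgrid[:size, :size] - (size - 1) / 2.0
    signal = 5.0 * np.exp(-(xx**2 + yy**2) / (2 * 4.0**2)).ravel()  # assumed nodule

    absent = rng.normal(size=(200, size * size))     # noise-only images
    present = absent + signal                        # signal-known-exactly images
    va, vp = absent @ U, present @ U                 # channel outputs
    S = 0.5 * (np.cov(va, rowvar=False) + np.cov(vp, rowvar=False))
    dv = vp.mean(axis=0) - va.mean(axis=0)
    w = np.linalg.solve(S, dv)                       # Hotelling template
    print(round(float(dv @ w) ** 0.5, 2))            # detectability index d'
    ```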

  3. Collisional-radiative model including recombination processes for W27+ ion

    NASA Astrophysics Data System (ADS)

    Murakami, Izumi; Sasaki, Akira; Kato, Daiji; Koike, Fumihiro

    2017-10-01

    We have constructed a collisional-radiative (CR) model for W27+ ions including 226 configurations with n ≤ 9 and l ≤ 5 for spectroscopic diagnostics. We newly include recombination processes in the model, and this is the first result of an extreme ultraviolet spectrum calculated for the recombining plasma component. Calculated spectra in the 40-70 Å range for the ionizing and recombining plasma components show 3 similar strong lines, one of which is weak in the recombining plasma component, at 45-50 Å and many weak lines at 50-65 Å for both components. Recombination processes do not contribute much to the spectrum at around 60 Å for the W27+ ion. Dielectronic satellite lines are also a minor contribution to the spectrum of the recombining plasma component. The dielectronic recombination (DR) rate coefficient from W28+ to W27+ ions is also calculated with the same atomic data in the CR model. We found that a larger set of energy levels including many autoionizing states gave larger DR rate coefficients, but our rate agrees within a factor of 6 with other works at electron temperatures around 1 keV, at which W27+ and W28+ ions are usually observed in plasmas. Contribution to the Topical Issue "Atomic and Molecular Data and their Applications", edited by Gordon W.F. Drake, Jung-Sik Yoon, Daiji Kato, and Grzegorz Karwasz.

  4. Item validity vs. item discrimination index: a redundancy?

    NASA Astrophysics Data System (ADS)

    Panjaitan, R. L.; Irawati, R.; Sujana, A.; Hanifah, N.; Djuanda, D.

    2018-03-01

    In much of the literature on evaluation and test analysis, it is common to find calculations of item validity as well as an item discrimination index (D), with a different formula for each. Meanwhile, other resources state that the item discrimination index can be obtained by calculating the correlation between a testee's score on a particular item and the testee's score on the overall test, which is actually the same concept as item validity. Some research reports, especially undergraduate theses, tend to include both item validity and the item discrimination index in the instrument analysis. These concepts appear to overlap, as both reflect the quality of a test in measuring the examinees' ability. In this paper, examples of results of data processing on item validity and the item discrimination index were compared. It is discussed whether item validity and the item discrimination index can be represented by one of them only, or whether it is better to present both calculations for simple test analysis, especially in undergraduate theses where test analyses are included.
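
    To illustrate the overlap this paper examines, the sketch below computes both statistics on simulated dichotomous responses: the corrected item-total (point-biserial) correlation often reported as item validity, and the upper/lower 27% discrimination index D. All data are invented.

    ```python
    # Item-total correlation vs. discrimination index D; responses are simulated.
    import numpy as np

    rng = np.random.default_rng(2)
    ability = rng.normal(size=200)
    items = (rng.normal(size=(200, 20)) < ability[:, None]).astype(float)  # 0/1 scores
    total = items.sum(axis=1)

    item = items[:, 0]
    r_pbis = np.corrcoef(item, total - item)[0, 1]        # corrected item-total r

    order = np.argsort(total)
    k = int(0.27 * len(total))                            # conventional 27% tails
    D = item[order[-k:]].mean() - item[order[:k]].mean()  # discrimination index

    print(round(r_pbis, 2), round(D, 2))                  # the two typically agree
    ```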

  5. Random Qualitative Validation: A Mixed-Methods Approach to Survey Validation

    ERIC Educational Resources Information Center

    Van Duzer, Eric

    2012-01-01

    The purpose of this paper is to introduce the process and value of Random Qualitative Validation (RQV) in the development and interpretation of survey data. RQV is a method of gathering clarifying qualitative data that improves the validity of the quantitative analysis. This paper is concerned with validity in relation to the participants'…

  6. The Chemical Validation and Standardization Platform (CVSP): large-scale automated validation of chemical structure datasets.

    PubMed

    Karapetyan, Karen; Batchelor, Colin; Sharpe, David; Tkachenko, Valery; Williams, Antony J

    2015-01-01

    There are presently hundreds of online databases hosting millions of chemical compounds and associated data. As a result of the number of cheminformatics software tools that can be used to produce the data, subtle differences between the various cheminformatics platforms, as well as the naivety of some software users, a myriad of issues can exist with chemical structure representations online. In order to help facilitate validation and standardization of chemical structure datasets from various sources, we have delivered a freely available internet-based platform to the community for the processing of chemical compound datasets. The Chemical Validation and Standardization Platform (CVSP) both validates and standardizes chemical structure representations according to sets of systematic rules. The chemical validation algorithms detect issues with submitted molecular representations using pre-defined or user-defined dictionary-based molecular patterns that are chemically suspicious or potentially require manual review. Each identified issue is assigned one of three levels of severity - Information, Warning, and Error - in order to conveniently inform the user of the need to browse and review subsets of their data. The validation process includes validation of atoms and bonds (e.g., flagging query atoms and bonds), valences, and stereochemistry. The standard form of submission of collections of data, the SDF file, allows the user to map the data fields to predefined CVSP fields for the purpose of cross-validating associated SMILES and InChIs with the connection tables contained within the SDF file. This platform has been applied to the analysis of a large number of datasets prepared for deposition to our ChemSpider database and in preparation of data for the Open PHACTS project. In this work we review the results of the automated validation of the DrugBank dataset, a popular drug and drug target database utilized by the community, and the ChEMBL 17 data set
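
    CVSP itself is a web platform, but the same kind of rule-based checking can be sketched locally with an open-source toolkit such as RDKit (an assumed stand-in here, not CVSP's own code); the severity labels loosely mirror the Information/Warning/Error levels described above.

      from rdkit import Chem

      def validate_structure(smiles):
          # Returns (severity, message) pairs mimicking CVSP-style levels.
          mol = Chem.MolFromSmiles(smiles, sanitize=False)
          if mol is None:
              return [("Error", "SMILES could not be parsed")]
          issues = []
          try:
              Chem.SanitizeMol(mol)   # valence, aromaticity and related checks
          except Exception as exc:
              issues.append(("Error", str(exc)))
              return issues
          unassigned = Chem.FindMolChiralCenters(mol, includeUnassigned=True)
          if any(tag == "?" for _, tag in unassigned):
              issues.append(("Warning", "molecule has unassigned stereocentres"))
          if not issues:
              issues.append(("Information", "no problems detected"))
          return issues

      print(validate_structure("CC(C)=CC"))        # clean structure
      print(validate_structure("CC(C)(C)(C)C"))    # valence error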

  7. Preliminary evidence for validity of the Bahasa Indonesian version of Study Process Questionnaire.

    PubMed

    Liem, Arief Darmanegara; Prasetya, Paulus Hidajat

    2007-02-01

    This study provides preliminary evidence for the validity of the Bahasa Indonesian version of the Study Process Questionnaire (BI-SPQ) from a sample of 147 psychology students (22 men and 125 women; M age = 21.8 yr., SD = 1.3). The internal consistency alphas of the BI-SPQ subscales were found to range from .46 (Surface Strategy) to .77 (Deep Strategy), with a median of .67. Principal component analysis indicated a two-factor solution, in which the Deep and Achieving subscales loaded onto Factor 1 and the Surface subscales loaded onto Factor 2. Students' GPAs were associated negatively with Surface Motive (r = -.24) and positively with Deep and Achieving Motives (rs = .20). Further studies with larger samples involving students majoring in other disciplines are needed to provide further evidence of the validity of the BI-SPQ.
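
    Internal consistency figures like these are conventionally Cronbach's alpha, which follows directly from the item-score matrix; a minimal implementation (illustrative, not the authors' code) is:

      import numpy as np

      def cronbach_alpha(items):
          # items: (n_respondents, n_items) matrix of subscale item scores.
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var / total_var)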

  8. [Validation and verification of microbiology methods].

    PubMed

    Camaró-Sala, María Luisa; Martínez-García, Rosana; Olmos-Martínez, Piedad; Catalá-Cuenca, Vicente; Ocete-Mochón, María Dolores; Gimeno-Cardona, Concepción

    2015-01-01

    Clinical microbiologists should ensure, to the maximum level allowed by scientific and technical development, the reliability of their results. This implies that, in addition to meeting the technical criteria that ensure validity, tests must be performed under conditions that allow comparable results to be obtained regardless of the laboratory performing them. In this sense, the use of recognized and accepted reference methods is the most effective tool for providing these guarantees. The activities related to verification and validation of analytical methods have become very important, as there is continuous development and updating of techniques, increasingly complex analytical equipment, and an interest among professionals in ensuring quality processes and results. The definitions of validation and verification are described, along with the different types of validation/verification, the types of methods, and the level of validation necessary depending on the degree of standardization. The situations in which validation/verification is mandatory and/or recommended are discussed, including those particularly related to validation in Microbiology. The paper stresses the importance of promoting the use of reference strains as controls in Microbiology and the use of standard controls, as well as the importance of participation in External Quality Assessment programs to demonstrate technical competence. Emphasis is placed on how to calculate some of the parameters required for validation/verification, such as accuracy and precision. The development of these concepts can be found in the SEIMC microbiological procedure number 48: «Validation and verification of microbiological methods» www.seimc.org/protocols/microbiology. Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.

  9. Methodological challenges when doing research that includes ethnic minorities: a scoping review.

    PubMed

    Morville, Anne-Le; Erlandsson, Lena-Karin

    2016-11-01

    There are challenging methodological issues in obtaining valid and reliable results on which to base occupational therapy interventions for ethnic minorities. The aim of this scoping review is to describe the methodological problems within occupational therapy research when ethnic minorities are included. A thorough literature search yielded 21 articles obtained from the scientific databases PubMed, Cinahl, Web of Science and PsychInfo. Analysis followed Arksey and O'Malley's framework for scoping reviews, applying content analysis. The results showed methodological issues concerning the entire research process: from defining and recruiting samples, through conceptual understanding, the lack of appropriate instruments, and data collection using interpreters, to data analysis. In order to avoid excluding ethnic minorities from adequate occupational therapy research and interventions, development of methods covering the entire research process is needed. It is a costly and time-consuming process, but the results will be valid and reliable, and therefore more applicable in clinical practice.

  10. Assessment of processing speed in children with mild TBI: a "first look" at the validity of pediatric ImPACT.

    PubMed

    Newman, Julie B; Reesman, Jennifer H; Vaughan, Christopher G; Gioia, Gerard A

    2013-01-01

    Deficit in the speed of cognitive processing is a commonly identified neuropsychological change in children recovering from a mild TBI. However, there are few validated child assessment instruments that allow for serial assessment over the course of recovery in this population. Pediatric ImPACT is a novel measure that purports to assess cognitive speed, learning, and efficiency in this population. The current study sought to validate the use of this new measure by comparing it to traditional paper-and-pencil measures of processing speed. One hundred and sixty-four children (71% male) aged 5-12 with mild TBI evaluated in an outpatient concussion clinic were administered Pediatric ImPACT and other neuropsychological test measures as part of a flexible test battery. Performance on the Response Speed Composite of Pediatric ImPACT was more strongly associated with other measures of cognitive processing speed than with measures of immediate/working memory and learning/memory in this sample of injured children. There is preliminary support for the convergent and discriminant validity of Pediatric ImPACT as a measure for use in post-concussion evaluations of processing speed in children.

  11. [Computerized system validation of clinical researches].

    PubMed

    Yan, Charles; Chen, Feng; Xia, Jia-lai; Zheng, Qing-shan; Liu, Daniel

    2015-11-01

    Validation is a documented process that provides a high degree of assurance that a computer system does exactly and consistently what it is designed to do, in a controlled manner, throughout its life cycle. The validation process begins with the system proposal/requirements definition, continues through application and maintenance, and ends with system retirement and retention of the e-records based on regulatory rules. The objective is to clearly specify that each application of information technology fulfills its purpose. Computer system validation (CSV) is essential in clinical studies according to the GCP standard, meeting the product's pre-determined attributes of specifications, quality, safety and traceability. This paper describes how to perform the validation process and determine the relevant stakeholders within an organization in the light of validation SOPs. Although specific accountability in the implementation of the validation process might be outsourced, the ultimate responsibility for the CSV remains on the shoulders of the business process owner, the sponsor. In order to show that compliance of the system validation has been properly attained, it is essential to set up comprehensive validation procedures and maintain adequate documentation as well as training records. Quality of the system validation should be controlled using both QC and QA means.

  12. Examining Evidence for the Validity of PISA Learning Strategy Scales Based on Student Response Processes

    ERIC Educational Resources Information Center

    Hopfenbeck, Therese N.; Maul, Andrew

    2011-01-01

    The aim of this study was to investigate response-process based evidence for the validity of the Programme for International Student Assessment's (PISA) self-report questionnaire scales as measures of specific psychological constructs, with a focus on scales meant to measure inclination toward specific learning strategies. Cognitive interviews (N…

  13. Kinetic modelling of anaerobic hydrolysis of solid wastes, including disintegration processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    García-Gen, Santiago; Sousbie, Philippe; Rangaraj, Ganesh

    2015-01-15

    Highlights: • Fractionation of solid wastes into readily and slowly biodegradable fractions. • Kinetic coefficients estimation from mono-digestion batch assays. • Validation of kinetic coefficients with a co-digestion continuous experiment. • Simulation of batch and continuous experiments with an ADM1-based model. - Abstract: A methodology to estimate disintegration and hydrolysis kinetic parameters of solid wastes and validate an ADM1-based anaerobic co-digestion model is presented. Kinetic parameters of the model were calibrated from batch reactor experiments treating fruit and vegetable wastes individually (among other residues), following a new protocol for batch tests. In addition, decoupled disintegration kinetics for the readily and slowly biodegradable fractions of solid wastes was considered. Calibrated parameters from batch assays of individual substrates were used to validate the model for a semi-continuous co-digestion operation treating 5 fruit and vegetable wastes simultaneously. The semi-continuous experiment was carried out in a lab-scale CSTR reactor for 15 weeks at organic loading rates ranging between 2.0 and 4.7 g VS/L d. The model (built in Matlab/Simulink) fit the experimental results well in both batch and semi-continuous mode and served as a powerful tool to simulate the digestion or co-digestion of solid wastes.
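
    The decoupled two-fraction kinetics described above reduce, in the simplest reading, to first-order disintegration of readily and slowly biodegradable fractions feeding a first-order hydrolysis step. A minimal sketch of that structure (rate constants are illustrative placeholders, not the paper's calibrated values):

      import numpy as np
      from scipy.integrate import solve_ivp

      def rhs(t, y, k_dis_r=0.5, k_dis_s=0.05, k_hyd=0.3):   # all in 1/day
          # Xr, Xs: readily/slowly biodegradable particulates; Xh: hydrolysable
          # substrate; S: soluble product of hydrolysis.
          Xr, Xs, Xh, S = y
          d_r, d_s, hyd = k_dis_r * Xr, k_dis_s * Xs, k_hyd * Xh
          return [-d_r, -d_s, d_r + d_s - hyd, hyd]

      sol = solve_ivp(rhs, (0.0, 30.0), [5.0, 5.0, 0.0, 0.0],
                      t_eval=np.linspace(0.0, 30.0, 121))
      print(sol.y[:, -1])   # state after 30 days of digestion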

  14. [Validation of the knowledge and attitudes of health professionals in the Living Will Declaration process].

    PubMed

    Contreras-Fernández, Eugenio; Barón-López, Francisco Javier; Méndez-Martínez, Camila; Canca-Sánchez, José Carlos; Cabezón Rodríguez, Isabel; Rivas-Ruiz, Francisco

    2017-04-01

    Evaluate the validity and reliability of a questionnaire on health professionals' knowledge of and attitudes towards the Living Will Declaration (LWD) process. Cross-sectional study structured into 3 phases: (i) a pilot questionnaire administered on paper to assess losses and adjustment problems; (ii) assessment of validity and internal reliability; and (iii) assessment of the pre-filtering questionnaire's stability (test-retest). Costa del Sol (Malaga) Health Area, January 2014 to April 2015. Healthcare professionals of the Costa del Sol Primary Care District and the Costa del Sol Health Agency. There were 391 (23.6%) responses, and 100 professionals participated in the stability assessment (83 responses). The questionnaire consisted of 2 parts: (i) knowledge (5 dimensions and 41 items), and (ii) attitudes (2 dimensions and 17 items). In the pilot study, none of the items had losses over 10%. In the validity and reliability phase, the questionnaire was reduced to 41 items (29 on knowledge, and 12 on attitudes). In the stability phase, all items evaluated met the requirement of a kappa higher than 0.2 or had a percentage of absolute agreement exceeding 75%. The questionnaire will identify the status and areas for improvement in the health care setting, and will then allow an improved culture of the LWD process in the general population. Copyright © 2016 Elsevier España, S.L.U. All rights reserved.
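
    The two stability criteria used here (kappa above 0.2, or absolute agreement above 75%) are easy to check per item; a sketch using scikit-learn for the kappa (the data are made up, not the study's):

      import numpy as np
      from sklearn.metrics import cohen_kappa_score

      def item_is_stable(test, retest, kappa_min=0.2, agree_min=75.0):
          # Test-retest stability for one categorical item.
          test, retest = np.asarray(test), np.asarray(retest)
          kappa = cohen_kappa_score(test, retest)
          agreement = (test == retest).mean() * 100.0
          return kappa > kappa_min or agreement > agree_min

      # Example: one Likert-type item answered twice by the same 10 people.
      print(item_is_stable([1, 2, 2, 3, 4, 4, 5, 1, 3, 2],
                           [1, 2, 3, 3, 4, 4, 5, 1, 3, 2]))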

  15. Validation of King's transaction process for healthcare provider-patient communication in pharmaceutical context: One cross-sectional study.

    PubMed

    Wang, Dan; Liu, Chenxi; Zhang, Zinan; Ye, Liping; Zhang, Xinping

    2018-03-27

    Given the advocated advantages of patient-pharmacist communication and the poor state of pharmacist-patient communication in many settings, exploring the mechanism of the pharmacist-patient communicative relationship is of great significance and urgency. King's theory of goal attainment is proposed as one of the most promising models to apply, because it considers both improving the patient-pharmacist relationship and attaining patients' health outcomes. This study aimed to validate King's transaction process and establish the link between the transaction process and patient satisfaction in a pharmaceutical context. A cross-sectional study was conducted in four tertiary hospitals in two provincial cities (Wuhan and Shanghai) in central and east China in July 2017. Patients over 18 were surveyed in the pharmacies of the hospitals. The instrument for the transaction process was revised and tested. Path analysis was conducted for King's transaction process and its relationship with patient satisfaction. Five hundred eighty-nine participants were surveyed for the main study. Prior to the addition of covariates, the hypothesised model of King's transaction process was validated, in which all paths of the transaction process were statistically significant (p < 0.001). The transaction process had direct effects on patient satisfaction (p < 0.001). After controlling for the effects of covariates, the Multiple Indicators, Multiple Causes (MIMIC) model showed good fit to the data (Tucker-Lewis index [TLI] = 0.99, comparative fit index [CFI] = 0.99, root mean square error of approximation [RMSEA] = 0.05, weighted root mean square residual [WRMR] = 1.00). The MIMIC model showed that chronic disease and site were predictors of both identifying problems and patient satisfaction (p < 0.05). Based on the well-fitting path analytic model, the transaction process was established as one valid theoretical

  16. The Japanese version of the questionnaire about the process of recovery: development and validity and reliability testing.

    PubMed

    Kanehara, Akiko; Kotake, Risa; Miyamoto, Yuki; Kumakura, Yousuke; Morita, Kentaro; Ishiura, Tomoko; Shimizu, Kimiko; Fujieda, Yumiko; Ando, Shuntaro; Kondo, Shinsuke; Kasai, Kiyoto

    2017-11-07

    Personal recovery is increasingly recognised as an important outcome measure in mental health services. This study aimed to develop a Japanese version of the Questionnaire about the Process of Recovery (QPR-J) and test its validity and reliability. The study comprised two stages that employed cross-sectional and prospective cohort designs, respectively. We translated the questionnaire using a standard translation/back-translation method. Convergent validity was examined by calculating Pearson's correlation coefficients with scores on the Recovery Assessment Scale (RAS) and the Short-Form-8 Health Survey (SF-8). An exploratory factor analysis (EFA) was conducted to examine factorial validity. We used intraclass correlation and Cronbach's alpha to examine the test-retest and internal consistency reliability of the QPR-J's 22-item full scale, 17-item intrapersonal subscale and 5-item interpersonal subscale. We conducted an EFA along with a confirmatory factor analysis (CFA). Data were obtained from 197 users of mental health services (mean age: 42.0 years; 61.9% female; 49.2% diagnosed with schizophrenia). The QPR-J showed adequate convergent validity, exhibiting significant, positive correlations with the RAS and SF-8 scores. The QPR-J's full scale and subscales showed excellent test-retest and internal consistency reliability, with the exception of acceptable but relatively low internal consistency reliability for the interpersonal subscale. Based on the results of the CFA and EFA, we adopted the factor structure extracted from the original 2-factor model based on the present CFA. The QPR-J is an adequately valid and reliable measure of the process of recovery among Japanese users of mental health services.

  17. Statistical validation and an empirical model of hydrogen production enhancement found by utilizing passive flow disturbance in the steam-reformation process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Paul A.; Liao, Chang-hsien

    2007-11-15

    A passive flow disturbance has been proven to enhance the conversion of fuel in a methanol-steam reformer. This study presents a statistical validation of the experiment based on a standard 2^k factorial experiment design and the resulting empirical model of the enhanced hydrogen producing process. A factorial experiment design was used to statistically analyze the effects and interactions of various input factors in the experiment. Three input factors, including the number of flow disturbers, catalyst size, and reactant flow rate were investigated for their effects on the fuel conversion in the steam-reformation process. Based on the experimental results, an empirical model was developed and further evaluated with an uncertainty analysis and interior point data. (author)
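
    In a 2^k factorial design, each of the k factors is run at two coded levels (-1/+1), and main effects and interactions fall out of simple contrasts. A sketch for the three factors named above, with made-up conversion figures:

      import itertools
      import numpy as np

      # Coded 2^3 design: A = flow disturbers, B = catalyst size, C = flow rate.
      runs = np.array(list(itertools.product([-1, 1], repeat=3)))
      y = np.array([62.0, 68.0, 64.0, 71.0, 66.0, 75.0, 69.0, 80.0])  # conversion, %

      def effect(contrast):
          # Average response at the +1 level minus that at the -1 level.
          return y[contrast == 1].mean() - y[contrast == -1].mean()

      for name, col in zip("ABC", runs.T):
          print(f"main effect {name}: {effect(col):+.2f}")
      print(f"interaction AC: {effect(runs[:, 0] * runs[:, 2]):+.2f}")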

  18. Development process and initial validation of the Ethical Conflict in Nursing Questionnaire-Critical Care Version

    PubMed Central

    2013-01-01

    Background Ethical conflicts are arising as a result of the growing complexity of clinical care, coupled with technological advances. Most studies that have developed instruments for measuring ethical conflict base their measures on the variables ‘frequency’ and ‘degree of conflict’. In our view, however, these variables are insufficient for explaining the root of ethical conflicts. Consequently, the present study formulates a conceptual model that also includes the variable ‘exposure to conflict’, as well as considering six ‘types of ethical conflict’. An instrument was then designed to measure the ethical conflicts experienced by nurses who work with critical care patients. The paper describes the development process and validation of this instrument, the Ethical Conflict in Nursing Questionnaire Critical Care Version (ECNQ-CCV). Methods The sample comprised 205 nursing professionals from the critical care units of two hospitals in Barcelona (Spain). The ECNQ-CCV presents 19 nursing scenarios with the potential to produce ethical conflict in the critical care setting. Exposure to ethical conflict was assessed by means of the Index of Exposure to Ethical Conflict (IEEC), a specific index developed to provide a reference value for each respondent by combining the intensity and frequency of occurrence of each scenario featured in the ECNQ-CCV. Following content validity, construct validity was assessed by means of Exploratory Factor Analysis (EFA), while Cronbach’s alpha was used to evaluate the instrument’s reliability. All analyses were performed using the statistical software PASW v19. Results Cronbach’s alpha for the ECNQ-CCV as a whole was 0.882, which is higher than the values reported for certain other related instruments. The EFA suggested a unidimensional structure, with one component accounting for 33.41% of the explained variance. Conclusions The ECNQ-CCV is shown to be a valid and reliable instrument for use in critical care units. Its

  19. Coupling of geochemical and multiphase flow processes for validation of the MUFITS reservoir simulator against TOUGHREACT

    NASA Astrophysics Data System (ADS)

    De Lucia, Marco; Kempka, Thomas; Afanasyev, Andrey; Melnik, Oleg; Kühn, Michael

    2016-04-01

    Coupled reactive transport simulations, especially in heterogeneous settings considering multiphase flow, are extremely time consuming and suffer from significant numerical issues compared to purely hydrodynamic simulations. This represents a major hurdle in the assessment of geological subsurface utilization, since it constrains the practical application of reactive transport modelling to coarse spatial discretizations or oversimplified geological settings. In order to overcome such limitations, De Lucia et al. [1] developed and validated a one-way coupling approach between geochemistry and hydrodynamics which is particularly well suited for CO2 storage simulations, while being of general validity. In the present study, the models used for the validation of that one-way coupling approach, originally run with the TOUGHREACT simulator, are transferred to and benchmarked against the multiphase reservoir simulator MUFITS [2]. The geological model is loosely inspired by an existing CO2 storage site. Its grid comprises 2,950 elements enclosed in a single layer, yet reflects a realistic three-dimensional anticline geometry. For the purpose of this comparison, homogeneous and heterogeneous scenarios in terms of porosity and permeability were investigated. In both cases, the results of the MUFITS simulator are in excellent agreement with those produced with the fully-coupled TOUGHREACT simulator, while profiting from significantly higher computational performance. This study demonstrates how a computationally efficient simulator such as MUFITS can be successfully included in a coupled process simulation framework, and also suggests improvements and specific strategies for the coupling of chemical processes with hydrodynamics and heat transport, aiming to tackle geoscientific problems beyond the storage of CO2. References [1] De Lucia, M., Kempka, T., and Kühn, M. A coupling alternative to reactive transport simulations

  20. Processing negative valence of word pairs that include a positive word.

    PubMed

    Itkes, Oksana; Mashal, Nira

    2016-09-01

    Previous research has suggested that cognitive performance is disrupted by negative stimuli relative to neutral or positive ones. We examined whether negative valence affects performance at the word or the phrase level. Participants performed a semantic decision task on word pairs that included either a negative or a positive target word. In Experiment 1, the valence of the target word was congruent with the overall valence conveyed by the word pair (e.g., fat kid). As expected, response times were slower in the negative condition than in the positive condition. Experiment 2 included target words that were incongruent with the overall valence of the word pair (e.g., fat salary). Response times were longer for word pairs whose overall valence was negative relative to positive, even though these word pairs included a positive word. Our findings support the Cognitive Primacy Hypothesis, according to which emotional valence is extracted after conceptual processing is complete.

  1. Development and in-line validation of a Process Analytical Technology to facilitate the scale up of coating processes.

    PubMed

    Wirges, M; Funke, A; Serno, P; Knop, K; Kleinebudde, P

    2013-05-05

    Incorporation of an active pharmaceutical ingredient (API) into the coating layer of film-coated tablets is a method mainly used to formulate fixed-dose combinations. Uniform and precise spray-coating of an API represents a substantial challenge, which can be overcome by applying Raman spectroscopy as a process analytical tool. In the pharmaceutical industry, Raman spectroscopy is still mainly used as a bench-top laboratory analytical method and is usually not implemented in the production process. Concerning application in the production process, many scientific approaches stop at the level of feasibility studies and never manage the step to production-scale processes. The present work focuses on the scale-up of an active coating process, a step of the highest importance in pharmaceutical development. Active coating experiments were performed at lab and production scale. Using partial least squares (PLS) regression, a multivariate model was constructed by correlating in-line measured Raman spectral data with the coated amount of API. By transferring this model, implemented for a lab-scale process, to a production-scale process, the robustness of this analytical method, and thus its applicability as a Process Analytical Technology (PAT) tool for correct endpoint determination in pharmaceutical manufacturing, could be shown. Finally, the method was validated according to the European Medicines Agency (EMA) guideline with respect to the special requirements of the applied in-line model development strategy. Copyright © 2013 Elsevier B.V. All rights reserved.
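
    The PLS calibration step described here has a standard shape: spectra as predictors, coated API amount as response, and cross-validation to judge the model. A sketch with scikit-learn (matrix sizes, noise levels, and names are assumptions for illustration, not the study's data):

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression
      from sklearn.model_selection import cross_val_predict

      rng = np.random.default_rng(1)
      y = np.linspace(0.0, 100.0, 60)          # coated API amount (reference)
      spectrum = rng.random(1500)              # pseudo pure-component spectrum
      X = np.outer(y, spectrum) + rng.normal(0.0, 5.0, (60, 1500))  # Raman scans

      pls = PLSRegression(n_components=5)
      y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
      rmsecv = np.sqrt(np.mean((y - y_cv) ** 2))
      print(f"RMSECV: {rmsecv:.2f}")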

  2. An overview of mesoscale aerosol processes, comparisons, and validation studies from DRAGON networks

    NASA Astrophysics Data System (ADS)

    Holben, Brent N.; Kim, Jhoon; Sano, Itaru; Mukai, Sonoyo; Eck, Thomas F.; Giles, David M.; Schafer, Joel S.; Sinyuk, Aliaksandr; Slutsker, Ilya; Smirnov, Alexander; Sorokin, Mikhail; Anderson, Bruce E.; Che, Huizheng; Choi, Myungje; Crawford, James H.; Ferrare, Richard A.; Garay, Michael J.; Jeong, Ukkyo; Kim, Mijin; Kim, Woogyung; Knox, Nichola; Li, Zhengqiang; Lim, Hwee S.; Liu, Yang; Maring, Hal; Nakata, Makiko; Pickering, Kenneth E.; Piketh, Stuart; Redemann, Jens; Reid, Jeffrey S.; Salinas, Santo; Seo, Sora; Tan, Fuyi; Tripathi, Sachchida N.; Toon, Owen B.; Xiao, Qingyang

    2018-01-01

    Over the past 24 years, the AErosol RObotic NETwork (AERONET) program has provided highly accurate remote-sensing characterization of aerosol optical and physical properties for an increasingly extensive geographic distribution including all continents and many oceanic island and coastal sites. The measurements and retrievals from the AERONET global network have addressed satellite and model validation needs very well, but there have been challenges in making comparisons to similar parameters from in situ surface and airborne measurements. Additionally, with improved spatial and temporal satellite remote sensing of aerosols, there is a need for higher spatial-resolution ground-based remote-sensing networks. An effort to address these needs resulted in a number of field campaign networks called Distributed Regional Aerosol Gridded Observation Networks (DRAGONs) that were designed to provide a database for in situ and remote-sensing comparison and analysis of local to mesoscale variability in aerosol properties. This paper describes the DRAGON deployments that will continue to contribute to the growing body of research related to meso- and microscale aerosol features and processes. The research presented in this special issue illustrates the diversity of topics that has resulted from the application of data from these networks.

  3. Catalyst regeneration process including metal contaminants removal

    DOEpatents

    Ganguli, Partha S.

    1984-01-01

    Spent catalysts removed from a catalytic hydrogenation process for hydrocarbon feedstocks, and containing undesired metals contaminants deposits, are regenerated. Following solvent washing to remove process oils, the catalyst is treated either with chemicals which form sulfate or oxysulfate compounds with the metals contaminants, or with acids which remove the metal contaminants, such as 5-50 W % sulfuric acid in aqueous solution and 0-10 W % ammonium ion solutions, to substantially remove the metals deposits. The acid treating occurs within the temperature range of 60°-250° F for 5-120 minutes at substantially atmospheric pressure. Carbon deposits are removed from the treated catalyst by carbon burnoff at 800°-900° F, using 1-6 V % oxygen in an inert gas mixture, after which the regenerated catalyst can be effectively reused in the catalytic process.

  4. Conceptualization of Approaches and Thought Processes Emerging in Validating of Model in Mathematical Modeling in Technology Aided Environment

    ERIC Educational Resources Information Center

    Hidiroglu, Çaglar Naci; Bukova Güzel, Esra

    2013-01-01

    The aim of the present study is to conceptualize the approaches displayed for validating models and the thought processes emerging in a mathematical modeling process performed in a technology-aided learning environment. The participants of this grounded theory study were nineteen secondary school mathematics student teachers. The data gathered from the…

  5. Improving the Residency Admissions Process by Integrating a Professionalism Assessment: A Validity and Feasibility Study

    ERIC Educational Resources Information Center

    Bajwa, Nadia M.; Yudkowsky, Rachel; Belli, Dominique; Vu, Nu Viet; Park, Yoon Soo

    2017-01-01

    The purpose of this study was to provide validity and feasibility evidence in measuring professionalism using the Professionalism Mini-Evaluation Exercise (P-MEX) scores as part of a residency admissions process. In 2012 and 2013, three standardized-patient-based P-MEX encounters were administered to applicants invited for an interview at the…

  6. Lubricant base oil and wax processing. [Glossary included]

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Sequeira, A. Jr.

    1994-01-01

    This book provides state-of-the-art information on all processes currently used to manufacture lubricant base oils and waxes. It furnishes helpful lists of conversion factors, construction cost data, and process licensors, as well as a glossary of essential petroleum processing terms.

  7. Development and validation of the trait and state versions of the Post-Event Processing Inventory.

    PubMed

    Blackie, Rebecca A; Kocovski, Nancy L

    2017-03-01

    Post-event processing (PEP) refers to negative and prolonged rumination following anxiety-provoking social situations. Although there are scales to assess PEP, they are situation-specific, some targeting only public-speaking situations. Furthermore, there are no trait measures to assess the tendency to engage in PEP. The purpose of this research was to create a new measure of PEP, the Post-Event Processing Inventory (PEPI), which can be employed following all types of social situations and includes both trait and state forms. Over two studies (study 1, N = 220; study 2, N = 199), we explored and confirmed the factor structure of the scale with student samples. For each form of the scale, we found and confirmed that a higher-order, general PEP factor could be inferred from three sub-domains (intensity, frequency, and self-judgment). We also found preliminary evidence for the convergent, concurrent, discriminant/divergent, incremental, and predictive validity for each version of the scale. Both forms of the scale demonstrated excellent internal consistency and the trait form had excellent two-week test-retest reliability. Given the utility and versatility of the scale, the PEPI may provide a useful alternative to existing measures of PEP and rumination.

  8. Validation of lumbar spine loading from a musculoskeletal model including the lower limbs and lumbar spine.

    PubMed

    Actis, Jason A; Honegger, Jasmin D; Gates, Deanna H; Petrella, Anthony J; Nolasco, Luis A; Silverman, Anne K

    2018-02-08

    Low back mechanics are important to quantify to study injury, pain and disability. As in vivo forces are difficult to measure directly, modeling approaches are commonly used to estimate these forces. Validation of model estimates is critical to gain confidence in modeling results across populations of interest, such as people with lower-limb amputation. Motion capture, ground reaction force and electromyographic data were collected from ten participants without an amputation (five male/five female) and five participants with a unilateral transtibial amputation (four male/one female) during trunk-pelvis range of motion trials in flexion/extension, lateral bending and axial rotation. A musculoskeletal model with a detailed lumbar spine and the legs including 294 muscles was used to predict L4-L5 loading and muscle activations using static optimization. Model estimates of L4-L5 intervertebral joint loading were compared to measured intradiscal pressures from the literature and muscle activations were compared to electromyographic signals. Model loading estimates were only significantly different from experimental measurements during trunk extension for males without an amputation and for people with an amputation, which may suggest a greater portion of L4-L5 axial load transfer through the facet joints, as facet loads are not captured by intradiscal pressure transducers. Pressure estimates between the model and previous work were not significantly different for flexion, lateral bending or axial rotation. Timing of model-estimated muscle activations compared well with electromyographic activity of the lumbar paraspinals and upper erector spinae. Validated estimates of low back loading can increase the applicability of musculoskeletal models to clinical diagnosis and treatment. Copyright © 2017 Elsevier Ltd. All rights reserved.

  9. Positron annihilation processes update

    NASA Technical Reports Server (NTRS)

    Guessoum, Nidhal; Skibo, Jeffrey G.; Ramaty, Reuven

    1997-01-01

    The present knowledge concerning positron annihilation processes is reviewed, with emphasis on cross-section data for the various processes of interest in astrophysical applications. Recent results are presented, including results on reaction rates and line widths, the validity of which is verified.

  10. Using Healthcare Failure Mode and Effect Analysis to reduce medication errors in the process of drug prescription, validation and dispensing in hospitalised patients.

    PubMed

    Vélez-Díaz-Pallarés, Manuel; Delgado-Silveira, Eva; Carretero-Accame, María Emilia; Bermejo-Vicedo, Teresa

    2013-01-01

    To identify actions to reduce medication errors in the process of drug prescription, validation and dispensing, and to evaluate the impact of their implementation. A Health Care Failure Mode and Effect Analysis (HFMEA) was supported by a before-and-after medication error study to measure the actual impact on error rate after the implementation of corrective actions in the process of drug prescription, validation and dispensing in wards equipped with computerised physician order entry (CPOE) and unit-dose distribution system (788 beds out of 1080) in a Spanish university hospital. The error study was carried out by two observers who reviewed medication orders on a daily basis to register prescription errors by physicians and validation errors by pharmacists. Drugs dispensed in the unit-dose trolleys were reviewed for dispensing errors. Error rates were expressed as the number of errors for each process divided by the total opportunities for error in that process times 100. A reduction in prescription errors was achieved by providing training for prescribers on CPOE, updating prescription procedures, improving clinical decision support and automating the software connection to the hospital census (relative risk reduction (RRR), 22.0%; 95% CI 12.1% to 31.8%). Validation errors were reduced after optimising time spent in educating pharmacy residents on patient safety, developing standardised validation procedures and improving aspects of the software's database (RRR, 19.4%; 95% CI 2.3% to 36.5%). Two actions reduced dispensing errors: reorganising the process of filling trolleys and drawing up a protocol for drug pharmacy checking before delivery (RRR, 38.5%; 95% CI 14.1% to 62.9%). HFMEA facilitated the identification of actions aimed at reducing medication errors in a healthcare setting, as the implementation of several of these led to a reduction in errors in the process of drug prescription, validation and dispensing.
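
    The error-rate and relative-risk-reduction (RRR) arithmetic used in studies like this is straightforward; the sketch below computes an RRR with a 95% CI from before/after counts via the usual log relative-risk formula (the counts are made up for illustration):

      import math

      def rrr_ci(err_before, opp_before, err_after, opp_after, z=1.96):
          # Relative risk of error after vs. before, from errors/opportunities.
          rr = (err_after / opp_after) / (err_before / opp_before)
          se = math.sqrt(1 / err_after - 1 / opp_after
                         + 1 / err_before - 1 / opp_before)
          lo = math.exp(math.log(rr) - z * se)
          hi = math.exp(math.log(rr) + z * se)
          return (1 - rr) * 100, (1 - hi) * 100, (1 - lo) * 100  # RRR %, 95% CI

      print(rrr_ci(260, 10000, 205, 10100))   # roughly a 22% reduction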

  11. Validating workplace performance assessments in health sciences students: a case study from speech pathology.

    PubMed

    McAllister, Sue; Lincoln, Michelle; Ferguson, Allison; McAllister, Lindy

    2013-01-01

    Valid assessment of health science students' ability to perform in the real world of workplace practice is critical for promoting quality learning and ultimately certifying students as fit to enter the world of professional practice. Current practice in performance assessment in the health sciences field has been hampered by multiple issues regarding assessment content and process. Evidence for the validity of scores derived from assessment tools is usually evaluated against traditional validity categories, with reliability evidence privileged over validity, resulting in the paradoxical effect of compromising the assessment validity and learning processes the assessments seek to promote. Furthermore, the dominant statistical approaches used to validate scores from these assessments fall under the umbrella of classical test theory. This paper reports on the successful national development and validation of measures derived from an assessment of Australian speech pathology students' performance in the workplace. Validation of these measures considered each of Messick's interrelated validity evidence categories and included using evidence generated through Rasch analyses to support score interpretation and related action. This research demonstrated that it is possible to develop an assessment of real, complex, work-based performance of speech pathology students that generates valid measures without compromising the learning processes the assessment seeks to promote. The process described provides a model for other health professional education programs to trial.

  12. An assessment of space shuttle flight software development processes

    NASA Technical Reports Server (NTRS)

    1993-01-01

    In early 1991, the National Aeronautics and Space Administration's (NASA's) Office of Space Flight commissioned the Aeronautics and Space Engineering Board (ASEB) of the National Research Council (NRC) to investigate the adequacy of the current process by which NASA develops and verifies changes and updates to the Space Shuttle flight software. The Committee for Review of Oversight Mechanisms for Space Shuttle Flight Software Processes was convened in Jan. 1992 to accomplish the following tasks: (1) review the entire flight software development process from the initial requirements definition phase to final implementation, including object code build and final machine loading; (2) review and critique NASA's independent verification and validation process and mechanisms, including NASA's established software development and testing standards; (3) determine the acceptability and adequacy of the complete flight software development process, including the embedded validation and verification processes through comparison with (1) generally accepted industry practices, and (2) generally accepted Department of Defense and/or other government practices (comparing NASA's program with organizations and projects having similar volumes of software development, software maturity, complexity, criticality, lines of code, and national standards); (4) consider whether independent verification and validation should continue. An overview of the study, independent verification and validation of critical software, and the Space Shuttle flight software development process are addressed. Findings and recommendations are presented.

  13. GPM Ground Validation: Pre to Post-Launch Era

    NASA Astrophysics Data System (ADS)

    Petersen, Walt; Skofronick-Jackson, Gail; Huffman, George

    2015-04-01

    NASA GPM Ground Validation (GV) activities have transitioned from the pre- to the post-launch era. Prior to launch, direct validation networks and associated partner institutions were identified world-wide, covering a plethora of precipitation regimes. In the U.S., direct GV efforts focused on the use of new operational products such as the NOAA Multi-Radar Multi-Sensor suite (MRMS) for TRMM validation and GPM radiometer algorithm database development. In the post-launch era, MRMS products including precipitation rate, accumulation, types and data quality are being routinely generated to facilitate statistical GV of instantaneous (e.g., Level II orbit) and merged (e.g., IMERG) GPM products. Toward assessing precipitation column impacts on product uncertainties, range-gate to pixel-level validation of both Dual-Frequency Precipitation Radar (DPR) and GPM microwave imager data is performed using GPM Validation Network (VN) ground radar and satellite data processing software. VN software ingests quality-controlled volumetric radar datasets and geo-matches those data to coincident DPR and radiometer Level-II data. When combined, MRMS and VN datasets enable more comprehensive interpretation of both ground- and satellite-based estimation uncertainties. To support physical validation efforts, eight field campaigns were conducted in the pre-launch era and one in the post-launch era. The campaigns span regimes from northern-latitude cold-season snow to warm tropical rain. Most recently, the Integrated Precipitation and Hydrology Experiment (IPHEx) took place in the mountains of North Carolina and involved combined airborne and ground-based measurements of orographic precipitation and hydrologic processes underneath the GPM Core satellite. One more U.S. GV field campaign (OLYMPEX) is planned for late 2015 and will address cold-season precipitation estimation, process and hydrology in the orographic and oceanic domains of western Washington State. Finally, continuous direct and physical validation

  14. LACO-Wiki: A land cover validation tool and a new, innovative teaching resource for remote sensing and the geosciences

    NASA Astrophysics Data System (ADS)

    See, Linda; Perger, Christoph; Dresel, Christopher; Hofer, Martin; Weichselbaum, Juergen; Mondel, Thomas; Steffen, Fritz

    2016-04-01

    The validation of land cover products is an important step in the workflow of generating a land cover map from remotely-sensed imagery. Many students of remote sensing will be given exercises on classifying a land cover map followed by the validation process. Many algorithms exist for classification, embedded within proprietary image processing software or increasingly as open source tools. However, there is little standardization for land cover validation, nor a set of open tools available for implementing this process. The LACO-Wiki tool was developed as a way of filling this gap, bringing together standardized land cover validation methods and workflows into a single portal. This includes the storage and management of land cover maps and validation data; step-by-step instructions to guide users through the validation process; sound sampling designs; an easy-to-use environment for validation sample interpretation; and the generation of accuracy reports based on the validation process. The tool was developed for a range of users including producers of land cover maps, researchers, teachers and students. The use of such a tool could be embedded within the curriculum of remote sensing courses at a university level but is simple enough for use by students aged 13-18. A beta version of the tool is available for testing at: http://www.laco-wiki.net.
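
    The accuracy reports such a tool generates typically boil down to a confusion matrix and the summary statistics derived from it. A minimal sketch of that final step (generic land-cover validation arithmetic, not LACO-Wiki's actual code):

      import numpy as np

      def accuracy_report(confusion):
          # confusion[i, j]: validation samples of reference class i mapped as j.
          c = np.asarray(confusion, dtype=float)
          total = c.sum()
          overall = np.trace(c) / total
          producers = np.diag(c) / c.sum(axis=1)   # 1 - omission error
          users = np.diag(c) / c.sum(axis=0)       # 1 - commission error
          chance = (c.sum(axis=1) * c.sum(axis=0)).sum() / total ** 2
          kappa = (overall - chance) / (1.0 - chance)
          return overall, producers, users, kappa

      # Example with three classes: forest, cropland, urban.
      print(accuracy_report([[50, 3, 2], [5, 40, 5], [2, 3, 45]]))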

  15. International validation of quality indicators for evaluating priority setting in low income countries: process and key lessons.

    PubMed

    Kapiriri, Lydia

    2017-06-19

    While there have been efforts to develop frameworks to guide healthcare priority setting, there has been limited focus on evaluation frameworks. Moreover, while the few existing frameworks identify quality indicators for successful priority setting, they do not provide users with strategies to verify these indicators. Kapiriri and Martin (Health Care Anal 18:129-147, 2010) developed a framework for evaluating priority setting in low and middle income countries. This framework provides BOTH parameters for successful priority setting and proposed means of their verification. Before its use in real-life contexts, this paper presents results from a validation process of the framework. The framework validation involved 53 policy makers and priority setting researchers at the global, national and sub-national levels (in Uganda). They were requested to indicate the relative importance of the proposed parameters as well as the feasibility of obtaining the related information. We also pilot tested the proposed means of verification. Almost all the respondents evaluated all the parameters, including the contextual factors, as 'very important'. However, some respondents at the global level rated 'presence of incentives to comply', 'reduced disagreements', 'increased public understanding,' 'improved institutional accountability' and 'meeting the ministry of health objectives' as less important, which could be a reflection of their levels of decision making. All the proposed means of verification were assessed as feasible, with the exception of meeting observations, which would require an insider. These findings were consistent with those obtained from the pilot testing. These findings are relevant to policy makers and researchers involved in priority setting in low and middle income countries. To the best of our knowledge, this is one of the few initiatives that has involved potential users of a framework (at the global level and in a Low Income Country) in its validation. The favorable validation

  16. Project Interface Requirements Process Including Shuttle Lessons Learned

    NASA Technical Reports Server (NTRS)

    Bauch, Garland T.

    2010-01-01

    Most failures occur at interfaces between organizations and hardware. Processing interface requirements at the start of a project life cycle will reduce the likelihood of costly interface changes/failures later. This can be done by adding Interface Control Documents (ICDs) to the Project top-level drawing tree, providing technical direction to the Projects for interface requirements, and by funding the interface requirements function directly from the Project Manager's office. The interface requirements function within the Project Systems Engineering and Integration (SE&I) Office would work in-line with the project element design engineers early in the life cycle to enhance communications and negotiate technical issues between the elements. This function would work as the technical arm of the Project Manager to help ensure that the Project cost, schedule, and risk objectives can be met during the life cycle. Some ICD lessons learned during the Space Shuttle Program (SSP) life cycle include the use of hardware interface photos in the ICD, progressive life cycle design certification by analysis, test, and operations experience, assigning interface design engineers to Element Interface (EI) and Project technical panels, and linking interface design drawings with project build drawings.

  17. Survey Instrument Validity Part I: Principles of Survey Instrument Development and Validation in Athletic Training Education Research

    ERIC Educational Resources Information Center

    Burton, Laura J.; Mazerolle, Stephanie M.

    2011-01-01

    Context: Instrument validation is an important facet of survey research methods and athletic trainers must be aware of the important underlying principles. Objective: To discuss the process of survey development and validation, specifically the process of construct validation. Background: Athletic training researchers frequently employ the use of…

  18. Validating soil denitrification models based on laboratory N2 and N2O fluxes and underlying processes derived by stable isotope approaches

    NASA Astrophysics Data System (ADS)

    Well, Reinhard; Böttcher, Jürgen; Butterbach-Bahl, Klaus; Dannenmann, Michael; Deppe, Marianna; Dittert, Klaus; Dörsch, Peter; Horn, Marcus; Ippisch, Olaf; Mikutta, Robert; Müller, Carsten; Müller, Christoph; Senbayram, Mehmet; Vogel, Hans-Jörg; Wrage-Mönnig, Nicole

    2016-04-01

    Robust denitrification data suitable for validating soil N2 fluxes in denitrification models are scarce due to methodical limitations and the extreme spatio-temporal heterogeneity of denitrification in soils. Numerical models have become essential tools to predict denitrification at different scales. Model performance can be tested for the total gaseous flux (NO + N2O + N2), for individual denitrification products (e.g. N2O and/or NO), or for the effect of denitrification factors (e.g. C-availability, respiration, diffusivity, anaerobic volume, etc.). While there are numerous examples of validating N2O fluxes, there are neither robust field data of N2 fluxes nor sufficiently resolved measurements of the control factors used as state variables in the models. To the best of our knowledge, only one validation of modelled soil N2 flux has been published to date, using a laboratory data set to validate an ecosystem model. Hence there is a need for validation data at both the mesocosm and the field scale, including validation of individual denitrification controls. Here we present the concept for collecting model validation data, which is part of the DFG research unit "Denitrification in Agricultural Soils: Integrated Control and Modelling at Various Scales (DASIM)" starting this year. We will use novel approaches including analysis of stable isotopes, microbial communities, pore structure and organic matter fractions to provide denitrification data sets comprising as much detail on activity and regulation as possible, as a basis to validate existing, and calibrate new, denitrification models that are applied and/or developed by DASIM subprojects. The basic idea is to simulate "field-like" conditions as far as possible in an automated mesocosm system without plants, in order to mimic processes in the soil parts not significantly influenced by the rhizosphere (rhizosphere soils are studied by other DASIM projects). Hence, to allow model testing in a wide range of conditions

  19. Development and validation of an educational booklet for healthy eating during pregnancy

    PubMed Central

    de Oliveira, Sheyla Costa; Lopes, Marcos Venícios de Oliveira; Fernandes, Ana Fátima Carvalho

    2014-01-01

    OBJECTIVE: to describe the validation process of an educational booklet for healthy eating in pregnancy using local and regional food. METHODS: methodological study, developed in three steps: construction of the educational booklet, validation of the educational material by judges, and validation by pregnant women. The validation process was conducted with 22 judges and 20 pregnant women, selected by convenience. We considered a p-value < 0.85 to validate the booklet's compliance and relevance, according to the six items of the instrument. For content validation, the item-level Content Validity Index (I-CVI) was considered acceptable when a minimum score of at least 0.80 was obtained. RESULTS: five items were considered relevant by the judges. The mean I-CVI was 0.91. The pregnant women evaluated the booklet positively. The suggestions were accepted and included in the final version of the material. CONCLUSION: the booklet was validated in terms of content and relevance, and should be used by nurses for advice on healthy eating during pregnancy. PMID:25296145
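
    The I-CVI reported above is simply, for each item, the proportion of judges rating it as relevant (3 or 4 on a 4-point scale), with 0.80 used as the acceptance threshold. A minimal sketch (the ratings are made up):

      import numpy as np

      def i_cvi(ratings, relevant=(3, 4)):
          # ratings: (n_judges, n_items) on a 4-point relevance scale.
          return np.isin(np.asarray(ratings), relevant).mean(axis=0)

      ratings = np.array([[4, 3, 2], [4, 4, 3], [3, 4, 2], [4, 3, 4], [4, 4, 1]])
      cvi = i_cvi(ratings)
      print(cvi, cvi >= 0.80)   # per-item I-CVI and acceptance at 0.80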

  20. Validation and Comprehension: An Integrated Overview

    ERIC Educational Resources Information Center

    Kendeou, Panayiota

    2014-01-01

    In this article, I review and discuss the work presented in this special issue while focusing on a number of issues that warrant further investigation in validation research. These issues pertain to the nature of the validation processes, the processes and mechanisms that support validation during comprehension, the factors that influence…

  1. Process evaluation of a primary healthcare validation study of a culturally adapted depression screening tool for use by Aboriginal and Torres Strait Islander people: study protocol.

    PubMed

    Farnbach, Sara; Evans, John; Eades, Anne-Marie; Gee, Graham; Fernando, Jamie; Hammond, Belinda; Simms, Matty; DeMasi, Karrina; Hackett, Maree

    2017-11-03

    Process evaluations are conducted alongside research projects to identify the context, impact and consequences of research, determine whether it was conducted per protocol, and understand how, why and for whom an intervention is effective. We present a process evaluation protocol for the Getting it Right research project, which aims to determine the validity of a culturally adapted depression screening tool for use by Aboriginal and Torres Strait Islander people. In this process evaluation, we aim to: (1) explore the context, impact and consequences of conducting Getting it Right, (2) explore primary healthcare staff and community representatives' experiences with the research project, (3) determine whether it was conducted per protocol and (4) explore experiences with the depression screening tool, including perceptions about how it could be implemented into practice (if found to be valid). We also describe the partnerships established to conduct this process evaluation and how the national Values and Ethics: Guidelines for Ethical Conduct in Aboriginal and Torres Strait Islander Health Research are met. Realist and grounded theory approaches are used. Qualitative data include semistructured interviews with primary healthcare staff and community representatives involved with Getting it Right. Iterative data collection and analysis will inform a coding framework. Interviews will continue until saturation of themes is reached, or all participants are considered. Data will be triangulated against administrative data and patient feedback. An Aboriginal and Torres Strait Islander Advisory Group guides this research. Researchers will be blinded to validation data outcomes for as long as is feasible. The University of Sydney Human Research Ethics Committee, Aboriginal Health and Medical Research Council of New South Wales and six state ethics committees have approved this research. Findings will be submitted to academic journals and presented at conferences. ACTRN

  2. NASA Construction of Facilities Validation Processes - Total Building Commissioning (TBCx)

    NASA Technical Reports Server (NTRS)

    Hoover, Jay C.

    2004-01-01

    Key attributes include: a Total Quality Management (TQM) system that looks at all phases of a project; a team process that spans boundaries; a Commissioning Authority to lead the process; commissioning requirements in contracts; independent design review to verify compliance with the Facility Project Requirements (FPR); a formal written Commissioning Plan with documented results; and functional performance testing (FPT) against the requirements document.

  3. Review of surface steam sterilization for validation purposes.

    PubMed

    van Doornmalen, Joost; Kopinga, Klaas

    2008-03-01

    Sterilization is an essential step in the process of producing sterile medical devices. To guarantee sterility, the process of sterilization must be validated. Because there is no direct way to measure sterility, the techniques applied to validate the sterilization process are based on statistical principles. Steam sterilization is the most frequently applied sterilization method worldwide and can be validated either by indicators (chemical or biological) or physical measurements. The steam sterilization conditions are described in the literature. Starting from these conditions, criteria for the validation of steam sterilization are derived and can be described in terms of physical parameters. Physical validation of steam sterilization appears to be an adequate and efficient validation method that could be considered as an alternative for indicator validation. Moreover, physical validation can be used for effective troubleshooting in steam sterilizing processes.

  4. Developing, validating and consolidating the doctor-patient relationship: the patients' views of a dynamic process.

    PubMed Central

    Gore, J; Ogden, J

    1998-01-01

    BACKGROUND: Previous research has examined the doctor-patient relationship in terms of its therapeutic effect, the need to consider the patients' models of their illness, and the patients' expectations of their doctor. However, to date, no research has examined the patients' views of the doctor-patient relationship. AIM: To examine patients' views of the process of creating a relationship with their general practitioner (GP). METHOD: A qualitative design was used involving in-depth interviews with 27 frequently attending patients from four urban general practices. They were chosen to provide a heterogeneous group in terms of age, sex, and ethnicity. RESULTS: The responders described creating the relationship in terms of three stages: development, validation, and consolidation. The development stage involved overcoming initial reservations, actively searching for a doctor that met the patient's needs, or knowing from the start that the doctor was the right one for them. The validation stage involved evaluating the nature of the relationship by searching for evidence of caring, comparing their doctor with others, storing key events for illustration of the value of the relationship, recruiting the views of others to support their own perspectives, and the willingness to make tradeoffs. The consolidation stage involved testing and setting boundaries concerned with knowledge, power, and a personal relationship. CONCLUSION: Creating a relationship with a GP is a dynamic process involving an active patient who searches out a GP who matches their own representation of the 'ideal', selects and retains information to validate their choice, and locates mutually acceptable boundaries. PMID:9800396

  5. Verification and Validation in a Rapid Software Development Process

    NASA Technical Reports Server (NTRS)

    Callahan, John R.; Easterbrook, Steve M.

    1997-01-01

    The high cost of software production is driving development organizations to adopt more automated design and analysis methods such as rapid prototyping, computer-aided software engineering (CASE) tools, and high-level code generators. Even developers of safety-critical software systems have adopted many of these new methods while striving to achieve high levels of quality and reliability. While these new methods may enhance productivity and quality in many cases, we examine some of the risks involved in their use in safety-critical contexts. We examine a case study involving the use of a CASE tool that automatically generates code from high-level system designs. We show that while high-level testing of the system structure is highly desirable, significant risks exist in the automatically generated code and in re-validating releases of the generated code after subsequent design changes. We identify these risks and suggest process improvements that retain the advantages of rapid, automated development methods within the quality and reliability contexts of safety-critical projects.

  6. Objectifying Content Validity: Conducting a Content Validity Study in Social Work Research.

    ERIC Educational Resources Information Center

    Rubio, Doris McGartland; Berg-Weger, Marla; Tebb, Susan S.; Lee, E. Suzanne; Rauch, Shannon

    2003-01-01

    The purpose of this article is to demonstrate how to conduct a content validity study. Instructions on how to calculate a content validity index, factorial validity index, and an interrater reliability index and guide for interpreting these indices are included. Implications regarding the value of conducting a content validity study for…

  7. Validation of contractor HMA testing data in the materials acceptance process.

    DOT National Transportation Integrated Search

    2010-08-01

    "This study conducted an analysis of the SCDOT HMA specification. A Research Steering Committee comprised of SCDOT, FHWA, and Industry representatives provided oversight of the process. The research process included a literature review, a brief surve...

  8. Immobilized bacterial spores for use as bioindicators in the validation of thermal sterilization processes.

    PubMed

    Serp, D; von Stockar, U; Marison, I W

    2002-07-01

    Spores of Bacillus subtilis ATCC 6051 and Bacillus stearothermophilus NCTC 10003 were immobilized in monodisperse alginate beads (diameter, 550 microm +/- 5%), and the capacity of the immobilized bioindicators to provide accurate and reliable F-values for sterilization processes was studied. The resistance of the beads to abrasion and heat was strong enough to ensure total retention of the bioindicators in the beads in a sterilization cycle. D- and z-values for free spores were identical to those for immobilized spores, which shows that immobilization does not modify the thermal resistance of the bioindicators. A D(100 degrees C) value of 1.5 min was found for free and immobilized B. subtilis spores heated in demineralized water, skimmed milk, and milk containing 4% fat, suggesting that a lipid concentration as low as 4% does not alter the thermal resistance of B. subtilis spores. Provided that the pH is kept between 3.4 and 10 and that sufficiently low concentrations of Ca2+ competitors or complexants are present in the medium, immobilized bioindicators may serve as an efficient, accurate, and reliable tool with which to validate the efficiency of any sterilization process. The environmental factors (pH, media composition) affecting the thermoresistance of native contaminants are intrinsically reflected in the F-value, allowing for a sharper adjustment of the sterilization process. Immobilized spores of B. stearothermophilus were successfully used to validate a resonance and interference microwave system that is believed to offer a convenient alternative for the sterilization of temperature-sensitive products and medical wastes.
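
    The D-, z-, and F-values above follow standard thermal-death-time arithmetic. A minimal sketch of that arithmetic (the cycle below is hypothetical; only the D(100 C) = 1.5 min figure comes from the abstract):

        # Accumulated lethality F = sum of 10**((T - Tref)/z) * dt over the
        # time-temperature profile, with Tref = 121.1 C and z = 10 C as the
        # customary steam-sterilization references.
        def f_value(profile, t_ref=121.1, z=10.0):
            """profile: list of (temperature_C, duration_min) pairs."""
            return sum(10 ** ((temp - t_ref) / z) * dt for temp, dt in profile)

        # One D-value equals one decimal (log10) reduction in spore count.
        def log_reduction(hold_min, d_value_min):
            return hold_min / d_value_min

        # Hypothetical cycle: 5 min come-up at 115 C, then 15 min at 121.1 C.
        print(f"F0 = {f_value([(115.0, 5.0), (121.1, 15.0)]):.1f} min")
        # With D(100 C) = 1.5 min (the figure reported for B. subtilis),
        # a 9 min hold at 100 C achieves a 6-log reduction:
        print(f"{log_reduction(9.0, 1.5):.0f}-log reduction")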

  9. A systematic review of validated sinus surgery simulators.

    PubMed

    Stew, B; Kao, S S-T; Dharmawardana, N; Ooi, E H

    2018-06-01

    Simulation provides a safe and effective opportunity to develop surgical skills. A variety of endoscopic sinus surgery (ESS) simulators has been described in the literature. Validation of these simulators allows for effective utilisation in training. To conduct a systematic review of the published literature to analyse the evidence for validated ESS simulation, Pubmed, Embase, Cochrane and Cinahl were searched from inception of the databases to 11 January 2017. In total, 12,516 articles were retrieved, of which 10,112 were screened following the removal of duplicates. Thirty-eight full-text articles were reviewed after meeting search criteria. Evidence of face, content, construct, discriminant and predictive validity was extracted. Twenty articles were included in the analysis, describing 12 ESS simulators. Eleven of these simulators had undergone validation: 3 virtual reality, 7 physical bench models and 1 cadaveric simulator. Seven of the simulators were shown to have face validity, 7 had construct validity and 1 had predictive validity. None of the simulators demonstrated discriminant validity. This systematic review demonstrates that a number of ESS simulators have been comprehensively validated. Many of the validation processes, however, lack standardisation in outcome reporting, thus limiting a meta-analysis comparison between simulators. © 2017 John Wiley & Sons Ltd.

  10. Instructions included? Make safety training part of medical device procurement process.

    PubMed

    Keller, James P

    2010-04-01

    Before hospitals embrace new technologies, it's important that medical personnel agree on how best to use them. Likewise, hospitals must provide the support to operate these sophisticated devices safely. With this in mind, it's wise for hospitals to include medical device training in the procurement process. Moreover, purchasing professionals can play a key role in helping to increase the amount of user training for medical devices and systems. What steps should you take to help ensure that new medical devices are implemented safely? Here are some tips.

  11. Simulation of ultrasonic arrays for industrial and civil engineering applications including validation

    NASA Astrophysics Data System (ADS)

    Spies, M.; Rieder, H.; Orth, Th.; Maack, S.

    2012-05-01

    In this contribution we address the beam field simulation of 2D ultrasonic arrays using the Generalized Point Source Synthesis technique. Aiming at the inspection of cylindrical components (e.g. pipes) the influence of concave and convex surface curvatures, respectively, has been evaluated for a commercial probe. We have compared these results with those obtained using a commercial simulation tool. In civil engineering, the ultrasonic inspection of highly attenuating concrete structures has been advanced by the development of dry contact point transducers, mainly applied in array arrangements. Our respective simulations for a widely used commercial probe are validated using experimental results acquired on concrete half-spheres with diameters from 200 mm up to 650 mm.

  12. Data Validation Package, April and June 2016 Groundwater and Surface Water Sampling at the Gunnison, Colorado, Processing Site, October 2016

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Linard, Joshua; Campbell, Sam

    This event included annual sampling of groundwater and surface water locations at the Gunnison, Colorado, Processing Site. Sampling and analyses were conducted as specified in Sampling and Analysis Plan for US Department of Energy Office of Legacy Management Sites (LMS/PRO/S04351, continually updated, http://energy.gov/lm/downloads/sampling-and-analysis-plan-us-department-energy-office-legacy-management-sites). Samples were collected from 28 monitoring wells, three domestic wells, and six surface locations in April at the processing site as specified in the draft 2010 Ground Water Compliance Action Plan for the Gunnison, Colorado, Processing Site. Planned monitoring locations are shown in Attachment 1, Sampling and Analysis Work Order. Domestic wells 0476 and 0477 were sampled in June because the homes were unoccupied in April, and the wells were not in use. Duplicate samples were collected from locations 0126, 0477, and 0780. One equipment blank was collected during this sampling event. Water levels were measured at all monitoring wells that were sampled. See Attachment 2, Trip Reports, for additional details. The analytical data and associated qualifiers can be viewed in environmental database reports and are also available for viewing with dynamic mapping via the GEMS (Geospatial Environmental Mapping System) website at http://gems.lm.doe.gov/#. No issues were identified during the data validation process that require additional action or follow-up. An assessment of anomalous data is included in Attachment 3. Interpretation and presentation of results, including an assessment of the natural flushing compliance strategy, will be reported in the upcoming 2016 Verification Monitoring Report.

  13. Use of Synthetic Single-Stranded Oligonucleotides as Artificial Test Soiling for Validation of Surgical Instrument Cleaning Processes

    PubMed Central

    Wilhelm, Nadja; Perle, Nadja; Simmoteit, Robert; Schlensak, Christian; Wendel, Hans P.; Avci-Adali, Meltem

    2014-01-01

    Surgical instruments are often strongly contaminated with patients' blood and tissues, possibly containing pathogens. The reuse of contaminated instruments without adequate cleaning and sterilization can cause postoperative inflammation and the transmission of infectious diseases from one patient to another. Thus, based on the stringent sterility requirements, the development of highly efficient, validated cleaning processes is necessary. Here, we use for the first time synthetic single-stranded DNA (ssDNA_ODN), which does not appear in nature, as a test soiling to evaluate the cleaning efficiency of routine washing processes. Stainless steel test objects were coated with a certain amount of ssDNA_ODN. After cleaning, the amount of residual ssDNA_ODN on the test objects was determined using quantitative real-time PCR. The established method is highly specific and sensitive, with a detection limit of 20 fg, and enables the determination of the cleaning efficiency of medical cleaning processes under different conditions to obtain optimal settings for the effective cleaning and sterilization of instruments. The use of this highly sensitive method for the validation of cleaning processes can prevent, to a significant extent, the insufficient cleaning of surgical instruments and thus the transmission of pathogens to patients. PMID:24672793
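
    Quantification in a qPCR-based assay like this conventionally runs through a standard curve in which Ct is linear in log10 of the starting quantity. A minimal sketch; the slope and intercept below are hypothetical calibration values, not the study's:

        # Sketch: back-calculate residual ssDNA from a qPCR standard curve,
        # Ct = slope * log10(quantity) + intercept. Slope and intercept are
        # hypothetical calibration values (slope -3.32 corresponds to
        # roughly 100% amplification efficiency).
        def quantity_from_ct(ct, slope=-3.32, intercept=38.0):
            return 10 ** ((ct - intercept) / slope)

        # Hypothetical rinse sample from a cleaned test object, Ct = 31.4:
        print(f"residual ssDNA ~ {quantity_from_ct(31.4):.0f} fg")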

  14. Lesson 6: Signature Validation

    EPA Pesticide Factsheets

    Checklist items 13 through 17 are grouped under the Signature Validation Process, and represent CROMERR requirements that the system must satisfy as part of ensuring that electronic signatures it receives are valid.

  15. A Framework for Performing Verification and Validation in Reuse Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward A.

    1997-01-01

    Verification and Validation (V&V) is currently performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. The system application provides the context under which the software artifacts are validated. This paper describes a framework that extends V&V from an individual application system to a product line of systems that are developed within an architecture-based software engineering environment. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.

  16. Validation of the FEA of a deep drawing process with additional force transmission

    NASA Astrophysics Data System (ADS)

    Behrens, B.-A.; Bouguecha, A.; Bonk, C.; Grbic, N.; Vucetic, M.

    2017-10-01

    To meet automotive industry requirements such as lower CO2 emissions, which translate into reduced vehicle mass in the car body, the chassis, and the powertrain, continuous innovation and further development of existing production processes are required. In sheet metal forming, the process limits and component characteristics are defined by the process-specific loads. When the load limits are exceeded, the material fails; this can be avoided by an additional force transmission activated in the deep drawing process before the process limit is reached. This contribution deals with experimental investigations of a forming process with additional force transmission with regard to extending the process limits. Based on FEA, a tool system was designed and developed by IFUM. For this purpose, the steel material HCT600 was analyzed numerically. In the experimental investigations, deep drawing processes with and without additional force transmission were carried out and the produced rectangular cups were compared. Subsequently, identical deep drawing processes were investigated numerically; the punch reaction force and displacement were estimated and compared with the experimental results, successfully validating the material model at the process scale. For further quantitative verification of the FEA results, the experimentally determined geometry of the rectangular cup was measured optically with the ATOS system from GOM mbH and digitally compared using the external software Geomagic® Qualify™. The goal of this paper is to verify the transferability of the FEA model for a conventional deep drawing process to a deep drawing process with additional force transmission via a counter punch.

  17. Failure mode and effects analysis outputs: are they valid?

    PubMed Central

    2012-01-01

    Background: Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Methods: Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: face validity, by comparing the FMEA participants' mapped processes with observational work; content validity, by presenting the FMEA findings to other healthcare professionals; criterion validity, by comparing the FMEA findings with data reported on the trust's incident report database; and construct validity, by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number. Results: Face validity was positive as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. Conclusion: There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA's methodology for scoring failures, there were discrepancies between the teams' estimates

  18. Failure mode and effects analysis outputs: are they valid?

    PubMed

    Shebl, Nada Atef; Franklin, Bryony Dean; Barber, Nick

    2012-06-10

    Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: face validity, by comparing the FMEA participants' mapped processes with observational work; content validity, by presenting the FMEA findings to other healthcare professionals; criterion validity, by comparing the FMEA findings with data reported on the trust's incident report database; and construct validity, by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number. Face validity was positive as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA's methodology for scoring failures, there were discrepancies between the teams' estimates and similar incidents reported on the trust's incident
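
    Both entries above criticize the Risk Priority Number, which is simply the product of three ordinal scores. A small sketch (all scores hypothetical) of the calculation, and of why equal products can mask very different risk profiles:

        # Risk Priority Number as classically computed in FMEA:
        def rpn(severity, occurrence, detectability):
            return severity * occurrence * detectability

        # Two hypothetical failure modes with identical RPNs but very
        # different risk profiles -- the ordinal-scale problem raised above:
        catastrophic_but_rare = rpn(severity=10, occurrence=2, detectability=2)
        minor_but_frequent = rpn(severity=2, occurrence=10, detectability=2)
        print(catastrophic_but_rare, minor_but_frequent)  # 40 40 -> tied ranking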

  19. Three phase heat and mass transfer model for unsaturated soil freezing process: Part 2 - model validation

    NASA Astrophysics Data System (ADS)

    Zhang, Yaning; Xu, Fei; Li, Bingxi; Kim, Yong-Song; Zhao, Wenke; Xie, Gongnan; Fu, Zhongbin

    2018-04-01

    This study aims to validate the three-phase heat and mass transfer model developed in the first part (Three phase heat and mass transfer model for unsaturated soil freezing process: Part 1 - model development). Experimental results from previously published studies and experiments were used for the validation. The results showed that the correlation coefficients for the simulated and experimental water contents at different soil depths were between 0.83 and 0.92. The correlation coefficients for the simulated and experimental liquid water contents at different soil temperatures were between 0.95 and 0.99. Given these high correlations, the developed model can be used with confidence to predict water contents at different soil depths and temperatures.
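
    The validation metric here is the Pearson correlation between simulated and measured series. A minimal sketch with hypothetical water-content values (not the study's data; statistics.correlation requires Python 3.10+):

        # Pearson correlation between simulated and measured water contents.
        from statistics import correlation

        simulated = [0.31, 0.28, 0.24, 0.19, 0.15, 0.12]
        measured = [0.30, 0.29, 0.22, 0.20, 0.14, 0.13]
        print(f"r = {correlation(simulated, measured):.2f}")  # near 1.0 = close agreement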

  20. Skills Acquisition in Plantain Flour Processing Enterprises: A Validation of Training Modules for Senior Secondary Schools

    ERIC Educational Resources Information Center

    Udofia, Nsikak-Abasi; Nlebem, Bernard S.

    2013-01-01

    This study aimed to validate training modules that can help provide requisite skills for Senior Secondary school students in plantain flour processing enterprises, both for self-employment and to enable them to pass their examination. The study covered Rivers State. Purposive sampling technique was used to select a sample size of 205. Two sets of structured…

  1. Issues in developing valid assessments of speech pathology students' performance in the workplace.

    PubMed

    McAllister, Sue; Lincoln, Michelle; Ferguson, Alison; McAllister, Lindy

    2010-01-01

    Workplace-based learning is a critical component of professional preparation in speech pathology. A validated assessment of this learning is seen to be 'the gold standard', but it is difficult to develop because of design and validation issues. These issues include the role and nature of judgement in assessment, challenges in measuring quality, and the relationship between assessment and learning. Valid assessment of workplace-based performance needs to capture the development of competence over time and account for both occupation-specific and generic competencies. This paper reviews important conceptual issues in the design of valid and reliable workplace-based assessments of competence including assessment content, process, impact on learning, measurement issues, and validation strategies. It then goes on to share what has been learned about quality assessment and validation of a workplace-based performance assessment using competency-based ratings. The outcomes of a four-year national development and validation of an assessment tool are described. A literature review of issues in conceptualizing, designing, and validating workplace-based assessments was conducted. Key factors to consider in the design of a new tool were identified and built into the cycle of design, trialling, and data analysis in the validation stages of the development process. This paper provides an accessible overview of factors to consider in the design and validation of workplace-based assessment tools. It presents strategies used in the development and national validation of a tool, COMPASS, used in every speech pathology programme in Australia, New Zealand, and Singapore. The paper also describes Rasch analysis, a model-based statistical approach which is useful for establishing validity and reliability of assessment tools. Through careful attention to conceptual and design issues in the development and trialling of workplace-based assessments, it has been possible to develop the

  2. Investigation of the performance of fermentation processes using a mathematical model including effects of metabolic bottleneck and toxic product on cells.

    PubMed

    Sriyudthsak, Kansuporn; Shiraishi, Fumihide

    2010-11-01

    A number of recent research studies have focused on theoretical and experimental investigation of a bottleneck in a metabolic reaction network. However, there has been no study of how such a bottleneck affects the performance of a fermentation process when the product is highly toxic and strongly influences the growth and death of cells. The present work therefore studies the effect of a bottleneck on product concentrations under different product toxicity conditions. A generalized bottleneck model for fed-batch fermentation is constructed, including both the bottleneck and the product influences on cell growth and death. The simulation result reveals that when the toxic product strongly influences cell growth and death, the final product concentration is hardly changed even if the bottleneck is removed, whereas it is markedly changed by the degree of product toxicity. The performance of an ethanol fermentation process is also discussed as a case example to validate this result. In conclusion, when the product is highly toxic, one cannot expect a significant increase in the final product concentration even by removing the bottleneck; rather, it may be more effective to somehow protect the cells so that they can continuously produce the product. Copyright © 2010 Elsevier Inc. All rights reserved.
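
    The interplay the abstract describes, a toxic product that both slows growth and accelerates death, can be made concrete with a generic simulation. This is a minimal sketch under stated assumptions, not the authors' model, and every parameter value is hypothetical:

        # Generic fed-batch sketch: product P inhibits growth and raises the
        # death rate. Explicit Euler integration; all parameters hypothetical.
        mu_max, Ki, kd0, alpha, qp = 0.4, 20.0, 0.01, 0.005, 0.3
        X, P, dt = 0.1, 0.0, 0.01  # biomass g/L, product g/L, time step h

        for _ in range(int(48 / dt)):      # simulate 48 h of culture
            mu = mu_max * Ki / (Ki + P)    # growth inhibited by the product
            kd = kd0 + alpha * P           # death rate rises with the product
            X += (mu - kd) * X * dt
            P += qp * X * dt               # growth-associated production
        print(f"after 48 h: biomass {X:.1f} g/L, product {P:.1f} g/L")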

  3. 75 FR 69469 - Health Net, Inc., Claims Processing Group and Systems Configuration Organization, Including On...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-11-12

    ... Organization and provided application support and information technology services supporting the subject firm..., including on-site leased workers from Kelly Services and Cognizant Technology Solutions, Shelton... Processing Group and Systems Configuration Organization, Including On-Site Leased Workers From Kelly Services...

  4. Social anxiety questionnaire (SAQ): Development and preliminary validation.

    PubMed

    Łakuta, Patryk

    2018-05-30

    The Social Anxiety Questionnaire (SAQ) was designed to assess five dimensions of social anxiety as posited by Clark and Wells' (1995; Clark, 2001) cognitive model. The development of the SAQ involved the generation of an item pool, followed by verification of content validity and of the theorized factor structure (Study 1). The final version of the SAQ was then assessed for reliability, temporal stability (test-retest reliability), and construct, criterion-related, and contrasted-group validity (Studies 2, 3, and 4). Following a systematic process, the results support the SAQ as a reliable and both theoretically and empirically valid measure. A five-factor structure of the SAQ, verified and replicated through confirmatory factor analyses, reflects five dimensions of social anxiety: negative self-processing; self-focused attention and self-monitoring; safety behaviours; somatic and cognitive symptoms; and anticipatory and post-event rumination. The results suggest that the SAQ possesses good psychometric properties, while recognizing that additional validation is a required direction for future research. It is important to replicate these findings in diverse populations, including a large clinical sample. The SAQ is a promising measure that supports social anxiety as a multidimensional construct and the foundational role of self-focused cognitive processes in the generation and maintenance of social anxiety symptoms. The findings make a significant contribution to the literature; moreover, the SAQ is the first instrument that offers to assess all of the specific cognitive-affective, physiological, attitudinal, and attention processes related to social anxiety proposed by the Clark-Wells model. Copyright © 2018 Elsevier B.V. All rights reserved.

  5. Validation of biomarkers to predict response to immunotherapy in cancer: Volume I - pre-analytical and analytical validation.

    PubMed

    Masucci, Giuseppe V; Cesano, Alessandra; Hawtin, Rachael; Janetzki, Sylvia; Zhang, Jenny; Kirsch, Ilan; Dobbin, Kevin K; Alvarez, John; Robbins, Paul B; Selvan, Senthamil R; Streicher, Howard Z; Butterfield, Lisa H; Thurin, Magdalena

    2016-01-01

    Immunotherapies have emerged as one of the most promising approaches to treat patients with cancer. Recently, there have been many clinical successes using checkpoint receptor blockade, including T cell inhibitory receptors such as cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) and programmed cell death-1 (PD-1). Despite demonstrated successes in a variety of malignancies, responses only typically occur in a minority of patients in any given histology. Additionally, treatment is associated with inflammatory toxicity and high cost. Therefore, determining which patients would derive clinical benefit from immunotherapy is a compelling clinical question. Although numerous candidate biomarkers have been described, there are currently three FDA-approved assays based on PD-1 ligand expression (PD-L1) that have been clinically validated to identify patients who are more likely to benefit from a single-agent anti-PD-1/PD-L1 therapy. Because of the complexity of the immune response and tumor biology, it is unlikely that a single biomarker will be sufficient to predict clinical outcomes in response to immune-targeted therapy. Rather, the integration of multiple tumor and immune response parameters, such as protein expression, genomics, and transcriptomics, may be necessary for accurate prediction of clinical benefit. Before a candidate biomarker and/or new technology can be used in a clinical setting, several steps are necessary to demonstrate its clinical validity. Although regulatory guidelines provide general roadmaps for the validation process, their applicability to biomarkers in the cancer immunotherapy field is somewhat limited. Thus, Working Group 1 (WG1) of the Society for Immunotherapy of Cancer (SITC) Immune Biomarkers Task Force convened to address this need. In this two volume series, we discuss pre-analytical and analytical (Volume I) as well as clinical and regulatory (Volume II) aspects of the validation process as applied to predictive biomarkers

  6. Demography of Principals' Work and School Improvement: Content Validity of Kentucky's Standards and Indicators for School Improvement (SISI)

    ERIC Educational Resources Information Center

    Lindle, Jane Clark; Stalion, Nancy; Young, Lu

    2005-01-01

    Kentucky's accountability system includes a school-processes audit known as Standards and Indicators for School Improvement (SISI), which is in a nascent stage of validation. Content validity methods include comparison to instruments measuring similar constructs as well as other techniques such as job analysis. This study used a two-phase process…

  7. System design from mission definition to flight validation

    NASA Technical Reports Server (NTRS)

    Batill, S. M.

    1992-01-01

    Considerations related to the engineering systems design process and an approach taken to introduce undergraduate students to that process are presented. The paper includes details on a particular capstone design course. This course is a team-oriented aircraft design project which requires the students to participate in many phases of the system design process, from mission definition to validation of their design through flight testing. To accomplish this in a single course requires special types of flight vehicles. Relatively small-scale, remotely piloted vehicles have provided the class of aircraft considered in this course.

  8. A Comprehensive Plan for the Long-Term Calibration and Validation of Oceanic Biogeochemical Satellite Data

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B.; McClain, Charles R.; Mannino, Antonio

    2007-01-01

    The primary objective of this planning document is to establish a long-term capability for calibrating and validating oceanic biogeochemical satellite data. It is a pragmatic solution to a practical problem based primarily on the lessons learned from prior satellite missions. All of the plan's elements are seen to be interdependent, so a horizontal organizational scheme is anticipated wherein the overall leadership comes from the NASA Ocean Biology and Biogeochemistry (OBB) Program Manager and the entire enterprise is split into two components of equal stature: calibration and validation plus satellite data processing. The detailed elements of the activity are based on the basic tasks of the two main components plus the current objectives of the Carbon Cycle and Ecosystems Roadmap. The former is distinguished by an internal core set of responsibilities and the latter is facilitated through an external connecting-core ring of competed or contracted activities. The core elements for the calibration and validation component include a) publish protocols and performance metrics; b) verify uncertainty budgets; c) manage the development and evaluation of instrumentation; and d) coordinate international partnerships. The core elements for the satellite data processing component are e) process and reprocess multisensor data; f) acquire, distribute, and archive data products; and g) implement new data products. Both components have shared responsibilities for initializing and temporally monitoring satellite calibration. Connecting-core elements include (but are not restricted to) atmospheric correction and characterization, standards and traceability, instrument and analysis round robins, field campaigns and vicarious calibration sites, in situ database, bio-optical algorithm (and product) validation, satellite characterization and vicarious calibration, and image processing software. The plan also includes an accountability process, creating a Calibration and Validation Team (to help manage

  9. TALYS/TENDL verification and validation processes: Outcomes and recommendations

    NASA Astrophysics Data System (ADS)

    Fleming, Michael; Sublet, Jean-Christophe; Gilbert, Mark R.; Koning, Arjan; Rochman, Dimitri

    2017-09-01

    The TALYS-generated Evaluated Nuclear Data Libraries (TENDL) provide truly general-purpose nuclear data files assembled from the outputs of the T6 nuclear model codes system for direct use in both basic physics and engineering applications. The most recent TENDL-2015 version is based on both default and adjusted parameters of the most recent TALYS, TAFIS, TANES, TARES, TEFAL, TASMAN codes wrapped into a Total Monte Carlo loop for uncertainty quantification. TENDL-2015 contains complete neutron-incident evaluations for all target nuclides with Z ≤116 with half-life longer than 1 second (2809 isotopes with 544 isomeric states), up to 200 MeV, with covariances and all reaction daughter products including isomers of half-life greater than 100 milliseconds. With the added High Fidelity Resonance (HFR) approach, all resonances are unique, following statistical rules. The validation of the TENDL-2014/2015 libraries against standard, evaluated, microscopic and integral cross sections has been performed against a newly compiled UKAEA database of thermal, resonance integral, Maxwellian averages, 14 MeV and various accelerator-driven neutron source spectra. This has been assembled using the most up-to-date, internationally-recognised data sources including the Atlas of Resonances, CRC, evaluated EXFOR, activation databases, fusion, fission and MACS. Excellent agreement was found with a small set of errors within the reference databases and TENDL-2014 predictions.

  10. Kinetic modelling of anaerobic hydrolysis of solid wastes, including disintegration processes.

    PubMed

    García-Gen, Santiago; Sousbie, Philippe; Rangaraj, Ganesh; Lema, Juan M; Rodríguez, Jorge; Steyer, Jean-Philippe; Torrijos, Michel

    2015-01-01

    A methodology to estimate disintegration and hydrolysis kinetic parameters of solid wastes and validate an ADM1-based anaerobic co-digestion model is presented. Kinetic parameters of the model were calibrated from batch reactor experiments treating individually fruit and vegetable wastes (among other residues) following a new protocol for batch tests. In addition, decoupled disintegration kinetics for readily and slowly biodegradable fractions of solid wastes was considered. Calibrated parameters from batch assays of individual substrates were used to validate the model for a semi-continuous co-digestion operation treating simultaneously 5 fruit and vegetable wastes. The semi-continuous experiment was carried out in a lab-scale CSTR reactor for 15 weeks at organic loading rate ranging between 2.0 and 4.7 gVS/Ld. The model (built in Matlab/Simulink) fit to a large extent the experimental results in both batch and semi-continuous mode and served as a powerful tool to simulate the digestion or co-digestion of solid wastes. Copyright © 2014 Elsevier Ltd. All rights reserved.
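
    ADM1-style models treat disintegration and hydrolysis as first-order steps, so batch calibration of the kind described often reduces to fitting B(t) = B0(1 - exp(-kt)) to cumulative methane data. A minimal sketch with hypothetical batch data and a crude grid search standing in for the calibration protocol:

        import math

        # First-order batch model: B0 is the ultimate methane yield,
        # k the first-order disintegration/hydrolysis rate constant.
        def cumulative_methane(t, b0, k):
            return b0 * (1.0 - math.exp(-k * t))

        # Hypothetical batch data: (day, cumulative NmL CH4 per gVS).
        data = [(1, 120), (3, 260), (6, 350), (10, 395), (15, 410)]

        # Crude least-squares grid search over plausible (B0, k) values:
        candidates = [(b0, k / 100)
                      for b0 in range(380, 461, 5)
                      for k in range(5, 61, 5)]
        b0, k = min(candidates, key=lambda p: sum(
            (cumulative_methane(t, p[0], p[1]) - b) ** 2 for t, b in data))
        print(f"fitted B0 ~ {b0} NmL/gVS, k ~ {k:.2f} 1/d")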

  11. Locating the Seventh Cervical Spinous Process: Development and Validation of a Multivariate Model Using Palpation and Personal Information.

    PubMed

    Ferreira, Ana Paula A; Póvoa, Luciana C; Zanier, José F C; Ferreira, Arthur S

    2017-02-01

    The aim of this study was to develop and validate a multivariate prediction model, guided by palpation and personal information, for locating the seventh cervical spinous process (C7SP). A single-blinded, cross-sectional study at a primary to tertiary health care center was conducted for model development and temporal validation. One hundred sixty participants were prospectively included for the model development (n = 80) and time-split validation (n = 80) stages. The C7SP was located using the thorax-rib static method (TRSM). Participants underwent chest radiography for assessment of the inner body structure located with TRSM and using radio-opaque markers placed over the skin. Age, sex, height, body mass, body mass index, and vertex-marker distance (D_V-M) were used to predict the distance from the C7SP to the vertex (D_V-C7). Multivariate linear regression modeling, limits of agreement plot, histogram of residuals, receiver operating characteristic (ROC) curves, and confusion tables were analyzed. The multivariate linear prediction model for D_V-C7 (in centimeters) was D_V-C7 = 0.986 D_V-M + 0.018(mass) + 0.014(age) - 1.008. ROC curves had better discrimination for D_V-C7 (area under the curve = 0.661; 95% confidence interval = 0.541-0.782; P = .015) than for D_V-M (area under the curve = 0.480; 95% confidence interval = 0.345-0.614; P = .761), with respective cutoff points at 23.40 cm (sensitivity = 41%, specificity = 63%) and 24.75 cm (sensitivity = 69%, specificity = 52%). The C7SP was correctly located more often when using predicted D_V-C7 in the validation sample than when using the TRSM in the development sample: n = 53 (66%) vs n = 32 (40%), P < .001. Better accuracy was obtained when locating the C7SP by use of a multivariate model that incorporates palpation and personal information. Copyright © 2016. Published by Elsevier Inc.
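
    Because the abstract states the regression equation explicitly, it can be transcribed directly; the input values below are hypothetical, and the units (cm, kg, years) are as implied by the abstract:

        # Published model from the abstract:
        # D_V-C7 = 0.986*D_V-M + 0.018*(mass) + 0.014*(age) - 1.008 (cm)
        def predict_dv_c7(dv_m, mass, age):
            return 0.986 * dv_m + 0.018 * mass + 0.014 * age - 1.008

        # Hypothetical case: marker 24.0 cm from vertex, 70 kg, 40 years old.
        print(f"predicted D_V-C7 = {predict_dv_c7(24.0, 70.0, 40.0):.2f} cm")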

  12. Contemporary Test Validity in Theory and Practice: A Primer for Discipline-Based Education Researchers

    PubMed Central

    Reeves, Todd D.; Marbach-Ad, Gili

    2016-01-01

    Most discipline-based education researchers (DBERs) were formally trained in the methods of scientific disciplines such as biology, chemistry, and physics, rather than social science disciplines such as psychology and education. As a result, DBERs may have never taken specific courses in the social science research methodology—either quantitative or qualitative—on which their scholarship often relies so heavily. One particular aspect of (quantitative) social science research that differs markedly from disciplines such as biology and chemistry is the instrumentation used to quantify phenomena. In response, this Research Methods essay offers a contemporary social science perspective on test validity and the validation process. The instructional piece explores the concepts of test validity, the validation process, validity evidence, and key threats to validity. The essay also includes an in-depth example of a validity argument and validation approach for a test of student argument analysis. In addition to DBERs, this essay should benefit practitioners (e.g., lab directors, faculty members) in the development, evaluation, and/or selection of instruments for their work assessing students or evaluating pedagogical innovations. PMID:26903498

  13. Digital Fly-By-Wire Flight Control Validation Experience

    NASA Technical Reports Server (NTRS)

    Szalai, K. J.; Jarvis, C. R.; Krier, G. E.; Megna, V. A.; Brock, L. D.; Odonnell, R. N.

    1978-01-01

    The experience gained in digital fly-by-wire technology through a flight test program being conducted by the NASA Dryden Flight Research Center in an F-8C aircraft is described. The system requirements are outlined, along with the requirements for flight qualification. The system is described, including the hardware components, the aircraft installation, and the system operation. The flight qualification experience is emphasized. The qualification process included the theoretical validation of the basic design, laboratory testing of the hardware and software elements, systems level testing, and flight testing. The most productive testing was performed on an iron bird aircraft, which used the actual electronic and hydraulic hardware and a simulation of the F-8 characteristics to provide the flight environment. The iron bird was used for sensor and system redundancy management testing, failure modes and effects testing, and stress testing in many cases with the pilot in the loop. The flight test program confirmed the quality of the validation process by achieving 50 flights without a known undetected failure and with no false alarms.

  14. The Identification and Validation Process of Proportional Reasoning Attributes: An Application of a Proportional Reasoning Modeling Framework

    ERIC Educational Resources Information Center

    Tjoe, Hartono; de la Torre, Jimmy

    2014-01-01

    In this paper, we discuss the process of identifying and validating students' abilities to think proportionally. More specifically, we describe the methodology we used to identify these proportional reasoning attributes, beginning with the selection and review of relevant literature on proportional reasoning. We then continue with the…

  15. Validating Retinal Fundus Image Analysis Algorithms: Issues and a Proposal

    PubMed Central

    Trucco, Emanuele; Ruggeri, Alfredo; Karnowski, Thomas; Giancardo, Luca; Chaum, Edward; Hubschman, Jean Pierre; al-Diri, Bashir; Cheung, Carol Y.; Wong, Damon; Abràmoff, Michael; Lim, Gilbert; Kumar, Dinesh; Burlina, Philippe; Bressler, Neil M.; Jelinek, Herbert F.; Meriaudeau, Fabrice; Quellec, Gwénolé; MacGillivray, Tom; Dhillon, Bal

    2013-01-01

    This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison. PMID:23794433

  16. Design and Implementation Content Validity Study: Development of an instrument for measuring Patient-Centered Communication

    PubMed Central

    Zamanzadeh, Vahid; Ghahramanian, Akram; Rassouli, Maryam; Abbaszadeh, Abbas; Alavi-Majd, Hamid; Nikanfar, Ali-Reza

    2015-01-01

    Introduction: The importance of content validity in instrument psychometrics and its relevance to reliability have made it an essential step in instrument development. This article attempts to give an overview of the content validity process and to explain the complexity of this process by introducing an example. Methods: We carried out a methodological study to examine the content validity of a patient-centered communication instrument through a two-step process (development and judgment). The first step comprised domain determination, sampling (item generation), and instrument formation; in the second step, the content validity ratio, content validity index, and modified kappa statistic were calculated. Suggestions from the expert panel and item impact scores were used to examine the instrument's face validity. Results: From a set of 188 items, the content validity process identified seven dimensions: trust building (eight items), informational support (seven items), emotional support (five items), problem solving (seven items), patient activation (10 items), intimacy/friendship (six items) and spirituality strengthening (14 items). The content validity study revealed that this instrument enjoys an appropriate level of content validity. The overall content validity index of the instrument using the universal agreement approach was low; however, the instrument can be advocated with respect to the high number of content experts, which makes consensus difficult, and the high value of the S-CVI with the average approach, which was equal to 0.93. Conclusion: This article illustrates acceptable quantitative indices for the content validity of a new instrument and outlines them during the design and psychometric evaluation of a patient-centered communication measuring instrument. PMID:26161370
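
    The contrast drawn above between the universal agreement and average approaches to the scale-level CVI (S-CVI) is easy to reproduce. A minimal sketch with hypothetical dichotomized relevance ratings (1 = relevant), not the study's data:

        # Scale-level CVI two ways, from ratings by 10 experts on 5 items.
        ratings = [
            [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],
            [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
            [1, 1, 1, 1, 1, 1, 1, 0, 1, 1],
            [1, 1, 1, 1, 1, 1, 1, 1, 0, 1],
            [1, 1, 1, 1, 1, 1, 1, 1, 1, 1],
        ]
        i_cvis = [sum(item) / len(item) for item in ratings]
        s_cvi_ave = sum(i_cvis) / len(i_cvis)                   # average approach
        s_cvi_ua = sum(c == 1.0 for c in i_cvis) / len(i_cvis)  # universal agreement
        print(f"S-CVI/Ave = {s_cvi_ave:.2f}, S-CVI/UA = {s_cvi_ua:.2f}")
        # With many experts, S-CVI/UA drops quickly -- the pattern reported above.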

  17. The SCALE Verified, Archived Library of Inputs and Data - VALID

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Rearden, Bradley T

    The Verified, Archived Library of Inputs and Data (VALID) at ORNL contains high quality, independently reviewed models and results that improve confidence in analysis. VALID is developed and maintained according to a procedure of the SCALE quality assurance (QA) plan. This paper reviews the origins of the procedure and its intended purpose, the philosophy of the procedure, some highlights of its implementation, and the future of the procedure and associated VALID library. The original focus of the procedure was the generation of high-quality models that could be archived at ORNL and applied to many studies. The review process associated with model generation minimized the chances of errors in these archived models. Subsequently, the scope of the library and procedure was expanded to provide high quality, reviewed sensitivity data files for deployment through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Sensitivity data files for approximately 400 such models are currently available. The VALID procedure and library continue fulfilling these multiple roles. The VALID procedure is based on the quality assurance principles of ISO 9001 and nuclear safety analysis. Some of these key concepts include: independent generation and review of information, generation and review by qualified individuals, use of appropriate references for design data and documentation, and retrievability of the models, results, and documentation associated with entries in the library. Some highlights of the detailed procedure are discussed to provide background on its implementation and to indicate limitations of data extracted from VALID for use by the broader community. Specifically, external users of data generated within VALID must take responsibility for ensuring that the files are used within the QA framework of their organization and that use is appropriate. The future plans for the VALID library include expansion to include additional

  18. Reliability and validity of the revised Gibson Test of Cognitive Skills, a computer-based test battery for assessing cognition across the lifespan.

    PubMed

    Moore, Amy Lawson; Miller, Terissa M

    2018-01-01

    The purpose of the current study is to evaluate the validity and reliability of the revised Gibson Test of Cognitive Skills, a computer-based battery of tests measuring short-term memory, long-term memory, processing speed, logic and reasoning, visual processing, as well as auditory processing and word attack skills. This study included 2,737 participants aged 5-85 years. A series of studies was conducted to examine the validity and reliability using the test performance of the entire norming group and several subgroups. The evaluation of the technical properties of the test battery included content validation by subject matter experts, item analysis and coefficient alpha, test-retest reliability, split-half reliability, and analysis of concurrent validity with the Woodcock Johnson III Tests of Cognitive Abilities and Tests of Achievement. Results indicated strong sources of evidence of validity and reliability for the test, including internal consistency reliability coefficients ranging from 0.87 to 0.98, test-retest reliability coefficients ranging from 0.69 to 0.91, split-half reliability coefficients ranging from 0.87 to 0.91, and concurrent validity coefficients ranging from 0.53 to 0.93. The Gibson Test of Cognitive Skills-2 is a reliable and valid tool for assessing cognition in the general population across the lifespan.
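
    Several of the reliability figures reported here (coefficient alpha, split-half) are simple to compute. A minimal sketch of coefficient alpha on hypothetical scores, not the norming data:

        # Coefficient (Cronbach's) alpha for internal consistency:
        # alpha = k/(k-1) * (1 - sum(item variances) / variance of totals),
        # here on hypothetical scores for 5 examinees x 4 items.
        from statistics import pvariance

        scores = [
            [3, 4, 3, 4],
            [2, 2, 3, 2],
            [4, 4, 4, 5],
            [1, 2, 1, 2],
            [3, 3, 4, 3],
        ]
        k = len(scores[0])
        item_vars = [pvariance([row[i] for row in scores]) for i in range(k)]
        total_var = pvariance([sum(row) for row in scores])
        alpha = k / (k - 1) * (1 - sum(item_vars) / total_var)
        print(f"alpha = {alpha:.2f}")  # values near 1 indicate high consistency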

  19. Validating the Inactivation Effectiveness of Chemicals on Ebola Virus.

    PubMed

    Haddock, Elaine; Feldmann, Friederike

    2017-01-01

    While viruses such as Ebola virus must be handled in high-containment laboratories, there remains the need to process virus-infected samples for downstream research testing. This processing often includes removal to lower containment areas and therefore requires assurance of complete viral inactivation within the sample before removal from high-containment. Here we describe methods for the removal of chemical reagents used in inactivation procedures, allowing for validation of the effectiveness of various inactivation protocols.

  20. Efficient Time-Domain Imaging Processing for One-Stationary Bistatic Forward-Looking SAR Including Motion Errors

    PubMed Central

    Xie, Hongtu; Shi, Shaoying; Xiao, Hui; Xie, Chao; Wang, Feng; Fang, Qunle

    2016-01-01

    With the rapid development of one-stationary bistatic forward-looking synthetic aperture radar (OS-BFSAR) technology, the huge amount of remote sensing data presents challenges for real-time imaging processing. In this paper, an efficient time-domain algorithm (ETDA) for OS-BFSAR imaging processing that takes motion errors into account is presented. This method can not only precisely handle the large spatial variances, serious range-azimuth coupling and motion errors, but can also greatly improve imaging efficiency compared with the direct time-domain algorithm (DTDA). Moreover, it represents the subimages on polar grids in the ground plane instead of the slant-range plane, and derives the sampling requirements considering motion errors for the polar grids to offer a near-optimum tradeoff between imaging precision and efficiency. First, the OS-BFSAR imaging geometry is built, and the DTDA for OS-BFSAR imaging is provided. Second, the polar grids of subimages are defined, and the subaperture imaging in the ETDA is derived. The sampling requirements for polar grids are derived from the point of view of the bandwidth. Finally, the implementation and computational load of the proposed ETDA are analyzed. Experimental results based on simulated and measured data validate that the proposed ETDA outperforms the DTDA in terms of efficiency improvement. PMID:27845757

  1. Validation of the Spanish Version of the Emotional Skills Assessment Process (ESAP) with College Students in Mexico

    ERIC Educational Resources Information Center

    Teliz Triujeque, Rosalia

    2009-01-01

    The major purpose of the study was to determine the construct validity of the Spanish version of the Emotional Skills Assessment Process (ESAP) in a targeted population of agriculture college students in Mexico. The ESAP is a self assessment approach that helps students to identify and understand emotional intelligence skills relevant for…

  2. Conceptual dissonance: evaluating the efficacy of natural language processing techniques for validating translational knowledge constructs.

    PubMed

    Payne, Philip R O; Kwok, Alan; Dhaval, Rakesh; Borlawsky, Tara B

    2009-03-01

    The conduct of large-scale translational studies presents significant challenges related to the storage, management and analysis of integrative data sets. Ideally, the application of methodologies such as conceptual knowledge discovery in databases (CKDD) provides a means for moving beyond intuitive hypothesis discovery and testing in such data sets, and towards the high-throughput generation and evaluation of knowledge-anchored relationships between complex bio-molecular and phenotypic variables. However, the induction of such high-throughput hypotheses is non-trivial, and requires correspondingly high-throughput validation methodologies. In this manuscript, we describe an evaluation of the efficacy of a natural language processing-based approach to validating such hypotheses. As part of this evaluation, we will examine a phenomenon that we have labeled as "Conceptual Dissonance" in which conceptual knowledge derived from two or more sources of comparable scope and granularity cannot be readily integrated or compared using conventional methods and automated tools.

  3. On the validation of a code and a turbulence model appropriate to circulation control airfoils

    NASA Technical Reports Server (NTRS)

    Viegas, J. R.; Rubesin, M. W.; Maccormack, R. W.

    1988-01-01

    A computer code for calculating flow about a circulation control airfoil within a wind tunnel test section has been developed. This code is being validated for eventual use as an aid to design such airfoils. The concept of code validation being used is explained. The initial stages of the process have been accomplished. The present code has been applied to a low-subsonic, 2-D flow about a circulation control airfoil for which extensive data exist. Two basic turbulence models and variants thereof have been successfully introduced into the algorithm, the Baldwin-Lomax algebraic and the Jones-Launder two-equation models of turbulence. The variants include adding a history of the jet development for the algebraic model and adding streamwise curvature effects for both models. Numerical difficulties and difficulties in the validation process are discussed. Turbulence model and code improvements to proceed with the validation process are also discussed.

  4. Directed Design of Experiments for Validating Probability of Detection Capability of a Testing System

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R. (Inventor)

    2012-01-01

    A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
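
    The patented directed-DOE procedure itself is not reproduced here, but hit/miss POD data of the kind described are conventionally summarized with a logistic model of detection probability versus flaw size. A minimal sketch under that standard assumption (illustrative data, not the invention's case-number logic):

```python
import numpy as np

def fit_pod_logistic(size, hit, n_iter=25):
    """Fit POD(a) = 1 / (1 + exp(-(b0 + b1*a))) to binary hit/miss data
    by Newton-Raphson on the logistic log-likelihood."""
    X = np.column_stack([np.ones_like(size), size])
    beta = np.zeros(2)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        grad = X.T @ (hit - p)
        hess = X.T @ (X * (p * (1.0 - p))[:, None])
        beta += np.linalg.solve(hess, grad)
    return beta

def pod_quantile(beta, q):
    """Flaw size at which the fitted POD curve reaches probability q."""
    return (np.log(q / (1.0 - q)) - beta[0]) / beta[1]

# Illustrative hit/miss observations versus flaw size (mm)
size = np.array([0.5, 0.8, 1.0, 1.2, 1.5, 1.8, 2.0, 2.5, 3.0, 3.5])
hit = np.array([0, 0, 0, 1, 0, 1, 1, 1, 1, 1], dtype=float)
beta = fit_pod_logistic(size, hit)
print(f"a50 = {pod_quantile(beta, 0.5):.2f} mm, a90 = {pod_quantile(beta, 0.9):.2f} mm")
```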

  5. Issues in cross-cultural validity: example from the adaptation, reliability, and validity testing of a Turkish version of the Stanford Health Assessment Questionnaire.

    PubMed

    Küçükdeveci, Ayse A; Sahin, Hülya; Ataman, Sebnem; Griffiths, Bridget; Tennant, Alan

    2004-02-15

    Guidelines have been established for cross-cultural adaptation of outcome measures. However, invariance across cultures must also be demonstrated through analysis of Differential Item Functioning (DIF). This is tested in the context of a Turkish adaptation of the Health Assessment Questionnaire (HAQ). Internal construct validity of the adapted HAQ is assessed by Rasch analysis; reliability, by internal consistency and the intraclass correlation coefficient; external construct validity, by association with impairments and American College of Rheumatology functional stages. Cross-cultural validity is tested through DIF by comparison with data from the UK version of the HAQ. The adapted version of the HAQ demonstrated good internal construct validity through fit of the data to the Rasch model (mean item fit 0.205; SD 0.998). Reliability was excellent (alpha = 0.97) and external construct validity was confirmed by expected associations. DIF for culture was found in only 1 item. Cross-cultural validity was found to be sufficient for use in international studies between the UK and Turkey. Future adaptation of instruments should include analysis of DIF at the field testing stage in the adaptation process.
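
    As a small illustration of the reliability side of such a study, Cronbach's alpha can be computed directly from an item-score matrix. The data below are simulated; a real HAQ validation (Rasch fit, DIF analysis) requires dedicated psychometric software.

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_var / total_var)

# Simulated data: 8 items driven by one latent disability level
rng = np.random.default_rng(0)
trait = rng.normal(size=200)
scores = trait[:, None] + rng.normal(scale=0.5, size=(200, 8))
print(f"alpha = {cronbach_alpha(scores):.2f}")   # high, as for the adapted HAQ
```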

  6. Validity evidence as a key marker of quality of technical skill assessment in OTL-HNS.

    PubMed

    Labbé, Mathilde; Young, Meredith; Nguyen, Lily H P

    2018-01-13

    Quality monitoring of assessment practices should be a priority in all residency programs. Validity evidence is one of the main hallmarks of assessment quality and should be collected to support the interpretation and use of assessment data. Our objective was to identify, synthesize, and present the validity evidence reported to support different technical skill assessment tools in otolaryngology-head and neck surgery (OTL-HNS). We performed a secondary analysis of data generated through a systematic review of all published tools for assessing technical skills in OTL-HNS (n = 16). For each tool, we coded validity evidence according to the five types of evidence described by the American Educational Research Association's interpretation of Messick's validity framework. Descriptive statistical analyses were conducted. All 16 tools included in our analysis were supported by internal-structure and relationship-to-variables validity evidence. Eleven articles presented evidence supporting content. Response process was discussed in only one article, and no study reported evidence exploring consequences. We present the validity evidence reported for 16 rater-based tools that could be used for work-based assessment of OTL-HNS residents in the operating room. The articles included in our review were consistently deficient in evidence for response process and consequences. Rater-based assessment tools that support high-stakes decisions impacting learners and programs should include several sources of validity evidence. Thus, any assessment should be used with careful consideration of the context-specific validity evidence supporting score interpretation, and we encourage deliberate, continual monitoring of assessment quality. NA. Laryngoscope, 2018. © 2018 The American Laryngological, Rhinological and Otological Society, Inc.

  7. Validity of juvenile idiopathic arthritis diagnoses using administrative health data.

    PubMed

    Stringer, Elizabeth; Bernatsky, Sasha

    2015-03-01

    Administrative health databases are valuable sources of data for conducting research including disease surveillance, outcomes research, and processes of health care at the population level. There has been limited use of administrative data to conduct studies of pediatric rheumatic conditions and no studies validating case definitions in Canada. We report a validation study of incident cases of juvenile idiopathic arthritis in the Canadian province of Nova Scotia. Cases identified through administrative data algorithms were compared to diagnoses in a clinical database. The sensitivity of algorithms that included pediatric rheumatology specialist claims was 81-86%. However, 35-48% of cases that were identified could not be verified in the clinical database depending on the algorithm used. Our case definitions would likely lead to overestimates of disease burden. Our findings may be related to issues pertaining to the non-fee-for-service remuneration model in Nova Scotia, in particular, systematic issues related to the process of submitting claims.

  8. Measuring adverse events in helicopter emergency medical services: establishing content validity.

    PubMed

    Patterson, P Daniel; Lave, Judith R; Martin-Gill, Christian; Weaver, Matthew D; Wadas, Richard J; Arnold, Robert M; Roth, Ronald N; Mosesso, Vincent N; Guyette, Francis X; Rittenberger, Jon C; Yealy, Donald M

    2014-01-01

    We sought to create a valid framework for detecting adverse events (AEs) in the high-risk setting of helicopter emergency medical services (HEMS). We assembled a panel of 10 expert clinicians (n = 6 emergency medicine physicians and n = 4 prehospital nurses and flight paramedics) affiliated with a large multistate HEMS organization in the Northeast US. We used a modified Delphi technique to develop a framework for detecting AEs associated with the treatment of critically ill or injured patients. We used a widely applied measure, the content validity index (CVI), to quantify the validity of the framework's content. The expert panel of 10 clinicians reached consensus on a common AE definition and four-step protocol/process for AE detection in HEMS. The consensus-based framework is composed of three main components: (1) a trigger tool, (2) a method for rating proximal cause, and (3) a method for rating AE severity. The CVI findings isolate components of the framework considered content valid. We demonstrate a standardized process for the development of a content-valid framework for AE detection. The framework is a model for the development of a method for AE identification in other settings, including ground-based EMS.
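
    The content validity index used here is commonly operationalized as the proportion of experts rating an item 3 or 4 on a 4-point relevance scale (item-level CVI), averaged across items for a scale-level value. A sketch with hypothetical ratings:

```python
import numpy as np

def content_validity_index(ratings):
    """Item-level CVI (I-CVI): proportion of experts rating an item 3 or 4
    on a 4-point relevance scale; S-CVI/Ave is the mean across items."""
    relevant = np.asarray(ratings) >= 3
    i_cvi = relevant.mean(axis=0)
    return i_cvi, i_cvi.mean()

# 10 experts rating 5 candidate trigger-tool items (hypothetical ratings)
ratings = np.array([[4, 4, 3, 2, 4], [4, 3, 4, 3, 4], [3, 4, 4, 2, 3],
                    [4, 4, 3, 3, 4], [4, 3, 4, 2, 4], [3, 4, 4, 3, 3],
                    [4, 4, 3, 2, 4], [4, 4, 4, 3, 4], [3, 3, 4, 2, 4],
                    [4, 4, 4, 3, 4]])
i_cvi, s_cvi = content_validity_index(ratings)
print("I-CVI per item:", i_cvi, "| S-CVI/Ave:", round(s_cvi, 2))
```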

  9. High liquid yield process for retorting various organic materials including oil shale

    DOEpatents

    Coburn, Thomas T.

    1990-01-01

    This invention is a continuous retorting process for various high molecular weight organic materials, including oil shale, that yields an enhanced output of liquid product. The organic material, mineral matter, and an acidic catalyst, that appreciably adsorbs alkenes on surface sites at prescribed temperatures, are mixed and introduced into a pyrolyzer. A circulating stream of olefin enriched pyrolysis gas is continuously swept through the organic material and catalyst, whereupon, as the result of pyrolysis, the enhanced liquid product output is provided. Mixed spent organic material, mineral matter, and cool catalyst are continuously withdrawn from the pyrolyzer. Combustion of the spent organic material and mineral matter serves to reheat the catalyst. Olefin depleted pyrolysis gas, from the pyrolyzer, is enriched in olefins and recycled into the pyrolyzer. The reheated acidic catalyst is separated from the mineral matter and again mixed with fresh organic material, to maintain the continuously cyclic process.

  10. Predictive modeling of infrared radiative heating in tomato dry-peeling process: Part II. Model validation and sensitivity analysis

    USDA-ARS?s Scientific Manuscript database

    A predictive mathematical model was developed to simulate heat transfer in a tomato undergoing double sided infrared (IR) heating in a dry-peeling process. The aims of this study were to validate the developed model using experimental data and to investigate different engineering parameters that mos...

  11. Validation of the Chinese expanded Euthanasia Attitude Scale.

    PubMed

    Chong, Alice Ming-Lin; Fok, Shiu-Yeu

    2013-01-01

    This article reports the validation of the Chinese version of an expanded 31-item Euthanasia Attitude Scale. A 4-stage validation process included a pilot survey of 119 college students and a randomized household survey with 618 adults in Hong Kong. Confirmatory factor analysis confirmed a 4-factor structure of the scale, which can therefore be used to examine attitudes toward general, active, passive, and non-voluntary euthanasia. The scale considers the role effect in decision-making about euthanasia requests and facilitates cross-cultural comparison of attitudes toward euthanasia. The new Chinese scale is conceptually and psychometrically more robust than its Western predecessors.

  12. Rater Cognition: Implications for Validity

    ERIC Educational Resources Information Center

    Bejar, Issac I.

    2012-01-01

    The scoring process is critical in the validation of tests that rely on constructed responses. Documenting that readers carry out the scoring in ways consistent with the construct and measurement goals is an important aspect of score validity. In this article, rater cognition is approached as a source of support for a validity argument for scores…

  13. 77 FR 56870 - New Process Gear, a Division of Magna Powertrain, Including On-Site Leased Workers From ABM...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-09-14

    ... Process Gear, a division of Magna Powertrain, including on-site leased workers from ABM Janitorial Service... DEPARTMENT OF LABOR Employment and Training Administration [TA-W-74,940] New Process Gear, a Division of Magna Powertrain, Including On-Site Leased Workers From ABM Janitorial Service Northeast, Inc...

  14. Critical review of the validity of patient satisfaction questionnaires pertaining to oral health care.

    PubMed

    Nair, Rahul; Ishaque, Sana; Spencer, Andrew John; Luzzi, Liana; Do, Loc Giang

    2018-03-30

    This study reviewed the validation process reported for oral healthcare satisfaction scales intended to measure general oral health care not restricted to specific subspecialties or interventions. After preliminary searches, PUBMED and EMBASE were searched using a broad search strategy, followed by a snowball strategy using the references of the publications included from database searches. Titles and abstracts were screened for inclusion, followed by a full-text screening of these publications. English-language publications on multi-item questionnaires reporting on a scale measuring patient satisfaction with oral health care were included. Publications were excluded when they did not report on any psychometric validation, or when the scales addressed specific treatments or subspecialties in oral health care. Fourteen instruments were identified from as many publications reporting their initial validation, while five more publications reported further testing of the validity of these instruments. The number of items (range: 8-42) and dimensions reported (range: 2-13) were often dissimilar between the assessed measurement instruments. There was also a lack of methodologies to incorporate the patient's subjective perspective. Along with limited reporting of the psychometric properties of instruments, cross-cultural adaptations were limited to translation processes. The extent of validity and reliability of the included instruments was largely unassessed, and appropriate instruments for populations outside of general adult populations were not available. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  15. Development and Validation of an Index to Measure the Quality of Facility-Based Labor and Delivery Care Processes in Sub-Saharan Africa

    PubMed Central

    Tripathi, Vandana; Stanton, Cynthia; Strobino, Donna; Bartlett, Linda

    2015-01-01

    Background High quality care is crucial in ensuring that women and newborns receive interventions that may prevent and treat birth-related complications. As facility deliveries increase in developing countries, there are concerns about service quality. Observation is the gold standard for clinical quality assessment, but existing observation-based measures of obstetric quality of care are lengthy and difficult to administer. There is a lack of consensus on quality indicators for routine intrapartum and immediate postpartum care, including essential newborn care. This study identified key dimensions of the quality of the process of intrapartum and immediate postpartum care (QoPIIPC) in facility deliveries and developed a quality assessment measure representing these dimensions. Methods and Findings Global maternal and neonatal care experts identified key dimensions of QoPIIPC through a modified Delphi process. Experts also rated indicators of these dimensions from a comprehensive delivery observation checklist used in quality surveys in sub-Saharan African countries. Potential QoPIIPC indices were developed from combinations of highly-rated indicators. Face, content, and criterion validation of these indices was conducted using data from observations of 1,145 deliveries in Kenya, Madagascar, and Tanzania (including Zanzibar). A best-performing index was selected, composed of 20 indicators of intrapartum/immediate postpartum care, including essential newborn care. This index represented most dimensions of QoPIIPC and effectively discriminated between poorly and well-performed deliveries. Conclusions As facility deliveries increase and the global community pays greater attention to the role of care quality in achieving further maternal and newborn mortality reduction, the QoPIIPC index may be a valuable measure. This index complements and addresses gaps in currently used quality assessment tools. Further evaluation of index usability and reliability is needed. The

  16. Modeling and validation of heat and mass transfer in individual coffee beans during the coffee roasting process using computational fluid dynamics (CFD).

    PubMed

    Alonso-Torres, Beatriz; Hernández-Pérez, José Alfredo; Sierra-Espinoza, Fernando; Schenker, Stefan; Yeretzian, Chahan

    2013-01-01

    Heat and mass transfer in individual coffee beans during roasting were simulated using computational fluid dynamics (CFD). Numerical equations for heat and mass transfer inside the coffee bean were solved using the finite volume technique in the commercial CFD code Fluent; the software was complemented with specific user-defined functions (UDFs). To experimentally validate the numerical model, a single coffee bean was placed in a cylindrical glass tube and roasted by a hot air flow, using the identical geometrical 3D configuration and hot air flow conditions as the ones used for numerical simulations. Temperature and humidity calculations obtained with the model were compared with experimental data. The model predicts the actual process quite accurately and represents a useful approach to monitor the coffee roasting process in real time. It provides valuable information on time-resolved process variables that are otherwise difficult to obtain experimentally, but critical to a better understanding of the coffee roasting process at the individual bean level. This includes variables such as time-resolved 3D profiles of bean temperature and moisture content, and temperature profiles of the roasting air in the vicinity of the coffee bean.
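
    The paper's full CFD model (Fluent plus user-defined functions) is not reproduced here, but the core heat-transfer idea can be illustrated with an explicit finite-volume solution of transient conduction in a sphere heated by convection at the surface. All property values below are rough placeholders and moisture transport is ignored.

```python
import numpy as np

def roast_bean_sphere(radius=3e-3, alpha=1.4e-7, k=0.2, h=60.0,
                      t_air=230.0, t0=25.0, n=50, t_end=120.0):
    """Explicit finite-volume solution of transient heat conduction in a
    sphere with convective heating at the surface (a crude bean stand-in).
    Units: m, m^2/s, W/(m K), W/(m^2 K), degC, s."""
    dr = radius / n
    rf = np.arange(n + 1) * dr                    # cell faces
    vol = (rf[1:]**3 - rf[:-1]**3) / 3.0          # cell volumes / (4*pi)
    T = np.full(n, t0)
    dt = 0.2 * dr**2 / alpha                      # explicit stability margin
    for _ in range(int(t_end / dt)):
        flux = np.zeros(n + 1)                    # conductive flux / (4*pi*rho*c)
        flux[1:-1] = alpha * rf[1:-1]**2 * np.diff(T) / dr
        # Robin (convective) boundary condition at the bean surface
        flux[-1] = (alpha * h / k) * rf[-1]**2 * (t_air - T[-1])
        T = T + dt * (flux[1:] - flux[:-1]) / vol
    return T

T = roast_bean_sphere()
print(f"surface {T[-1]:.0f} degC, centre {T[0]:.0f} degC after 120 s")
```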

  17. 49 CFR 107.709 - Processing of an application for approval, including an application for renewal or modification.

    Code of Federal Regulations, 2010 CFR

    2010-10-01

    ... 49 Transportation 2 2010-10-01 2010-10-01 false Processing of an application for approval..., Registrations and Submissions § 107.709 Processing of an application for approval, including an application for... before the disposition of an application. (b) At any time during the processing of an application, the...

  18. SeaSat-A Satellite Scatterometer (SASS) Validation and Experiment Plan

    NASA Technical Reports Server (NTRS)

    Schroeder, L. C. (Editor)

    1978-01-01

    This plan was generated by the SeaSat-A satellite scatterometer experiment team to define the pre- and post-launch activities necessary to conduct sensor validation and geophysical evaluation. Details included are an instrument and experiment description, performance requirements, success criteria, constraints, mission requirements, data processing requirements, and data analysis responsibilities.

  19. The GPM Ground Validation Program: Pre to Post-Launch

    NASA Astrophysics Data System (ADS)

    Petersen, W. A.

    2014-12-01

    NASA GPM Ground Validation (GV) activities have transitioned from the pre to post-launch era. Prior to launch direct validation networks and associated partner institutions were identified world-wide, covering a plethora of precipitation regimes. In the U.S. direct GV efforts focused on use of new operational products such as the NOAA Multi-Radar Multi-Sensor suite (MRMS) for TRMM validation and GPM radiometer algorithm database development. In the post-launch, MRMS products including precipitation rate, types and data quality are being routinely generated to facilitate statistical GV of instantaneous and merged GPM products. To assess precipitation column impacts on product uncertainties, range-gate to pixel-level validation of both Dual-Frequency Precipitation Radar (DPR) and GPM microwave imager data are performed using GPM Validation Network (VN) ground radar and satellite data processing software. VN software ingests quality-controlled volumetric radar datasets and geo-matches those data to coincident DPR and radiometer level-II data. When combined MRMS and VN datasets enable more comprehensive interpretation of ground-satellite estimation uncertainties. To support physical validation efforts eight (one) field campaigns have been conducted in the pre (post) launch era. The campaigns span regimes from northern latitude cold-season snow to warm tropical rain. Most recently the Integrated Precipitation and Hydrology Experiment (IPHEx) took place in the mountains of North Carolina and involved combined airborne and ground-based measurements of orographic precipitation and hydrologic processes underneath the GPM Core satellite. One more U.S. GV field campaign (OLYMPEX) is planned for late 2015 and will address cold-season precipitation estimation, process and hydrology in the orographic and oceanic domains of western Washington State. Finally, continuous direct and physical validation measurements are also being conducted at the NASA Wallops Flight Facility multi

  20. Cleaning and other control and validation strategies to prevent allergen cross-contact in food-processing operations.

    PubMed

    Jackson, Lauren S; Al-Taher, Fadwa M; Moorman, Mark; DeVries, Jonathan W; Tippett, Roger; Swanson, Katherine M J; Fu, Tong-Jen; Salter, Robert; Dunaif, George; Estes, Susan; Albillos, Silvia; Gendel, Steven M

    2008-02-01

    Food allergies affect an estimated 10 to 12 million people in the United States. Some of these individuals can develop life-threatening allergic reactions when exposed to allergenic proteins. At present, the only successful method to manage food allergies is to avoid foods containing allergens. Consumers with food allergies rely on food labels to disclose the presence of allergenic ingredients. However, undeclared allergens can be inadvertently introduced into a food via cross-contact during manufacturing. Although allergen removal through cleaning of shared equipment or processing lines has been identified as one of the critical points for effective allergen control, there is little published information on the effectiveness of cleaning procedures for removing allergenic materials from processing equipment. There also is no consensus on how to validate or verify the efficacy of cleaning procedures. The objectives of this review were (i) to study the incidence and cause of allergen cross-contact, (ii) to assess the science upon which the cleaning of food contact surfaces is based, (iii) to identify best practices for cleaning allergenic foods from food contact surfaces in wet and dry manufacturing environments, and (iv) to present best practices for validating and verifying the efficacy of allergen cleaning protocols.

  1. Upper Atmosphere Research Satellite Validation Workshop III: Temperature and Constituents Validation

    NASA Technical Reports Server (NTRS)

    Grose, William L. (Editor); Gille, John (Editor)

    1995-01-01

    The Upper Atmosphere Research Satellite (UARS) was launched in September 1991. Since that time data have been retrieved continuously from the various instruments on the UARS spacecraft. These data have been processed by the respective instrument science teams and subsequently archived in the UARS Central Data Handling Facility (CDHF) at the NASA Goddard Space Flight Center, Greenbelt, Maryland. This report contains the proceedings from one of the three workshops held to evaluate the progress in validating UARS constituents and temperature data and to document the quality of that data. The first workshop was held in Oxford, England, in March 1992, five and one-half months after UARS launch. The second workshop was held in Boulder, Colorado in October 1992. Since launch, the various data have undergone numerous revisions. In many instances these revisions are a result of data problems identified during the validation workshops. Thus, the formal validation effort is a continually ongoing process.

  2. The Validity of Functional Near-Infrared Spectroscopy Recordings of Visuospatial Working Memory Processes in Humans.

    PubMed

    Witmer, Joëlle S; Aeschlimann, Eva A; Metz, Andreas J; Troche, Stefan J; Rammsayer, Thomas H

    2018-04-05

    Functional near infrared spectroscopy (fNIRS) is increasingly used for investigating cognitive processes. To provide converging evidence for the validity of fNIRS recordings in cognitive neuroscience, we investigated functional activation in the frontal cortex in 43 participants during the processing of a visuospatial working memory (WM) task and a sensory duration discrimination (DD) task functionally unrelated to WM. To distinguish WM-related processes from a general effect of increased task demand, we applied an adaptive approach, which ensured that subjective task demand was virtually identical for all individuals and across both tasks. Our specified region of interest covered Brodmann Area 8 of the left hemisphere, known for its important role in the execution of WM processes. Functional activation, as indicated by an increase of oxygenated and a decrease of deoxygenated hemoglobin, was shown for the WM task, but not in the DD task. The overall pattern of results indicated that hemodynamic responses recorded by fNIRS are sensitive to specific visuospatial WM capacity-related processes and do not reflect a general effect of increased task demand. In addition, the finding that no such functional activation could be shown for participants with far above-average mental ability suggested different cognitive processes adopted by this latter group.

  3. The Validity of Functional Near-Infrared Spectroscopy Recordings of Visuospatial Working Memory Processes in Humans

    PubMed Central

    Witmer, Joëlle S.; Aeschlimann, Eva A.; Metz, Andreas J.; Rammsayer, Thomas H.

    2018-01-01

    Functional near infrared spectroscopy (fNIRS) is increasingly used for investigating cognitive processes. To provide converging evidence for the validity of fNIRS recordings in cognitive neuroscience, we investigated functional activation in the frontal cortex in 43 participants during the processing of a visuospatial working memory (WM) task and a sensory duration discrimination (DD) task functionally unrelated to WM. To distinguish WM-related processes from a general effect of increased task demand, we applied an adaptive approach, which ensured that subjective task demand was virtually identical for all individuals and across both tasks. Our specified region of interest covered Brodmann Area 8 of the left hemisphere, known for its important role in the execution of WM processes. Functional activation, as indicated by an increase of oxygenated and a decrease of deoxygenated hemoglobin, was shown for the WM task, but not in the DD task. The overall pattern of results indicated that hemodynamic responses recorded by fNIRS are sensitive to specific visuospatial WM capacity-related processes and do not reflect a general effect of increased task demand. In addition, the finding that no such functional activation could be shown for participants with far above-average mental ability suggested different cognitive processes adopted by this latter group. PMID:29621179

  4. [Assessment of the validity and reliability of the processes of change scale based on the transtheoretical model of vegetable consumption behavior in Japanese male workers].

    PubMed

    Kushida, Osamu; Murayama, Nobuko

    2012-12-01

    A core construct of the Transtheoretical model is that the processes and stages of change are strongly related to observable behavioral changes. We created the Processes of Change Scale of vegetable consumption behavior and examined the validity and reliability of this scale. In September 2009, a self-administered questionnaire was administered to male Japanese employees, aged 20-59 years, working at 20 worksites in Niigata City in Japan. The stages of change (precontempration, contemplation, preparation, action, and maintenance stage) were measured using 2 items that assessed participants' current implementation of the target behavior (eating 5 or more servings of vegetables per day) and their readiness to change their habits. The Processes of Change Scale of vegetable consumption behavior comprised 10 items assessing 5 cognitive processes (consciousness raising, emotional arousal, environmental reevaluation, self-reevaluation, and social liberation) and 5 behavioral processes (commitment, rewards, helping relationships, countering, and environment control). Each item was selected from an existing scale. Decisional balance (pros [2 items] and cons [2 items]), and self-efficacy (3 items) were also assessed, because these constructs were considered to be relevant to the processes of change. The internal consistency reliability of the scale was examined using Cronbach's alpha. Its construct validity was examined using a factor analysis of the processes of change, decisional balance, and self-efficacy variables, while its criterion-related validity was determined by assessing the association between the scale scores and the stages of change. The data of 527 (out of 600) participants (mean age, 41.1 years) were analyzed. Results indicated that the Processes of Change Scale had sufficient internal consistency reliability (Cronbach's alpha: cognitive processes=0.722, behavioral processes=0.803). The processes of change were divided into 2 factors: "consciousness raising

  5. Construction and Validation of a Holistic Education School Evaluation Tool Using Montessori Erdkinder Principles

    ERIC Educational Resources Information Center

    Setari, Anthony Philip

    2016-01-01

    The purpose of this study was to construct a holistic education school evaluation tool using Montessori Erdkinder principles, and begin the validation process of examining the proposed tool. This study addresses a vital need in the holistic education community for a school evaluation tool. The tool construction process included using Erdkinder…

  6. Multibody dynamical modeling for spacecraft docking process with spring-damper buffering device: A new validation approach

    NASA Astrophysics Data System (ADS)

    Daneshjou, Kamran; Alibakhshi, Reza

    2018-01-01

    In the current manuscript, the process of spacecraft docking, one of the main risky operations in an on-orbit servicing mission, is modeled based on unconstrained multibody dynamics. The spring-damper buffering device is utilized here in the docking probe-cone system for micro-satellites. Because impact inevitably occurs during the docking process and the motion characteristics of multibody systems are remarkably affected by this phenomenon, a continuous contact force model needs to be considered. The spring-damper buffering device, keeping the spacecraft stable in orbit when impact occurs, connects a base (cylinder) inserted in the chaser satellite and the end of the docking probe. Furthermore, by considering a revolute joint equipped with a torsional shock absorber between the base and the chaser satellite, the docking probe can experience both translational and rotational motions simultaneously. Although the spacecraft docking process accompanied by the buffering mechanisms may be modeled by constrained multibody dynamics, this paper deals with a simple and efficient formulation that eliminates the surplus generalized coordinates and solves the impact docking problem based on unconstrained Lagrangian mechanics. In an example problem, model verification is first accomplished by comparing the computed results with those recently reported in the literature. Second, according to a new alternative validation approach based on the constrained multibody problem, the accuracy of the presented model can also be evaluated. This proposed verification approach can be applied to indirectly solve constrained multibody problems with minimum effort. The time history of the impact force, the influence of system flexibility, and the physical interaction between the shock absorber and the penetration depth caused by impact are the issues followed in this paper. Third, the MATLAB/SIMULINK multibody dynamic analysis software will be applied to build the impact docking model to validate computed results and
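
    Continuous contact force models of the kind invoked here typically combine a Hertz-type spring with a damper that acts only during interpenetration. A toy one-degree-of-freedom sketch follows; the stiffness, damping, and mass values are illustrative, not the paper's probe-cone model.

```python
def contact_force(delta, delta_dot, k=2.0e6, c=1.5e3, n=1.5):
    """Continuous contact force: Hertz-type spring plus damper, active only
    while the bodies interpenetrate (delta > 0). Hunt-Crossley variants
    scale the damping with delta**n to avoid tensile force at separation."""
    if delta <= 0.0:
        return 0.0
    return k * delta**n + c * delta_dot

# Toy 1-DOF probe hitting a buffer: m * x'' = -F_contact (semi-implicit Euler)
m, x, v, dt = 50.0, -0.01, 0.3, 1e-5      # kg, m, m/s, s
for _ in range(20000):
    f = contact_force(x, v)               # penetration depth is x once x > 0
    v += dt * (-f / m)
    x += dt * v
print(f"velocity after rebound: {v:.3f} m/s (energy lost in the damper)")
```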

  7. Development and validation of a notational system to study the offensive process in football.

    PubMed

    Sarmento, Hugo; Anguera, Teresa; Campaniço, Jorge; Leitão, José

    2010-01-01

    The most striking change within football development is the application of science to its problems and in particular the use of increasingly sophisticated technology that, supported by scientific data, allows us to establish a "code of reading" the reality of the game. Therefore, this study describes the process of the development and validation of an ad hoc system of categorization, which allows the different methods of offensive game in football and the interaction to be analyzed. Therefore, through an exploratory phase of the study, we identified 10 vertebrate criteria and the respective behaviors observed for each of these criteria. We heard a panel of five experts with the purpose of a content validation. The resulting instrument is characterized by a combination of field formats and systems of categories. The reliability of the instrument was calculated by the intraobserver agreement, and values above 0.95 for all criteria were achieved. Two FC Barcelona games were coded and analyzed, which allowed the detection of various T-patterns. The results show that the instrument serves the purpose for which it was developed and can provide important information for the understanding of game interaction in football.

  8. Land Ice Verification and Validation Kit

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    2015-07-15

    To address a pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice-sheet models is underway. The associated verification and validation process of these models is being coordinated through a new, robust, python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVV). This release provides robust and automated verification and a performance evaluation on LCF platforms. The performance V&V involves a comprehensive comparison of model performance relative to expected behavior on a given computing platform. LIVV operates on a set of benchmark and test data, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-4-bit evaluation, and plots of tests where differences occur.
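
    In the spirit of LIVV's bit-4-bit evaluation, a minimal verification check can hash corresponding output files from a benchmark run and a test run. This sketch uses only the Python standard library and is not LIVV's actual API; the directory names are hypothetical.

```python
import hashlib
from pathlib import Path

def bit_for_bit(dir_a, dir_b, pattern="*.nc"):
    """Hash corresponding files in two output directories and report
    whether each pair is identical bit for bit."""
    def digest(p):
        return hashlib.sha256(p.read_bytes()).hexdigest()
    results = {}
    for f in sorted(Path(dir_a).glob(pattern)):
        twin = Path(dir_b) / f.name
        results[f.name] = twin.exists() and digest(f) == digest(twin)
    return results

for name, same in bit_for_bit("bench/run0", "test/run0").items():
    print("PASS" if same else "FAIL", name)
```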

  9. Measuring Adverse Events in Helicopter Emergency Medical Services: Establishing Content Validity

    PubMed Central

    Patterson, P. Daniel; Lave, Judith R.; Martin-Gill, Christian; Weaver, Matthew D.; Wadas, Richard J.; Arnold, Robert M.; Roth, Ronald N.; Mosesso, Vincent N.; Guyette, Francis X.; Rittenberger, Jon C.; Yealy, Donald M.

    2015-01-01

    Introduction We sought to create a valid framework for detecting Adverse Events (AEs) in the high-risk setting of Helicopter Emergency Medical Services (HEMS). Methods We assembled a panel of 10 expert clinicians (n=6 emergency medicine physicians and n=4 prehospital nurses and flight paramedics) affiliated with a large multi-state HEMS organization in the Northeast U.S. We used a modified Delphi technique to develop a framework for detecting AEs associated with the treatment of critically ill or injured patients. We used a widely applied measure, the Content Validity Index (CVI), to quantify the validity of the framework’s content. Results The expert panel of 10 clinicians reached consensus on a common AE definition and four-step protocol/process for AE detection in HEMS. The consensus-based framework is composed of three main components: 1) a trigger tool, 2) a method for rating proximal cause, and 3) a method for rating AE severity. The CVI findings isolate components of the framework considered content valid. Conclusions We demonstrate a standardized process for the development of a content valid framework for AE detection. The framework is a model for the development of a method for AE identification in other settings, including ground-based EMS. PMID:24003951

  10. Validation of Contamination Control in Rapid Transfer Port Chambers for Pharmaceutical Manufacturing Processes.

    PubMed

    Hu, Shih-Cheng; Shiue, Angus; Liu, Han-Yang; Chiu, Rong-Ben

    2016-11-12

    There is worldwide concern with regard to the adverse effects of drug usage. However, contaminants can gain entry into a drug manufacturing process stream from several sources, such as personnel, poor facility design, incoming ventilation air, and machinery and other production equipment. In this validation study, we aimed to evaluate contamination control in the preparation areas of the rapid transfer port (RTP) chamber during pharmaceutical manufacturing processes. The RTP chamber is normally tested for airflow velocity, particle counts, pressure decay of leakage, and sterility. The air flow balance of the RTP chamber is affected by the airflow quantity and the height above the platform. It is relatively easy to evaluate the RTP chamber's leakage by pressure decay: the system is charged with air and closed, and the decay of pressure is measured over a set time period. We also determined the vaporized H₂O₂ concentration sufficient to achieve complete decontamination. The improved performance of the RTP chamber enhances safety, and the chamber can be completely tested in an ISO Class 5 environment.
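
    A pressure-decay leak check of the kind described reduces to converting the pressure drop over a hold period into an equivalent standard-volume leak rate. A simplified isothermal, ideal-gas sketch follows; the example numbers are illustrative, and acceptance limits come from the chamber specification.

```python
def pressure_decay_leak_rate(p_start, p_end, hold_s, volume_l, p_std=101.325):
    """Equivalent leak rate (standard cc/min) from a pressure-decay test.

    p_start, p_end : absolute chamber pressure [kPa] at start/end of hold
    hold_s         : hold period [s]; volume_l : net chamber volume [L]
    Assumes isothermal conditions and ideal-gas behaviour."""
    leaked_std_litres = (p_start - p_end) * volume_l / p_std
    return leaked_std_litres * 1000.0 / (hold_s / 60.0)

# Example: a 40 L chamber loses 0.2 kPa from 110 kPa abs over a 10 min hold
print(f"{pressure_decay_leak_rate(110.0, 109.8, 600.0, 40.0):.1f} sccm")
```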

  11. Reliability and validity: Part II.

    PubMed

    Davis, Debora Winders

    2004-01-01

    Determining measurement reliability and validity involves complex processes. There is usually room for argument about most instruments. It is important that the researcher clearly describes the processes upon which she based the decision to use a particular instrument, and presents the available evidence showing that the instrument is reliable and valid for the current purposes. In some cases, the researcher may need to conduct pilot studies to obtain evidence upon which to decide whether the instrument is valid for a new population or a different setting. In all cases, the researcher must present a clear and complete explanation for the choices she has made regarding reliability and validity. The consumer must then judge the degree to which the researcher has provided adequate and theoretically sound rationale. Although I have tried to touch on most of the important concepts related to measurement reliability and validity, it is beyond the scope of this column to be exhaustive. There are textbooks devoted entirely to specific measurement issues if readers require more in-depth knowledge.

  12. Generalized fluid theory including non-Maxwellian kinetic effects

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Izacard, Olivier

    The results obtained by the plasma physics community for the validation and the prediction of turbulence and transport in magnetized plasmas come mainly from the use of very central processing unit (CPU)-consuming particle-in-cell or (gyro)kinetic codes which naturally include non-Maxwellian kinetic effects. To date, fluid codes are not considered to be relevant for the description of these kinetic effects. Here, after revisiting the limitations of the current fluid theory developed in the 19th century, we generalize the fluid theory including kinetic effects such as non-Maxwellian super-thermal tails with as few fluid equations as possible. The collisionless and collisional fluid closures from the nonlinear Landau Fokker–Planck collision operator are shown for an arbitrary collisionality. Indeed, the first fluid models associated with two examples of collisionless fluid closures are obtained by assuming an analytic non-Maxwellian distribution function. One of the main differences with the literature is our analytic representation of the distribution function in the velocity phase space with as few hidden variables as possible thanks to the use of non-orthogonal basis sets. These new non-Maxwellian fluid equations could initiate the next generation of fluid codes including kinetic effects and can be expanded to other scientific disciplines such as astrophysics, condensed matter or hydrodynamics. As a validation test, we perform a numerical simulation based on a minimal reduced INMDF fluid model. The result of this test is the discovery of the origin of particle and heat diffusion. The diffusion is due to the competition between a growing INMDF on short time scales due to spatial gradients and the thermalization on longer time scales. The results shown here could provide the insights needed to break some of the unsolved puzzles of turbulence.

  13. Generalized fluid theory including non-Maxwellian kinetic effects

    DOE PAGES

    Izacard, Olivier

    2017-03-29

    The results obtained by the plasma physics community for the validation and the prediction of turbulence and transport in magnetized plasmas come mainly from the use of very central processing unit (CPU)-consuming particle-in-cell or (gyro)kinetic codes which naturally include non-Maxwellian kinetic effects. To date, fluid codes are not considered to be relevant for the description of these kinetic effects. Here, after revisiting the limitations of the current fluid theory developed in the 19th century, we generalize the fluid theory including kinetic effects such as non-Maxwellian super-thermal tails with as few fluid equations as possible. The collisionless and collisional fluid closures from the nonlinear Landau Fokker–Planck collision operator are shown for an arbitrary collisionality. Indeed, the first fluid models associated with two examples of collisionless fluid closures are obtained by assuming an analytic non-Maxwellian distribution function. One of the main differences with the literature is our analytic representation of the distribution function in the velocity phase space with as few hidden variables as possible thanks to the use of non-orthogonal basis sets. These new non-Maxwellian fluid equations could initiate the next generation of fluid codes including kinetic effects and can be expanded to other scientific disciplines such as astrophysics, condensed matter or hydrodynamics. As a validation test, we perform a numerical simulation based on a minimal reduced INMDF fluid model. The result of this test is the discovery of the origin of particle and heat diffusion. The diffusion is due to the competition between a growing INMDF on short time scales due to spatial gradients and the thermalization on longer time scales. The results shown here could provide the insights needed to break some of the unsolved puzzles of turbulence.

  14. Psychometric properties including reliability, validity and responsiveness of the Majeed pelvic score in patients with chronic sacroiliac joint pain.

    PubMed

    Bajada, Stefan; Mohanty, Khitish

    2016-06-01

    The Majeed scoring system is a disease-specific outcome measure that was originally designed to assess pelvic injuries. The aim of this study was to determine the psychometric properties of the Majeed scoring system for chronic sacroiliac joint pain. Internal consistency, content validity, criterion validity, construct validity and responsiveness to change was assessed prospectively for the Majeed scoring system in a cohort of 60 patients diagnosed with sacroiliac joint pain. This diagnosis was confirmed with CT-guided sacroiliac joint anaesthetic block. The overall Majeed score showed acceptable internal consistency (Cronbach alpha = 0.63). Similarly, it showed acceptable floor (0 %) and ceiling (0 %) effects. On the other hand, the domains of pain, work, sitting and sexual intercourse had high (>30 %) floor effects. Significant correlation with the physical component of the Short Form-36 (p = 0.005) and Oswestry disability index (p ≤ 0.001) was found indicating acceptable criterion validity. The overall Majeed score showed acceptable construct validity with all five developed hypotheses showing significance (p ≤ 0.05). The overall Majeed score showed acceptable responsiveness to change with a large (≥0.80) effect size and standardized response mean. Overall the Majeed scoring system demonstrated acceptable psychometric properties for outcome assessment in chronic sacroiliac joint pain. Thus, its use in this condition is adequate. However, some domains demonstrated suboptimal performance indicating that improvement might be achieved with the development of an outcome measure specific for sacroiliac joint dysfunction and degeneration.
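
    Two of the psychometric quantities reported here, floor/ceiling effects and the standardized response mean, are simple to compute from raw scores. A sketch with hypothetical domain scores (the thresholds follow common rules of thumb, not this paper's):

```python
import numpy as np

def floor_ceiling(scores, lo, hi):
    """Percentage of respondents at the minimum (floor) and maximum
    (ceiling) possible scores; >30% is often flagged as problematic."""
    scores = np.asarray(scores, dtype=float)
    return 100.0 * (scores == lo).mean(), 100.0 * (scores == hi).mean()

def standardized_response_mean(pre, post):
    """SRM = mean change / SD of change; >=0.80 counts as a large response."""
    change = np.asarray(post, dtype=float) - np.asarray(pre, dtype=float)
    return change.mean() / change.std(ddof=1)

# Hypothetical 0-30 pain-domain scores showing a pronounced floor effect
pain = np.array([0, 5, 0, 10, 0, 0, 15, 0, 20, 0])
f, c = floor_ceiling(pain, 0, 30)
print(f"floor {f:.0f}%, ceiling {c:.0f}%")
```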

  15. The validation of the Supervision of Thesis Questionnaire (STQ).

    PubMed

    Henricson, Maria; Fridlund, Bengt; Mårtensson, Jan; Hedberg, Berith

    2018-06-01

    The supervision process is characterized by differences between the supervisors' and the students' expectations before the start of writing a bachelor thesis as well as after its completion. A review of the literature did not reveal any scientifically tested questionnaire for evaluating nursing students' expectations of the supervision process when writing a bachelor thesis. The aim of the study was to determine the construct validity and internal consistency reliability of a questionnaire for measuring nursing students' expectations of the bachelor thesis supervision process. The study had a developmental and methodological design carried out in four steps including construct validity and internal consistency reliability statistical procedures: construction of the items, assessment of face validity, data collection and data analysis. This study was conducted at a university in southern Sweden, where students on the "Nursing student thesis, 15 ECTS" course were consecutively selected for participation. Of the 512 questionnaires distributed, 327 were returned, a response rate of 64%. Five factors with a total variance of 74% and good communalities, ≥0.64, were extracted from the 10-item STQ. The internal consistency of the 10 items was 0.68. The five factors were labelled: The nature of the supervision process, The supervisor's role as a coach, The students' progression to self-support, The interaction between students and supervisor and supervisor competence. A didactic, useful and secure questionnaire measuring nursing students' expectations of the bachelor thesis supervision process based on three main forms of supervision was created. Copyright © 2018 Elsevier Ltd. All rights reserved.

  16. Validation of sterilizing grade filtration.

    PubMed

    Jornitz, M W; Meltzer, T H

    2003-01-01

    Validation considerations for sterilizing grade filters, namely 0.2 micron, changed when the FDA voiced concerns about the validity of bacterial challenge tests performed in the past. Such validation exercises are nowadays considered to be filter qualification. Filter validation requires more thorough analysis, especially bacterial challenge testing with the actual drug product under process conditions. To do so, viability testing is a necessity to determine the bacterial challenge test methodology. In addition to these two compulsory tests, other evaluations such as extractables, adsorption, and chemical compatibility tests should be considered. PDA Technical Report #26, Sterilizing Filtration of Liquids, describes all parameters and aspects required for the comprehensive validation of filters. The report is a most helpful tool for the validation of liquid filters used in the biopharmaceutical industry. It sets the cornerstones of validation requirements and other filtration considerations.

  17. Validation of Case Finding Algorithms for Hepatocellular Cancer From Administrative Data and Electronic Health Records Using Natural Language Processing.

    PubMed

    Sada, Yvonne; Hou, Jason; Richardson, Peter; El-Serag, Hashem; Davila, Jessica

    2016-02-01

    Accurate identification of hepatocellular cancer (HCC) cases from automated data is needed for efficient and valid quality improvement initiatives and research. We validated HCC International Classification of Diseases, 9th Revision (ICD-9) codes, and evaluated whether natural language processing by the Automated Retrieval Console (ARC) for document classification improves HCC identification. We identified a cohort of patients with ICD-9 codes for HCC during 2005-2010 from Veterans Affairs administrative data. Pathology and radiology reports were reviewed to confirm HCC. The positive predictive value (PPV), sensitivity, and specificity of ICD-9 codes were calculated. A split validation study of pathology and radiology reports was performed to develop and validate ARC algorithms. Reports were manually classified as diagnostic of HCC or not. ARC generated document classification algorithms using the Clinical Text Analysis and Knowledge Extraction System. ARC performance was compared with manual classification. PPV, sensitivity, and specificity of ARC were calculated. A total of 1138 patients with HCC were identified by ICD-9 codes. On the basis of manual review, 773 had HCC. The HCC ICD-9 code algorithm had a PPV of 0.67, sensitivity of 0.95, and specificity of 0.93. For a random subset of 619 patients, we identified 471 pathology reports for 323 patients and 943 radiology reports for 557 patients. The pathology ARC algorithm had PPV of 0.96, sensitivity of 0.96, and specificity of 0.97. The radiology ARC algorithm had PPV of 0.75, sensitivity of 0.94, and specificity of 0.68. A combined approach of ICD-9 codes and natural language processing of pathology and radiology reports improves HCC case identification in automated data.
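
    The reported operating characteristics follow directly from a 2x2 table of algorithm-flagged versus chart-confirmed cases. In the sketch below, the true- and false-positive counts are chosen to match the abstract (773 confirmed of 1138 flagged), while the false-negative and true-negative counts are invented purely for illustration:

```python
def diagnostic_metrics(tp, fp, fn, tn):
    """PPV, sensitivity and specificity from a 2x2 table of
    algorithm-flagged versus chart-confirmed case status."""
    return tp / (tp + fp), tp / (tp + fn), tn / (tn + fp)

# TP/FP match the abstract (773 confirmed of 1138 flagged);
# FN and TN are invented here purely for illustration.
ppv, sens, spec = diagnostic_metrics(tp=773, fp=365, fn=40, tn=5400)
print(f"PPV {ppv:.2f}, sensitivity {sens:.2f}, specificity {spec:.2f}")
```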

  18. [Development and validation of quality standards for colonoscopy].

    PubMed

    Sánchez Del Río, Antonio; Baudet, Juan Salvador; Naranjo Rodríguez, Antonio; Campo Fernández de Los Ríos, Rafael; Salces Franco, Inmaculada; Aparicio Tormo, Jose Ramón; Sánchez Muñoz, Diego; Llach, Joseph; Hervás Molina, Antonio; Parra-Blanco, Adolfo; Díaz Acosta, Juan Antonio

    2010-01-30

    Before starting programs for colorectal cancer screening it is necessary to evaluate the quality of colonoscopy. Our objectives were to develop a group of easily applicable quality indicators for colonoscopy and to determine the variability of their achievement. After reviewing the bibliography we prepared 21 potential quality indicators that were submitted to a selection process in which we measured their face validity, content validity, reliability, and viability of measurement. We estimated the variability of their achievement by means of the coefficient of variation (CV) and the variability of the achievement of the standards by means of the χ² test. Six indicators passed the selection process: informed consent, medication administered, completed colonoscopy, complications, every polyp removed and recovered, and adenoma detection rate in patients older than 50 years. A total of 1928 colonoscopies from eight endoscopy units were included. Every unit included the same number of colonoscopies, selected by means of simple random sampling with substitution. There was an important variability in the achievement of some indicators and standards: medication administered (CV 43%, p<0.01), complications registered (CV 37%, p<0.01), every polyp removed and recovered (CV 12%, p<0.01), and adenoma detection rate in patients older than 50 years (CV 2%, p<0.01). We have validated six easily measurable quality indicators for colonoscopy. An important variability exists in the achievement of some indicators and standards. Our data highlight the importance of developing continuous quality improvement programmes for colonoscopy before starting colorectal cancer screening. Copyright (c) 2009 Elsevier España, S.L. All rights reserved.
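
    The coefficient of variation used here to quantify between-unit variability is simply the standard deviation of the per-unit achievement rates divided by their mean. A sketch with hypothetical compliance data:

```python
import numpy as np

def coefficient_of_variation(rates):
    """CV (%) of an indicator's achievement across endoscopy units."""
    rates = np.asarray(rates, dtype=float)
    return 100.0 * rates.std(ddof=1) / rates.mean()

# Hypothetical per-unit compliance (%) for 'medication administered'
unit_rates = [95, 60, 88, 40, 99, 75, 52, 83]
print(f"CV = {coefficient_of_variation(unit_rates):.0f}%")
```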

  19. Biophysics: for HTS hit validation, chemical lead optimization, and beyond.

    PubMed

    Genick, Christine C; Wright, S Kirk

    2017-09-01

    There are many challenges to the drug discovery process, including the complexity of the target, its interactions, and how these factors play a role in causing the disease. Traditionally, biophysics has been used for hit validation and chemical lead optimization. With its increased throughput and sensitivity, biophysics is now being applied earlier in this process to empower target characterization and hit finding. Areas covered: In this article, the authors provide an overview of how biophysics can be utilized to assess the quality of the reagents used in screening assays, to validate potential tool compounds, to test the integrity of screening assays, and to create follow-up strategies for compound characterization. They also briefly discuss the utilization of different biophysical methods in hit validation to help avoid the resource consuming pitfalls caused by the lack of hit overlap between biophysical methods. Expert opinion: The use of biophysics early on in the drug discovery process has proven crucial to identifying and characterizing targets of complex nature. It also has enabled the identification and classification of small molecules which interact in an allosteric or covalent manner with the target. By applying biophysics in this manner and at the early stages of this process, the chances of finding chemical leads with novel mechanisms of action are increased. In the future, focused screens with biophysics as a primary readout will become increasingly common.

  20. Validation of quantitative and qualitative methods for detecting allergenic ingredients in processed foods in Japan.

    PubMed

    Sakai, Shinobu; Adachi, Reiko; Akiyama, Hiroshi; Teshima, Reiko

    2013-06-19

    A labeling system for food allergenic ingredients was established in Japan in April 2002. To monitor the labeling, the Japanese government announced official methods for detecting allergens in processed foods in November 2002. The official methods consist of quantitative screening tests using enzyme-linked immunosorbent assays (ELISAs) and qualitative confirmation tests using Western blotting or polymerase chain reactions (PCR). In addition, the Japanese government designated 10 μg protein/g food (the corresponding allergenic ingredient soluble protein weight/food weight), determined by ELISA, as the labeling threshold. To standardize the official methods, the criteria for the validation protocol were described in the official guidelines. This paper, which was presented at the Advances in Food Allergen Detection Symposium, ACS National Meeting and Expo, San Diego, CA, Spring 2012, describes the validation protocol outlined in the official Japanese guidelines, the results of interlaboratory studies for the quantitative detection method (ELISA for crustacean proteins) and the qualitative detection method (PCR for shrimp and crab DNAs), and the reliability of the detection methods.

  1. Comparison of C5 and C6 Aqua-MODIS Dark Target Aerosol Validation

    NASA Technical Reports Server (NTRS)

    Munchak, Leigh A.; Levy, Robert C.; Mattoo, Shana

    2014-01-01

    We compare C5 and C6 Aqua-MODIS validation, evaluating the C6 10 km aerosol product against the well-validated and trusted C5 aerosol product on global and regional scales. Only the 10 km aerosol product is evaluated in this study; validation of the new C6 3 km aerosol product still needs to be performed. Not all of the time series has been processed yet for C5 or C6, and the years processed for the two products are not exactly the same (this work is preliminary!). To reduce the impact of outlier observations, MODIS is spatially averaged within 27.5 km of the AERONET site, and AERONET is temporally averaged within 30 minutes of the MODIS overpass time. Only high-quality (QA = 3 over land, QA greater than 0 over ocean) pixels are included in the mean.
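
    The collocation rule described (spatial mean within 27.5 km, temporal mean within ±30 min, QA filtering) can be sketched as follows; the variable names are hypothetical and times are assumed to be in fractional days:

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance in km."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2)**2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2)**2)
    return 2.0 * 6371.0 * np.arcsin(np.sqrt(a))

def matchup(mod_lat, mod_lon, mod_aod, mod_qa, site_lat, site_lon,
            aero_time, aero_aod, overpass_time, qa_min=3):
    """Average high-QA MODIS retrievals within 27.5 km of the AERONET site
    and AERONET AOD within +/-30 min of overpass; times in fractional days."""
    near = (haversine_km(mod_lat, mod_lon, site_lat, site_lon) <= 27.5) & (mod_qa >= qa_min)
    close = np.abs(aero_time - overpass_time) <= 30.0 / (24.0 * 60.0)
    if not near.any() or not close.any():
        return None                      # no valid collocation
    return mod_aod[near].mean(), aero_aod[close].mean()
```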

  2. A new dataset validation system for the Planetary Science Archive

    NASA Astrophysics Data System (ADS)

    Manaud, N.; Zender, J.; Heather, D.; Martinez, S.

    2007-08-01

    The Planetary Science Archive (PSA) is the official archive for the Mars Express mission. It received its first data at the end of 2004. These data are delivered by the PI teams to the PSA team as datasets formatted in conformance with the Planetary Data System (PDS) standard. The PI teams are responsible for analyzing and calibrating the instrument data as well as for the production of reduced and calibrated data. They are also responsible for the scientific validation of these data. ESA is responsible for the long-term data archiving and distribution to the scientific community and must ensure, in this regard, that all archived products meet quality standards. To do so, an archive peer review is used to control the quality of the Mars Express science data archiving process. However, a full validation of its content is missing. An independent review board recently recommended that the completeness of the archive as well as the consistency of the delivered data should be validated following well-defined procedures. A new validation software tool is being developed to complete the overall data quality control system functionality. This new tool aims to improve the quality of data and services provided to the scientific community through the PSA, and shall allow anomalies to be tracked and the completeness of datasets to be controlled. It shall ensure that PSA end-users: (1) can rely on the results of their queries, (2) will get data products that are suitable for scientific analysis, and (3) can find all science data acquired during a mission. We defined dataset validation as the verification and assessment process that checks the dataset content against pre-defined top-level criteria, which represent the general characteristics of good-quality datasets. The dataset content that is checked includes the data and all types of information that are essential in the process of deriving scientific results and those interfacing with the PSA database. The validation software tool is a multi-mission tool that
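
    As an illustration of the kind of completeness checking such a tool performs, the sketch below scans PDS3-style labels for required keywords. The keyword set, file layout, and dataset name are all hypothetical, and this is not the PSA tool's actual design.

```python
from pathlib import Path

REQUIRED_KEYS = {"DATA_SET_ID", "INSTRUMENT_ID", "TARGET_NAME",
                 "START_TIME", "STOP_TIME"}        # illustrative subset

def validate_label(label_path):
    """Return the required keywords missing from a PDS3-style
    'KEY = value' label file."""
    text = Path(label_path).read_text(errors="replace")
    keys = {line.split("=", 1)[0].strip()
            for line in text.splitlines() if "=" in line}
    return sorted(REQUIRED_KEYS - keys)

def validate_dataset(root):
    """Map each label file under a dataset tree to its missing keywords."""
    report = {}
    for p in Path(root).rglob("*.LBL"):
        missing = validate_label(p)
        if missing:
            report[str(p)] = missing
    return report

for path, missing in validate_dataset("MEX-M-OMEGA-2-EDR").items():
    print(path, "missing:", ", ".join(missing))
```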

  3. Process, including membrane separation, for separating hydrogen from hydrocarbons

    DOEpatents

    Baker, Richard W.; Lokhandwala, Kaaeid A.; He, Zhenjie; Pinnau, Ingo

    2001-01-01

    Processes for providing improved methane removal and hydrogen reuse in reactors, particularly in refineries and petrochemical plants. The improved methane removal is achieved by selective purging, by passing gases in the reactor recycle loop across membranes selective in favor of methane over hydrogen, and capable of exhibiting a methane/hydrogen selectivity of at least about 2.5 under the process conditions.

  4. Modeling coupled Thermo-Hydro-Mechanical processes including plastic deformation in geological porous media

    NASA Astrophysics Data System (ADS)

    Kelkar, S.; Karra, S.; Pawar, R. J.; Zyvoloski, G.

    2012-12-01

    There has been increasing interest in recent years in developing computational tools for analyzing coupled thermal, hydrological, and mechanical (THM) processes that occur in geological porous media. This is mainly due to their importance in applications including carbon sequestration, enhanced geothermal systems, oil and gas production from unconventional sources, degradation of Arctic permafrost, and nuclear waste isolation. Large changes in pressures, temperatures, and saturation can result from injection/withdrawal of fluids or emplaced heat sources. These can potentially lead to large changes in the fluid flow and mechanical behavior of the formation, including shear and tensile failure on pre-existing or induced fractures and the associated permeability changes. Consequently, plastic deformation and large changes in material properties such as permeability and porosity can be expected to play an important role in these processes. We describe a general-purpose computational code, FEHM, that has been developed for modeling coupled THM processes during multi-phase fluid flow and transport in fractured porous media. The code uses a continuum mechanics approach based on the control volume-finite element method. It is designed to address spatial scales on the order of tens of centimeters to tens of kilometers. While large deformations are important in many situations, we have adopted the small-strain formulation, as it yields useful insight in many problems of practical interest while remaining computationally manageable. Nonlinearities in the equations and the material properties are handled using a full-Jacobian Newton-Raphson technique. Stress-strain relationships are assumed to follow linear elastic/plastic behavior. The code incorporates several plasticity models, such as von Mises and Drucker-Prager, as well as a large suite of models for coupling flow and mechanical deformation via permeability and stresses.
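
    As an illustration of the full-Jacobian Newton-Raphson technique the abstract mentions, here is a generic sketch applied to a toy two-equation system; the residual and Jacobian below are stand-ins, not FEHM's coupled THM equations.

    ```python
    import numpy as np

    def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
        """Solve residual(x) = 0 with a full-Jacobian Newton-Raphson iteration."""
        x = np.asarray(x0, dtype=float)
        for _ in range(max_iter):
            r = residual(x)
            if np.linalg.norm(r) < tol:
                return x
            x = x - np.linalg.solve(jacobian(x), r)  # full Jacobian linear solve
        raise RuntimeError("Newton-Raphson did not converge")

    # Toy nonlinear system: x^2 + y^2 = 4 and x*y = 1
    res = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
    jac = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
    print(newton_raphson(res, jac, [2.0, 0.5]))  # converges to (~1.932, ~0.518)
    ```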

  5. Point-to-Point! Validation of the Small Aircraft Transportation System Higher Volume Operations Concept

    NASA Technical Reports Server (NTRS)

    Williams, Daniel M.

    2006-01-01

    Described is the research process that NASA researchers used to validate the Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept. The four-phase building-block validation and verification process included multiple elements, ranging from formal analysis of HVO procedures, to flight test, to a full-system architecture prototype that was successfully shown to the public at the June 2005 SATS Technical Demonstration in Danville, VA. Presented are significant results of each of the four research phases that extend early results presented at ICAS 2004. HVO study results have been incorporated into the development of the Next Generation Air Transportation System (NGATS) vision and offer a validated concept to provide a significant portion of the 3X capacity improvement sought after in the United States National Airspace System (NAS).

  6. 40 CFR Appendix A to Part 419 - Processes Included in the Determination of BAT Effluent Limitations for Total Chromium...

    Code of Federal Regulations, 2011 CFR

    2011-07-01

    ... 40 Protection of Environment 29 2011-07-01 2009-07-01 true Processes Included in the Determination of BAT Effluent Limitations for Total Chromium, Hexavalent Chromium, and Phenolic Compounds (4AAP) A...—Processes Included in the Determination of BAT Effluent Limitations for Total Chromium, Hexavalent Chromium...

  7. 40 CFR Appendix A to Part 419 - Processes Included in the Determination of BAT Effluent Limitations for Total Chromium...

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... 40 Protection of Environment 28 2010-07-01 2010-07-01 true Processes Included in the Determination of BAT Effluent Limitations for Total Chromium, Hexavalent Chromium, and Phenolic Compounds (4AAP) A...—Processes Included in the Determination of BAT Effluent Limitations for Total Chromium, Hexavalent Chromium...

  8. 40 CFR Appendix A to Part 419 - Processes Included in the Determination of BAT Effluent Limitations for Total Chromium...

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... 40 Protection of Environment 29 2014-07-01 2012-07-01 true Processes Included in the Determination of BAT Effluent Limitations for Total Chromium, Hexavalent Chromium, and Phenolic Compounds (4AAP) A...—Processes Included in the Determination of BAT Effluent Limitations for Total Chromium, Hexavalent Chromium...

  9. 40 CFR Appendix A to Part 419 - Processes Included in the Determination of BAT Effluent Limitations for Total Chromium...

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... 40 Protection of Environment 30 2012-07-01 2012-07-01 false Processes Included in the Determination of BAT Effluent Limitations for Total Chromium, Hexavalent Chromium, and Phenolic Compounds (4AAP... Part 419—Processes Included in the Determination of BAT Effluent Limitations for Total Chromium...

  10. Validation of gamma irradiator controls for quality and regulatory compliance

    NASA Astrophysics Data System (ADS)

    Harding, Rorry B.; Pinteric, Francis J. A.

    1995-09-01

    Since 1978 the U.S. Food and Drug Administration (FDA) has had both the legal authority and the Current Good Manufacturing Practice (CGMP) regulations in place to require irradiator owners who process medical devices to produce evidence of Irradiation Process Validation. One of the key components of Irradiation Process Validation is the validation of the irradiator controls. However, it is only recently that FDA audits have focused on this component of the process validation. What is Irradiator Control System Validation? What constitutes evidence of control? How do owners obtain evidence? What is the irradiator supplier's role in validation? How does the ISO 9000 Quality Standard relate to the FDA's CGMP requirement for evidence of Control System Validation? This paper presents answers to these questions based on the recent experiences of Nordion's engineering and product management staff who have worked with several US-based irradiator owners. This topic — Validation of Irradiator Controls — is a significant regulatory compliance and operations issue within the irradiator suppliers' and users' community.

  11. Toward Supersonic Retropropulsion CFD Validation

    NASA Technical Reports Server (NTRS)

    Kleb, Bil; Schauerhamer, D. Guy; Trumble, Kerry; Sozer, Emre; Barnhardt, Michael; Carlson, Jan-Renee; Edquist, Karl

    2011-01-01

    This paper begins the process of verifying and validating computational fluid dynamics (CFD) codes for supersonic retropropulsive flows. Four CFD codes (DPLR, FUN3D, OVERFLOW, and US3D) are used to perform various numerical and physical modeling studies toward the goal of comparing predictions with a wind tunnel experiment specifically designed to support CFD validation. Numerical studies run the gamut in rigor from code-to-code comparisons to observed order-of-accuracy tests. Results indicate that for this complex flowfield, which involves time-dependent shocks and vortex shedding, the design order of accuracy is not clearly evident. Also explored is the extent of physical modeling necessary to predict the salient flowfield features found in high-speed Schlieren images and surface pressure measurements taken during the validation experiment. Physical modeling studies include geometric items such as wind tunnel wall and sting mount interference, as well as turbulence modeling that ranges from a two-equation RANS (Reynolds-Averaged Navier-Stokes) model to DES (Detached Eddy Simulation) models. These studies indicate that tunnel wall interference is minimal for the cases investigated; model mounting hardware effects are confined to the aft end of the model; and sparse grid resolution and turbulence modeling can damp or entirely dissipate the unsteadiness of this self-excited flow.
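
    An observed order-of-accuracy test of the kind cited above typically compares a solution quantity on three systematically refined grids; a minimal sketch, with illustrative values and a constant refinement ratio assumed, follows.

    ```python
    import math

    def observed_order(f_coarse, f_medium, f_fine, r):
        """p = ln((f_coarse - f_medium) / (f_medium - f_fine)) / ln(r),
        the observed order of accuracy for refinement ratio r."""
        return math.log((f_coarse - f_medium) / (f_medium - f_fine)) / math.log(r)

    # Hypothetical values of a functional (e.g., a force coefficient) on grids
    # refined by a factor of 2; a second-order scheme should give p near 2.
    print(observed_order(1.250, 1.0625, 1.015625, r=2.0))  # -> 2.0
    ```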

  12. Prediction of individual milk proteins including free amino acids in bovine milk using mid-infrared spectroscopy and their correlations with milk processing characteristics.

    PubMed

    McDermott, A; Visentin, G; De Marchi, M; Berry, D P; Fenelon, M A; O'Connor, P M; Kenny, O A; McParland, S

    2016-04-01

    The aim of this study was to evaluate the effectiveness of mid-infrared spectroscopy in predicting milk protein and free amino acid (FAA) composition in bovine milk. Milk samples were collected from 7 Irish research herds and represented cows from a range of breeds, parities, and stages of lactation. Mid-infrared spectral data in the range of 900 to 5,000 cm⁻¹ were available for 730 milk samples; gold standard methods were used to quantify individual protein fractions and FAA of these samples with a view to predicting these gold standard protein fractions and FAA levels with the available mid-infrared spectroscopy data. Separate prediction equations were developed for each trait using partial least squares regression; accuracy of prediction was assessed using both cross validation on a calibration data set (n=400 to 591 samples) and external validation on an independent data set (n=143 to 294 samples). The accuracy of prediction in external validation was the same irrespective of whether undertaken on the entire external validation data set or just within the Holstein-Friesian breed. The strongest coefficients of correlation obtained for protein fractions in external validation were 0.74, 0.69, and 0.67 for total casein, total β-lactoglobulin, and β-casein, respectively. Total proteins (i.e., total casein, total whey, and total lactoglobulin) were predicted with greater accuracy than their respective component traits; prediction accuracy using the infrared spectrum was superior to prediction using just milk protein concentration. Weak to moderate prediction accuracies were observed for FAA. The greatest coefficient of correlation in both cross validation and external validation was for Gly (0.75), indicating a moderate accuracy of prediction. Overall, the FAA prediction models overpredicted the gold standard values. Near-unity correlations existed between total casein and β-casein irrespective of whether the traits were based on the gold standard (0.92) or mid-infrared predictions.
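
    A minimal sketch of the partial least squares workflow described above, using scikit-learn on synthetic stand-in spectra; the component count, sample sizes, and noise level are illustrative assumptions, not the study's settings.

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_predict

    # Synthetic spectra stand in for real mid-infrared data:
    # 200 samples x 300 wavenumber channels.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 300))
    true_w = rng.normal(size=300)
    y = X @ true_w + rng.normal(scale=5.0, size=200)  # e.g., a protein fraction

    # Partial least squares regression, assessed by 10-fold cross-validation.
    pls = PLSRegression(n_components=10)
    y_cv = cross_val_predict(pls, X, y, cv=10).ravel()
    print(f"cross-validation correlation: {np.corrcoef(y, y_cv)[0, 1]:.2f}")
    ```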

  13. Assessment of Social Information Processing in early childhood: development and initial validation of the Schultz Test of Emotion Processing-Preliminary Version.

    PubMed

    Schultz, David; Ambike, Archana; Logie, Sean Kevin; Bohner, Katherine E; Stapleton, Laura M; Vanderwalde, Holly; Min, Christopher B; Betkowski, Jennifer A

    2010-07-01

    Crick and Dodge's (Psychological Bulletin 115:74-101, 1994) social information processing (SIP) model has proven very useful in guiding research focused on aggressive and peer-rejected children's social-cognitive functioning. Its application to early childhood, however, has been much more limited. The present study responds to this gap by developing and validating a video-based assessment tool appropriate for early childhood, the Schultz Test of Emotion Processing-Preliminary Version (STEP-P). One hundred twenty-five Head Start preschool children participated in the study. More socially competent children more frequently attributed sadness to the victims of provocation and labeled aggressive behaviors as both morally unacceptable and less likely to lead to positive outcomes. More socially competent girls labeled others' emotions more accurately. More disruptive children more frequently produced physically aggressive solutions to social provocations, and more disruptive boys less frequently interpreted social provocations as accidental. The STEP-P holds promise as a tool for assessing knowledge structures related to the SIP model in early childhood.

  14. Application of process mining to assess the data quality of routinely collected time-based performance data sourced from electronic health records by validating process conformance.

    PubMed

    Perimal-Lewis, Lua; Teubner, David; Hakendorf, Paul; Horwood, Chris

    2016-12-01

    Effective and accurate use of routinely collected health data to produce Key Performance Indicator reporting is dependent on the underlying data quality. In this research, Process Mining methodology and tools were leveraged to assess the data quality of time-based Emergency Department data sourced from electronic health records. This research was done working closely with the domain experts to validate the process models. The hospital patient journey model was used to assess flow abnormalities which resulted from incorrect timestamp data used in time-based performance metrics. The research demonstrated process mining as a feasible methodology to assess data quality of time-based hospital performance metrics. The insight gained from this research enabled appropriate corrective actions to be put in place to address the data quality issues. © The Author(s) 2015.
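
    A conformance check in the spirit of this study can be as simple as verifying that timestamps follow the expected patient journey; the event names and ordering below are assumptions for illustration, not the hospital's actual process model.

    ```python
    from datetime import datetime

    # Assumed emergency-department journey; real models are mined from logs.
    EXPECTED_ORDER = ["arrival", "triage", "seen_by_doctor", "departure"]

    def conformance_issues(record: dict) -> list:
        """Return the event pairs whose timestamps occur out of order."""
        issues = []
        for earlier, later in zip(EXPECTED_ORDER, EXPECTED_ORDER[1:]):
            if record[earlier] > record[later]:
                issues.append((earlier, later))
        return issues

    rec = {"arrival": datetime(2015, 3, 1, 10, 0),
           "triage": datetime(2015, 3, 1, 9, 55),   # before arrival: bad data
           "seen_by_doctor": datetime(2015, 3, 1, 10, 40),
           "departure": datetime(2015, 3, 1, 13, 5)}
    print(conformance_issues(rec))  # -> [('arrival', 'triage')]
    ```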

  15. Bridging the gap between neurocognitive processing theory and performance validity assessment among the cognitively impaired: a review and methodological approach.

    PubMed

    Leighton, Angela; Weinborn, Michael; Maybery, Murray

    2014-10-01

    Bigler (2012) and Larrabee (2012) recently addressed the state of the science surrounding performance validity tests (PVTs) in a dialogue highlighting evidence for the valid and increased use of PVTs, but also for unresolved problems. Specifically, Bigler criticized the lack of guidance from neurocognitive processing theory in the PVT literature. For example, individual PVTs have applied the simultaneous forced-choice methodology using a variety of test characteristics (e.g., word vs. picture stimuli) with known neurocognitive processing implications (e.g., the "picture superiority effect"). However, the influence of such variations on classification accuracy has been inadequately evaluated, particularly among cognitively impaired individuals. The current review places the PVT literature in the context of neurocognitive processing theory, and identifies potential methodological factors to account for the significant variability we identified in classification accuracy across current PVTs. We subsequently evaluated the utility of a well-known cognitive manipulation to provide a Clinical Analogue Methodology (CAM), that is, to alter the PVT performance of healthy individuals to be similar to that of a cognitively impaired group. Initial support was found, suggesting the CAM may be useful alongside other approaches (analogue malingering methodology) for the systematic evaluation of PVTs, particularly the influence of specific neurocognitive processing components on performance.

  16. A high liquid yield process for retorting various organic materials including oil shale

    DOEpatents

    Coburn, T.T.

    1988-07-26

    This invention is a continuous retorting process for various high molecular weight organic materials, including oil shale, that yields an enhanced output of liquid product. The organic material, mineral matter, and an acidic catalyst, that appreciably adsorbs alkenes on surface sites at prescribed temperatures, are mixed and introduced into a pyrolyzer. A circulating stream of olefin enriched pyrolysis gas is continuously swept through the organic material and catalyst, whereupon, as the result of pyrolysis, the enhanced liquid product output is provided. Mixed spent organic material, mineral matter, and cool catalyst are continuously withdrawn from the pyrolyzer. Combustion of the spent organic material and mineral matter serves to reheat the catalyst. Olefin depleted pyrolysis gas, from the pyrolyzer, is enriched in olefins and recycled into the pyrolyzer. The reheated acidic catalyst is separated from the mineral matter and again mixed with fresh organic material, to maintain the continuously cyclic process. 2 figs.

  17. Individual differences in processing styles: validity of the Rational-Experiential Inventory.

    PubMed

    Björklund, Fredrik; Bäckström, Martin

    2008-10-01

    In Study 1 (N = 203) the factor structure of a Swedish translation of Pacini and Epstein's Rational-Experiential Inventory (REI-40) was investigated using confirmatory factor analysis. The hypothesized model with rationality and experientiality as orthogonal factors had satisfactory fit to the data, significantly better than alternative models (with two correlated factors or a single factor). Inclusion of "ability" and "favorability" subscales for rationality and experientiality increased fit further. It was concluded that the structural validity of the REI is adequate. In Study 2 (N = 72) the REI factors were shown to have theoretically meaningful correlations with other personality traits, indicating convergent and discriminant validity. Finally, scores on the rationality scale were negatively related to risky choice framing effects in Kahneman and Tversky's Asian disease task, indicating concurrent validity. On the basis of these findings it was concluded that the test has satisfactory psychometric properties.

  18. Mining Twitter Data to Augment NASA GPM Validation

    NASA Technical Reports Server (NTRS)

    Teng, Bill; Albayrak, Arif; Huffman, George; Vollmer, Bruce; Loeser, Carlee; Acker, Jim

    2017-01-01

    The Twitter data stream is an important new source of real-time and historical global information for potentially augmenting the validation program of NASA's Global Precipitation Measurement (GPM) mission. There have been other similar uses of Twitter, though mostly related to natural hazards monitoring and management. The validation of satellite precipitation estimates is challenging, because many regions lack data or access to data, especially outside of the U.S. and in remote and developing areas. The time-varying set of "precipitation" tweets can be thought of as an organic network of rain gauges, potentially providing a widespread view of precipitation occurrence. Twitter thus provides a large crowd for crowdsourcing. During a 24-hour period in the middle of the snow storm this past March in the U.S. Northeast, we collected more than 13,000 relevant precipitation tweets with exact geolocation. The overall objective of our project is to determine the extent to which processed tweets can provide additional information that improves the validation of GPM data. Though our current effort focuses on tweets and precipitation, our approach is general and applicable to other social media and other geophysical measurements. Specifically, we have developed an operational infrastructure for processing tweets in a format suitable for analysis with GPM data; engaged with potential participants, both passive and active, to "enrich" the Twitter stream; and inter-compared "precipitation" tweet data, ground station data, and GPM retrievals. In this presentation, we detail the technical capabilities of our tweet processing infrastructure, including data abstraction, feature extraction, search engine, context-awareness, real-time processing, and high-volume (big) data processing; various means for "enriching" the Twitter stream; and results of inter-comparisons. Our project should bring a new kind of visibility to Twitter and engender a new kind of appreciation of the value

  19. [A Validation Study of the Modified Korean Version of Ethical Leadership at Work Questionnaire (K-ELW)].

    PubMed

    Kim, Jeong-Eon; Park, Eun-Jun

    2015-04-01

    The purpose of this study was to validate the Korean version of the Ethical Leadership at Work questionnaire (K-ELW), which measures RNs' perceived ethical leadership of their nurse managers. The strong validation process suggested by Benson (1998), including a translation and cultural adaptation stage, a structural stage, and an external stage, was used. Participants were 241 RNs who reported their perceived ethical leadership using both the pre-version of the K-ELW and a previously known Ethical Leadership Scale, as well as the interactional justice of their managers, their own demographics, organizational commitment, and organizational citizenship behavior. Data analyses included descriptive statistics, Pearson correlation coefficients, reliability coefficients, exploratory factor analysis, and confirmatory factor analysis. SPSS 19.0 and Amos 18.0 were used. A modified K-ELW was developed from construct validity evidence and included 31 items in 7 domains: people orientation, task responsibility fairness, relationship fairness, power sharing, concern for sustainability, ethical guidance, and integrity. Convergent validity, discriminant validity, and concurrent validity were supported by the correlation coefficients of the 7 domains with the other measures. The results of this study provide preliminary evidence that the modified K-ELW can be adopted in Korean nursing organizations, and reliable and valid ethical leadership scores can be expected.

  20. Development and validation of the Bush-Francis Catatonia Rating Scale - Brazilian version.

    PubMed

    Nunes, Ana Letícia Santos; Filgueiras, Alberto; Nicolato, Rodrigo; Alvarenga, Jussara Mendonça; Silveira, Luciana Angélica Silva; Silva, Rafael Assis da; Cheniaux, Elie

    2017-01-01

    This article aims to describe the adaptation and translation process of the Bush-Francis Catatonia Rating Scale (BFCRS) and its reduced version, the Bush-Francis Catatonia Screening Instrument (BFCSI) for Brazilian Portuguese, as well as its validation. Semantic equivalence processes included four steps: translation, back translation, evaluation of semantic equivalence and a pilot-study. Validation consisted of simultaneous applications of the instrument in Portuguese by two examiners in 30 catatonic and 30 non-catatonic patients. Total scores averaged 20.07 for the complete scale and 7.80 for its reduced version among catatonic patients, compared with 0.47 and 0.20 among non-catatonic patients, respectively. Overall values of inter-rater reliability of the instruments were 0.97 for the BFCSI and 0.96 for the BFCRS. The scale's version in Portuguese proved to be valid and was able to distinguish between catatonic and non-catatonic patients. It was also reliable, with inter-evaluator reliability indexes as high as those of the original instrument.

  1. An adaptive management process for forest soil conservation.

    Treesearch

    Michael P. Curran; Douglas G. Maynard; Ronald L. Heninger; Thomas A. Terry; Steven W. Howes; Douglas M. Stone; Thomas Niemann; Richard E. Miller; Robert F. Powers

    2005-01-01

    Soil disturbance guidelines should be based on comparable disturbance categories adapted to specific local soil conditions, validated by monitoring and research. Guidelines, standards, and practices should be continually improved based on an adaptive management process, which is presented in this paper. Core components of this process include: reliable monitoring...

  2. Cross-cultural validation of instruments measuring health beliefs about colorectal cancer screening among Korean Americans.

    PubMed

    Lee, Shin-Young; Lee, Eunice E

    2015-02-01

    The purpose of this study was to report the instrument modification and validation processes to make existing health belief model scales culturally appropriate for Korean Americans (KAs) regarding colorectal cancer (CRC) screening utilization. Instrument translation, individual interviews using cognitive interviewing, and expert reviews were conducted during the instrument modification phase, and a pilot test and a cross-sectional survey were conducted during the instrument validation phase. Data analyses of the cross-sectional survey included internal consistency and construct validity using exploratory and confirmatory factor analysis. The main issues identified during the instrument modification phase were (a) cultural and linguistic translation issues and (b) newly developed items reflecting Korean cultural barriers. Cross-sectional survey analyses during the instrument validation phase revealed that all scales demonstrate good internal consistency reliability (Cronbach's alpha=.72~.88). Exploratory factor analysis showed that susceptibility and severity loaded on the same factor, which may indicate a threat variable. Items with low factor loadings in the confirmatory factor analysis may relate to (a) lack of knowledge about fecal occult blood testing and (b) multiple dimensions of the subscales. Methodological, sequential processes of instrument modification and validation, including translation, individual interviews, expert reviews, pilot testing and a cross-sectional survey, were provided in this study. The findings indicate that existing instruments need to be examined for CRC screening research involving KAs.

  3. Assessment of bachelor's theses in a nursing degree with a rubrics system: Development and validation study.

    PubMed

    González-Chordá, Víctor M; Mena-Tudela, Desirée; Salas-Medina, Pablo; Cervera-Gasch, Agueda; Orts-Cortés, Isabel; Maciá-Soler, Loreto

    2016-02-01

    Writing a bachelor thesis (BT) is the last step to obtain a nursing degree. In order to perform an effective assessment of a nursing BT, reliable and valid tools are required. To develop and validate a 3-rubric system (drafting process, dissertation, and viva) to assess final-year nursing students' BTs. A multi-disciplinary study of content validity and psychometric properties. The study was carried out between December 2014 and July 2015. Nursing Degree at Universitat Jaume I, Spain. Eleven experts (9 nursing professors and 2 education professors from 6 different universities) took part in the development and content validity stages. Fifty-two theses presented during the 2014-2015 academic year were included by consecutive sampling of cases in order to study the psychometric properties. First, a group of experts was created to validate the content of the assessment system based on three rubrics (drafting process, dissertation, and viva). Subsequently, a reliability and validity study of the rubrics was carried out on the 52 theses presented during the 2014-2015 academic year. The BT drafting process rubric has 8 criteria (S-CVI=0.93; α=0.837; ICC=0.614), the dissertation rubric has 7 criteria (S-CVI=0.9; α=0.893; ICC=0.74), and the viva rubric has 4 criteria (S-CVI=0.86; α=0.816; ICC=0.895). A nursing BT assessment system based on three rubrics (drafting process, dissertation, and viva) has been validated. This system may be transferred to other nursing degrees or to degrees in other academic areas. It is necessary to continue the validation process, taking into account factors that may affect the results obtained. Copyright © 2015 Elsevier Ltd. All rights reserved.
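
    Reliability figures like the Cronbach's α values quoted above can be computed directly from the item-score matrix; a minimal sketch follows, with a synthetic 52-thesis by 8-criterion score matrix standing in for the real rubric data.

    ```python
    import numpy as np

    def cronbach_alpha(scores: np.ndarray) -> float:
        """alpha = k/(k-1) * (1 - sum of item variances / variance of total),
        for a (subjects x items) score matrix."""
        k = scores.shape[1]
        item_var = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    # Synthetic scores: a latent "thesis quality" plus per-criterion noise.
    rng = np.random.default_rng(1)
    quality = rng.normal(size=(52, 1))                      # 52 theses
    scores = quality + rng.normal(scale=0.7, size=(52, 8))  # 8 rubric criteria
    print(f"alpha = {cronbach_alpha(scores):.2f}")
    ```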

  4. Bringing Value-Based Perspectives to Care: Including Patient and Family Members in Decision-Making Processes

    PubMed Central

    Kohler, Graeme; Sampalli, Tara; Ryer, Ashley; Porter, Judy; Wood, Les; Bedford, Lisa; Higgins-Bowser, Irene; Edwards, Lynn; Christian, Erin; Dunn, Susan; Gibson, Rick; Ryan Carson, Shannon; Vallis, Michael; Zed, Joanna; Tugwell, Barna; Van Zoost, Colin; Canfield, Carolyn; Rivoire, Eleanor

    2017-01-01

    Background: Recent evidence shows that patient engagement is an important strategy in achieving a high-performing healthcare system. While there is considerable evidence of implementation initiatives in the direct care context, there is limited investigation of implementation initiatives in the decision-making context as it relates to program planning, service delivery, and developing policies. Research has also shown a gap in the consistent application of system-level strategies that can effectively translate organizational policies around patient and family engagement into practice. Methods: The broad objective of this initiative was to develop a system-level implementation strategy to include patient and family advisors (PFAs) at decision-making points in primary healthcare (PHC) based on well-established evidence and literature. In this initiative, sponsored by the Canadian Foundation for Healthcare Improvement (CFHI), a co-design methodology, also well established, was applied in identifying and developing a suitable implementation strategy to engage PFAs as members of quality teams in PHC. Diabetes management centres (DMCs) were selected as the pilot site for developing the strategy. Key steps in the process included a review of evidence, a review of the current state in PHC through engagement of key stakeholders, and a co-design approach. Results: The project team included a diverse representation of members from the PHC system, including patient advisors, DMC team members, system leads, providers, Public Engagement team members, and CFHI improvement coaches. Key outcomes of this 18-month initiative included development of a working definition of patient and family engagement, development of a Patient and Family Engagement Resource Guide, and evaluation of the resource guide. Conclusion: This novel initiative provided us an opportunity to develop a supportive system-wide implementation plan and a strategy to include PFAs in decision-making processes in PHC. The well-established co

  5. The teamwork in assertive community treatment (TACT) scale: development and validation.

    PubMed

    Wholey, Douglas R; Zhu, Xi; Knoke, David; Shah, Pri; Zellmer-Bruhn, Mary; Witheridge, Thomas F

    2012-11-01

    Team design is meticulously specified for assertive community treatment (ACT) teams, yet performance can vary across ACT teams, even those with high fidelity. By developing and validating the Teamwork in Assertive Community Treatment (TACT) scale, investigators examined the role of team processes in ACT performance. The TACT scale measuring ACT teamwork was developed from a conceptual model grounded in organizational research and adapted for the ACT and mental health context. TACT subscales were constructed after exploratory and confirmatory factor analyses. The reliability, discriminant validity, predictive validity, temporal stability, internal consistency, and within-team agreement were established with surveys from approximately 300 members of 26 Minnesota ACT teams who completed the questionnaire three times, at six-month intervals. Nine TACT subscales emerged from the analyses: exploration, exploitation of new and existing knowledge, psychological safety, goal agreement, conflict, constructive controversy, information accessibility, encounter preparedness, and consumer-centered care. These nine subscales demonstrated fit and temporal stability (confirmatory factor analysis), high internal consistency (Cronbach's alpha), and within-team agreement and between-team differences (rwg and intraclass correlations). Correlational analyses of the subscales revealed that they measure related yet distinctive aspects of ACT team processes, and regression analyses demonstrated predictive validity (encounter preparedness is related to staff outcomes). The TACT scale demonstrated high reliability and validity and can be included in research and evaluation of teamwork in ACT and mental health teams.

  6. Model Validation Status Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    E.L. Hardin

    The primary objective for the Model Validation Status Review was to perform a one-time evaluation of model validation associated with the analysis/model reports (AMRs) containing model input to total-system performance assessment (TSPA) for the Yucca Mountain site recommendation (SR). This review was performed in response to Corrective Action Request BSC-01-C-01 (Clark 2001, Krisha 2001) pursuant to Quality Assurance review findings of an adverse trend in model validation deficiency. The review findings in this report provide the following information, which defines the extent of model validation deficiency and the corrective action needed: (1) AMRs that contain or support models are identified, and conversely, for each model the supporting documentation is identified. (2) The use for each model is determined based on whether the output is used directly for TSPA-SR, or for screening (exclusion) of features, events, and processes (FEPs), and the nature of the model output. (3) Two approaches are used to evaluate the extent to which the validation for each model is compliant with AP-3.10Q (Analyses and Models). The approaches differ in regard to whether model validation is achieved within individual AMRs as originally intended, or whether model validation could be readily achieved by incorporating information from other sources. (4) Recommendations are presented for changes to the AMRs, and additional model development activities or data collection, that will remedy model validation review findings, in support of licensing activities. The Model Validation Status Review emphasized those AMRs that support TSPA-SR (CRWMS M&O 2000bl and 2000bm). A series of workshops and teleconferences was held to discuss and integrate the review findings. The review encompassed 125 AMRs (Table 1) plus certain other supporting documents and data needed to assess model validity. The AMRs were grouped in 21 model areas representing the modeling of processes affecting the natural

  7. Validation of the filament winding process model

    NASA Technical Reports Server (NTRS)

    Calius, Emilo P.; Springer, George S.; Wilson, Brian A.; Hanson, R. Scott

    1987-01-01

    Tests were performed toward validating the WIND model developed previously for simulating the filament winding of composite cylinders. In these tests, two 24 in. long, 8 in. diameter, and 0.285 in. thick cylinders, made of IM-6G fibers and HBRF-55 resin, were wound at ±45 deg angles on steel mandrels. The temperatures on the inner and outer surfaces and inside the composite cylinders were recorded during oven cure. The temperatures inside the cylinders were also calculated by the WIND model. The measured and calculated temperatures were then compared. In addition, the degree of cure and resin viscosity distributions inside the cylinders were calculated for the conditions which existed in the tests.

  8. Alternative Vocabularies in the Test Validity Literature

    ERIC Educational Resources Information Center

    Markus, Keith A.

    2016-01-01

    Justification of testing practice involves moving from one state of knowledge about the test to another. Theories of test validity can (a) focus on the beginning of the process, (b) focus on the end, or (c) encompass the entire process. Analyses of four case studies test and illustrate three claims: (a) restrictions on validity entail a supplement…

  9. Spanish Validation of the Care Evaluation Scale for Measuring the Quality of Structure and Process of Palliative Care From the Family Perspective.

    PubMed

    Benitez-Rosario, Miguel Angel; Caceres-Miranda, Raquel; Aguirre-Jaime, Armando

    2016-03-01

    A reliable and valid measure of the structure and process of end-of-life care is important for improving the outcomes of care. This study evaluated the validity and reliability of the Spanish adaptation of the Care Evaluation Scale (CES), a satisfaction tool developed in Japan to evaluate the structure and process of palliative care from the perspective of family members. Standard forward-backward translation and a pilot test were conducted. A multicenter survey was conducted with the relatives of patients admitted to palliative care units for symptom control. The dimensional structure was assessed using confirmatory factor analyses. Concurrent and discriminant validity were tested by correlation with the SERQVHOS, a Spanish hospital care satisfaction scale, and with an 11-point rating scale on satisfaction with care. The reliability of the CES was tested by Cronbach α and by test-retest correlation. A total of 284 primary caregivers completed the CES, with low missing response rates. The results of the factor analysis suggested a six-factor solution explaining 69% of the total variance. The CES moderately correlated with the SERQVHOS and with the overall satisfaction scale (intraclass correlation coefficients of 0.66 and 0.44, respectively; P = 0.001). Cronbach α was 0.90 overall and ranged from 0.85 to 0.89 for subdomains. The intraclass correlation coefficient was 0.88 (P = 0.001) for the test-retest analysis. The Spanish CES was found to be a reliable and valid measure of satisfaction with end-of-life care structure and process from family members' perspectives. Copyright © 2016 American Academy of Hospice and Palliative Medicine. Published by Elsevier Inc. All rights reserved.

  10. MotiveValidator: interactive web-based validation of ligand and residue structure in biomolecular complexes.

    PubMed

    Vařeková, Radka Svobodová; Jaiswal, Deepti; Sehnal, David; Ionescu, Crina-Maria; Geidl, Stanislav; Pravda, Lukáš; Horský, Vladimír; Wimmerová, Michaela; Koča, Jaroslav

    2014-07-01

    Structure validation has become a major issue in the structural biology community, and an essential step is checking the ligand structure. This paper introduces MotiveValidator, a web-based application for the validation of ligands and residues in PDB or PDBx/mmCIF format files provided by the user. Specifically, MotiveValidator is able to evaluate in a straightforward manner whether the ligand or residue being studied has a correct annotation (3-letter code), i.e. if it has the same topology and stereochemistry as the model ligand or residue with this annotation. If not, MotiveValidator explicitly describes the differences. MotiveValidator offers a user-friendly, interactive and platform-independent environment for validating structures obtained by any type of experiment. The results of the validation are presented in both tabular and graphical form, facilitating their interpretation. MotiveValidator can process thousands of ligands or residues in a single validation run that takes no more than a few minutes. MotiveValidator can be used for testing single structures, or the analysis of large sets of ligands or fragments prepared for binding site analysis, docking or virtual screening. MotiveValidator is freely available via the Internet at http://ncbr.muni.cz/MotiveValidator. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  11. Validation of Contamination Control in Rapid Transfer Port Chambers for Pharmaceutical Manufacturing Processes

    PubMed Central

    Hu, Shih-Cheng; Shiue, Angus; Liu, Han-Yang; Chiu, Rong-Ben

    2016-01-01

    There is worldwide concern with regard to the adverse effects of drug usage. However, contaminants can gain entry into a drug manufacturing process stream from several sources, such as personnel, poor facility design, incoming ventilation air, and machinery and other production equipment. In this validation study, we aimed to determine the impact of, and evaluate, contamination control in the preparation areas of the rapid transfer port (RTP) chamber during pharmaceutical manufacturing processes. The RTP chamber is normally tested for airflow velocity, particle counts, pressure decay of leakage, and sterility. The air flow balance of the RTP chamber is affected by the airflow quantity and the height above the platform. It is relatively straightforward to evaluate the RTP chamber's leakage by pressure decay, in which the system is charged with air, closed, and the decay of pressure is measured over a timed period. We also determined the vaporized H2O2 concentration sufficient for complete decontamination. The performance of the RTP chamber will improve safety and can be completely tested in an ISO Class 5 environment. PMID:27845748
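
    The pressure decay test described above converts a timed pressure drop into a leak rate; a minimal sketch follows, assuming an isothermal test, an illustrative chamber volume, and an assumed pass/fail limit.

    ```python
    def leak_rate_pa_m3_per_s(volume_m3, p_start_pa, p_end_pa, elapsed_s):
        """Leak rate Q = V * dP / dt (Pa*m^3/s) for an isothermal decay test."""
        return volume_m3 * (p_start_pa - p_end_pa) / elapsed_s

    V = 0.05  # hypothetical 50 L chamber volume
    q = leak_rate_pa_m3_per_s(V, 120_000.0, 119_850.0, 600.0)  # 150 Pa drop in 10 min
    print(f"leak rate: {q:.3e} Pa*m^3/s")   # -> 1.250e-02
    assert q < 0.05, "chamber fails the (assumed) leakage limit"
    ```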

  12. The Process of Including Elementary Students with Autism and Intellectual Impairments in Their Typical Classrooms.

    ERIC Educational Resources Information Center

    Downing, June E.; And Others

    A qualitative case study methodology was used to examine the process of including three students with autism, intellectual impairments, and behavioral challenges in age-appropriate typical classrooms and home schools. Data were obtained over a 9-month period from field notes of a participant researcher and three paraeducators, structured…

  13. Threats to the Valid Use of Assessment of Prior Learning in Higher Education: Claimants' Experiences of the Assessment Process

    ERIC Educational Resources Information Center

    Stenlund, Tova

    2012-01-01

    Assessment of Prior Learning (APL) refers to a process where adults' prior learning, formal as well as informal, is assessed and acknowledged. In the first section of this paper, APL and current conceptions of validity in assessments and its evaluation are presented. It is argued that participants in the assessment are an important source of…

  14. Validating archetypes for the Multiple Sclerosis Functional Composite.

    PubMed

    Braun, Michael; Brandt, Alexander Ulrich; Schulz, Stefan; Boeker, Martin

    2014-08-03

    Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects are not yet regarded sufficiently. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. A standard archetype development approach was applied on a case set of three clinical tests for multiple sclerosis assessment: after an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal of validating archetypes pragmatically. The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. The validation process is a practical way to better harmonise models that diverge due to necessary flexibility left open by the underlying formal reference model definitions. This case study provides evidence that both community- and tool-enabled review processes

  15. Validating archetypes for the Multiple Sclerosis Functional Composite

    PubMed Central

    2014-01-01

    Background Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects are not yet regarded sufficiently. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. Methods A standard archetype development approach was applied on a case set of three clinical tests for multiple sclerosis assessment: after an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Results Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal of validating archetypes pragmatically. Conclusions The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. The validation process is a practical way to better harmonise models that diverge due to necessary flexibility left open by the underlying formal reference model definitions. This case study provides evidence that both

  16. Automatic, semi-automatic and manual validation of urban drainage data.

    PubMed

    Branisavljević, N; Prodanović, D; Pavlović, D

    2010-01-01

    Advances in sensor technology and the possibility of automated long-distance data transmission have made continuous measurements the preferable way of monitoring urban drainage processes. Usually, the collected data have to be processed by an expert in order to detect and mark the wrong data, remove them, and replace them with interpolated data. In general, the first step, detecting the wrong, anomalous data, is called data quality assessment or data validation. Data validation consists of three parts: data preparation, validation scores generation, and scores interpretation. This paper presents the overall framework for the data quality improvement system, suitable for automatic, semi-automatic, or manual operation. The first two steps of the validation process are explained in more detail, using several validation methods on the same set of real-case data from the Belgrade sewer system. The final part of the validation process, the scores interpretation, needs to be further investigated on the developed system.
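
    The "validation scores generation" step can be sketched as a set of methods that each score every point of a sensor series; the range and flatline checks below are generic examples, not the Belgrade system's actual rules.

    ```python
    import numpy as np

    def range_score(x, lo, hi):
        """Score 1 where the value is physically plausible, 0 otherwise."""
        return ((x >= lo) & (x <= hi)).astype(float)

    def flatline_score(x, window=5, tol=1e-6):
        """Score 0 where the signal has been constant over a window (stuck sensor)."""
        s = np.ones_like(x, dtype=float)
        for i in range(window, len(x)):
            if np.ptp(x[i - window:i + 1]) < tol:
                s[i] = 0.0
        return s

    # Illustrative water-level series with one spike and a stuck tail.
    level_m = np.array([0.42, 0.44, 0.43, 5.90, 0.45, 0.45, 0.45, 0.45, 0.45, 0.45])
    scores = np.vstack([range_score(level_m, 0.0, 3.0), flatline_score(level_m)])
    print(scores.min(axis=0))  # combined score; 0 marks suspect points
    ```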

  17. Validity Study of U.T. Austin Test for Use in Credit by Examination in Introduction to Electronic Data Processing (DPA 310), Fall 1987.

    ERIC Educational Resources Information Center

    Appenzellar, Anne B.; Kelley, H. Paul

    The Measurement and Evaluation Center of the University of Texas (Austin) conducted a validity study to assist the Department of Management Science and Information (DMSI) at the College of Business Administration in establishing a program of credit by examination for an introductory course in electronic data processing--Data Processing Analysis…

  18. Preliminary data on validity of the Drug Addiction Treatment Efficacy Questionnaire.

    PubMed

    Kastelic, Andrej; Mlakar, Janez; Pregelj, Peter

    2013-09-01

    This study describes the validation process for the Slovenian version of the Drug Addiction Treatment Efficacy Questionnaire (DATEQ). The DATEQ was constructed from the questionnaires used at the Centre for the Treatment of Drug Addiction, Ljubljana University Psychiatric Hospital, and within the network of Centres for the Prevention and Treatment of Drug Addiction in Slovenia during the past 14 years. The Slovenian version of the DATEQ was translated to English using the 'forward-backward' procedure by its authors and their co-workers. The validation process included 100 male and female patients with established addiction to illicit drugs who had been prescribed opioid substitution therapy. The DATEQ questionnaire was used in the study, together with clinical evaluation, to measure psychological state and to evaluate the efficacy of treatment in the last year. To determine the validity of the DATEQ, the correlation with the clinical assessments of the outcome was calculated using one-way ANOVA. The F value was 44.4, p<0.001 (sum of squares: between groups 210.4, df=2; within groups 229.7, df=97; total 440.1, df=99). At a cut-off of 4, the sensitivity is 81% and the specificity 83%. The validation process for the Slovenian DATEQ version shows metric properties similar to those found in international studies of similar questionnaires, suggesting that it measures the same constructs in the same way as similar questionnaires. However, the relatively low sensitivity and specificity suggest caution when using the DATEQ as the only measure of outcome.
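
    The sensitivity and specificity quoted at the cut-off of 4 follow from the usual 2x2 arithmetic; a minimal sketch follows, with synthetic scores and outcome labels standing in for the DATEQ sample.

    ```python
    import numpy as np

    def sens_spec(scores, outcome_positive, cutoff):
        """Sensitivity and specificity of (score >= cutoff) against an outcome."""
        scores = np.asarray(scores)
        outcome_positive = np.asarray(outcome_positive, dtype=bool)
        test_positive = scores >= cutoff
        sens = (test_positive & outcome_positive).sum() / outcome_positive.sum()
        spec = (~test_positive & ~outcome_positive).sum() / (~outcome_positive).sum()
        return sens, spec

    scores = [1, 2, 3, 4, 5, 6, 7, 8, 2, 5]          # illustrative questionnaire scores
    poor_outcome = [0, 0, 0, 1, 1, 1, 1, 1, 0, 0]    # illustrative clinical labels
    print(sens_spec(scores, poor_outcome, cutoff=4))  # -> (1.0, 0.8)
    ```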

  19. Validity and validation of expert (Q)SAR systems.

    PubMed

    Hulzebos, E; Sijm, D; Traas, T; Posthumus, R; Maslankiewicz, L

    2005-08-01

    At a recent workshop in Setubal (Portugal), principles were drafted to assess the suitability of (quantitative) structure-activity relationships ((Q)SARs) for assessing the hazards and risks of chemicals. In the present study we applied some of the Setubal principles to test the validity of three (Q)SAR expert systems and validate the results. These principles include a mechanistic basis, the availability of a training set, and validation. ECOSAR, BIOWIN, and DEREK for Windows have a mechanistic or empirical basis. ECOSAR has a training set for each QSAR. For half of the structural fragments, the number of chemicals in the training set is >4. Based on structural fragments and log Kow, ECOSAR uses linear regression to predict ecotoxicity. Validating ECOSAR for three 'valid' classes results in a predictivity of ≥64%. BIOWIN uses (non-)linear regressions to predict the probability of biodegradability based on fragments and molecular weight. It has a large training set and predicts non-ready biodegradability well. DEREK for Windows predictions are supported by a mechanistic rationale and literature references. The structural alerts in this program have been developed with a training set of positive and negative toxicity data. However, to support the prediction, only a limited number of chemicals in the training set is presented to the user. DEREK for Windows predicts effects by 'if-then' reasoning. The program predicts best for mutagenicity and carcinogenicity. Each structural fragment in ECOSAR and DEREK for Windows needs to be evaluated and validated separately.

  20. Including Delbrück scattering in GEANT4

    NASA Astrophysics Data System (ADS)

    Omer, Mohamed; Hajima, Ryoichi

    2017-08-01

    Elastic scattering of γ-rays is a significant class of γ-ray interactions with matter. Therefore, the planning of experiments involving measurements of γ-rays using Monte Carlo simulations usually includes elastic scattering. However, current simulation tools do not provide a complete picture of elastic scattering. The majority of these tools assume Rayleigh scattering is the primary contributor to elastic scattering and neglect other elastic scattering processes, such as nuclear Thomson and Delbrück scattering. Here, we develop a tabulation-based method to simulate elastic scattering in one of the most common open-source Monte Carlo simulation toolkits, GEANT4. We collectively include three processes: Rayleigh scattering, nuclear Thomson scattering, and Delbrück scattering. Our simulation more appropriately uses differential cross sections based on the second-order scattering matrix instead of the current data, which are based on the form factor approximation. Moreover, the superposition of these processes is carefully taken into account, emphasizing the complex nature of the scattering amplitudes. The simulation covers an energy range of 0.01 MeV ≤ E ≤ 3 MeV and all elements with atomic numbers of 1 ≤ Z ≤ 99. In addition, we validated our simulation by comparing the differential cross sections measured in earlier experiments with those extracted from the simulations. We find that the simulations are in good agreement with the experimental measurements. Differences between the experiments and the simulations are 21% for uranium, 24% for lead, 3% for tantalum, and 8% for cerium at 2.754 MeV. Coulomb corrections to the Delbrück amplitudes may account for the relatively large differences that appear at higher Z values.
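
    A tabulation-based simulation of the kind described above stores differential cross sections versus angle and samples scattering angles from them; the sketch below uses a made-up angular distribution and inverse-CDF sampling to illustrate the idea, not the actual second-order S-matrix tables.

    ```python
    import numpy as np

    # Illustrative tabulated differential cross section dsigma/dOmega vs angle.
    theta_deg = np.linspace(0.0, 180.0, 181)
    dsigma = np.exp(-theta_deg / 40.0) + 0.01   # made-up forward-peaked shape

    # Weight by sin(theta) for the solid-angle element, then build the CDF.
    pdf = dsigma * np.sin(np.radians(theta_deg))
    cdf = np.cumsum(pdf)
    cdf /= cdf[-1]

    # Sample scattering angles by inverting the tabulated CDF.
    rng = np.random.default_rng(42)
    samples = np.interp(rng.random(5), cdf, theta_deg)
    print(samples)  # five sampled scattering angles in degrees
    ```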

  1. Enhancement and Validation of an Arab Surname Database

    PubMed Central

    Schwartz, Kendra; Beebani, Ganj; Sedki, Mai; Tahhan, Mamon; Ruterbusch, Julie J.

    2015-01-01

    Objectives Arab Americans constitute a large, heterogeneous, and quickly growing subpopulation in the United States. Health statistics for this group are difficult to find because US governmental offices do not recognize Arab as separate from white. The development and validation of an Arab- and Chaldean-American name database will enhance research efforts in this population subgroup. Methods A previously validated name database was supplemented with newly identified names gathered primarily from vital statistic records and then evaluated using a multistep process. This process included 1) review by 4 Arabic- and Chaldean-speaking reviewers, 2) ethnicity assessment by social media searches, and 3) self-report of ancestry obtained from a telephone survey. Results Our Arab- and Chaldean-American name algorithm has a positive predictive value of 91% and a negative predictive value of 100%. Conclusions This enhanced name database and algorithm can be used to identify Arab Americans in health statistics data, such as cancer and hospital registries, where they are often coded as white, to determine the extent of health disparities in this population. PMID:24625771
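
    The positive and negative predictive values reported above follow from standard 2x2 validation counts; a minimal sketch follows, with hypothetical counts chosen only to reproduce the reported percentages.

    ```python
    def ppv_npv(tp, fp, tn, fn):
        """Positive and negative predictive values from 2x2 validation counts."""
        return tp / (tp + fp), tn / (tn + fn)

    # Hypothetical counts: name-algorithm prediction vs self-reported ancestry.
    ppv, npv = ppv_npv(tp=455, fp=45, tn=500, fn=0)
    print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")  # -> PPV = 91%, NPV = 100%
    ```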

  2. Magnetic Field Satellite (Magsat) data processing system specifications

    NASA Technical Reports Server (NTRS)

    Berman, D.; Gomez, R.; Miller, A.

    1980-01-01

    The software specifications for the MAGSAT data processing system (MDPS) are presented. The MDPS is divided functionally into preprocessing of primary input data, data management, chronicle processing, and postprocessing. Data organization and validity, and checks of spacecraft and instrumentation, are discussed. Output products of the MDPS, including various plots and data tapes, are described. Formats for important tapes are presented. Discussions and mathematical formulations for coordinate transformations and field model coefficients are included.

  3. Global approach for the validation of an in-line Raman spectroscopic method to determine the API content in real-time during a hot-melt extrusion process.

    PubMed

    Netchacovitch, L; Thiry, J; De Bleye, C; Dumont, E; Cailletaud, J; Sacré, P-Y; Evrard, B; Hubert, Ph; Ziemons, E

    2017-08-15

    Since the Food and Drug Administration (FDA) published a guidance based on the Process Analytical Technology (PAT) approach, real-time analyses during manufacturing processes have been expanding rapidly. In this study, in-line Raman spectroscopic analyses were performed during a Hot-Melt Extrusion (HME) process to determine the Active Pharmaceutical Ingredient (API) content in real-time. The method was validated based on both a univariate and a multivariate approach, and the analytical performances of the obtained models were compared. Moreover, on one hand, in-line data were correlated with the real API concentration present in the sample, quantified by a previously validated off-line confocal Raman microspectroscopic method. On the other hand, in-line data were also treated as a function of the concentration based on the weighing of the components in the prepared mixture. The importance of developing quantitative methods based on the use of a reference method was thus highlighted. The method was validated according to the total error approach, fixing the acceptance limits at ±15% and the α risk at 5%. This method meets the requirements of the European Pharmacopoeia norms for the uniformity of content of single-dose preparations. The validation proves that future results will fall within the acceptance limits with a previously defined probability. Finally, the in-line validated method was compared with the off-line one to demonstrate its suitability for routine analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
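
    The total error approach checks that combined bias and precision stay within the ±15% acceptance limits at each concentration level; the sketch below is a simplified stand-in that uses mean ± 2 SD of the relative error rather than the β-expectation tolerance intervals a rigorous accuracy profile would use, and all data values are illustrative.

    ```python
    import numpy as np

    def total_error_ok(measured, nominal, limit_pct=15.0):
        """Check whether mean relative error +/- 2 SD stays inside +/-limit_pct."""
        rel_err = 100.0 * (np.asarray(measured) - nominal) / nominal
        half_width = 2 * rel_err.std(ddof=1)
        lo, hi = rel_err.mean() - half_width, rel_err.mean() + half_width
        return (lo > -limit_pct and hi < limit_pct), (lo, hi)

    # Illustrative back-calculated API contents at a nominal 20% (w/w) level.
    api_pct = [19.4, 20.3, 19.8, 20.6, 19.9, 20.1]
    print(total_error_ok(api_pct, nominal=20.0))  # -> (True, (lo, hi))
    ```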

  4. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    PubMed

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method, or on the design of a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  5. Producibility improvements suggested by a validated process model of seeded CdZnTe vertical Bridgman growth

    NASA Astrophysics Data System (ADS)

    Larson, David J., Jr.; Casagrande, Louis G.; Di Marzio, Don; Levy, Alan; Carlson, Frederick M.; Lee, Taipao; Black, David R.; Wu, Jun; Dudley, Michael

    1994-07-01

    We have successfully validated theoretical models of seeded vertical Bridgman-Stockbarger CdZnTe crystal growth and post-solidification processing, using in-situ thermal monitoring and innovative material characterization techniques. The models predict the thermal gradients, interface shape, fluid flow and solute redistribution during solidification, as well as the distributions of accumulated excess stress that causes defect generation and redistribution. Data from the furnace and ampoule wall have validated predictions from the thermal model. Results are compared to predictions of the thermal and thermo-solutal models. We explain the measured initial, change-of-rate, and terminal compositional transients as well as the macrosegregation. Macro- and micro-defect distributions have been imaged on CdZnTe wafers from 40 mm diameter boules. Superposition of topographic defect images and predicted excess stress patterns suggests that some frequently encountered defects, particularly on a macro scale, result from the applied and accumulated stress fields and the anisotropic nature of the CdZnTe crystal. Implications of these findings with respect to producibility are discussed.

  6. Integrated Medical Model Verification, Validation, and Credibility

    NASA Technical Reports Server (NTRS)

    Walton, Marlei; Kerstman, Eric; Foy, Millennia; Shah, Ronak; Saile, Lynn; Boley, Lynn; Butler, Doug; Myers, Jerry

    2014-01-01

    The Integrated Medical Model (IMM) was designed to forecast relative changes for a specified set of crew health and mission success risk metrics by using a probabilistic (stochastic process) model based on historical data, cohort data, and subject matter expert opinion. A probabilistic approach is taken since exact (deterministic) results would not appropriately reflect the uncertainty in the IMM inputs. Once the IMM was conceptualized, a plan was needed to rigorously assess input information, framework and code, and output results of the IMM, and ensure that end user requests and requirements were considered during all stages of model development and implementation. METHODS: In 2008, the IMM team developed a comprehensive verification and validation (VV) plan, which specified internal and external review criteria encompassing 1) verification of data and IMM structure to ensure proper implementation of the IMM, 2) several validation techniques to confirm that the simulation capability of the IMM appropriately represents occurrences and consequences of medical conditions during space missions, and 3) credibility processes to develop user confidence in the information derived from the IMM. When the NASA-STD-7009 (7009) was published, the IMM team updated their verification, validation, and credibility (VVC) project plan to meet 7009 requirements and include 7009 tools in reporting VVC status of the IMM. RESULTS: IMM VVC updates are compiled recurrently and include 7009 Compliance and Credibility matrices, IMM VV Plan status, and a synopsis of any changes or updates to the IMM during the reporting period. Reporting tools have evolved over the lifetime of the IMM project to better communicate VVC status. This has included refining original 7009 methodology with augmentation from the NASA-STD-7009 Guidance Document. End user requests and requirements are being satisfied as evidenced by ISS Program acceptance of IMM risk forecasts, transition to an operational model and
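
    As a toy illustration of the stochastic forecasting idea (not the actual IMM, whose conditions, incidence models, and outcome metrics are far richer), a Monte Carlo sketch that samples medical-event counts and accumulates a mission risk metric; all rates and losses below are invented:

    ```python
    import numpy as np

    rng = np.random.default_rng(42)

    # Hypothetical per-mission incidence rates and crew-time losses (days) per event.
    conditions = {"back_pain": (0.8, 1.5), "skin_rash": (0.4, 0.5), "renal_stone": (0.05, 10.0)}

    def simulate_missions(n_trials=100_000):
        """Sample total crew-time lost per mission across all conditions."""
        total = np.zeros(n_trials)
        for rate, loss in conditions.values():
            counts = rng.poisson(rate, n_trials)  # occurrences of each condition per mission
            total += counts * loss                # fixed loss per occurrence (a simplification)
        return total

    lost = simulate_missions()
    print(f"median = {np.median(lost):.1f} d, 95th percentile = {np.percentile(lost, 95):.1f} d")
    ```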

  7. Performing Verification and Validation in Reuse-Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward A.

    1999-01-01

    The implementation of reuse-based software engineering not only introduces new activities to the software development process, such as domain analysis and domain modeling, it also impacts other aspects of software engineering. Other areas of software engineering that are affected include Configuration Management, Testing, Quality Control, and Verification and Validation (V&V). Activities in each of these areas must be adapted to address the entire domain or product line rather than a specific application system. This paper discusses changes and enhancements to the V&V process, in order to adapt V&V to reuse-based software engineering.

  8. Design and validation of instruments to measure knowledge.

    PubMed

    Elliott, T E; Regal, R R; Elliott, B A; Renier, C M

    2001-01-01

    Measuring health care providers' learning after they have participated in educational interventions that use experimental designs requires valid, reliable, and practical instruments. A literature review was conducted. In addition, experience gained from designing and validating instruments for measuring the effect of an educational intervention informed this process. The eight main steps for designing, validating, and testing the reliability of instruments for measuring learning outcomes are presented. The key considerations and rationale for this process are discussed. Methods for critiquing and adapting existent instruments and creating new ones are offered. This study may help other investigators in developing valid, reliable, and practical instruments for measuring the outcomes of educational activities.

  9. Analysis procedures and subjective flight results of a simulator validation and cue fidelity experiment

    NASA Technical Reports Server (NTRS)

    Carr, Peter C.; Mckissick, Burnell T.

    1988-01-01

    A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.

  10. Issues and approach to develop validated analysis tools for hypersonic flows: One perspective

    NASA Technical Reports Server (NTRS)

    Deiwert, George S.

    1993-01-01

    Critical issues concerning the modeling of low density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools, and the activity in the NASA Ames Research Center's Aerothermodynamics Branch is described. Inherent in the process is a strong synergism between ground test and real gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flowfield simulation codes are discussed. These models were partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse, and reliance must be placed on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high enthalpy flow facilities, such as shock tubes and ballistic ranges.

  11. Evaluation of biologic occupational risk control practices: quality indicators development and validation.

    PubMed

    Takahashi, Renata Ferreira; Gryschek, Anna Luíza F P L; Izumi Nichiata, Lúcia Yasuko; Lacerda, Rúbia Aparecida; Ciosak, Suely Itsuko; Gir, Elucir; Padoveze, Maria Clara

    2010-05-01

    There is growing demand for the adoption of qualification systems for health care practices. This study is aimed at describing the development and validation of indicators for evaluation of biologic occupational risk control programs. The study involved 3 stages: (1) setting up a research team, (2) development of indicators, and (3) validation of the indicators by a team of specialists recruited to validate each attribute of the developed indicators. The content validation method was used for the validation, and a psychometric scale was developed for the specialists' assessment. A consensus technique was used, and every attribute that obtained a Content Validity Index of at least 0.75 was approved. Eight indicators were developed for the evaluation of the biologic occupational risk prevention program, with emphasis on accidents caused by sharp instruments and occupational tuberculosis prevention. The indicators included evaluation of the structure, process, and results at the prevention and biologic risk control levels. The majority of indicators achieved a favorable consensus regarding all validated attributes. The developed indicators were considered validated, and the method used for construction and validation proved to be effective. Copyright (c) 2010 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
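
    The acceptance rule described here (Content Validity Index of at least 0.75) is straightforward to compute: the CVI of an attribute is the proportion of specialists who rate it as relevant. A minimal sketch, with invented ratings:

    ```python
    def content_validity_index(ratings, relevant=(3, 4)):
        """CVI: proportion of experts rating the item 'relevant' on a 4-point scale."""
        return sum(r in relevant for r in ratings) / len(ratings)

    # Hypothetical ratings from 8 specialists for one indicator attribute.
    ratings = [4, 3, 4, 2, 4, 3, 3, 4]
    cvi = content_validity_index(ratings)
    print(f"CVI = {cvi:.2f} -> {'approved' if cvi >= 0.75 else 'revise'}")
    ```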

  12. GNSS-Based Space Weather Systems Including COSMIC Ionospheric Measurements

    NASA Technical Reports Server (NTRS)

    Komjathy, Attila; Mandrake, Lukas; Wilson, Brian; Iijima, Byron; Pi, Xiaoqing; Hajj, George; Mannucci, Anthony J.

    2006-01-01

    The presentation outline includes University Corporation for Atmospheric Research (UCAR) and Jet Propulsion Laboratory (JPL) product comparisons, assimilating ground-based global positioning satellites (GPS) and COSMIC into JPL/University of Southern California (USC) Global Assimilative Ionospheric Model (GAIM), and JPL/USC GAIM validation. The discussion of comparisons examines Abel profiles and calibrated TEC. The JPL/USC GAIM validation uses Arecibo ISR, Jason-2 VTEC, and Abel profiles.

  13. Processes in healthcare teams that include nurse practitioners: what do patients and families perceive to be effective?

    PubMed

    Kilpatrick, Kelley; Jabbour, Mira; Fortin, Chantal

    2016-03-01

    To explore patient and family perceptions of team effectiveness in teams that include nurse practitioners in acute and primary care. Nurse practitioners provide safe and effective care. Patients are satisfied with the care provided by nurse practitioners. Research examining patient and family perceptions of team effectiveness following the implementation of nurse practitioners in teams is lacking. A descriptive qualitative design was used. We used purposeful sampling to identify participants in four clinical specialties. We collected data from March 2014-January 2015 using semi-structured interviews and demographic questionnaires. Content analysis was used. Descriptive statistics were generated. Participants (n = 49) believed that the teams were more effective after the implementation of a nurse practitioner and this was important to them. They described processes that teams with nurse practitioners used to effectively provide care. These processes included improved communication, involvement in decision-making, cohesion, care coordination, problem-solving, and a focus on the needs of patients and families. Participants highlighted the importance of interpersonal team dynamics. A human approach, trust, being open to discussion, listening to patient and family concerns, and respect were particularly valued by participants. Different processes emerged as priorities when data were examined by specialty. However, communication, trust and taking the time to provide care were the most important processes. The study provides new insights into the views of patients and families and micro-level processes in teams with nurse practitioners. The relative importance of each process varied according to the patient's health condition. Patients and providers identified similar team processes. Future research is needed to identify how team processes influence care outcomes. The findings can support patients, clinicians and decision-makers to determine the processes to focus on to

  14. Contemporary Test Validity in Theory and Practice: A Primer for Discipline-Based Education Researchers.

    PubMed

    Reeves, Todd D; Marbach-Ad, Gili

    2016-01-01

    Most discipline-based education researchers (DBERs) were formally trained in the methods of scientific disciplines such as biology, chemistry, and physics, rather than social science disciplines such as psychology and education. As a result, DBERs may have never taken specific courses in the social science research methodology--either quantitative or qualitative--on which their scholarship often relies so heavily. One particular aspect of (quantitative) social science research that differs markedly from disciplines such as biology and chemistry is the instrumentation used to quantify phenomena. In response, this Research Methods essay offers a contemporary social science perspective on test validity and the validation process. The instructional piece explores the concepts of test validity, the validation process, validity evidence, and key threats to validity. The essay also includes an in-depth example of a validity argument and validation approach for a test of student argument analysis. In addition to DBERs, this essay should benefit practitioners (e.g., lab directors, faculty members) in the development, evaluation, and/or selection of instruments for their work assessing students or evaluating pedagogical innovations. © 2016 T. D. Reeves and G. Marbach-Ad. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  15. An Examination of the Construct Validity of the Inventory of Classroom Management Style.

    ERIC Educational Resources Information Center

    Martin, Nancy K.; Baldwin, Beatrice

    Confirmatory factor analysis was used to examine the construct validity of a new instrument measuring perceptions toward classroom management: the Inventory of Classroom Management Style (ICMS). Classroom management was defined as a multifaceted process that includes three broad dimensions: (1) what teachers believe about students as persons; (2)…

  16. Exploration, Development, and Validation of Patient-reported Outcomes in Antineutrophil Cytoplasmic Antibody–associated Vasculitis Using the OMERACT Process

    PubMed Central

    Robson, Joanna C.; Milman, Nataliya; Tomasson, Gunnar; Dawson, Jill; Cronholm, Peter F.; Kellom, Katherine; Shea, Judy; Ashdown, Susan; Boers, Maarten; Boonen, Annelies; Casey, George C.; Farrar, John T.; Gebhart, Don; Krischer, Jeffrey; Lanier, Georgia; McAlear, Carol A.; Peck, Jacqueline; Sreih, Antoine G.; Tugwell, Peter; Luqmani, Raashid A.; Merkel, Peter A.

    2016-01-01

    Objective Antineutrophil cytoplasmic antibody (ANCA)-associated vasculitis (AAV) is a group of linked multisystem life- and organ-threatening diseases. The Outcome Measures in Rheumatology (OMERACT) vasculitis working group has been at the forefront of outcome development in the field and has achieved OMERACT endorsement of a core set of outcomes for AAV. Patients with AAV report as important some manifestations of disease not routinely collected through physician-completed outcome tools; and they rate common manifestations differently from investigators. The core set includes the domain of patient-reported outcomes (PRO). However, PRO currently used in clinical trials of AAV do not fully characterize patients’ perspectives on their burden of disease. The OMERACT vasculitis working group is addressing the unmet needs for PRO in AAV. Methods Current activities of the working group include (1) evaluating the feasibility and construct validity of instruments within the PROMIS (Patient-Reported Outcome Measurement Information System) to record components of the disease experience among patients with AAV; (2) creating a disease-specific PRO measure for AAV; and (3) applying The International Classification of Functioning, Disability and Health to examine the scope of outcome measures used in AAV. Results The working group has developed a comprehensive research strategy, organized an investigative team, included patient research partners, obtained peer-reviewed funding, and is using a considerable research infrastructure to complete these interrelated projects to develop evidence-based validated outcome instruments that meet the OMERACT filter of truth, discrimination, and feasibility. Conclusion The OMERACT vasculitis working group is on schedule to achieve its goals of developing validated PRO for use in clinical trials of AAV. (First Release September 1 2015; J Rheumatol 2015;42:2204–9; doi:10.3899/jrheum.141143) PMID:26329344

  17. Linguistic Validation of an Interactive Communication Tool to Help French-Speaking Children Express Their Cancer Symptoms.

    PubMed

    Tsimicalis, Argerie; Le May, Sylvie; Stinson, Jennifer; Rennick, Janet; Vachon, Marie-France; Louli, Julie; Bérubé, Sarah; Treherne, Stephanie; Yoon, Sunmoo; Nordby Bøe, Trude; Ruland, Cornelia

    Sisom is an interactive tool designed to help children communicate their cancer symptoms. Important design issues relevant to other cancer populations remain unexplored. This single-site, descriptive, qualitative study was conducted to linguistically validate Sisom with a group of French-speaking children with cancer, their parents, and health care professionals. The linguistic validation process included 6 steps: (1) forward translation, (2) backward translation, (3) patient testing, (4) production of a Sisom French version, (5) patient testing this version, and (6) production of the final Sisom French prototype. Five health care professionals and 10 children and their parents participated in the study. Health care professionals oversaw the translation process providing clinically meaningful suggestions. Two rounds of patient testing, which included parental participation, resulted in the following themes: (1) comprehension, (2) suggestions for improving the translations, (3) usability, (4) parental engagement, and (5) overall impression. Overall, Sisom was well received by participants who were forthcoming with input and suggestions for improving the French translations. Our proposed methodology may be replicated for the linguistic validation of other e-health tools.

  18. Mining Twitter Data Stream to Augment NASA GPM Validation

    NASA Astrophysics Data System (ADS)

    Teng, W. L.; Albayrak, A.; Huffman, G. J.; Vollmer, B.

    2017-12-01

    The Twitter data stream is an important new source of real-time and historical global information for potentially augmenting the validation program of NASA's Global Precipitation Measurement (GPM) mission. There have been other similar uses of Twitter, though mostly related to natural hazards monitoring and management. The validation of satellite precipitation estimates is challenging, because many regions lack data or access to data, especially outside of the U.S. and in remote and developing areas. The time-varying set of "precipitation" tweets can be thought of as an organic network of rain gauges, potentially providing a widespread view of precipitation occurrence. Twitter provides a large crowd for crowdsourcing. During a 24-hour period in the middle of the snow storm this past March in the U.S. Northeast, we collected more than 13,000 relevant precipitation tweets with exact geolocation. The overall objective of our project is to determine the extent to which processed tweets can provide additional information that improves the validation of GPM data. Though our current effort focuses on tweets and precipitation, our approach is general and applicable to other social media and other geophysical measurements. Specifically, we have developed an operational infrastructure for processing tweets, in a format suitable for analysis with GPM data; engaged with potential participants, both passive and active, to "enrich" the Twitter stream; and inter-compared "precipitation" tweet data, ground station data, and GPM retrievals. In this presentation, we detail the technical capabilities of our tweet processing infrastructure, including data abstraction, feature extraction, search engine, context-awareness, real-time processing, and high volume (big) data processing; various means for "enriching" the Twitter stream; and results of inter-comparisons. Our project should bring a new kind of visibility to Twitter and engender a new kind of appreciation of the value
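
    A minimal sketch of the kind of keyword-plus-geolocation filtering such a pipeline performs on a tweet stream; the field names follow the common Twitter JSON layout, and the actual GPM infrastructure is considerably more elaborate:

    ```python
    RAIN_TERMS = {"rain", "raining", "snow", "snowing", "sleet", "hail", "drizzle"}

    def precipitation_tweets(tweets):
        """Yield (lat, lon, text) for geolocated tweets mentioning precipitation."""
        for tw in tweets:
            coords = (tw.get("coordinates") or {}).get("coordinates")  # [lon, lat] or None
            if coords is None:
                continue  # keep only tweets with exact geolocation
            words = set(tw["text"].lower().split())
            if words & RAIN_TERMS:
                lon, lat = coords
                yield lat, lon, tw["text"]

    sample = [{"text": "Still snowing hard in Boston",
               "coordinates": {"coordinates": [-71.06, 42.36]}}]
    print(list(precipitation_tweets(sample)))
    ```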

  19. Brief International Cognitive Assessment for MS (BICAMS): international standards for validation.

    PubMed

    Benedict, Ralph H B; Amato, Maria Pia; Boringa, Jan; Brochet, Bruno; Foley, Fred; Fredrikson, Stan; Hamalainen, Paivi; Hartung, Hans; Krupp, Lauren; Penner, Iris; Reder, Anthony T; Langdon, Dawn

    2012-07-16

    An international expert consensus committee recently recommended a brief battery of tests for cognitive evaluation in multiple sclerosis. The Brief International Cognitive Assessment for MS (BICAMS) battery includes tests of mental processing speed and memory. Recognizing that resources for validation will vary internationally, the committee identified validation priorities, to facilitate international acceptance of BICAMS. Practical matters pertaining to implementation across different languages and countries were discussed. Five steps to achieve optimal psychometric validation were proposed. In Step 1, test stimuli should be standardized for the target culture or language under consideration. In Step 2, examiner instructions must be standardized and translated, including all information from manuals necessary for administration and interpretation. In Step 3, samples of at least 65 healthy persons should be studied for normalization, matched to patients on demographics such as age, gender and education. The objective of Step 4 is test-retest reliability, which can be investigated in a small sample of MS and/or healthy volunteers over 1-3 weeks. Finally, in Step 5, criterion validity should be established by comparing MS and healthy controls. At this time, preliminary studies are underway in a number of countries as we move forward with this international assessment tool for cognition in MS.

  20. Validation of SAM 2 and SAGE satellite

    NASA Technical Reports Server (NTRS)

    Kent, G. S.; Wang, P.-H.; Farrukh, U. O.; Yue, G. K.

    1987-01-01

    Presented are the results of a validation study of data obtained by the Stratospheric Aerosol and Gas Experiment I (SAGE I) and Stratospheric Aerosol Measurement II (SAM II) satellite experiments. The study includes the entire SAGE I data set (February 1979 - November 1981) and the first four and one-half years of SAM II data (October 1978 - February 1983). These data sets have been validated by their use in the analysis of dynamical, physical and chemical processes in the stratosphere. They have been compared with other existing data sets and the SAGE I and SAM II data sets intercompared where possible. The study has shown the data to be of great value in the study of the climatological behavior of stratospheric aerosols and ozone. Several scientific publications and user-oriented data summaries have appeared as a result of the work carried out under this contract.

  1. Validation of the ENVISAT atmospheric chemistry instruments

    NASA Astrophysics Data System (ADS)

    Snoeij, P.; Koopman, R.; Attema, E.; Zehner, C.; Wursteisen, P.; Dehn, A.; de Laurentius, M.; Frerick, J.; Mantovani, R.; Saavedra de Miguel, L.

    Three atmospheric-chemistry sensors form part of the ENVISAT payload, which was placed into orbit in March 2002. This paper presents the ENVISAT mission status and data policy, reviews the end-to-end performance of the GOMOS, MIPAS and SCIAMACHY observation systems, and discusses the validation aspects of these instruments. In particular, for each instrument, the review addresses mission planning, in-orbit performance, calibration, data processor algorithms and configuration, reprocessing strategy, and product quality control assessment. An important part of the quality assessment is the geophysical validation. At the ACVT validation workshop held in Frascati, Italy, from 3-7 May 2004, scientists and engineers presented analyses of the exhaustive series of tests that have been run on each of the ENVISAT atmospheric chemistry sensors since the spacecraft was launched in March 2002. On the basis of the workshop results, it was decided that most of the data products provided by the ENVISAT atmospheric chemistry instruments are ready for operational delivery. Although the main validation phase for the atmospheric instruments of ENVISAT will be completed soon, validation will continue throughout the lifetime of the ENVISAT mission. The long-term validation phase will: provide assurance of data quality and accuracy for applications such as climate change research; investigate the fully representative range of geophysical conditions; investigate the fully representative range of seasonal cycles; perform long-term monitoring for instrumental drifts and other artefacts; and validate new products. This paper also discusses the general status of the validation activities for GOMOS, MIPAS and SCIAMACHY. The main and long-term geophysical validation programme is presented, and the flight and ground-segment planning, configuration and performance characterization are discussed. The evolution of each of the observation systems has been distinct during the mission.

  2. [Validation of measurement methods and estimation of uncertainty of measurement of chemical agents in the air at workstations].

    PubMed

    Dobecki, Marek

    2012-01-01

    This paper reviews the requirements for measurement methods of chemical agents in the air at workstations. European standards, which have the status of Polish standards, comprise requirements and information on sampling strategy, measuring techniques, types of samplers, sampling pumps and methods of occupational exposure evaluation for a given technological process. Measurement methods, including air sampling and the analytical procedure in the laboratory, should be appropriately validated before their intended use. In the validation process, selected methods are tested and an uncertainty budget is set up. The validation procedure that should be implemented in the laboratory, together with suitable statistical tools and the major components of uncertainty to be taken into consideration, is presented in this paper. Methods of quality control, including sampling and laboratory analyses, are discussed. The relative expanded uncertainty of each measurement, expressed as a percentage, should not exceed the limit values set depending on the type of occupational exposure (short-term or long-term) and the magnitude of exposure to chemical agents in the work environment.
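
    A minimal sketch of how such an uncertainty budget rolls up into a relative expanded uncertainty. The component names and values are invented; the applicable standards define which components must be included:

    ```python
    import math

    # Hypothetical relative standard uncertainties for one workplace-air method.
    components = {"sampling_volume": 0.03, "analyte_recovery": 0.04,
                  "calibration": 0.02, "precision": 0.05}

    u_combined = math.sqrt(sum(u**2 for u in components.values()))  # root sum of squares
    U_expanded = 2 * u_combined  # coverage factor k = 2 (~95% confidence)
    print(f"relative expanded uncertainty = {U_expanded:.1%}")
    ```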

  3. Validation and Demonstration of the NOAA Unique Combined Atmospheric Processing System (NUCAPS) in Support of User Applications

    NASA Astrophysics Data System (ADS)

    Nalli, N. R.; Gambacorta, A.; Tan, C.; Iturbide, F.; Barnet, C. D.; Reale, A.; Sun, B.; Liu, Q.

    2017-12-01

    This presentation overviews the performance of the operational SNPP NOAA Unique Combined Atmospheric Processing System (NUCAPS) environmental data record (EDR) products. The SNPP Cross-track Infrared Sounder and Advanced Technology Microwave Sounder (CrIS/ATMS) suite, the first of the Joint Polar Satellite System (JPSS) Program, is one of NOAA's major investments in our nation's future operational environmental observation capability. The NUCAPS algorithm is a world-class NOAA-operational IR/MW retrieval algorithm, based upon the well-established AIRS science team algorithm, for deriving temperature, moisture, ozone and carbon trace gases to provide users with state-of-the-art EDR products. Operational use of the products includes the NOAA National Weather Service (NWS) Advanced Weather Interactive Processing System (AWIPS), along with numerous science-user applications. NUCAPS EDR product assessments are made with reference to JPSS Level 1 global requirements, which provide the definitive metrics for assessing that the products have minimally met predefined global performance specifications. The NESDIS/STAR NUCAPS development and validation team recently delivered the Phase 4 algorithm, which incorporated critical updates necessary for compatibility with full spectral-resolution (FSR) CrIS sensor data records (SDRs). Based on comprehensive analyses, the NUCAPS Phase 4 CrIS-FSR temperature, moisture and ozone profile EDRs, as well as the carbon trace gas EDRs (CO, CH4 and CO2), are shown to be meeting or close to meeting the JPSS program global requirements. Regional and temporal assessments of interest to EDR users (e.g., AWIPS) will also be presented.

  4. 77 FR 41807 - New Gear Process, a Division of Magna Powertrain, Including On-Site Leased Workers From ABM...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2012-07-16

    DEPARTMENT OF LABOR Employment and Training Administration [TA-W-74,940] New Gear Process, a Division of Magna Powertrain, Including On-Site Leased Workers From ABM Janitorial Service Northeast, Inc... ... Service Northeast, Inc. were employed on-site at the East Syracuse, New York location of New Process Gear...

  5. Validation of a Videoconferenced Speaking Test

    ERIC Educational Resources Information Center

    Kim, Jungtae; Craig, Daniel A.

    2012-01-01

    Videoconferencing offers new opportunities for language testers to assess speaking ability in low-stakes diagnostic tests. To be considered a trusted testing tool in language testing, a test should be examined employing appropriate validation processes [Chapelle, C.A., Jamieson, J., & Hegelheimer, V. (2003). "Validation of a web-based ESL…

  6. Matrix Extension and Multilaboratory Validation of Arsenic Speciation Method EAM §4.10 to Include Wine.

    PubMed

    Tanabe, Courtney K; Hopfer, Helene; Ebeler, Susan E; Nelson, Jenny; Conklin, Sean D; Kubachka, Kevin M; Wilson, Robert A

    2017-05-24

    A multilaboratory validation (MLV) was performed to extend the U.S. Food and Drug Administration's (FDA) analytical method Elemental Analysis Manual (EAM) §4.10, High Performance Liquid Chromatography-Inductively Coupled Plasma-Mass Spectrometric Determination of Four Arsenic Species in Fruit Juice, to include wine. Several method modifications were examined to optimize the method for the analysis of dimethylarsinic acid, monomethylarsonic acid, arsenate (AsV), and arsenite (AsIII) in various wine matrices with a range of ethanol concentrations by liquid chromatography-inductively coupled plasma-mass spectrometry. The optimized method was used for the analysis of five wines of different classifications (red, white, sparkling, rosé, and fortified) by three laboratories. Additionally, the samples were fortified in duplicate at levels of approximately 5, 10, and 30 μg kg⁻¹ and analyzed by each participating laboratory. The combined average fortification recoveries of dimethylarsinic acid, monomethylarsonic acid, and inorganic arsenic (iAs; the sum of AsV and AsIII) in these samples were 101, 100, and 100%, respectively. To further demonstrate the method, 46 additional wine samples were analyzed. The total As levels of all the wines analyzed in this study were between 1.0 and 38.2 μg kg⁻¹. The overall average mass balance, based on the sum of the species recovered from the chromatographic separation compared to the total As measured, was 89%, with a range of 51-135%. In the 51 analyzed samples, iAs accounted for an average of 91% of the sum of the species, with a range of 37-100%.
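
    A minimal sketch of the two summary quantities the study reports, fortification (spike) recovery and species mass balance; the numbers below are invented:

    ```python
    def spike_recovery(measured_spiked, measured_unspiked, spike_added):
        """Percent recovery of a fortification (spike) experiment."""
        return 100 * (measured_spiked - measured_unspiked) / spike_added

    def mass_balance(species_sums, total_as):
        """Sum of chromatographed species as a percentage of total As measured."""
        return 100 * sum(species_sums) / total_as

    print(spike_recovery(measured_spiked=14.9, measured_unspiked=5.1, spike_added=10.0))  # ~98%
    print(mass_balance(species_sums=[3.2, 0.4, 1.1], total_as=5.3))                       # ~89%
    ```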

  7. Method validation for chemical composition determination by electron microprobe with wavelength dispersive spectrometer

    NASA Astrophysics Data System (ADS)

    Herrera-Basurto, R.; Mercader-Trejo, F.; Muñoz-Madrigal, N.; Juárez-García, J. M.; Rodriguez-López, A.; Manzano-Ramírez, A.

    2016-07-01

    The main goal of method validation is to demonstrate that the method is suitable for its intended purpose. One of the advantages of analytical method validation is the level of confidence it provides in the measurement results reported to satisfy a specific objective. Elemental composition determination by wavelength dispersive spectrometer (WDS) microanalysis has been used over extremely wide areas, mainly in the field of materials science and in impurity determinations in geological, biological and food samples. However, little information is reported about the validation of the applied methods. Herein, results of the in-house method validation for elemental composition determination by WDS are shown. SRM 482, a binary Cu-Au alloy of different compositions, was used during the validation protocol, following the recommendations for method validation proposed by Eurachem. This paper can be taken as a reference for the evaluation of the validation parameters most frequently requested for accreditation under the requirements of the ISO/IEC 17025 standard: selectivity, limit of detection, linear interval, sensitivity, precision, trueness and uncertainty. A model for uncertainty estimation was proposed, including systematic and random errors. In addition, parameters evaluated during the validation process were also considered as part of the uncertainty model.

  8. Biomarker validation of a decline in semantic processing in preclinical Alzheimer's disease.

    PubMed

    Papp, Kathryn V; Mormino, Elizabeth C; Amariglio, Rebecca E; Munro, Catherine; Dagley, Alex; Schultz, Aaron P; Johnson, Keith A; Sperling, Reisa A; Rentz, Dorene M

    2016-07-01

    Differentially worse performance on category versus letter fluency suggests greater semantic versus retrieval difficulties. This discrepancy, combined with reduced episodic memory, has widespread clinical utility in diagnosing Alzheimer's disease (AD). Our objective was to investigate whether changes in semantic processing, as measured by the discrepancy between category and letter fluency, were detectable in preclinical AD: in clinically normal older adults with abnormal β-amyloid (Aβ) deposition on positron emission tomography (PET) neuroimaging. Clinically normal older adults (mean Mini Mental State Exam (MMSE) score = 29) were classified as Aβ+ (n = 70) or Aβ- (n = 205) using Pittsburgh Compound B-PET imaging. Participants completed letter fluency (FAS; word generation to letters F-A-S) and category fluency (CAT; word generation to animals, vegetables, fruits) annually (mean follow-up = 2.42 years). The effect of Aβ status on fluency over time was examined using linear mixed models controlling for age, sex, and education. To dissociate effects related to semantic processes (CAT) versus retrieval processes (CAT and FAS), we repeated the models predicting CAT over time controlling for FAS, and likewise predicting FAS controlling for CAT. At baseline, the Aβ+ group performed better on FAS compared with the Aβ- group but comparably on CAT. Longitudinally, the Aβ+ group demonstrated greater decline on CAT compared with the Aβ- group (p = .0011). This finding remained significant even when covarying for FAS (p = .0107). Aβ+ participants similarly declined compared with Aβ- participants on FAS (p = .0112), but this effect became nonsignificant when covarying for CAT (p = .1607). These findings provide biomarker validation for the greater specificity of declines in category versus letter fluency to underlying AD pathology. Our results also suggest that changes in semantic processing occur earlier in the AD trajectory than previously hypothesized. (PsycINFO Database Record (c) 2016 APA, all rights reserved)
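
    A minimal sketch of this kind of linear mixed model using statsmodels; the file name and column names are hypothetical, and the published analysis includes modeling details not reproduced here:

    ```python
    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical long-format columns: subject, years (time from baseline),
    # abeta ("pos"/"neg"), cat and fas (fluency scores), age, sex, educ.
    df = pd.read_csv("fluency_long.csv")

    # Abeta-status x time interaction on category fluency, covarying letter
    # fluency, with a random intercept per subject.
    model = smf.mixedlm("cat ~ years * abeta + fas + age + sex + educ",
                        df, groups="subject")
    result = model.fit()
    print(result.summary())
    ```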

  9. Design and validation of a consistent and reproducible manufacture process for the production of clinical-grade bone marrow-derived multipotent mesenchymal stromal cells.

    PubMed

    Codinach, Margarita; Blanco, Margarita; Ortega, Isabel; Lloret, Mireia; Reales, Laura; Coca, Maria Isabel; Torrents, Sílvia; Doral, Manel; Oliver-Vila, Irene; Requena-Montero, Miriam; Vives, Joaquim; Garcia-López, Joan

    2016-09-01

    Multipotent mesenchymal stromal cells (MSC) have achieved a notable prominence in the field of regenerative medicine, despite the lack of common standards in the production processes and suitable quality controls compatible with Good Manufacturing Practice (GMP). Herein we describe the design of a bioprocess for bone marrow (BM)-derived MSC isolation and expansion, its validation and production of 48 consecutive batches for clinical use. BM samples were collected from the iliac crest of patients for autologous therapy. Manufacturing procedures included: (i) isolation of nucleated cells (NC) by automated density-gradient centrifugation and plating; (ii) trypsinization and expansion of secondary cultures; and (iii) harvest and formulation of a suspension containing 40 ± 10 × 10⁶ viable cells. Quality controls were defined as: (i) cell count and viability assessment; (ii) immunophenotype; and (iii) sterility tests, Mycoplasma detection, endotoxin test and Gram staining. A 3-week manufacturing bioprocess was first designed and then validated in 3 consecutive mock productions, prior to producing 48 batches of BM-MSC for clinical use. Validation included the assessment of MSC identity and genetic stability. Regarding production, 139.0 ± 17.8 mL of BM containing 2.53 ± 0.92 × 10⁹ viable NC were used as starting material, yielding 38.8 ± 5.3 × 10⁶ viable cells in the final product. Surface antigen expression was consistent with the expected phenotype for MSC, displaying high levels of CD73, CD90 and CD105, lack of expression of CD31 and CD45 and low levels of HLA-DR. Tests for sterility, Mycoplasma, Gram staining and endotoxin had negative results in all cases. Herein we demonstrated the establishment of a feasible, consistent and reproducible bioprocess for the production of safe BM-derived MSC for clinical use. Copyright © 2016 International Society for Cellular Therapy. Published by Elsevier Inc. All rights reserved.

  10. Associations among Classroom Emotional Processes, Student Interest, and Engagement: A Convergent Validity Test

    ERIC Educational Resources Information Center

    Mazer, Joseph P.

    2017-01-01

    The results of this study compile convergent validity evidence for the Student Interest Scale and Student Engagement Scale through associations among emotional support, emotion work, student interest, and engagement. Confirmatory factor analysis indicates that the factor structures of the measures are stable, reliable, and valid. The results…

  11. Development and validation of the Myasthenia Gravis Impairment Index.

    PubMed

    Barnett, Carolina; Bril, Vera; Kapral, Moira; Kulkarni, Abhaya; Davis, Aileen M

    2016-08-30

    We aimed to develop a measure of myasthenia gravis impairment using a previously developed framework and to evaluate reliability and validity, specifically face, content, and construct validity. The first draft of the Myasthenia Gravis Impairment Index (MGII) included examination items from available measures enriched with newly developed, patient-reported items, modified after patient input. International neuromuscular specialists evaluated face and content validity via an e-mail survey. Test-retest reliability was assessed in stable patients at a 3-week interval and interrater reliability was evaluated in the same day. Construct validity was assessed through correlations between the MGII and other measures and by comparing scores in different patient groups. The first draft was assessed by 18 patients, and 72 specialists answered the survey. The second draft had 7 examination and 22 patient-reported items. Field testing included 200 patients, with 54 patients completing the reliability studies. Test-retest reliability of the total score was good (intraclass correlation coefficient 0.92; 95% confidence interval 0.79-0.94), as was interrater reliability of the examination component (intraclass correlation coefficient 0.81; 95% confidence interval 0.79-0.94). The MGII correlated well with comparison measures, with higher correlations with the MG-activities of daily living (r = 0.91) and MG-specific quality of life 15-item scale (r = 0.78). When assessing different patient groups, the scores followed expected patterns. The MGII was developed using a patient-centered framework of myasthenia-related impairments and incorporating patient input throughout the development process. It is reliable in an outpatient setting and has demonstrated construct validity. Responsiveness studies are under way. © 2016 American Academy of Neurology.
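
    A minimal sketch of the single-measure, two-way random-effects intraclass correlation coefficient commonly used for such test-retest and interrater analyses (ICC(2,1) in the Shrout-Fleiss nomenclature); the patient scores below are invented:

    ```python
    import numpy as np

    def icc_2_1(x):
        """ICC(2,1): two-way random effects, absolute agreement, single measure.
        x is an (n subjects x k raters/occasions) array."""
        x = np.asarray(x, dtype=float)
        n, k = x.shape
        grand = x.mean()
        msr = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between subjects
        msc = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between raters/occasions
        sse = ((x - x.mean(axis=1, keepdims=True)
                  - x.mean(axis=0, keepdims=True) + grand) ** 2).sum()
        mse = sse / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    scores = np.array([[10, 11], [14, 14], [7, 8], [12, 11], [9, 10]])  # 5 patients, 2 occasions
    print(f"ICC(2,1) = {icc_2_1(scores):.2f}")
    ```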

  12. Development and validation of the Myasthenia Gravis Impairment Index

    PubMed Central

    Bril, Vera; Kapral, Moira; Kulkarni, Abhaya; Davis, Aileen M.

    2016-01-01

    Objective: We aimed to develop a measure of myasthenia gravis impairment using a previously developed framework and to evaluate reliability and validity, specifically face, content, and construct validity. Methods: The first draft of the Myasthenia Gravis Impairment Index (MGII) included examination items from available measures enriched with newly developed, patient-reported items, modified after patient input. International neuromuscular specialists evaluated face and content validity via an e-mail survey. Test–retest reliability was assessed in stable patients at a 3-week interval and interrater reliability was evaluated in the same day. Construct validity was assessed through correlations between the MGII and other measures and by comparing scores in different patient groups. Results: The first draft was assessed by 18 patients, and 72 specialists answered the survey. The second draft had 7 examination and 22 patient-reported items. Field testing included 200 patients, with 54 patients completing the reliability studies. Test–retest reliability of the total score was good (intraclass correlation coefficient 0.92; 95% confidence interval 0.79–0.94), as was interrater reliability of the examination component (intraclass correlation coefficient 0.81; 95% confidence interval 0.79–0.94). The MGII correlated well with comparison measures, with higher correlations with the MG–activities of daily living (r = 0.91) and MG-specific quality of life 15-item scale (r = 0.78). When assessing different patient groups, the scores followed expected patterns. Conclusions: The MGII was developed using a patient-centered framework of myasthenia-related impairments and incorporating patient input throughout the development process. It is reliable in an outpatient setting and has demonstrated construct validity. Responsiveness studies are under way. PMID:27402891

  13. Validation of Student and Parent Reported Data on the Basic Grant Application Form, 1978-79 Comprehensive Validation Guide. Procedural Manual for: Validation of Cases Referred by Institutions; Validation of Cases Referred by the Office of Education; Recovery of Overpayments.

    ERIC Educational Resources Information Center

    Smith, Karen; And Others

    Procedures for validating data reported by students and parents on an application for Basic Educational Opportunity Grants were developed in 1978 for the U.S. Office of Education (OE). Validation activities include: validation of flagged Student Eligibility Reports (SERs) for students whose schools are part of the Alternate Disbursement System;…

  14. [The validation of the process and the results of an information system in primary care].

    PubMed

    Bolíbar Ribas, B; Juncosa Font, S

    1992-01-01

    The need for information for primary health care centre planning and management, together with the poor situation we started from, has generated a large number of information systems which, as a general rule, have not been sufficiently evaluated. Since 1986, the Area de Gestión 7, Centro, of the ICS has operated an information system for general medicine services based on a sampling method (ANAC-2). The validation of some aspects of its process and content is presented here in order to evaluate the quality of the information. The problems arising while collecting data from nine centers over six months are analyzed, and the information content of the system is compared with that of a reference standard. To evaluate concordance, we used a graphic representation of the differences between each system and the standard against their mean value, together with the "limits of agreement". Regarding data-collection problems, two centers show noncompliance with the observation calendar higher than 20%, and the logical divergences are not important. The distribution of types of visits is quite correct, even if the estimate of the total number of visits is more than 20% high in two centers. For the activity indicators, the reference system tends to give average values lower than ANAC-2, with the exception of prescriptions/visit. In referrals and prescriptions, the use of different sources of information between systems produces an average difference of 3.3 referrals/100 visits and 0.8 prescriptions/visit, respectively. In general, the limits of agreement are wide and become unacceptable for laboratory tests. The study is evaluated positively, for it detects problem areas which can be modified or require further study. The importance of validating information systems is emphasized, in spite of the difficulties.
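
    The "limits of agreement" used here are the Bland-Altman bounds: the mean difference between paired measurements plus or minus 1.96 standard deviations of the differences. A minimal sketch with invented paired values:

    ```python
    import numpy as np

    def limits_of_agreement(a, b):
        """Bland-Altman 95% limits of agreement between two paired measurement series."""
        d = np.asarray(a, dtype=float) - np.asarray(b, dtype=float)
        bias = d.mean()
        half_width = 1.96 * d.std(ddof=1)
        return bias - half_width, bias + half_width

    anac2 = [5.1, 4.8, 6.0, 5.5, 4.9]     # e.g., visits/day estimated by the sampled system
    standard = [5.0, 5.2, 5.7, 5.9, 4.6]  # the same quantity from the reference system
    print(limits_of_agreement(anac2, standard))
    ```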

  15. Validation of Student and Parent Reported Data on the Basic Grant Application Form: Pre-Award Validation Analysis Study. Revised Final Report.

    ERIC Educational Resources Information Center

    Applied Management Sciences, Inc., Silver Spring, MD.

    The 1978-1979 pre-award institution validation process for the Basic Educational Opportunity Grant (BEOG) program was studied, based on applicant and grant recipient files as of the end of February 1979. The objective was to assess the impact of the validation process on the proper award of BEOGs, and to determine whether the criteria for…

  16. Bringing Value-Based Perspectives to Care: Including Patient and Family Members in Decision-Making Processes.

    PubMed

    Kohler, Graeme; Sampalli, Tara; Ryer, Ashley; Porter, Judy; Wood, Les; Bedford, Lisa; Higgins-Bowser, Irene; Edwards, Lynn; Christian, Erin; Dunn, Susan; Gibson, Rick; Ryan Carson, Shannon; Vallis, Michael; Zed, Joanna; Tugwell, Barna; Van Zoost, Colin; Canfield, Carolyn; Rivoire, Eleanor

    2017-03-06

    Recent evidence shows that patient engagement is an important strategy in achieving a high performing healthcare system. While there is considerable evidence of implementation initiatives in the direct care context, there is limited investigation of implementation initiatives in the decision-making context as it relates to program planning, service delivery and developing policies. Research has also shown a gap in the consistent application of system-level strategies that can effectively translate organizational policies around patient and family engagement into practice. The broad objective of this initiative was to develop a system-level implementation strategy to include patient and family advisors (PFAs) at decision-making points in primary healthcare (PHC) based on well-established evidence and literature. In this opportunity sponsored by the Canadian Foundation for Healthcare Improvement (CFHI), a co-design methodology, also well established, was applied in identifying and developing a suitable implementation strategy to engage PFAs as members of quality teams in PHC. Diabetes management centres (DMCs) were selected as the pilot site to develop the strategy. Key steps in the process included a review of evidence, a review of the current state in PHC through engagement of key stakeholders, and a co-design approach. The project team included a diverse representation of members from the PHC system including patient advisors, DMC team members, system leads, providers, Public Engagement team members and CFHI improvement coaches. Key outcomes of this 18-month long initiative included development of a working definition of patient and family engagement, development of a Patient and Family Engagement Resource Guide and evaluation of the resource guide. This novel initiative provided us an opportunity to develop a supportive system-wide implementation plan and a strategy to include PFAs in decision-making processes in PHC. The well-established co-design methodology further allowed us to

  17. Experimental validation of structural optimization methods

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.

    1992-01-01

    The topic of validating structural optimization methods by use of experimental results is addressed. The need for validating the methods as a way of effecting greater and accelerated acceptance of formal optimization methods by practicing engineering designers is described. The range of validation strategies is defined; these include comparison of optimization results with more traditional design approaches, establishing the accuracy of analyses used, and finally experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described. The examples include experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum weight design of a beam with frequency constraints; minimization of the vibration response of a helicopter rotor blade; minimum weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low vibration helicopter rotor.

  18. FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.

    PubMed

    Waldmann, Jost; Gerken, Jan; Hankeln, Wolfgang; Schweer, Timmy; Glöckner, Frank Oliver

    2014-06-14

    Advances in sequencing technologies challenge the efficient importing and validation of FASTA formatted sequence data, which is still a prerequisite for most bioinformatic tools and pipelines. Comparative analysis of commonly used Bio*-frameworks (BioPerl, BioJava and Biopython) shows that their scalability and accuracy are hampered. FastaValidator represents a platform-independent, standardized, light-weight software library written in the Java programming language. It targets computer scientists and bioinformaticians writing software which needs to parse large amounts of sequence data quickly and accurately. For end-users, FastaValidator includes an interactive out-of-the-box validation of FASTA formatted files, as well as a non-interactive mode designed for high-throughput validation in software pipelines. The accuracy and performance of the FastaValidator library qualify it for large data sets such as those commonly produced by massively parallel (NGS) technologies. It offers scientists a fast, accurate and standardized method for parsing and validating FASTA formatted sequence data.
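
    For illustration, a minimal Python sketch of the kind of structural and character checks a FASTA validator performs; this is not the FastaValidator API, and the accepted alphabet (IUPAC nucleotide codes) is an assumption:

    ```python
    import re

    VALID_SEQ = re.compile(r"^[ACGTUNRYSWKMBDHVacgtunryswkmbdhv\-\*]+$")  # IUPAC nucleotide codes

    def validate_fasta(path):
        """Return (ok, message). Checks header/sequence structure and characters."""
        has_header = False
        with open(path) as fh:
            for line_no, line in enumerate(fh, 1):
                line = line.rstrip("\n")
                if not line:
                    continue
                if line.startswith(">"):
                    has_header = True
                elif not has_header:
                    return False, f"line {line_no}: sequence data before first header"
                elif not VALID_SEQ.match(line):
                    return False, f"line {line_no}: illegal character in sequence"
        return (True, "ok") if has_header else (False, "no FASTA records found")
    ```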

  19. Processing and validation of JEFF-3.1.1 and ENDF/B-VII.0 group-wise cross section libraries for shielding calculations

    NASA Astrophysics Data System (ADS)

    Pescarini, M.; Sinitsa, V.; Orsi, R.; Frisoni, M.

    2013-03-01

    This paper presents a synthesis of the ENEA-Bologna Nuclear Data Group programme dedicated to generating and validating group-wise cross section libraries for shielding and radiation damage deterministic calculations in nuclear fission reactors, following the data processing methodology recommended in the ANSI/ANS-6.1.2-1999 (R2009) American Standard. The VITJEFF311.BOLIB and VITENDF70.BOLIB fine-group coupled n-γ (199 n + 42 γ - VITAMIN-B6 structure) multi-purpose cross section libraries, based on the Bondarenko method for neutron resonance self-shielding and, respectively, on JEFF-3.1.1 and ENDF/B-VII.0 evaluated nuclear data, were produced in AMPX format using the NJOY-99.259 and the ENEA-Bologna 2007 Revision of the SCAMPI nuclear data processing systems. Two derived broad-group coupled n-γ (47 n + 20 γ - BUGLE-96 structure) working cross section libraries in FIDO-ANISN format for LWR shielding and pressure vessel dosimetry calculations, named BUGJEFF311.BOLIB and BUGENDF70.BOLIB, were generated by the revised version of SCAMPI through problem-dependent cross section collapsing and self-shielding from the cited fine-group libraries. The validation results for the fine-group libraries on criticality safety benchmark experiments, and the preliminary validation results for the broad-group working libraries on the PCA-Replica and VENUS-3 engineering neutron shielding benchmark experiments, are summarized.
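
    Broad-group libraries are produced by flux-weighted collapsing of fine-group cross sections, σ_G = Σ_g σ_g φ_g / Σ_g φ_g over the fine groups g in broad group G. A minimal sketch of that collapse with invented numbers (real processing systems such as SCAMPI also handle self-shielding and higher-order moments):

    ```python
    import numpy as np

    def collapse(sigma_fine, flux_fine, group_map):
        """Flux-weighted collapse of fine-group cross sections into broad groups.
        group_map[g] gives the broad-group index of fine group g."""
        n_broad = max(group_map) + 1
        sigma_broad = np.zeros(n_broad)
        for G in range(n_broad):
            idx = [g for g, Gg in enumerate(group_map) if Gg == G]
            sigma_broad[G] = np.dot(sigma_fine[idx], flux_fine[idx]) / flux_fine[idx].sum()
        return sigma_broad

    sigma = np.array([2.1, 2.4, 3.0, 5.2])  # barns, 4 fine groups (hypothetical)
    flux = np.array([0.7, 1.1, 0.9, 0.3])   # problem-dependent weighting spectrum
    print(collapse(sigma, flux, group_map=[0, 0, 1, 1]))  # 4 fine -> 2 broad groups
    ```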

  20. Non-isothermal processes during the drying of bare soil: Model Development and Validation

    NASA Astrophysics Data System (ADS)

    Sleep, B.; Talebi, A.; O'Carrol, D. M.

    2017-12-01

    Several coupled liquid water, water vapor, and heat transfer models have been developed either to study non-isothermal processes in the subsurface immediately below the ground surface, or to predict the evaporative flux from the ground surface. Equilibrium phase change between the water and gas phases is typically assumed in these models. Recently, a few studies have questioned this assumption and proposed coupled models considering kinetic phase change. However, none of these models was validated against real field data. In this study, a non-isothermal coupled model incorporating kinetic phase change was developed and examined against measured data from a green roof test module. The model also incorporated a new surface boundary condition for water vapor transport at the ground surface. The measured field data included soil moisture content and temperature at different depths down to 15 cm below the ground surface. Lysimeter data were collected to determine the evaporation rates. Short- and long-wave radiation, wind velocity, ambient air temperature and relative humidity were measured and used as model input. Field data were collected for a period of three months during the warm seasons in southeastern Canada. The model was calibrated using one drying period, and several other drying periods were then simulated. In general, the model underestimated the evaporation rates in the early stage of the drying period; however, the cumulative evaporation was in good agreement with the field data. The model predicted the trends in temperature and moisture content at the different depths in the green roof module. The simulated temperature was lower than the measured temperature for most of the simulation time, with a maximum difference of 5 °C. The simulated moisture content changes had the same temporal trend as the lysimeter data for the events simulated.
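
    A minimal sketch of a first-order kinetic (non-equilibrium) phase-change source term of the general kind such models adopt: evaporation proportional to the gap between the equilibrium and actual vapor densities. The rate coefficient and values are assumptions for illustration, not the paper's formulation:

    ```python
    def kinetic_evaporation_rate(rho_v, rho_v_eq, k_rate=1e-3):
        """Volumetric evaporation rate (kg m^-3 s^-1) for a first-order kinetic model:
        positive when the gas phase is drier than equilibrium (evaporation),
        negative when supersaturated (condensation). k_rate is an assumed
        mass-transfer coefficient (s^-1), in practice fitted to data."""
        return k_rate * (rho_v_eq - rho_v)

    # Hypothetical values: equilibrium vapor density derived from soil temperature
    # and matric potential, versus the simulated vapor density in the pore gas.
    print(kinetic_evaporation_rate(rho_v=0.010, rho_v_eq=0.017))  # kg m^-3 s^-1
    ```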

  1. Microbiological Validation of the IVGEN System

    NASA Technical Reports Server (NTRS)

    Porter, David A.

    2013-01-01

    The principal purpose of this report is to describe a validation process that can be performed in part on the ground prior to launch, and in space for the IVGEN system. The general approach taken is derived from standard pharmaceutical industry validation schemes modified to fit the special requirements of in-space usage.

  2. System verification and validation: a fundamental systems engineering task

    NASA Astrophysics Data System (ADS)

    Ansorge, Wolfgang R.

    2004-09-01

    Systems Engineering (SE) is the discipline within a project management team that transfers the user's operational needs and justifications for an Extremely Large Telescope (ELT), or any other telescope, into a set of validated required system performance characteristics; it subsequently transfers these validated performance characteristics into a validated system configuration, and eventually into the assembled, integrated telescope system with verified performance characteristics, providing "objective evidence that the particular requirements for the specified intended use are fulfilled". The latter is the ISO Standard 8402 definition of "Validation". This presentation describes the verification and validation processes of an ELT project and outlines the key role Systems Engineering plays in these processes throughout all project phases. If these processes are implemented correctly into the project execution, are started at the proper time, namely at the very beginning of the project, and draw on the full capabilities of experienced system engineers, the project costs and the life-cycle costs of the telescope system can be reduced by between 25% and 50%. The intention of this article is to motivate and encourage project managers of astronomical telescopes and scientific instruments to involve the entire spectrum of Systems Engineering capabilities, performed by trained and experienced system engineers, for the benefit of the project, by explaining the importance of Systems Engineering in the AIV and validation processes.

  3. The Dendritic Cell Major Histocompatibility Complex II (MHC II) Peptidome Derives from a Variety of Processing Pathways and Includes Peptides with a Broad Spectrum of HLA-DM Sensitivity*

    PubMed Central

    Clement, Cristina C.; Becerra, Aniuska; Yin, Liusong; Zolla, Valerio; Huang, Liling; Merlin, Simone; Follenzi, Antonia; Shaffer, Scott A.; Stern, Lawrence J.; Santambrogio, Laura

    2016-01-01

    The repertoire of peptides displayed in vivo by MHC II molecules derives from a wide spectrum of proteins produced by different cell types. Although intracellular endosomal processing in dendritic cells and B cells has been characterized for a few antigens, the overall range of processing pathways responsible for generating the MHC II peptidome is currently unclear. To determine the contribution of non-endosomal processing pathways, we eluted and sequenced over 3000 HLA-DR1-bound peptides presented in vivo by dendritic cells. The processing enzymes were identified by reference to a database of experimentally determined cleavage sites and experimentally validated for four epitopes derived from complement 3, collagen II, thymosin β4, and gelsolin. We determined that self-antigens processed by tissue-specific proteases, including complement, matrix metalloproteases, caspases, and granzymes, and carried by lymph, contribute significantly to the MHC II self-peptidome presented by conventional dendritic cells in vivo. Additionally, the presented peptides exhibited a wide spectrum of binding affinity and HLA-DM susceptibility. The results indicate that the HLA-DR1-restricted self-peptidome presented under physiological conditions derives from a variety of processing pathways. Non-endosomal processing enzymes add to the number of epitopes cleaved by cathepsins, altogether generating a wider peptide repertoire. Taken together with HLA-DM-dependent and -independent loading pathways, this ensures that a broad self-peptidome is presented by dendritic cells. This work brings attention to the role of “self-recognition” as a dynamic interaction between dendritic cells and the metabolic/catabolic activities ongoing in every parenchymal organ as part of tissue growth, remodeling, and physiological apoptosis. PMID:26740625

  4. Validation of psoriatic arthritis diagnoses in electronic medical records using natural language processing

    PubMed Central

    Cai, Tianxi; Karlson, Elizabeth W.

    2013-01-01

    Objectives To test whether data extracted from full-text patient visit notes from an electronic medical record (EMR) would improve the classification of PsA compared to an algorithm based on codified data. Methods From the > 1,350,000 adults in a large academic EMR, all 2318 patients with a billing code for PsA were extracted and 550 were randomly selected for chart review and algorithm training. Using codified data and phrases extracted from narrative data using natural language processing, 31 predictors were extracted and three random forest algorithms were trained using coded, narrative, and combined predictors. The receiver operating characteristic (ROC) curve was used to identify the optimal algorithm, and a cut point was chosen to achieve the maximum sensitivity possible at a 90% positive predictive value (PPV). The algorithm was then used to classify the remaining 1768 charts and was finally validated in a random sample of 300 cases predicted to have PsA. Results The PPV of a single PsA code was 57% (95%CI 55%–58%). Using a combination of coded data and NLP, the random forest algorithm reached a PPV of 90% (95%CI 86%–93%) at a sensitivity of 87% (95%CI 83%–91%) in the training data. The PPV was 93% (95%CI 89%–96%) in the validation set. Adding NLP predictors to codified data increased the area under the ROC curve (p < 0.001). Conclusions Using NLP with text notes from electronic medical records significantly improved the performance of the prediction algorithm. Random forests were a useful tool to accurately classify psoriatic arthritis cases to enable epidemiological research. PMID:20701955
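
    As a rough illustration of the cut-point selection described above, the following sketch fits a random forest and scans probability thresholds for the highest sensitivity at a positive predictive value of at least 90%. It assumes scikit-learn and hypothetical feature and label arrays X and y; in a real study the probabilities should come from held-out or cross-validated predictions rather than the training fit.

      import numpy as np
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import precision_recall_curve

      def fit_and_pick_cutpoint(X, y, target_ppv=0.90):
          # Fit the forest, then choose the probability cut point that maximizes
          # sensitivity (recall) subject to PPV (precision) >= target_ppv.
          # Assumes the target PPV is attainable on these data.
          rf = RandomForestClassifier(n_estimators=500, random_state=0).fit(X, y)
          probs = rf.predict_proba(X)[:, 1]
          precision, recall, thresholds = precision_recall_curve(y, probs)
          ok = precision[:-1] >= target_ppv   # entries aligned with thresholds
          cut = thresholds[ok][np.argmax(recall[:-1][ok])]
          return rf, cut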

  5. Development and validation of a predictive model for the influences of selected product and process variables on ascorbic acid degradation in simulated fruit juice.

    PubMed

    Gabriel, Alonzo A; Cayabyab, Jochelle Elysse C; Tan, Athalie Kaye L; Corook, Mark Lester F; Ables, Errol John O; Tiangson-Bayaga, Cecile Leah P

    2015-06-15

    A predictive response surface model for the influences of product (soluble solids and titratable acidity) and process (temperature and heating time) parameters on the degradation of ascorbic acid (AA) in heated simulated fruit juices (SFJs) was established. Physicochemical property ranges of freshly squeezed and processed juices, and previously established decimal reduction times of Escherichia coli O157:H7 at different heating temperatures, were used in establishing a Central Composite Design of Experiment that determined the combinations of product and process variables used in model building. Only the individual linear effects of temperature and heating time significantly (P<0.05) affected AA reduction (%AAr). Validating systems either over- or underestimated actual %AAr, with bias factors of 0.80-1.20. However, all validating systems still resulted in acceptable predictive efficacy, with accuracy factors of 1.00-1.26. The model may be useful in establishing unique process schedules for specific products, for the simultaneous control and improvement of food safety and quality. Copyright © 2015 Elsevier Ltd. All rights reserved.
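
    The bias and accuracy factors reported above are conventionally computed as in Ross (1996); a minimal sketch, assuming hypothetical arrays of predicted and observed %AAr values:

      import numpy as np

      def bias_accuracy_factors(predicted, observed):
          # Bias factor Bf = 10**mean(log10(P/O)); accuracy factor
          # Af = 10**mean(|log10(P/O)|) (Ross, 1996).
          r = np.log10(np.asarray(predicted) / np.asarray(observed))
          return 10 ** r.mean(), 10 ** np.abs(r).mean()

      print(bias_accuracy_factors([12.0, 25.0, 40.0], [10.0, 28.0, 38.0]))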

  6. Construct Validity: Advances in Theory and Methodology

    PubMed Central

    Strauss, Milton E.; Smith, Gregory T.

    2008-01-01

    Measures of psychological constructs are validated by testing whether they relate to measures of other constructs as specified by theory. Each test of relations between measures reflects on the validity of both the measures and the theory driving the test. Construct validation concerns the simultaneous process of measure and theory validation. In this chapter, we review the recent history of validation efforts in clinical psychological science that has led to this perspective, and we review five recent advances in validation theory and methodology of importance for clinical researchers. These are: the emergence of nonjustificationist philosophy of science; an increasing appreciation for theory and the need for informative tests of construct validity; valid construct representation in experimental psychopathology; the need to avoid representing multidimensional constructs with a single score; and the emergence of effective new statistical tools for the evaluation of convergent and discriminant validity. PMID:19086835

  7. Contributions of Middle Grade Students to the Validation Process of a National Science Assessment Study

    ERIC Educational Resources Information Center

    Morell, Linda

    2008-01-01

    This study used a national validity project to investigate specific research questions regarding the intersections among aspects of validity, educational measurement, and cognitive theory. Validity evidence was collected through traditional paper and pencil tests, surveys, think-alouds, and exit interviews of fifth and sixth grade students, as…

  8. Validation of New Wind Resource Maps

    NASA Astrophysics Data System (ADS)

    Elliott, D.; Schwartz, M.

    2002-05-01

    The National Renewable Energy Laboratory (NREL) recently led a project to validate updated state wind resource maps for the northwestern United States produced by a private U.S. company, TrueWind Solutions (TWS). The independent validation project was a cooperative activity among NREL, TWS, and meteorological consultants. The independent validation concept originated at a May 2001 technical workshop held at NREL to discuss updating the Wind Energy Resource Atlas of the United States. Part of the workshop, which included more than 20 attendees from the wind resource mapping and consulting community, was dedicated to reviewing the latest techniques for wind resource assessment. It became clear that the numerical modeling approach to wind resource mapping was rapidly gaining ground as a preferred technique and, if the trend continues, will soon become the most widely used technique around the world. The numerical modeling approach is relatively fast compared to older mapping methods and, in theory, should be quite accurate because it directly estimates the magnitude of boundary-layer processes that affect the wind resource of a particular location. Numerical modeling output combined with high-resolution terrain data can produce useful wind resource information at a resolution of 1 km or finer. However, because the use of the numerical modeling approach is new (the last 3-5 years) and relatively unproven, meteorological consultants question the accuracy of the approach. It was clear that new state or regional wind maps produced by this method would have to undergo independent validation before the results would be accepted by the wind energy community and developers.

  9. Turbulence Modeling Verification and Validation

    NASA Technical Reports Server (NTRS)

    Rumsey, Christopher L.

    2014-01-01

    Computational fluid dynamics (CFD) software that solves the Reynolds-averaged Navier-Stokes (RANS) equations has been in routine use for more than a quarter of a century. It is currently employed not only for basic research in fluid dynamics, but also for the analysis and design processes in many industries worldwide, including aerospace, automotive, power generation, chemical manufacturing, polymer processing, and petroleum exploration. A key feature of RANS CFD is the turbulence model. Because the RANS equations are unclosed, a model is necessary to describe the effects of the turbulence on the mean flow, through the Reynolds stress terms. The turbulence model is one of the largest sources of uncertainty in RANS CFD, and most models are known to be flawed in one way or another. Alternative methods such as direct numerical simulations (DNS) and large eddy simulations (LES) rely less on modeling and hence include more physics than RANS. In DNS all turbulent scales are resolved, and in LES the large scales are resolved and the effects of the smallest turbulence scales are modeled. However, both DNS and LES are too expensive for most routine industrial usage on today's computers. Hybrid RANS-LES, which blends RANS near walls with LES away from walls, helps to moderate the cost while still retaining some of the scale-resolving capability of LES, but for some applications it can still be too expensive. Even considering its associated uncertainties, RANS turbulence modeling has proved to be very useful for a wide variety of applications. For example, in the aerospace field, many RANS models are considered to be reliable for computing attached flows. However, existing turbulence models are known to be inaccurate for many flows involving separation. Research has been ongoing for decades in an attempt to improve turbulence models for separated and other nonequilibrium flows. When developing or improving turbulence models, both verification and validation are important

  10. Simulation validation and management

    NASA Astrophysics Data System (ADS)

    Illgen, John D.

    1995-06-01

    Illgen Simulation Technologies, Inc., has been working on interactive verification and validation programs for the past six years. As a result, it has evolved a methodology that has been adopted and successfully implemented by a number of different verification and validation programs. This methodology employs a unique set of computer-assisted software engineering (CASE) tools to reverse engineer source code and produce analytical outputs (flow charts and tables) that aid the engineer/analyst in the verification and validation process. We have found that the use of CASE tools saves time, which equates to improvements in both schedule and cost. This paper will describe the ISTI-developed methodology and how CASE tools are used in its support. Case studies will be discussed.

  11. Validity of palpation of the C1 transverse process: comparison with a radiographic reference standard

    PubMed Central

    Cooperstein, Robert; Young, Morgan; Lew, Makani

    2015-01-01

    Objectives: Primary goal: to determine the validity of C1 transverse process (TVP) palpation compared to an imaging reference standard. Methods: Radiopaque markers were affixed to the skin at the putative location of the C1 TVPs in 21 participants receiving APOM radiographs. The radiographic vertical distances from the marker to the C1 TVP, mastoid process, and C2 TVP were evaluated to determine palpatory accuracy. Results: Interexaminer agreement for radiometric analysis was “excellent.” Stringent accuracy (marker placed ±4 mm from the most lateral projection of the C1 TVP) = 57.1%; expansive accuracy (marker placed closer to the C1 TVP than to contiguous structures) = 90.5%. Mean Absolute Deviation (MAD) = 4.34 (3.65, 5.03) mm; root-mean-squared error = 5.40 mm. Conclusions: Manual palpation of the C1 TVP can be very accurate and is likely to direct a manual therapist or other health professional to the intended diagnostic or therapeutic target. This work is relevant to manual therapists, anesthetists, surgeons, and other health professionals. PMID:26136601

  12. Concurrent Validity and Diagnostic Accuracy of the Dynamic Indicators of Basic Early Literacy Skills and the Comprehensive Test of Phonological Processing

    ERIC Educational Resources Information Center

    Hintze, John M.; Ryan, Amanda L.; Stoner, Gary

    2003-01-01

    The purpose of this study was to (a) examine the concurrent validity of the Dynamic Indicators of Basic Early Literacy Skills (DIBELS) with the Comprehensive Test of Phonological Processing (CTOPP), and (b) explore the diagnostic accuracy of the DIBELS in predicting CTOPP performance using suggested and alternative cut-scores. Eighty-six students…

  13. NASA Countermeasures Evaluation and Validation Project

    NASA Technical Reports Server (NTRS)

    Lundquist, Charlie M.; Paloski, William H. (Technical Monitor)

    2000-01-01

    To support its ISS and exploration-class mission objectives, NASA has developed a Countermeasure Evaluation and Validation Project (CEVP). The goal of this project is to evaluate and validate the optimal complement of countermeasures required to maintain astronaut health, safety, and functional ability during and after short- and long-duration space flight missions. The CEVP is the final element of the process in which ideas and concepts emerging from basic research evolve into operational countermeasures. The CEVP is accomplishing these objectives by conducting operational/clinical research to evaluate and validate countermeasures that mitigate the maladaptive physiological and psychological responses to space flight. Evaluation is accomplished by testing in space flight analog facilities, and validation is accomplished by space flight testing. Both will utilize a standardized complement of integrated physiological and psychological tests, termed the Integrated Testing Regimen (ITR), to examine candidate countermeasure efficacy and intersystem effects. The CEVP emphasis is currently placed on validating the initial complement of ISS countermeasures targeting bone, muscle, and aerobic fitness, followed by countermeasures for neurological, psychological, immunological, nutrition and metabolism, and radiation risks associated with space flight. This presentation will review the processes, plans, and procedures that will enable CEVP to play a vital role in transitioning promising research results into operational countermeasures necessary to maintain crew health and performance during long-duration space flight.

  14. Spacecraft materials guide. [including: encapsulants and conformal coatings; optical materials; lubrication; and bonding and joining processes]

    NASA Technical Reports Server (NTRS)

    Staugaitis, C. L. (Editor)

    1975-01-01

    Materials which have demonstrated their suitability for space application are summarized. Common, recurring problems in encapsulants and conformal coatings, optical materials, lubrication, and bonding and joining are noted. The subjects discussed include: low-density and syntactic foams and electrical encapsulants; optical glasses, interference filters, and mirrors; oils, greases, and lamellar lubricants; and soldering and brazing processes.

  15. Development and Validation of a Natural Language Processing Tool to Identify Patients Treated for Pneumonia across VA Emergency Departments.

    PubMed

    Jones, B E; South, B R; Shao, Y; Lu, C C; Leng, J; Sauer, B C; Gundlapalli, A V; Samore, M H; Zeng, Q

    2018-01-01

    Identifying pneumonia using diagnosis codes alone may be insufficient for research on clinical decision making. Natural language processing (NLP) may enable the inclusion of cases missed by diagnosis codes. This article (1) develops an NLP tool that identifies the clinical assertion of pneumonia from physician emergency department (ED) notes, and (2) compares classification methods using diagnosis codes versus NLP against a gold standard of manual chart review to identify patients initially treated for pneumonia. Among a national population of ED visits occurring between 2006 and 2012 across the Veterans Affairs health system, we extracted 811 physician documents containing search terms for pneumonia for training, and 100 random documents for validation. Two reviewers annotated span- and document-level classifications of the clinical assertion of pneumonia. An NLP tool using a support vector machine was trained on the enriched documents. We extracted diagnosis codes assigned in the ED and upon hospital discharge and calculated performance characteristics for diagnosis codes, NLP, and NLP plus diagnosis codes against manual review in the training and validation sets. Among the training documents, 51% contained clinical assertions of pneumonia; in the validation set, 9% were classified with pneumonia, of which 100% contained pneumonia search terms. After enriching with search terms, the NLP system alone demonstrated a recall/sensitivity of 0.72 (training) and 0.55 (validation), and a precision/positive predictive value (PPV) of 0.89 (training) and 0.71 (validation). ED-assigned diagnostic codes demonstrated lower recall/sensitivity (0.48 and 0.44) but higher precision/PPV (0.95 in training, 1.0 in validation); the NLP system identified more "possible-treated" cases than diagnostic coding. An approach combining NLP and ED-assigned diagnostic coding classification achieved the best performance (sensitivity 0.89 and PPV 0.80). System-wide application of NLP to clinical text
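
    The abstract does not reproduce the feature set of the support vector machine classifier; as a minimal stand-in, the sketch below trains a linear SVM on TF-IDF features of short note snippets. The snippets and labels are invented for illustration, and a production clinical NLP system would add negation and assertion handling.

      from sklearn.pipeline import make_pipeline
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.svm import LinearSVC

      # Hypothetical ED note snippets with document-level assertion labels
      # (1 = pneumonia asserted, 0 = not asserted).
      docs = ["cxr shows rll infiltrate, will treat for pneumonia",
              "no radiographic evidence of pneumonia",
              "starting antibiotics for community acquired pneumonia",
              "pneumonia ruled out, likely chf exacerbation"]
      labels = [1, 0, 1, 0]

      clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LinearSVC())
      clf.fit(docs, labels)
      print(clf.predict(["treating suspected pneumonia with ceftriaxone"]))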

  16. The consultation and relational empathy (CARE) measure: development and preliminary validation and reliability of an empathy-based consultation process measure.

    PubMed

    Mercer, Stewart W; Maxwell, Margaret; Heaney, David; Watt, Graham Cm

    2004-12-01

    Empathy is a key aspect of the clinical encounter but there is a lack of patient-assessed measures suitable for general clinical settings. Our aim was to develop a consultation process measure based on a broad definition of empathy, which is meaningful to patients irrespective of their socio-economic background. Qualitative and quantitative approaches were used to develop and validate the new measure, which we have called the consultation and relational empathy (CARE) measure. Concurrent validity was assessed by correlational analysis against other validated measures in a series of three pilot studies in general practice (in areas of high or low socio-economic deprivation). Face and content validity was investigated by 43 interviews with patients from both types of areas, and by feedback from GPs and expert researchers in the field. The initial version of the new measure (pilot 1; high deprivation practice) correlated strongly (r = 0.85) with the Reynolds empathy measure (RES) and the Barrett-Lennard empathy subscale (BLESS) (r = 0.63), but had a highly skewed distribution (skew -1.879, kurtosis 3.563). Statistical analysis, and feedback from the 20 patients interviewed, the GPs and the expert researchers, led to a number of modifications. The revised, second version of the CARE measure, tested in an area of low deprivation (pilot 2) also correlated strongly with the established empathy measures (r = 0.84 versus RES and r = 0.77 versus BLESS) but had a less skewed distribution (skew -0.634, kurtosis -0.067). Internal reliability of the revised version was high (Cronbach's alpha 0.92). Patient feedback at interview (n = 13) led to only minor modification. The final version of the CARE measure, tested in pilot 3 (high deprivation practice) confirmed the validation with the other empathy measures (r = 0.85 versus RES and r = 0.84 versus BLESS) and the face validity (feedback from 10 patients). These preliminary results support the validity and reliability of the CARE
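
    Internal reliability figures such as the Cronbach's alpha of 0.92 quoted above follow the standard formula; a short sketch, assuming a respondents-by-items score matrix:

      import numpy as np

      def cronbach_alpha(items):
          # items: (n_respondents, k_items) array of item scores.
          # alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_vars = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_vars / total_var)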

  17. Issues and approach to develop validated analysis tools for hypersonic flows: One perspective

    NASA Technical Reports Server (NTRS)

    Deiwert, George S.

    1992-01-01

    Critical issues concerning the modeling of low-density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools. A description of the activity in the Ames Research Center's Aerothermodynamics Branch is also given. Inherent in the process is a strong synergism between ground test and real-gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flow-field simulation codes are discussed. These models have been partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse; reliance must be placed on intuition and on guidance from computational chemistry to model transport processes under these conditions. Ground-based experimental studies used to provide necessary data for model development and validation are described, including the performance characteristics of high-enthalpy flow facilities such as shock tubes and ballistic ranges.

  18. KENNEDY SPACE CENTER, FLA. -- Endeavour backs out of the Orbiter Processing Facility for temporary transfer to the Vehicle Assembly Building. The move allows work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the OPF includes annual validation of the bay’s cranes, work platforms, lifting mechanisms and jack stands. Endeavour will remain in the VAB for approximately 12 days, then return to the OPF.

    NASA Image and Video Library

    2004-01-09

    KENNEDY SPACE CENTER, FLA. -- Endeavour backs out of the Orbiter Processing Facility for temporary transfer to the Vehicle Assembly Building. The move allows work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the OPF includes annual validation of the bay’s cranes, work platforms, lifting mechanisms and jack stands. Endeavour will remain in the VAB for approximately 12 days, then return to the OPF.

  19. Virtual Model Validation of Complex Multiscale Systems: Applications to Nonlinear Elastostatics

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Oden, John Tinsley; Prudencio, Ernest E.; Bauman, Paul T.

    We propose a virtual statistical validation process as an aid to the design of experiments for the validation of phenomenological models of the behavior of material bodies, with focus on those cases in which knowledge of the fabrication process used to manufacture the body can provide information on the micro-molecular-scale properties underlying macroscale behavior. One example is given by models of elastomeric solids fabricated using polymerization processes. We describe a framework for model validation that involves Bayesian updates of parameters in statistical calibration and validation phases. The process enables the quantification of uncertainty in quantities of interest (QoIs) and the determination of model consistency using tools of statistical information theory. We assert that microscale information drawn from molecular models of the fabrication of the body provides a valuable source of prior information on parameters as well as a means for estimating model bias and designing virtual validation experiments to provide information gain over calibration posteriors.
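
    The Bayesian calibration and validation updates described above can be illustrated with a one-parameter grid approximation; the actual framework uses far richer statistical machinery, and all names below are illustrative.

      import numpy as np

      def grid_posterior(theta_grid, prior, data, model, sigma):
          # Bayesian update on a 1-D parameter grid: posterior ~ likelihood * prior,
          # with a Gaussian observation-error model of standard deviation sigma.
          log_like = np.array([-0.5 * np.sum((data - model(t)) ** 2) / sigma ** 2
                               for t in theta_grid])
          post = np.exp(log_like - log_like.max()) * prior
          return post / post.sum()   # normalized to unit mass on the grid

    A calibration posterior computed this way then serves as the prior for the subsequent validation-phase update.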

  20. Short residence time coal liquefaction process including catalytic hydrogenation

    DOEpatents

    Anderson, R.P.; Schmalzer, D.K.; Wright, C.H.

    1982-05-18

    Normally solid dissolved coal product and a distillate liquid product are produced by continuously passing a feed slurry comprising raw feed coal and a recycle solvent oil and/or slurry together with hydrogen to a preheating-reaction zone, the hydrogen pressure in the preheating-reaction zone being at least 1,500 psig (105 kg/cm²), reacting the slurry in the preheating-reaction zone at a temperature in the range of between about 455 and about 500 °C to dissolve the coal to form normally liquid coal and normally solid dissolved coal. A total slurry residence time is maintained in the reaction zone ranging from a finite value from about 0 to about 0.2 hour, and reaction effluent is continuously and directly contacted with a quenching fluid to substantially immediately reduce the temperature of the reaction effluent to below 425 °C to substantially inhibit polymerization so that the yield of insoluble organic matter comprises less than 9 weight percent of said feed coal on a moisture-free basis. The reaction is performed under conditions of temperature, hydrogen pressure and residence time such that the quantity of distillate liquid boiling within the range C₅-454 °C is an amount at least equal to that obtainable by performing the process under the same condition except for a longer total slurry residence time, e.g., 0.3 hour. Solvent boiling range liquid is separated from the reaction effluent and recycled as process solvent. The amount of solvent boiling range liquid is sufficient to provide at least 80 weight percent of that required to maintain the process in overall solvent balance. 6 figs.

  1. HDF-EOS 5 Validator

    NASA Technical Reports Server (NTRS)

    Ullman, Richard; Bane, Bob; Yang, Jingli

    2008-01-01

    A computer program partly automates the task of determining whether an HDF-EOS 5 file is valid in that it conforms to specifications for such characteristics as attribute names, dimensionality of data products, and ranges of legal data values. ["HDF-EOS" and variants thereof are defined in "Converting EOS Data From HDF-EOS to netCDF" (GSC-15007-1), which is the first of several preceding articles in this issue of NASA Tech Briefs.] Previously, validity of a file was determined in a tedious and error-prone process in which a person examined human-readable dumps of data-file-format information. The present software helps a user to encode the specifications for an HDF-EOS 5 file, and then inspects the file for conformity with the specifications: First, the user writes the specifications in Extensible Markup Language (XML) by use of a document type definition (DTD) that is part of the program. Next, the portion of the program (denoted the validator) that performs the inspection is executed, using, as inputs, the specifications in XML and the HDF-EOS 5 file to be validated. Finally, the user examines the output of the validator.
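
    The first step described above, checking that a user-written specification is a legal instance of the program's DTD, can be sketched with the lxml library; the file names here are hypothetical, and the validator's second step (checking the HDF-EOS 5 file against the specification) is not shown.

      from lxml import etree

      dtd = etree.DTD("hdfeos5_spec.dtd")          # hypothetical DTD file
      spec = etree.parse("my_product_spec.xml")    # hypothetical user specification

      if dtd.validate(spec):
          print("specification XML conforms to the DTD")
      else:
          print(dtd.error_log.filter_from_errors())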

  2. On demand processing of climate station sensor data

    NASA Astrophysics Data System (ADS)

    Wöllauer, Stephan; Forteva, Spaska; Nauss, Thomas

    2015-04-01

    Large sets of climate stations with several sensors produce large amounts of fine-grained time series data. To gain value from these data, further processing and aggregation are needed. We present a flexible system to process the raw data on demand. Several aspects need to be considered so that scientists can conveniently use the processed data for their specific research interests. First of all, it is not feasible to pre-process the data in advance because of the great variety of ways it can be processed. Therefore, in this approach only the raw measurement data is archived in a database. When a scientist requires some time series, the system processes the required raw data according to the user-defined request. Based on the type of measurement sensor, some data validation is needed, because climate station sensors may produce erroneous data. Currently, three validation methods are integrated in the on-demand processing system and are optionally selectable. The most basic validation method checks whether measurement values are within a predefined range of possible values. For example, an air temperature sensor may be assumed to measure values within a range of -40 °C to +60 °C; values outside this range are considered measurement errors by this validation method and consequently rejected. Another validation method checks for outliers in the stream of measurement values by defining a maximum change rate between subsequent measurement values. The third validation method compares measurement data to the average values of neighboring stations and rejects measurement values with a high variance. These quality checks are optional, because extreme climatic values in particular may be valid yet rejected by some quality check method. Another important task is the preparation of measurement data in terms of time. The observed stations measure values at intervals of minutes to hours. Often scientists need a coarser temporal resolution
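
    A minimal sketch of the three optional validation methods, assuming NumPy arrays and illustrative thresholds (the system's actual limits are sensor-specific):

      import numpy as np

      def qc_range(x, lo=-40.0, hi=60.0):
          # Reject values outside a predefined physically plausible range.
          return (x >= lo) & (x <= hi)

      def qc_step(x, max_step=5.0):
          # Reject values whose change from the previous sample is too large.
          ok = np.ones(len(x), dtype=bool)
          ok[1:] = np.abs(np.diff(x)) <= max_step
          return ok

      def qc_neighbors(x, neighbor_mean, max_dev=10.0):
          # Reject values deviating too far from the mean of neighboring stations.
          return np.abs(x - neighbor_mean) <= max_dev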

  3. Validation of microbiological testing in cardiovascular tissue banks: results of a quality round trial.

    PubMed

    de By, Theo M M H; McDonald, Carl; Süßner, Susanne; Davies, Jill; Heng, Wee Ling; Jashari, Ramadan; Bogers, Ad J J C; Petit, Pieter

    2017-11-01

    Surgeons needing human cardiovascular tissue for implantation in their patients are confronted with cardiovascular tissue banks that use different methods to identify and decontaminate micro-organisms. To elucidate these differences, we compared the quality of the processing methods used in 20 tissue banks and 1 reference laboratory, in order to validate the results on which tissue is accepted or rejected. The comparison covered the decontamination methods used and the influence of antibiotic cocktails and residues on results and controls; minor details of the processes were not included. To compare the outcomes of microbiological testing and decontamination methods of heart valve allografts in cardiovascular tissue banks, an international quality round was organized, in which 20 cardiovascular tissue banks participated. The quality round method was validated first and consisted of sending purposely contaminated human heart valve tissue samples with known micro-organisms to the participants. The participants identified the micro-organisms using their local decontamination methods. Seventeen of the 20 participants correctly identified the micro-organisms; if these samples had been heart valves to be released for implantation, 3 of the 20 participants would have accepted their result for release. Decontamination was shown not to be effective in 13 tissue banks because of growth of the organisms after decontamination. Articles in the literature revealed that antibiotics are effective at 36°C and not, or less so, at 2-8°C. The decontamination procedure, if it is validated, will ensure that the tissue contains no known micro-organisms. This study demonstrates that the quality round method of sending contaminated tissues and assessing the results of the microbiological cultures is an effective way of validating the processes of tissue banks. Only when harmonization, based on validated methods, has been achieved will surgeons be able to fully rely on the methods

  4. Evaluation of an Innovative Approach to Validation of ...

    EPA Pesticide Factsheets

    UV disinfection is an effective process for inactivating many microbial pathogens found in source waters, with potential as stand-alone treatment or in combination with other disinfectants. For surface- and groundwater-sourced drinking water applications, the U.S. Environmental Protection Agency (USEPA) provided guidance on the validation of UV reactors nearly a decade ago. The focus of the guidance was primarily inactivation of Cryptosporidium and Giardia. Over the last ten years many lessons have been learned, validation practices have been modified, new science issues have been discovered, and changes in the operation and monitoring of UV systems need to be addressed. Also, there remains no standard approach for validating UV reactors to meet a 4-log (99.99%) inactivation of viruses. USEPA, in partnership with the Cadmus Group, Carollo Engineers, and other State and Industry collaborators, is evaluating new approaches for validating UV reactors to meet groundwater and surface water pathogen inactivation, including viruses, for low-pressure and medium-pressure UV systems. A particular challenge for medium-pressure UV is the monitoring of low-wavelength germicidal contributions for appropriate crediting of disinfection under varying reactor conditions of quartz sleeve fouling, lamp aging, and changes in UV absorbance of the water over time. In the current effort, bench and full-scale studies are being conducted on a low pressure (LP) UV reactor and a medium pressure (MP) UV reactor

  5. Enhancing the cross-cultural adaptation and validation process: linguistic and psychometric testing of the Brazilian-Portuguese version of a self-report measure for dry eye.

    PubMed

    Santo, Ruth Miyuki; Ribeiro-Ferreira, Felipe; Alves, Milton Ruiz; Epstein, Jonathan; Novaes, Priscila

    2015-04-01

    To provide a reliable, validated, and culturally adapted instrument that may be used in monitoring dry eye in Brazilian patients and to discuss the strategies for the enhancement of the cross-cultural adaptation and validation process of a self-report measure for dry eye. The cross-cultural adaptation process (CCAP) of the original Ocular Surface Disease Index (OSDI) into Brazilian-Portuguese was conducted using a 9-step guideline. The synthesis of translations was tested twice, for face and content validity, by different subjects (focus groups and cognitive interviews). The expert committee contributed on several steps, and back translations were based on the final rather than the prefinal version. For validation, the adapted version was applied in a prospective longitudinal study to 101 patients from the Dry Eye Clinic at the General Hospital of the University of São Paulo, Brazil. Simultaneously to the OSDI, patients answered the short form-36 health survey (SF-36) and the 25-item visual function questionnaire (VFQ-25) and underwent clinical evaluation. Internal consistency, test-retest reliability, and measure validity were assessed. Cronbach's alpha value of the cross-culturally adapted Brazilian-Portuguese version of the OSDI was 0.905, and the intraclass correlation coefficient was 0.801. There was a statistically significant difference between OSDI scores in patients with dry eye (41.15 ± 27.40) and without dry eye (17.88 ± 17.09). There was a negative association between OSDI and VFQ-25 total score (P < 0.01) and between the OSDI and five SF-36 domains. OSDI scores correlated positively with lissamine green and fluorescein staining scores (P < 0.001) and negatively with Schirmer test I and tear break-up time values (P < 0.001). Although most of the reviewed guidelines on CCAP involve well-defined steps (translation, synthesis/reconciliation, back translation, expert committee review, pretesting), the proposed methodological steps have not been applied

  6. Validation of a home food inventory among low-income Spanish- and Somali-speaking families.

    PubMed

    Hearst, Mary O; Fulkerson, Jayne A; Parke, Michelle; Martin, Lauren

    2013-07-01

    To refine and validate an existing home food inventory (HFI) for low-income Somali- and Spanish-speaking families. Formative assessment was conducted using two focus groups, followed by revisions of the HFI, translation of written materials and instrument validation in participants’ homes. Twin Cities Metropolitan Area, Minnesota, USA. Thirty low-income families with children of pre-school age (fifteen Spanish-speaking; fifteen Somali-speaking) completed the HFI simultaneously with, but independently of, a trained staff member. Analysis consisted of calculation of both item-specific and average food group kappa coefficients, specificity, sensitivity and Spearman’s correlation between participants’ and staff scores as a means of assessing criterion validity of individual items, food categories and the obesogenic score. The formative assessment revealed the need for few changes/additions for food items typically found in Spanish-speaking households. Somali-speaking participants requested few additions, but many deletions, including frozen processed food items, non-perishable produce and many sweets as they were not typical food items kept in the home. Generally, all validity indices were within an acceptable range, with the exception of values associated with items such as ‘whole wheat bread’ (k = 0.16). The obesogenic score (presence of high-fat, high-energy foods) had high criterion validity with k = 0.57, sensitivity = 91.8%, specificity = 70.6% and Spearman correlation = 0.78. The revised HFI is a valid assessment tool for use among Spanish and Somali households. This instrument refinement and validation process can be replicated with other population groups.
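
    The item-level agreement statistics above are standard kappa coefficients; a small sketch using scikit-learn, with invented presence/absence codings:

      from sklearn.metrics import cohen_kappa_score

      # Hypothetical item codings (1 = food item present in the home)
      participant = [1, 0, 1, 1, 0, 1, 0, 0]
      staff       = [1, 0, 1, 0, 0, 1, 0, 1]
      print(cohen_kappa_score(participant, staff))   # agreement beyond chance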

  7. Validation of alternative methods for toxicity testing.

    PubMed Central

    Bruner, L H; Carr, G J; Curren, R D; Chamberlain, M

    1998-01-01

    Before nonanimal toxicity tests may be officially accepted by regulatory agencies, it is generally agreed that the validity of the new methods must be demonstrated in an independent, scientifically sound validation program. Validation has been defined as the demonstration of the reliability and relevance of a test method for a particular purpose. This paper provides a brief review of the development of the theoretical aspects of the validation process and updates current thinking about objectively testing the performance of an alternative method in a validation study. Validation of alternative methods for eye irritation testing is a specific example illustrating important concepts. Although discussion focuses on the validation of alternative methods intended to replace current in vivo toxicity tests, the procedures can be used to assess the performance of alternative methods intended for other uses. PMID:9599695

  8. Further Validation of the Coach Identity Prominence Scale

    ERIC Educational Resources Information Center

    Pope, J. Paige; Hall, Craig R.

    2014-01-01

    This study was designed to examine select psychometric properties of the Coach Identity Prominence Scale (CIPS), including the reliability, factorial validity, convergent validity, discriminant validity, and predictive validity. Coaches (N = 338) who averaged 37 (SD = 12.27) years of age, had a mean of 13 (SD = 9.90) years of coaching experience,…

  9. Making processes reliable: a validated pubmed search strategy for identifying new or emerging technologies.

    PubMed

    Varela-Lema, Leonora; Punal-Riobóo, Jeanette; Acción, Beatriz Casal; Ruano-Ravina, Alberto; García, Marisa López

    2012-10-01

    Horizon scanning systems need to handle a wide range of sources to identify new or emerging health technologies. The objective of this study is to develop a validated Medline bibliographic search strategy (PubMed search engine) to systematically identify new or emerging health technologies. The proposed Medline search strategy combines free-text terms commonly used in article titles to denote innovation with index terms that refer to the specific fields of interest. Efficacy was assessed by running the search over a period of 1 year (2009) and analyzing its retrieval performance (number and characteristics). For comparison purposes, all article abstracts published during 2009 in six preselected key research journals and eight high-impact surgery journals were scanned. Sensitivity was defined as the proportion of relevant new or emerging technologies published in key journals that would be identified by the search strategy within the first 2 years of publication. The search yielded 6,228 abstracts of potentially new or emerging technologies. Of these, 459 were classified as new or emerging (383 truly new or emerging and 76 new indications). The scanning of 12,061 journal abstracts identified 35 relevant new or emerging technologies. Of these, twenty-nine were located by the Medline search strategy during the first 2 years of publication (sensitivity = 83 percent). The current search strategy, validated against key journals, has been shown to be effective for horizon scanning. Even though it may require adaptations depending on the scope of the horizon scanning system, it could serve to simplify and standardize scanning processes.
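
    In the spirit of the strategy described above (free-text innovation terms in titles combined with field-specific index terms), a PubMed query can be run programmatically with Biopython's Entrez module; the query terms below are invented placeholders, not the validated strategy itself.

      from Bio import Entrez

      Entrez.email = "researcher@example.org"   # placeholder; required by NCBI

      query = ('("new"[Title] OR "novel"[Title] OR "emerging"[Title]) '
               'AND "Surgical Procedures, Operative"[MeSH]')
      handle = Entrez.esearch(db="pubmed", term=query, datetype="pdat",
                              mindate="2009", maxdate="2009", retmax=100)
      ids = Entrez.read(handle)["IdList"]
      print(len(ids), "candidate records")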

  10. Development and validation of the pro-environmental behaviour scale for women's health.

    PubMed

    Kim, HyunKyoung

    2017-05-01

    This study aimed to develop and test the Pro-environmental Behavior Scale for Women's Health. Women adopt sustainable behaviours and alter their lifestyles to protect the environment and their health from environmental pollution. The conceptual framework of pro-environmental behaviours was based on Rogers' protection motivation theory and Weinstein's precaution adoption process model. A cross-sectional design was used for instrument development. The instrument development process consisted of a literature review, personal depth interviews and focus group interviews. The sample comprised 356 adult women recruited in April-May 2012 in South Korea using quota sampling. For construct validity, exploratory factor analysis was conducted to examine the factor structure, after which convergent and discriminant validity and known-group comparisons were tested. Principal component analysis yielded 17 items with four factors: 'women's health protection,' 'chemical exposure prevention,' 'alternative consumption,' and 'community-oriented behaviour'. Cronbach's α was 0.81. Convergent and discriminant validity were supported by correlations with other environmental-health and health-behaviour measures. Nursing professionals can reliably use the instrument to assess women's behaviours, which protect their health and the environment. © 2016 John Wiley & Sons Ltd.

  11. Short residence time coal liquefaction process including catalytic hydrogenation

    DOEpatents

    Anderson, Raymond P.; Schmalzer, David K.; Wright, Charles H.

    1982-05-18

    Normally solid dissolved coal product and a distillate liquid product are produced by continuously passing a feed slurry comprising raw feed coal and a recycle solvent oil and/or slurry together with hydrogen to a preheating-reaction zone (26, alone, or 26 together with 42), the hydrogen pressure in the preheating-reaction zone being at least 1500 psig (105 kg/cm²), reacting the slurry in the preheating-reaction zone (26, or 26 with 42) at a temperature in the range of between about 455 and about 500 °C to dissolve the coal to form normally liquid coal and normally solid dissolved coal. A total slurry residence time is maintained in the reaction zone ranging from a finite value from about 0 to about 0.2 hour, and reaction effluent is continuously and directly contacted with a quenching fluid (40, 68) to substantially immediately reduce the temperature of the reaction effluent to below 425 °C to substantially inhibit polymerization so that the yield of insoluble organic matter comprises less than 9 weight percent of said feed coal on a moisture-free basis. The reaction is performed under conditions of temperature, hydrogen pressure and residence time such that the quantity of distillate liquid boiling within the range C₅-454 °C is an amount at least equal to that obtainable by performing the process under the same condition except for a longer total slurry residence time, e.g., 0.3 hour. Solvent boiling range liquid is separated from the reaction effluent (83) and recycled as process solvent (16). The amount of solvent boiling range liquid is sufficient to provide at least 80 weight percent of that required to maintain the process in overall solvent balance.

  12. The pros and cons of code validation

    NASA Technical Reports Server (NTRS)

    Bobbitt, Percy J.

    1988-01-01

    Computational and wind tunnel error sources are examined and quantified using specific calculations of experimental data, and a substantial comparison of theoretical and experimental results, or code validation, is discussed. Wind tunnel error sources considered include wall interference, sting effects, Reynolds number effects, flow quality and transition, and instrumentation such as strain gage balances, electronically scanned pressure systems, hot film gages, hot wire anemometers, and laser velocimeters. Computational error sources include the math model equation set, the solution algorithm, artificial viscosity/dissipation, boundary conditions, the uniqueness of solutions, grid resolution, turbulence modeling, and Reynolds number effects. It is concluded that, although improvements in theory are being made more quickly than in experiments, wind tunnel research has the advantage that a free-transition test reproduces the transition process more realistically than a turbulence model can.

  13. Validation of contractor HMA testing data in the materials acceptance process - phase II : final report.

    DOT National Transportation Integrated Search

    2016-08-01

    This study conducted an analysis of the SCDOT HMA specification. A Research Steering Committee provided oversight of the process. The research process included extensive statistical analyses of test data supplied by SCDOT. A total of 2,789 AC tes...

  14. Performing a Content Validation Study.

    ERIC Educational Resources Information Center

    Spool, Mark D.

    Content validity is concerned with three components: (1) the job content; (2) the test content, and (3) the strength of the relationship between the two. A content validation study, to be considered adequate and defensible should include at least the following four procedures: (1) A thorough and accurate job analysis (to define the job content);…

  15. PSI-Center Validation Studies

    NASA Astrophysics Data System (ADS)

    Nelson, B. A.; Akcay, C.; Glasser, A. H.; Hansen, C. J.; Jarboe, T. R.; Marklin, G. J.; Milroy, R. D.; Morgan, K. D.; Norgaard, P. C.; Shumlak, U.; Sutherland, D. A.; Victor, B. S.; Sovinec, C. R.; O'Bryan, J. B.; Held, E. D.; Ji, J.-Y.; Lukin, V. S.

    2014-10-01

    The Plasma Science and Innovation Center (PSI-Center - http://www.psicenter.org) supports collaborating validation platform experiments with 3D extended MHD simulations using the NIMROD, HiFi, and PSI-TET codes. Collaborators include the Bellan Plasma Group (Caltech), CTH (Auburn U), HBT-EP (Columbia), HIT-SI (U Wash-UW), LTX (PPPL), MAST (Culham), Pegasus (U Wisc-Madison), SSX (Swarthmore College), TCSU (UW), and ZaP/ZaP-HD (UW). The PSI-Center is exploring the application of validation metrics between experimental data and simulation results. Biorthogonal decomposition (BOD) is used to compare experiments with simulations. BOD separates data sets into spatial and temporal structures, giving greater weight to dominant structures. Several BOD metrics are being formulated with the goal of quantitative validation. Results from these simulation and validation studies, as well as an overview of the PSI-Center status, will be presented.
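
    Biorthogonal decomposition of a space-time data set is, in practice, a singular value decomposition whose left and right singular vectors give the temporal and spatial structures; a minimal sketch (array and function names are illustrative):

      import numpy as np

      def biorthogonal_decomposition(data):
          # data: (n_time, n_space) array of probe signals or simulation output.
          # Returns temporal modes ("chronos"), singular weights, and spatial
          # modes ("topos"); large weights mark the dominant structures.
          chronos, weights, topos_t = np.linalg.svd(data, full_matrices=False)
          return chronos, weights, topos_t.T

      def weight_spectrum(data):
          # Normalized weight spectrum, one simple basis for comparing an
          # experiment's decomposition with a simulation's.
          w = np.linalg.svd(data, compute_uv=False)
          return w / w.sum()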

  16. Ares I-X Range Safety Simulation Verification and Analysis Independent Validation and Verification

    NASA Technical Reports Server (NTRS)

    Merry, Carl M.; Tarpley, Ashley F.; Craig, A. Scott; Tartabini, Paul V.; Brewer, Joan D.; Davis, Jerel G.; Dulski, Matthew B.; Gimenez, Adrian; Barron, M. Kyle

    2011-01-01

    NASA's Ares I-X vehicle launched on a suborbital test flight from the Eastern Range in Florida on October 28, 2009. To obtain approval for launch, a range safety final flight data package was generated to meet the data requirements defined in the Air Force Space Command Manual 91-710, Volume 2. The delivery included products such as a nominal trajectory, trajectory envelopes, stage disposal data and footprints, and a malfunction turn analysis. The Air Force's 45th Space Wing uses these products to ensure public and launch area safety. Due to the criticality of these data, an independent validation and verification effort was undertaken to ensure data quality and adherence to requirements. As a result, the product package was delivered with the confidence that independent organizations, using separate simulation software, generated data that met the range requirements and yielded consistent results. This document captures the Ares I-X final flight data package verification and validation analysis, including the methodology used to validate and verify simulation inputs, execution, and results, and presents lessons learned during the process.

  17. Fluorescence In Situ Hybridization Probe Validation for Clinical Use.

    PubMed

    Gu, Jun; Smith, Janice L; Dowling, Patricia K

    2017-01-01

    In this chapter, we provide a systematic overview of the published guidelines and validation procedures for fluorescence in situ hybridization (FISH) probes for clinical diagnostic use. FISH probes, which are classified as molecular probes or analyte-specific reagents (ASRs), have been extensively used in vitro for both clinical diagnosis and research. Most commercially available FISH probes in the United States are strictly regulated by the U.S. Food and Drug Administration (FDA), the Centers for Disease Control and Prevention (CDC), the Centers for Medicare & Medicaid Services (CMS) through the Clinical Laboratory Improvement Amendments (CLIA), and the College of American Pathologists (CAP). Although home-brewed FISH probes, defined as probes made in-house or acquired from a source that does not supply them to other laboratories, are not regulated by these agencies, they too must undergo the same individual validation process prior to clinical use as their commercial counterparts. Validation of a FISH probe involves initial validation and ongoing verification of the test system. Initial validation includes assessment of a probe's technical specifications, establishment of its standard operating procedure (SOP), determination of its clinical sensitivity and specificity, development of its cutoff, baseline, and normal reference ranges, gathering of analytics, confirmation of its applicability to a specific research or clinical setting, testing of samples with or without the abnormalities that the probe is meant to detect, staff training, and report building. Ongoing verification of the test system involves testing additional normal and abnormal samples using the same method employed during the initial validation of the probe.

  18. Empirical agreement in model validation.

    PubMed

    Jebeile, Julie; Barberousse, Anouk

    2016-04-01

    Empirical agreement is often used as an important criterion when assessing the validity of scientific models. However, it is by no means a sufficient criterion, as a model can be so adjusted as to fit available data even though it is based on hypotheses whose plausibility is known to be questionable. Our aim in this paper is to investigate the uses of empirical agreement within the process of model validation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  19. Validation of biomarkers of food intake-critical assessment of candidate biomarkers.

    PubMed

    Dragsted, L O; Gao, Q; Scalbert, A; Vergères, G; Kolehmainen, M; Manach, C; Brennan, L; Afman, L A; Wishart, D S; Andres Lacueva, C; Garcia-Aloy, M; Verhagen, H; Feskens, E J M; Praticò, G

    2018-01-01

    Biomarkers of food intake (BFIs) are a promising tool for limiting misclassification in nutrition research where more subjective dietary assessment instruments are used. They may also be used to assess compliance with dietary guidelines or with a dietary intervention. Biomarkers therefore hold promise for direct and objective measurement of food intake. However, the number of comprehensively validated biomarkers of food intake is limited to just a few. Many new candidate biomarkers emerge from metabolic profiling studies and from advances in food chemistry. Furthermore, candidate food intake biomarkers may also be identified based on extensive literature reviews such as those described in the guidelines for Biomarker of Food Intake Reviews (BFIRev). To systematically and critically assess the validity of candidate biomarkers of food intake, it is necessary to outline and streamline an optimal and reproducible validation process. A consensus-based procedure was used to provide and evaluate a set of the most important criteria for systematic validation of BFIs. As a result, a validation procedure was developed that includes eight criteria: plausibility, dose-response, time-response, robustness, reliability, stability, analytical performance, and inter-laboratory reproducibility. The validation has a dual purpose: (1) to estimate the current level of validation of candidate biomarkers of food intake based on an objective and systematic approach and (2) to pinpoint which additional studies are needed to provide full validation of each candidate biomarker of food intake. This position paper on biomarker of food intake validation outlines the second step of the BFIRev procedure but may also be used as such for validation of new candidate biomarkers identified, e.g., in food metabolomic studies.

  20. Articles which include chevron film cooling holes, and related processes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Bunker, Ronald Scott; Lacy, Benjamin Paul

    An article is described, including an inner surface which can be exposed to a first fluid; an inlet; and an outer surface spaced from the inner surface, which can be exposed to a hotter second fluid. The article further includes at least one row or other pattern of passage holes. Each passage hole includes an inlet bore extending through the substrate from the inlet at the inner surface to a passage hole-exit proximate to the outer surface, with the inlet bore terminating in a chevron outlet adjacent the hole-exit. The chevron outlet includes a pair of wing troughs having a common surface region between them. The common surface region includes a valley which is adjacent the hole-exit, and a plateau adjacent the valley. The article can be an airfoil. Related methods for preparing the passage holes are also described.

  1. Reliability and Construct Validity of the Psychopathic Personality Inventory-Revised in a Swedish Non-Criminal Sample - A Multimethod Approach including Psychophysiological Correlates of Empathy for Pain.

    PubMed

    Sörman, Karolina; Nilsonne, Gustav; Howner, Katarina; Tamm, Sandra; Caman, Shilan; Wang, Hui-Xin; Ingvar, Martin; Edens, John F; Gustavsson, Petter; Lilienfeld, Scott O; Petrovic, Predrag; Fischer, Håkan; Kristiansson, Marianne

    2016-01-01

    Cross-cultural investigation of psychopathy measures is important for clarifying the nomological network surrounding the psychopathy construct. The Psychopathic Personality Inventory-Revised (PPI-R) is one of the most extensively researched self-report measures of psychopathic traits in adults. To date, however, it has been examined primarily in North American criminal or student samples. To address this gap in the literature, we examined the PPI-R's reliability, construct validity and factor structure in non-criminal individuals (N = 227) in Sweden, using a multimethod approach including psychophysiological correlates of empathy for pain. PPI-R construct validity was investigated in subgroups of participants by exploring its degree of overlap with (i) the Psychopathy Checklist: Screening Version (PCL:SV), (ii) self-rated empathy and behavioral and physiological responses in an experiment on empathy for pain, and (iii) additional self-report measures of alexithymia and trait anxiety. The PPI-R total score was significantly associated with PCL:SV total and factor scores. The PPI-R Coldheartedness scale demonstrated significant negative associations with all empathy subscales and with rated unpleasantness and skin conductance responses in the empathy experiment. The PPI-R higher order Self-Centered Impulsivity and Fearless Dominance dimensions were associated with trait anxiety in opposite directions (positively and negatively, respectively). Overall, the results demonstrated solid reliability (test-retest and internal consistency) and promising but somewhat mixed construct validity for the Swedish translation of the PPI-R.

  2. Specification Reformulation During Specification Validation

    NASA Technical Reports Server (NTRS)

    Benner, Kevin M.

    1992-01-01

    The goal of the ARIES Simulation Component (ASC) is to uncover behavioral errors by 'running' a specification at the earliest possible points during the specification development process. The problems to be overcome are the obvious ones: the specification may be large, incomplete, underconstrained, and/or uncompilable. This paper describes how specification reformulation is used to mitigate these problems. ASC begins by decomposing validation into specific validation questions. Next, the specification is reformulated to abstract out all those features unrelated to the identified validation question, thus creating a new specialized specification. ASC relies on a precise statement of the validation question and a careful application of transformations so as to preserve the essential specification semantics in the resulting specialized specification. This technique is a win if the resulting specialized specification is small enough that the user may easily handle any remaining obstacles to execution. This paper will: (1) describe what a validation question is; (2) outline analysis techniques for identifying which concepts are and are not relevant to a validation question; and (3) identify and apply transformations which remove the less relevant concepts while preserving those which are relevant.

  3. CosmoQuest:Using Data Validation for More Than Just Data Validation

    NASA Astrophysics Data System (ADS)

    Lehan, C.; Gay, P.

    2016-12-01

    It is often taken for granted that different scientists completing the same task (e.g., mapping geologic features) will get the same results, and data validation is often skipped or under-utilized due to time and funding constraints. Robbins et al. (2014), however, demonstrated that this is a needed step, as large variation can exist even among collaborating team members completing straightforward tasks like marking craters. Data validation should be much more than a simple post-project verification of results. The CosmoQuest virtual research facility employs regular data validation for a variety of benefits, including real-time user feedback, real-time tracking to observe user activity while it is happening, and using pre-solved data to analyze users' progress and to help them retain skills. Some creativity in this area can drastically improve project results. We discuss methods of validating data in citizen science projects and outline the variety of uses for validation, which, when used properly, improves the scientific output of the project and the user experience for the citizens doing the work. More than just a tool for scientists, validation can assist users in both learning and retaining important information and skills, improving the quality and quantity of data gathered. Real-time analysis of user data can give key information on the effectiveness of the project that a broad glance would miss, and properly presenting that analysis is vital. Training users to validate their own data, or the data of others, can significantly improve the accuracy of misinformed or novice users.

  4. Identification and location tasks rely on different mental processes: a diffusion model account of validity effects in spatial cueing paradigms with emotional stimuli.

    PubMed

    Imhoff, Roland; Lange, Jens; Germar, Markus

    2018-02-22

    Spatial cueing paradigms are popular tools to assess human attention to emotional stimuli, but different variants of these paradigms differ in what participants' primary task is. In one variant, participants indicate the location of the target (location task), whereas in the other they indicate the shape of the target (identification task). In the present paper we test the idea that although these two variants produce seemingly comparable cue validity effects on response times, they rest on different underlying processes. Across four studies (total N = 397; two in the supplement) using both variants and manipulating the motivational relevance of cue content, diffusion model analyses revealed that cue validity effects in location tasks are primarily driven by response biases, whereas the same effect rests on delay due to attention to the cue in identification tasks. Based on this, we predicted, and empirically confirmed, that a symmetrical distribution of valid and invalid cues reduces cue validity effects in location tasks to a greater extent than in identification tasks. Across all variants of the task, we fail to replicate the finding of greater cue validity effects for arousing (vs. neutral) stimuli. We discuss the implications of these findings for best practice in spatial cueing research.
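
    To make the diffusion-model account concrete, the sketch below simulates a minimal drift-diffusion process: a valid cue in the location task is modeled as a shift in the starting point (a response bias), which shortens mean response time relative to an invalid cue. All parameter values are illustrative assumptions, not estimates from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def ddm_rt(drift, start, threshold=1.0, noise=1.0, dt=0.001, t_nd=0.3):
    """Simulate one diffusion trial; return (RT in s, reached upper bound)."""
    x, t = start, 0.0
    while 0.0 < x < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.normal()
        t += dt
    return t + t_nd, x >= threshold

# Location task: valid cue -> starting point shifted toward the correct bound.
# (In identification tasks the cue would instead shorten the attention-related
# delay, i.e., the non-decision time t_nd.)
valid_rts   = [ddm_rt(drift=2.0, start=0.6)[0] for _ in range(2000)]
invalid_rts = [ddm_rt(drift=2.0, start=0.4)[0] for _ in range(2000)]
print(f"validity effect: {np.mean(invalid_rts) - np.mean(valid_rts):.3f} s")
```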

  5. Validity and reliability of an application review process using dedicated reviewers in one stage of a multi-stage admissions model.

    PubMed

    Zeeman, Jacqueline M; McLaughlin, Jacqueline E; Cox, Wendy C

    2017-11-01

    With increased emphasis placed on non-academic skills in the workplace, a need exists to identify an admissions process that evaluates these skills. This study assessed the validity and reliability of an application review process involving three dedicated application reviewers in a multi-stage admissions model. A multi-stage admissions model was utilized during the 2014-2015 admissions cycle. After advancing through the academic review, each application was independently reviewed by two dedicated application reviewers utilizing a six-construct rubric (written communication, extracurricular and community service activities, leadership experience, pharmacy career appreciation, research experience, and resiliency). Rubric scores were extrapolated to a three-tier ranking to select candidates for on-site interviews. Kappa statistics were used to assess interrater reliability. A three-facet Many-Facet Rasch Model (MFRM) determined reviewer severity, candidate suitability, and rubric construct difficulty. The kappa statistic for candidates' tier rank score (n = 388 candidates) was 0.692 with a perfect agreement frequency of 84.3%. There was substantial interrater reliability between reviewers for the tier ranking (kappa: 0.654-0.710). Highest construct agreement occurred in written communication (kappa: 0.924-0.984). A three-facet MFRM analysis explained 36.9% of variance in the ratings, with 0.06% reflecting application reviewer scoring patterns (i.e., severity or leniency), 22.8% reflecting candidate suitability, and 14.1% reflecting construct difficulty. Utilization of dedicated application reviewers and a defined tiered rubric provided a valid and reliable method to effectively evaluate candidates during the application review process. These analyses provide insight into opportunities for improving the application review process among schools and colleges of pharmacy. Copyright © 2017 Elsevier Inc. All rights reserved.
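
    For readers unfamiliar with the statistic, the interrater agreement reported above can be reproduced on toy data with scikit-learn's cohen_kappa_score; the tier rankings below are invented for illustration, not taken from the study:

```python
from sklearn.metrics import cohen_kappa_score

# Hypothetical three-tier rankings (1-3) assigned by two dedicated reviewers.
reviewer_a = [1, 2, 2, 3, 1, 2, 3, 3, 1, 2]
reviewer_b = [1, 2, 3, 3, 1, 2, 3, 2, 1, 2]

kappa = cohen_kappa_score(reviewer_a, reviewer_b)
agreement = sum(a == b for a, b in zip(reviewer_a, reviewer_b)) / len(reviewer_a)
print(f"kappa = {kappa:.3f}, perfect agreement = {agreement:.0%}")
```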

  6. Validation study and routine control monitoring of moist heat sterilization procedures.

    PubMed

    Shintani, Hideharu

    2012-06-01

    The proposed approach to validation of steam sterilization in autoclaves follows the basic life-cycle concepts applicable to all validation programs: understand the function of the sterilization process, develop and understand the cycles used to carry out the process, and define a suitable test or series of tests to confirm that the function of the process is suitably ensured by the structure provided. Sterilization of product, and of components and parts that come in direct contact with sterilized product, is the most critical of pharmaceutical processes. Consequently, this process requires a most rigorous and detailed approach to validation. An understanding of the process requires a basic understanding of microbial death, the parameters that facilitate that death, the accepted definition of sterility, and the relationship between the definition and sterilization parameters. Autoclaves and support systems need to be designed, installed, and qualified in a manner that ensures their continued reliability. Lastly, the test program must be complete and definitive. In this paper, in addition to the validation study, the documentation of IQ, OQ, and PQ is described in concrete terms.
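
    A worked example of the kind of cycle calculation that underpins moist-heat validation: the accumulated lethality F0 of a temperature log, using the standard relation F0 = Σ 10^((T − 121.1 °C)/z) Δt with z = 10 °C. The temperature profile below is invented; a real PQ would use calibrated probe data:

```python
# Sketch: equivalent lethality (F0) of a moist-heat cycle from a temperature
# log. F0 is expressed in equivalent minutes at 121.1 °C.

def f0(temps_c, dt_min=1.0, t_ref=121.1, z=10.0):
    """Accumulated lethality: sum of 10**((T - t_ref)/z) * dt over the cycle."""
    return sum(10 ** ((t - t_ref) / z) * dt_min for t in temps_c)

cycle = [100, 110, 118, 121, 122, 122, 121, 115]  # °C, one reading per minute
print(f"F0 = {f0(cycle):.1f} equivalent minutes")
```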

  7. Cleaning and Sterilization of Used Cardiac Implantable Electronic Devices With Process Validation: The Next Hurdle in Device Recycling.

    PubMed

    Crawford, Thomas C; Allmendinger, Craig; Snell, Jay; Weatherwax, Kevin; Lavan, Balasundaram; Baman, Timir S; Sovitch, Pat; Alyesh, Daniel; Carrigan, Thomas; Klugman, Noah; Kune, Denis; Hughey, Andrew; Lautenbach, Daniel; Sovitch, Nathan; Tandon, Karman; Samson, George; Newman, Charles; Davis, Sheldon; Brown, Archie; Wasserman, Brad; Goldman, Ed; Arlinghaus, Sandra L; Oral, Hakan; Eagle, Kim A

    2017-06-01

    This study sought to develop a validated, reproducible sterilization protocol, which could be used in the reprocessing of cardiac implantable electronic devices (CIEDs). Access to CIED therapy in high-income and in low- and middle-income countries varies greatly. CIED reuse may reduce this disparity. A cleaning and sterilization protocol was developed that includes washing CIEDs in an enzymatic detergent, screw cap and set screw replacement, brushing, inspection, and sterilization in ethylene oxide. Validation testing was performed to assure compliance with accepted standards. With cleaning, the total mean bioburden for each of 3 batches of 10 randomly chosen devices was reduced from 754 to 10.1 colony-forming units. After sterilization with ethylene oxide, with 3 half-cycle and 3 full-cycle processes, none of the 90 biological indicator testers exhibited growth after 7 days. Through cleaning and sterilization, protein and hemoglobin concentrations were reduced from 99.2 to 1.42 μg/cm² and from 21.4 to 1.03 μg/cm², respectively. Mean total organic carbon residual was 1.44 parts per million (range 0.36 to 2.9 parts per million). Endotoxin concentration was not detectable at the threshold of <0.03 endotoxin units/ml or <3.0 endotoxin units/device. Cytotoxicity and intracutaneous reactivity tests met the standards set by the Association for the Advancement of Medical Instrumentation and the International Organization for Standardization. CIEDs can be cleaned and sterilized according to a standardized protocol achieving a 12-log reduction of inoculated product, resulting in a sterility assurance level of 10⁻⁶. Copyright © 2017 American College of Cardiology Foundation. Published by Elsevier Inc. All rights reserved.
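
    The sterility-assurance arithmetic in the conclusion can be illustrated in a few lines: the probability of a surviving organism is 10 raised to (log10 of the starting bioburden minus the log reductions delivered). The numbers mirror the abstract (mean bioburden 10.1 CFU after cleaning, a 12-log reduction per the conclusion); the sketch is an illustration of the arithmetic, not the study's validation method:

```python
import math

def sal(bioburden_cfu, log_reduction):
    """Sterility assurance level: 10**(log10(bioburden) - log reductions)."""
    return 10 ** (math.log10(bioburden_cfu) - log_reduction)

print(f"SAL = {sal(10.1, 12):.1e}")  # ~1e-11, comfortably below the 1e-6 target
```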

  8. Validation of NASA Thermal Ice Protection Computer Codes. Part 1; Program Overview

    NASA Technical Reports Server (NTRS)

    Miller, Dean; Bond, Thomas; Sheldon, David; Wright, William; Langhals, Tammy; Al-Khalil, Kamel; Broughton, Howard

    1996-01-01

    The Icing Technology Branch at NASA Lewis has been involved in an effort to validate two thermal ice protection codes developed at the NASA Lewis Research Center: LEWICE/Thermal (electrothermal de-icing and anti-icing) and ANTICE (hot-gas and electrothermal anti-icing). The Thermal Code Validation effort was designated as a priority during a 1994 'peer review' of the NASA Lewis Icing program and was implemented as a cooperative effort with industry. During April 1996, the first of a series of experimental validation tests was conducted in the NASA Lewis Icing Research Tunnel (IRT). The purpose of the April 1996 test was to validate the electrothermal predictive capabilities of both LEWICE/Thermal and ANTICE. A heavily instrumented test article was designed and fabricated for this test, with the capability of simulating electrothermal de-icing and anti-icing modes of operation. Thermal measurements were then obtained over a range of test conditions, for comparison with analytical predictions. This paper will present an overview of the test, including a detailed description of: (1) the validation process; (2) test article design; (3) test matrix development; and (4) test procedures. Selected experimental results will be presented for de-icing and anti-icing modes of operation. Finally, the status of the validation effort at this point will be summarized. Detailed comparisons between analytical predictions and experimental results are contained in the following two papers: 'Validation of NASA Thermal Ice Protection Computer Codes: Part 2 - The Validation of LEWICE/Thermal' and 'Validation of NASA Thermal Ice Protection Computer Codes: Part 3 - The Validation of ANTICE'.

  9. Validation of ISS Floating Potential Measurement Unit Electron Densities and Temperatures

    NASA Technical Reports Server (NTRS)

    Coffey, Victoria N.; Minow, Joseph I.; Parker, Linda N.; Bui, Them; Wright, Kenneth, Jr.; Koontz, Steven L.; Schneider, T.; Vaughn, J.; Craven, P.

    2007-01-01

    Validation of the Floating Potential Measurement Unit (FPMU) electron density and temperature measurements is an important step in the process of evaluating International Space Station (ISS) spacecraft charging issues, including vehicle arcing and hazards to crew during extravehicular activities. The highest potentials observed on Space Station are due to the combined VxB effects on a large spacecraft and the collection of ionospheric electron and ion currents by the 160 V US solar array modules. Ionospheric electron environments are needed as input to the ISS spacecraft charging models used to predict the severity and frequency of occurrence of ISS charging hazards. Validation of these charging models requires comparing their predictions with measured FPMU values. Of course, the FPMU measurements themselves must also be validated independently for use in manned flight safety work. This presentation compares electron density and temperatures derived from the FPMU Langmuir probes and Plasma Impedance Probe against independent density and temperature measurements from ultraviolet imagers, ground-based incoherent scatter radar, and ionosonde sites.
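
    As background to the Langmuir-probe part of this validation, electron temperature is commonly obtained from the electron-retardation region of the I-V sweep, where the electron current grows as exp(V/Te[eV]), so the slope of ln(Ie) versus V gives 1/Te. A sketch on synthetic data; the FPMU's actual processing chain is more involved, and all numbers here are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Synthetic retardation-region sweep with Te = 0.15 eV, a plausible
# ionospheric value; multiplicative noise keeps the current positive.
te_ev = 0.15
v = np.linspace(-1.0, -0.2, 40)                  # volts below plasma potential
i_e = 1e-6 * np.exp(v / te_ev) * rng.lognormal(0, 0.05, v.size)

# Straight-line fit of ln(Ie) vs V: slope = 1/Te (in eV).
slope, _ = np.polyfit(v, np.log(i_e), 1)
print(f"retrieved Te ≈ {1/slope:.3f} eV (true {te_ev} eV)")
```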

  10. Validation of high throughput sequencing and microbial forensics applications

    PubMed Central

    2014-01-01

    High throughput sequencing (HTS) generates large amounts of high quality sequence data for microbial genomics. The value of HTS for microbial forensics is the speed at which evidence can be collected and the power to characterize microbial-related evidence to solve biocrimes and bioterrorist events. As HTS technologies continue to improve, they provide increasingly powerful sets of tools to support the entire field of microbial forensics. Accurate, credible results allow analysis and interpretation, significantly influencing the course and/or focus of an investigation, and can impact the response of the government to an attack having individual, political, economic or military consequences. Interpretation of the results of microbial forensic analyses relies on understanding the performance and limitations of HTS methods, including analytical processes, assays and data interpretation. The utility of HTS must be defined carefully within established operating conditions and tolerances. Validation is essential in the development and implementation of microbial forensics methods used for formulating investigative leads and attribution. HTS strategies vary, requiring guiding principles for HTS system validation. Three initial aspects of HTS, irrespective of chemistry, instrumentation, or software, are: 1) sample preparation, 2) sequencing, and 3) data analysis. Criteria that should be considered for HTS validation for microbial forensics are presented here. Validation should be defined in terms of specific application, and the criteria described here comprise a foundation for investigators to establish, validate and implement HTS as a tool in microbial forensics, enhancing public safety and national security. PMID:25101166

  11. Validation of high throughput sequencing and microbial forensics applications.

    PubMed

    Budowle, Bruce; Connell, Nancy D; Bielecka-Oder, Anna; Colwell, Rita R; Corbett, Cindi R; Fletcher, Jacqueline; Forsman, Mats; Kadavy, Dana R; Markotic, Alemka; Morse, Stephen A; Murch, Randall S; Sajantila, Antti; Schmedes, Sarah E; Ternus, Krista L; Turner, Stephen D; Minot, Samuel

    2014-01-01

    High throughput sequencing (HTS) generates large amounts of high quality sequence data for microbial genomics. The value of HTS for microbial forensics is the speed at which evidence can be collected and the power to characterize microbial-related evidence to solve biocrimes and bioterrorist events. As HTS technologies continue to improve, they provide increasingly powerful sets of tools to support the entire field of microbial forensics. Accurate, credible results allow analysis and interpretation, significantly influencing the course and/or focus of an investigation, and can impact the response of the government to an attack having individual, political, economic or military consequences. Interpretation of the results of microbial forensic analyses relies on understanding the performance and limitations of HTS methods, including analytical processes, assays and data interpretation. The utility of HTS must be defined carefully within established operating conditions and tolerances. Validation is essential in the development and implementation of microbial forensics methods used for formulating investigative leads and attribution. HTS strategies vary, requiring guiding principles for HTS system validation. Three initial aspects of HTS, irrespective of chemistry, instrumentation, or software, are: 1) sample preparation, 2) sequencing, and 3) data analysis. Criteria that should be considered for HTS validation for microbial forensics are presented here. Validation should be defined in terms of specific application, and the criteria described here comprise a foundation for investigators to establish, validate and implement HTS as a tool in microbial forensics, enhancing public safety and national security.

  12. 2nd NASA CFD Validation Workshop

    NASA Technical Reports Server (NTRS)

    1990-01-01

    The purpose of the workshop was to review NASA's progress in CFD validation since the first workshop (held at Ames in 1987) and to affirm the future direction of the NASA CFD validation program. The first session consisted of overviews of CFD validation research at each of the three OAET research centers and at Marshall Space Flight Center. The second session consisted of in-depth technical presentations of the best examples of CFD validation work at each center (including Marshall). On the second day the workshop divided into three working groups to discuss CFD validation progress and needs in the subsonic, high-speed, and hypersonic speed ranges. The emphasis of the working groups was on propulsion.

  13. Process evaluation to explore internal and external validity of the "Act in Case of Depression" care program in nursing homes.

    PubMed

    Leontjevas, Ruslan; Gerritsen, Debby L; Koopmans, Raymond T C M; Smalbrugge, Martin; Vernooij-Dassen, Myrra J F J

    2012-06-01

    A multidisciplinary, evidence-based care program to improve the management of depression in nursing home residents, "Act in case of Depression" (AiD), was implemented and tested using a stepped-wedge design in 23 nursing homes (NHs). The objective was, before effect analyses, to evaluate AiD process data on sampling quality (recruitment and randomization, reach) and intervention quality (relevance and feasibility, extent to which AiD was performed), which can be used for understanding internal and external validity. In this article, a model is presented that divides process evaluation data into first- and second-order process data. Qualitative and quantitative data based on residents' personal files, interviews with nursing home professionals, and a research database were analyzed according to the following process evaluation components: sampling quality and intervention quality. The setting was nursing homes. The pattern of residents' informed consent rates differed between dementia special care units and somatic units during the study. The nursing home staff was satisfied with the AiD program and reported that the program was feasible and relevant. With the exception of the first screening step (nursing staff members using a short observer-based depression scale), AiD components were not performed fully by NH staff as prescribed in the AiD protocol. Although NH staff found the program relevant and feasible and was satisfied with the program content, individual AiD components may differ in feasibility. The results on sampling quality implied that statistical analyses of AiD effectiveness should account for the type of unit, whereas the findings on intervention quality implied that, next to the type of unit, analyses should account for the extent to which individual AiD program components were performed. In general, our first-order process data evaluation confirmed the internal and external validity of the AiD trial, and this evaluation enabled further statistical fine-tuning. The importance of

  14. Validation of Immunohistochemical Assays for Integral Biomarkers in the NCI-MATCH EAY131 Clinical Trial.

    PubMed

    Khoury, Joseph D; Wang, Wei-Lien; Prieto, Victor G; Medeiros, L Jeffrey; Kalhor, Neda; Hameed, Meera; Broaddus, Russell; Hamilton, Stanley R

    2018-02-01

    Biomarkers that guide therapy selection are gaining unprecedented importance as targeted therapy options increase in scope and complexity. In conjunction with high-throughput molecular techniques, therapy-guiding biomarker assays based upon immunohistochemistry (IHC) have a critical role in cancer care in that they report the expression status of a protein target. Here, we describe the validation procedures for four clinical IHC biomarker assays (PTEN, RB, MLH1, and MSH2) for use as integral biomarkers in the nationwide NCI-Molecular Analysis for Therapy Choice (NCI-MATCH) EAY131 clinical trial. Validation procedures were developed through an iterative process based on collective experience and adaptation of broad guidelines from the FDA. The steps included primary antibody selection; assay optimization; development of assay interpretation criteria incorporating biological considerations and expected staining patterns, including indeterminate results; orthogonal validation; and tissue validation. Following assay lockdown, patient samples and cell lines were used for analytic and clinical validation. The assays were then approved as laboratory-developed tests and used for clinical trial decisions for treatment selection. Calculations of sensitivity and specificity were undertaken using various definitions of gold-standard references, and external validation was required for the PTEN IHC assay. In conclusion, validation of IHC biomarker assays critical for guiding therapy in clinical trials is feasible using comprehensive preanalytic, analytic, and postanalytic steps. Implementation of standardized guidelines provides a useful framework for validating IHC biomarker assays that allow for reproducibility across institutions for routine clinical use. Clin Cancer Res; 24(3); 521-31. ©2017 AACR.

  15. The Validation by Measurement Theory of Proposed Object-Oriented Software Metrics

    NASA Technical Reports Server (NTRS)

    Neal, Ralph D.

    1996-01-01

    Moving software development into the engineering arena requires controllability, and to control a process, it must be measurable. Measuring the process does no good if the product is not also measured, i.e., being the best at producing an inferior product does not define a quality process. Also, not every number extracted from software development is a valid measurement. A valid measurement only results when we are able to verify that the number is representative of the attribute that we wish to measure. Many proposed software metrics are used by practitioners without these metrics ever having been validated, leading to costly but often useless calculations. Several researchers have bemoaned the lack of scientific precision in much of the published software measurement work and have called for validation of software metrics by measurement theory. This dissertation applies measurement theory to validate fifty proposed object-oriented software metrics.

  16. Setup, Validation and Quality Control of a Centralized WGS laboratory - Lessons Learned.

    PubMed

    Arnold, Cath; Edwards, Kirstin; Desai, Meeta; Platt, Steve; Green, Jonathan; Conway, David

    2018-04-25

    Routine use of whole-genome analysis for infectious diseases can inform various public health scenarios, including identification of microbial pathogens; relating individual cases to an outbreak of infectious disease; establishing an association between an outbreak of food poisoning and a specific food vehicle; inferring drug susceptibility; source tracing of contaminants; and study of how variations in the genome affect pathogenicity/virulence. We describe the setup, validation and ongoing verification of a centralised WGS laboratory that carries out the sequencing for these public health functions for the National Infection Services, Public Health England, in the UK. The performance characteristics and quality control metrics measured during validation and verification of the entire end-to-end process (accuracy, precision, reproducibility and repeatability) are described, and include information regarding the automated pass and release of data to service users without intervention. © Crown copyright 2018.
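
    The automated pass-and-release step mentioned above reduces, at its core, to gating each sample's QC metrics against validated thresholds. A minimal sketch; the metric names and threshold values are hypothetical, not PHE's actual criteria:

```python
# Hypothetical release gate for sequenced samples. Thresholds below are
# invented placeholders; a validated pipeline would use values established
# during verification.
THRESHOLDS = {"mean_q_score": 30.0, "coverage_x": 30.0, "contamination_pct": 5.0}

def release(metrics: dict) -> bool:
    """Return True if the sample may be auto-released without intervention."""
    return (metrics["mean_q_score"] >= THRESHOLDS["mean_q_score"]
            and metrics["coverage_x"] >= THRESHOLDS["coverage_x"]
            and metrics["contamination_pct"] <= THRESHOLDS["contamination_pct"])

print(release({"mean_q_score": 34.2, "coverage_x": 55.0, "contamination_pct": 1.2}))
```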

  17. Developmental validation of the DNAscan™ Rapid DNA Analysis™ instrument and expert system for reference sample processing.

    PubMed

    Della Manna, Angelo; Nye, Jeffrey V; Carney, Christopher; Hammons, Jennifer S; Mann, Michael; Al Shamali, Farida; Vallone, Peter M; Romsos, Erica L; Marne, Beth Ann; Tan, Eugene; Turingan, Rosemary S; Hogan, Catherine; Selden, Richard F; French, Julie L

    2016-11-01

    Since the implementation of forensic DNA typing in labs more than 20 years ago, the analysis procedures and data interpretation have always been conducted in a laboratory by highly trained and qualified scientific personnel. Rapid DNA technology has the potential to expand testing capabilities within forensic laboratories and to allow forensic STR analysis to be performed outside the physical boundaries of the traditional laboratory. The developmental validation of the DNAscan/ANDE Rapid DNA Analysis System was completed using a BioChipSet™ Cassette consumable designed for high DNA content samples, such as single source buccal swabs. A total of eight laboratories participated in the testing, which totaled over 2300 swabs and included nearly 1400 unique individuals. The goal of this extensive study was to obtain, document, analyze, and assess DNAscan and its internal Expert System to reliably genotype reference samples in a manner compliant with the FBI's Quality Assurance Standards (QAS) and the NDIS Operational Procedures. The DNAscan System provided high quality, concordant results for reference buccal swabs, including automated data analysis with an integrated Expert System. Seven external laboratories and NetBio, the developer of the technology, participated in the validation testing, demonstrating the reproducibility and reliability of the system and its successful use in a variety of settings by numerous operators. The DNAscan System demonstrated limited cross-reactivity with other species, was resilient in the presence of numerous inhibitors, and provided reproducible results for both buccal and purified DNA samples with sensitivity at a level appropriate for buccal swabs. The precision and resolution of the system met industry standards for detection of micro-variants and displayed single base resolution. PCR-based studies provided confidence that the system was robust and that the amplification reaction had been optimized to provide high quality results.

  18. Performance Validation Approach for the GTX Air-Breathing Launch Vehicle

    NASA Technical Reports Server (NTRS)

    Trefny, Charles J.; Roche, Joseph M.

    2002-01-01

    The primary objective of the GTX effort is to determine whether or not air-breathing propulsion can enable a launch vehicle to achieve orbit in a single stage. Structural weight, vehicle aerodynamics, and propulsion performance must be accurately known over the entire flight trajectory in order to make a credible assessment. Structural, aerodynamic, and propulsion parameters are strongly interdependent, which necessitates a system approach to design, evaluation, and optimization of a single-stage-to-orbit concept. The GTX reference vehicle serves this purpose, by allowing design, development, and validation of components and subsystems in a system context. The reference vehicle configuration (including propulsion) was carefully chosen so as to provide high potential for structural and volumetric efficiency, and to allow the high specific impulse of air-breathing propulsion cycles to be exploited. Minor evolution of the configuration has occurred as analytical and experimental results have become available. With this development process comes increasing validation of the weight and performance levels used in system performance determination. This paper presents an overview of the GTX reference vehicle and the approach to its performance validation. Subscale test rigs and numerical studies used to develop and validate component performance levels and unit structural weights are outlined. The sensitivity of the equivalent, effective specific impulse to key propulsion component efficiencies is presented. The role of flight demonstration in development and validation is discussed.

  19. Validation of a questionnaire to measure sexual health knowledge and understanding (Sexual Health Questionnaire) in Nepalese secondary school: A psychometric process.

    PubMed

    Acharya, Dev Raj; Thomas, Malcolm; Cann, Rosemary

    2016-01-01

    School-based sex education has the potential to prevent unwanted pregnancy and to promote positive sexual health at the individual, family and community level. The aim was to develop and validate a sexual health questionnaire (SHQ) to measure young people's sexual health knowledge and understanding in Nepalese secondary schools. Secondary school students (n = 259, male = 43.63%, female = 56.37%) and local experts (n = 9, male = 90%, female = 10%) participated in this study. Evaluation processes were: content validity (>0.89), plausibility check (>95), item-total correlation (>0.3), factor loading (>0.4), principal component analysis (4 factors, Kaiser's criterion), Cronbach's alpha (>0.65), face validity, and internal consistency using test-retest reliability (P > 0.05). The principal component analysis revealed four factors to be extracted: sexual health norms and beliefs, sources of sexual health information, sexual health knowledge and understanding, and level of sexual awareness. The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy demonstrated that the patterns of correlations are relatively compact (>0.80). Cronbach's alpha for each factor was above the cut-off point (0.65). Face validity indicated that the questions were clear to the majority of the respondents. Moreover, there were no significant differences (P > 0.05) in the responses to the items at two time points seven weeks apart. The findings suggest that the SHQ is a valid and reliable instrument to be used in schools to measure sexual health knowledge and understanding. Further analysis, such as structured equation modelling (SEM) and confirmatory factor analysis, could make the questionnaire more robust and applicable to the wider school population.
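
    Two of the evaluation steps listed above, Cronbach's alpha and the corrected item-total correlation, are straightforward to compute directly. A sketch in numpy with simulated responses; random data will rightly fail the cut-offs of alpha > 0.65 and r > 0.3, which illustrates how weak items get flagged:

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical Likert responses (respondents x items); real data would be the
# questionnaire scores. Independent random items are expected to fail both
# thresholds from the abstract.
X = rng.integers(1, 6, size=(259, 8)).astype(float)

def cronbach_alpha(items):
    """alpha = k/(k-1) * (1 - sum of item variances / variance of total)."""
    k = items.shape[1]
    return k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum()
                          / items.sum(axis=1).var(ddof=1))

def item_total_corr(items):
    """Corrected item-total correlation: each item vs. total minus that item."""
    total = items.sum(axis=1)
    return [np.corrcoef(items[:, j], total - items[:, j])[0, 1]
            for j in range(items.shape[1])]

print(f"alpha = {cronbach_alpha(X):.2f}")
print("items below r = 0.3:",
      [j for j, r in enumerate(item_total_corr(X)) if r < 0.3])
```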

  20. Validation results of specifications for motion control interoperability

    NASA Astrophysics Data System (ADS)

    Szabo, Sandor; Proctor, Frederick M.

    1997-01-01

    The National Institute of Standards and Technology (NIST) is participating in the Department of Energy Technologies Enabling Agile Manufacturing (TEAM) program to establish interface standards for machine tool, robot, and coordinate measuring machine controllers. At NIST, the focus is to validate potential application programming interfaces (APIs) that make it possible to exchange machine controller components with a minimal impact on the rest of the system. This validation is taking place in the enhanced machine controller (EMC) consortium and is in cooperation with users and vendors of motion control equipment. An area of interest is motion control, including closed-loop control of individual axes and coordinated path planning. Initial tests of the motion control APIs are complete. The APIs were implemented on two commercial motion control boards that run on two different machine tools. The results for a baseline set of APIs look promising, but several issues were raised. These include resolving differing approaches in how motions are programmed and defining a standard measurement of performance for motion control. This paper starts with a summary of the process used in developing a set of specifications for motion control interoperability. Next, the EMC architecture and its classification of motion control APIs into two classes, Servo Control and Trajectory Planning, are reviewed. Selected APIs are presented to explain the basic functionality and some of the major issues involved in porting the APIs to other motion controllers. The paper concludes with a summary of the main issues and ways to continue the standards process.

  1. Application of validity theory and methodology to patient-reported outcome measures (PROMs): building an argument for validity.

    PubMed

    Hawkins, Melanie; Elsworth, Gerald R; Osborne, Richard H

    2018-07-01

    Data from subjective patient-reported outcome measures (PROMs) are now being used in the health sector to make or support decisions about individuals, groups and populations. Contemporary validity theorists define validity not as a statistical property of the test but as the extent to which empirical evidence supports the interpretation of test scores for an intended use. However, validity testing theory and methodology are rarely evident in the PROM validation literature. Application of this theory and methodology would provide structure for comprehensive validation planning to support improved PROM development and sound arguments for the validity of PROM score interpretation and use in each new context. This paper proposes the application of contemporary validity theory and methodology to PROM validity testing. The validity testing principles will be applied to a hypothetical case study with a focus on the interpretation and use of scores from a translated PROM that measures health literacy (the Health Literacy Questionnaire or HLQ). Although robust psychometric properties of a PROM are a pre-condition to its use, a PROM's validity lies in the sound argument that a network of empirical evidence supports the intended interpretation and use of PROM scores for decision making in a particular context. The health sector is yet to apply contemporary theory and methodology to PROM development and validation. The theoretical and methodological processes in this paper are offered as an advancement of the theory and practice of PROM validity testing in the health sector.

  2. Posterior odontoid process angulation in pediatric Chiari I malformation: an MRI morphometric external validation study.

    PubMed

    Ladner, Travis R; Dewan, Michael C; Day, Matthew A; Shannon, Chevis N; Tomycz, Luke; Tulipan, Noel; Wellons, John C

    2015-08-01

    OBJECT Osseous anomalies of the craniocervical junction are hypothesized to precipitate the hindbrain herniation observed in Chiari I malformation (CM-I). Previous work by Tubbs et al. showed that posterior angulation of the odontoid process is more prevalent in children with CM-I than in healthy controls. The present study is an external validation of that report. The goals of our study were 3-fold: 1) to externally validate the results of Tubbs et al. in a different patient population; 2) to compare how morphometric parameters vary with age, sex, and symptomatology; and 3) to develop a correlative model for tonsillar ectopia in CM-I based on these measurements. METHODS The authors performed a retrospective review of 119 patients who underwent posterior fossa decompression with duraplasty at the Monroe Carell Jr. Children's Hospital at Vanderbilt University; 78 of these patients had imaging available for review. Demographic and clinical variables were collected. A neuroradiologist retrospectively evaluated preoperative MRI examinations in these 78 patients and recorded the following measurements: McRae line length; obex displacement length; odontoid process parameters (height, angle of retroflexion, and angle of retroversion); perpendicular distance to the basion-C2 line (pB-C2 line); length of cerebellar tonsillar ectopia; caudal extent of the cerebellar tonsils; and presence, location, and size of syringomyelia. Odontoid retroflexion grade was classified as Grade 0, > 90°; Grade I, 85°-89°; Grade II, 80°-84°; and Grade III, < 80°. Age groups were defined as 0-6 years, 7-12 years, and 13-17 years at the time of surgery. Univariate and multivariate linear regression analyses, Kruskal-Wallis 1-way ANOVA, and Fisher's exact test were performed to assess the relationship of age, sex, and symptomatology with these craniometric variables. RESULTS The prevalence of posterior odontoid angulation was 81%, which is almost identical to that in the previous report.
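
    The retroflexion grading used in the study maps directly to a small function; the cut-offs below are exactly those quoted in the abstract:

```python
def odontoid_retroflexion_grade(angle_deg: float) -> int:
    """Grade odontoid retroflexion per the study's cut-offs."""
    if angle_deg > 90:
        return 0      # Grade 0: > 90 degrees
    if angle_deg >= 85:
        return 1      # Grade I: 85-89 degrees
    if angle_deg >= 80:
        return 2      # Grade II: 80-84 degrees
    return 3          # Grade III: < 80 degrees

print([odontoid_retroflexion_grade(a) for a in (95, 87, 82, 75)])  # [0, 1, 2, 3]
```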

  3. Ionospheric Modeling: Development, Verification and Validation

    DTIC Science & Technology

    2005-09-01

    facilitate the automated processing of a large network of GPS receiver data. 4. CALIBRATION AND VALIDATION OF IONOSPHERIC SENSORS: We have been ... NOFS Workshop, Estes Park, CO, January 2005. W. Rideout, A. Coster, P. Doherty (MIT Haystack), Automated Processing of GPS Data to Produce Worldwide TEC.

  4. Perception of competence in middle school physical education: instrument development and validation.

    PubMed

    Scrabis-Fletcher, Kristin; Silverman, Stephen

    2010-03-01

    Perception of Competence (POC) has been studied extensively in physical activity (PA) research, with similar instruments adapted for physical education (PE) research. Such instruments do not account for the unique PE learning environment. Therefore, an instrument was developed and the scores validated to measure POC in middle school PE. A multiphase design was used consisting of an intensive theoretical review, elicitation study, prepilot study, pilot study, content validation study, and final validation study (N = 1281). Data analysis included a multistep iterative process to identify the best model fit. A three-factor model for POC was tested and resulted in root mean square error of approximation = .09, root mean square residual = .07, goodness-of-fit index = .90, and adjusted goodness-of-fit index = .86, values in the acceptable range (Hu & Bentler, 1999). A two-factor model was also tested and resulted in a good fit (two-factor fit index values = .05, .03, .98, .97, respectively). The results of this study suggest that an instrument using a three- or two-factor model provides reliable and valid scores of POC measurement in middle school PE.

  5. Validation of the Social Appearance Anxiety Scale: factor, convergent, and divergent validity.

    PubMed

    Levinson, Cheri A; Rodebaugh, Thomas L

    2011-09-01

    The Social Appearance Anxiety Scale (SAAS) was created to assess fear of overall appearance evaluation. Initial psychometric work indicated that the measure had a single-factor structure and exhibited excellent internal consistency, test-retest reliability, and convergent validity. In the current study, the authors further examined the factor, convergent, and divergent validity of the SAAS in two samples of undergraduates. In Study 1 (N = 323), the authors tested the factor structure, convergent, and divergent validity of the SAAS with measures of the Big Five personality traits, negative affect, fear of negative evaluation, and social interaction anxiety. In Study 2 (N = 118), participants completed a body evaluation that included measurements of height, weight, and body fat content. The SAAS exhibited excellent convergent and divergent validity with self-report measures (i.e., self-esteem, trait anxiety, ethnic identity, and sympathy), predicted state anxiety experienced during the body evaluation, and predicted body fat content. In both studies, results confirmed a single-factor structure as the best fit to the data. These results lend additional support for the use of the SAAS as a valid measure of social appearance anxiety.

  6. Adaptation and validation of a Spanish-language version of the Frontotemporal Dementia Rating Scale (FTD-FRS).

    PubMed

    Turró-Garriga, O; Hermoso Contreras, C; Olives Cladera, J; Mioshi, E; Pelegrín Valero, C; Olivera Pueyo, J; Garre-Olmo, J; Sánchez-Valle, R

    2017-06-01

    The Frontotemporal Dementia Rating Scale (FTD-FRS) is a tool designed to aid with clinical staging and assessment of the progression of frontotemporal dementia (FTD). We present a multicentre adaptation and validation study of a Spanish version of the FTD-FRS. The adapted version was created using two translation-back-translation processes (English to Spanish, Spanish to English) and verified by the scale's original authors. We validated the adapted version in a sample of consecutive patients diagnosed with FTD. The procedure included evaluating internal consistency, testing unidimensionality with the Rasch model, analysing construct validity and discriminant validity, and calculating the degree of agreement between the Clinical Dementia Rating scale (CDR) and the FTD-FRS for FTD cases. The study included 60 patients with FTD. The mean score on the FTD-FRS was 12.1 points (SD = 6.5; range, 2-25) with inter-group differences (F = 120.3; df = 3; P < .001). Cronbach's alpha was 0.897, and principal component analysis of residuals delivered an acceptable eigenvalue for 5 contrasts (1.6-2.7) and 36.1% raw variance. The FTD-FRS was correlated with the Mini-Mental State Examination (r = 0.572; P < .001) and functional capacity (DAD; r = 0.790; P < .001). The FTD-FRS also showed a significant correlation with the CDR (r = -0.641; P < .001), but we did observe variability in the severity levels; cases appeared to be less severe according to the CDR than when measured with the FTD-FRS (kappa = 0.055). This process of validating the Spanish translation of the FTD-FRS yielded satisfactory results for validity and unidimensionality (severity) in the assessment of patients with FTD. Copyright © 2016 Sociedad Española de Neurología. Published by Elsevier España, S.L.U. All rights reserved.

  7. Analytical and Experimental Performance Evaluation of BLE Neighbor Discovery Process Including Non-Idealities of Real Chipsets

    PubMed Central

    Perez-Diaz de Cerio, David; Hernández, Ángela; Valenzuela, Jose Luis; Valdovinos, Antonio

    2017-01-01

    The purpose of this paper is to evaluate from a real perspective the performance of Bluetooth Low Energy (BLE) as a technology that enables fast and reliable discovery of a large number of users/devices in a short period of time. The BLE standard specifies a wide range of configurable parameter values that determine the discovery process and need to be set according to the particular application requirements. Many previous works have investigated the discovery process through analytical and simulation models built on the ideal specification of the standard. However, measurements show that additional scanning gaps appear in the scanning process, which reduce the discovery capabilities. These gaps have been identified in all of the analyzed devices and respond to both regular patterns and variable events associated with the decoding process. We have demonstrated that these non-idealities, which are not taken into account in other studies, have a severe impact on the discovery process performance. Extensive performance evaluation for a varying number of devices and feasible parameter combinations has been done by comparing simulations and experimental measurements. This work also includes a simple mathematical model that closely matches both the standard implementation and the different chipset peculiarities for any possible parameter value specified in the standard and for any number of simultaneous advertising devices under scanner coverage. PMID:28273801

  8. Analytical and Experimental Performance Evaluation of BLE Neighbor Discovery Process Including Non-Idealities of Real Chipsets.

    PubMed

    Perez-Diaz de Cerio, David; Hernández, Ángela; Valenzuela, Jose Luis; Valdovinos, Antonio

    2017-03-03

    The purpose of this paper is to evaluate from a real perspective the performance of Bluetooth Low Energy (BLE) as a technology that enables fast and reliable discovery of a large number of users/devices in a short period of time. The BLE standard specifies a wide range of configurable parameter values that determine the discovery process and need to be set according to the particular application requirements. Many previous works have investigated the discovery process through analytical and simulation models built on the ideal specification of the standard. However, measurements show that additional scanning gaps appear in the scanning process, which reduce the discovery capabilities. These gaps have been identified in all of the analyzed devices and respond to both regular patterns and variable events associated with the decoding process. We have demonstrated that these non-idealities, which are not taken into account in other studies, have a severe impact on the discovery process performance. Extensive performance evaluation for a varying number of devices and feasible parameter combinations has been done by comparing simulations and experimental measurements. This work also includes a simple mathematical model that closely matches both the standard implementation and the different chipset peculiarities for any possible parameter value specified in the standard and for any number of simultaneous advertising devices under scanner coverage.
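
    The discovery process described in these two records can be caricatured with a toy single-channel simulation: an advertiser transmits every advInterval plus the 0-10 ms pseudo-random advDelay required by the BLE specification, and is discovered when a packet falls entirely inside the scanner's active window. Real chipsets, as the paper shows, insert additional scanning gaps that such an idealized model misses; all parameter values below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)

def discovery_latency(adv_interval=0.1, scan_interval=0.1, scan_window=0.05,
                      adv_len=0.0004, sim_time=30.0):
    """Time until an advertising packet lands fully inside a scan window.

    Idealized single-channel model (no extra chipset gaps); the scan window
    is assumed to occupy the start of each scan interval. Units: seconds.
    """
    t = rng.uniform(0, adv_interval)                # random initial phase
    while t < sim_time:
        phase = t % scan_interval
        if phase + adv_len <= scan_window:          # packet inside the window
            return t
        t += adv_interval + rng.uniform(0, 0.01)    # advDelay per the BLE spec
    return float("inf")

latencies = [discovery_latency() for _ in range(1000)]
print(f"mean discovery latency ≈ {np.mean(latencies):.3f} s")
```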

  9. Calibration and validation of wearable monitors.

    PubMed

    Bassett, David R; Rowlands, Alex; Trost, Stewart G

    2012-01-01

    Wearable monitors are increasingly being used to objectively monitor physical activity in research studies within the field of exercise science. Calibration and validation of these devices are vital to obtaining accurate data. This article is aimed primarily at the physical activity measurement specialist, although the end user who is conducting studies with these devices also may benefit from knowing about this topic. Initially, wearable physical activity monitors should undergo unit calibration to ensure interinstrument reliability. The next step is to simultaneously collect both raw signal data (e.g., acceleration) from the wearable monitors and rates of energy expenditure, so that algorithms can be developed to convert the direct signals into energy expenditure. This process should use multiple wearable monitors and a large and diverse subject group, and should include a wide range of physical activities commonly performed in daily life (from sedentary to vigorous). New methods of calibration now use "pattern recognition" approaches to train the algorithms on various activities, and they provide estimates of energy expenditure that are much better than those previously available with the single-regression approach. Once a method of predicting energy expenditure has been established, the next step is to examine its predictive accuracy by cross-validating it in other populations. In this article, we attempt to summarize the best practices for calibration and validation of wearable physical activity monitors. Finally, we conclude with some ideas for future research that will move the field of physical activity measurement forward.
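
    As a concrete illustration of the calibration-then-cross-validation workflow described above, the sketch below fits the older single-regression approach (accelerometer counts to METs) and reports k-fold cross-validated R². The data, the linear relation, and the noise level are all invented; a real calibration would measure energy expenditure alongside the raw signals, as the abstract describes:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)

# Hypothetical calibration data: per-minute accelerometer counts paired with
# measured energy expenditure (METs).
counts = rng.uniform(0, 8000, size=(500, 1))
mets = 1.0 + 0.0008 * counts.ravel() + rng.normal(0, 0.5, size=500)

# Single-regression calibration with k-fold cross-validation (the abstract's
# cross-validation step uses independent populations rather than folds).
r2 = cross_val_score(LinearRegression(), counts, mets, cv=5, scoring="r2")
print(f"cross-validated R^2 = {r2.mean():.2f}")
```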

  10. Validation of the procedures. [integrated multidisciplinary optimization of rotorcraft

    NASA Technical Reports Server (NTRS)

    Mantay, Wayne R.

    1989-01-01

    Validation strategies are described for procedures aimed at improving the rotor blade design process through a multidisciplinary optimization approach. Validation of the basic rotor environment prediction tools and the overall rotor design are discussed.

  11. Validation in the Absence of Observed Events.

    PubMed

    Lathrop, John; Ezell, Barry

    2016-04-01

    This article addresses the problem of validating models in the absence of observed events, in the area of weapons of mass destruction terrorism risk assessment. We address that problem with a broadened definition of "validation," based on stepping "up" a level to consider the reason why decisionmakers seek validation, and from that basis redefine validation as testing how well the model can advise decisionmakers in terrorism risk management decisions. We develop that into two conditions: validation must be based on cues available in the observable world, and it must focus on what can be done to affect that observable world, i.e., risk management. That leads to two foci: (1) the real-world risk-generating process, and (2) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three pitfalls in the best use of available data: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests. Does the model: (1) capture initiation? (2) capture the sequence of events by which attack scenarios unfold? (3) consider unanticipated scenarios? (4) consider alternative causal chains? Finally, we corroborate our approach against three validation tests from the DOD literature: Is the model a correct representation of the process to be simulated? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful? © 2015 Society for Risk Analysis.

  12. Validation of Case Finding Algorithms for Hepatocellular Cancer from Administrative Data and Electronic Health Records using Natural Language Processing

    PubMed Central

    Sada, Yvonne; Hou, Jason; Richardson, Peter; El-Serag, Hashem; Davila, Jessica

    2013-01-01

    Background Accurate identification of hepatocellular cancer (HCC) cases from automated data is needed for efficient and valid quality improvement initiatives and research. We validated HCC ICD-9 codes, and evaluated whether natural language processing (NLP) by the Automated Retrieval Console (ARC) for document classification improves HCC identification. Methods We identified a cohort of patients with ICD-9 codes for HCC during 2005–2010 from Veterans Affairs administrative data. Pathology and radiology reports were reviewed to confirm HCC. The positive predictive value (PPV), sensitivity, and specificity of ICD-9 codes were calculated. A split validation study of pathology and radiology reports was performed to develop and validate ARC algorithms. Reports were manually classified as diagnostic of HCC or not. ARC generated document classification algorithms using the Clinical Text Analysis and Knowledge Extraction System. ARC performance was compared to manual classification. PPV, sensitivity, and specificity of ARC were calculated. Results 1138 patients with HCC were identified by ICD-9 codes. Based on manual review, 773 had HCC. The HCC ICD-9 code algorithm had a PPV of 0.67, sensitivity of 0.95, and specificity of 0.93. For a random subset of 619 patients, we identified 471 pathology reports for 323 patients and 943 radiology reports for 557 patients. The pathology ARC algorithm had PPV of 0.96, sensitivity of 0.96, and specificity of 0.97. The radiology ARC algorithm had PPV of 0.75, sensitivity of 0.94, and specificity of 0.68. Conclusion A combined approach of ICD-9 codes and NLP of pathology and radiology reports improves HCC case identification in automated data. PMID:23929403
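
    The headline metrics here are simple functions of confusion-matrix counts. In the sketch below, TP and FP follow from the abstract (773 confirmed of 1138 flagged, so PPV ≈ 0.68, close to the reported 0.67); the FN and TN counts are illustrative placeholders chosen to roughly reproduce the reported sensitivity and specificity, since the abstract does not report them directly:

```python
def ppv_sens_spec(tp, fp, fn, tn):
    """Standard case-finding metrics from confusion-matrix counts."""
    ppv = tp / (tp + fp)           # positive predictive value
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return ppv, sensitivity, specificity

# TP/FP from the abstract; FN/TN are hypothetical placeholders.
ppv, sens, spec = ppv_sens_spec(tp=773, fp=365, fn=40, tn=4800)
print(f"PPV={ppv:.2f} sensitivity={sens:.2f} specificity={spec:.2f}")
```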

  13. Validation of biomarkers to predict response to immunotherapy in cancer: Volume II - clinical validation and regulatory considerations.

    PubMed

    Dobbin, Kevin K; Cesano, Alessandra; Alvarez, John; Hawtin, Rachael; Janetzki, Sylvia; Kirsch, Ilan; Masucci, Giuseppe V; Robbins, Paul B; Selvan, Senthamil R; Streicher, Howard Z; Zhang, Jenny; Butterfield, Lisa H; Thurin, Magdalena

    2016-01-01

    There is growing recognition that immunotherapy is likely to significantly improve health outcomes for cancer patients in the coming years. Currently, while a subset of patients experience substantial clinical benefit in response to different immunotherapeutic approaches, the majority of patients do not but are still exposed to the significant drug toxicities. Therefore, a growing need for the development and clinical use of predictive biomarkers exists in the field of cancer immunotherapy. Predictive cancer biomarkers can be used to identify the patients who are or who are not likely to derive benefit from specific therapeutic approaches. In order to be applicable in a clinical setting, predictive biomarkers must be carefully shepherded through a step-wise, highly regulated developmental process. Volume I of this two-volume document focused on the pre-analytical and analytical phases of the biomarker development process, by providing background, examples and "good practice" recommendations. In the current Volume II, the focus is on the clinical validation, validation of clinical utility and regulatory considerations for biomarker development. Together, this two volume series is meant to provide guidance on the entire biomarker development process, with a particular focus on the unique aspects of developing immune-based biomarkers. Specifically, knowledge about the challenges to clinical validation of predictive biomarkers, which has been gained from numerous successes and failures in other contexts, will be reviewed together with statistical methodological issues related to bias and overfitting. The different trial designs used for the clinical validation of biomarkers will also be discussed, as the selection of clinical metrics and endpoints becomes critical to establish the clinical utility of the biomarker during the clinical validation phase of the biomarker development. Finally, the regulatory aspects of submission of biomarker assays to the U.S. Food and

  14. Volpe Aircraft Noise Certification DGPS Validation/Audit General Information, Data Submittal Guidelines, and Process Details; Letter Report V324-FB48B3-LR5

    DOT National Transportation Integrated Search

    2018-01-09

    As required by Federal Aviation Administration Order 8110.4C, Type Certification Process, the Volpe Center Acoustics Facility (Volpe), in support of the Federal Aviation Administration Office of Environment and Energy (AEE), has completed valid...

  15. Level 2 processing for the imaging Fourier transform spectrometer GLORIA: derivation and validation of temperature and trace gas volume mixing ratios from calibrated dynamics mode spectra

    NASA Astrophysics Data System (ADS)

    Ungermann, J.; Blank, J.; Dick, M.; Ebersoldt, A.; Friedl-Vallon, F.; Giez, A.; Guggenmoser, T.; Höpfner, M.; Jurkat, T.; Kaufmann, M.; Kaufmann, S.; Kleinert, A.; Krämer, M.; Latzko, T.; Oelhaf, H.; Olchewski, F.; Preusse, P.; Rolf, C.; Schillings, J.; Suminska-Ebersoldt, O.; Tan, V.; Thomas, N.; Voigt, C.; Zahn, A.; Zöger, M.; Riese, M.

    2014-12-01

    The Gimballed Limb Observer for Radiance Imaging of the Atmosphere (GLORIA) is an airborne infrared limb imager combining a two-dimensional infrared detector with a Fourier transform spectrometer. It was operated aboard the new German Gulfstream G550 research aircraft HALO during the Transport And Composition in the upper Troposphere/lowermost Stratosphere (TACTS) and Earth System Model Validation (ESMVAL) campaigns in summer 2012. This paper describes the retrieval of temperature and trace gas (H2O, O3, HNO3) volume mixing ratios from GLORIA dynamics mode spectra. Twenty-six integrated spectral windows are employed in a joint fit to retrieve seven targets, using consecutively a fast and an accurate tabulated radiative transfer model. Typical diagnostic quantities are provided, including the effects of uncertainties in the calibration and horizontal resolution along the line of sight. Simultaneous in situ observations by the BAsic HALO Measurement And Sensor System (BAHAMAS), the Fast In-Situ Stratospheric Hygrometer (FISH), FAIRO, and the Atmospheric chemical Ionization Mass Spectrometer (AIMS) allow a validation of the retrieved values for three flights in the upper troposphere/lowermost stratosphere region spanning polar and sub-tropical latitudes. A high correlation is achieved between the remote sensing and the in situ trace gas data, and discrepancies can largely be attributed to differences in the probed air masses caused by the different sampling characteristics of the instruments. This 1-D processing of GLORIA dynamics mode spectra provides the basis for future tomographic inversions from circular and linear flight paths to better understand selected dynamical processes of the upper troposphere and lowermost stratosphere.
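
    The joint fit described above is, at its core, a least-squares inversion of a forward model that maps a handful of target quantities to many spectral-window radiances. The sketch below is a deliberately minimal linearised illustration with an invented Jacobian and noise level; GLORIA's actual processing uses a tabulated radiative transfer model and regularisation.

      import numpy as np

      rng = np.random.default_rng(3)
      K = rng.normal(size=(26, 7))     # linearised forward model: 26 windows x 7 targets
      x_true = rng.normal(size=7)      # e.g. temperature plus trace gas amounts
      y = K @ x_true + 0.05 * rng.normal(size=26)   # noisy measurements

      # Joint least-squares retrieval of all seven targets at once
      x_hat, *_ = np.linalg.lstsq(K, y, rcond=None)
      print(np.round(x_hat - x_true, 3))            # retrieval error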

  16. Validation of design procedure and performance modeling of a heat and fluid transport field experiment in the unsaturated zone

    NASA Astrophysics Data System (ADS)

    Nir, A.; Doughty, C.; Tsang, C. F.

    Validation methods that were developed in the context of the deterministic concepts of past generations often cannot be applied directly to environmental problems, which may be characterized by limited reproducibility of results and highly complex models. Instead, validation is interpreted here as a series of activities, including both theoretical and experimental tests, designed to enhance our confidence in the capability of a proposed model to describe some aspect of reality. We examine the validation process applied to a project concerned with heat and fluid transport in porous media, in which mathematical modeling, simulation, and results of field experiments are evaluated in order to determine the feasibility of a system for seasonal thermal energy storage in shallow unsaturated soils. Technical details of the field experiments are not included, but appear in previous publications. Validation activities are divided into three stages. The first stage, carried out prior to the field experiments, is concerned with modeling the relevant physical processes, optimization of the heat-exchanger configuration and the shape of the storage volume, and multi-year simulation. Subjects requiring further theoretical and experimental study are identified at this stage. The second stage encompasses the planning and evaluation of the initial field experiment. Simulations are made to determine the experimental time scale and optimal sensor locations. Soil thermal parameters and temperature boundary conditions are estimated using an inverse method. Then results of the experiment are compared with model predictions using different parameter values and modeling approximations. In the third stage, results of an experiment performed under different boundary conditions are compared to predictions made by the models developed in the second stage. Various aspects of this theoretical and experimental field study are described as examples of the verification and validation procedure. There is no…

  17. Commercial Disinfectants During Disinfection Process Validation: More Failures than Success.

    PubMed

    Chatterjee, Shiv Sekhar; Chumber, Sushil Kumar; Khanduri, Uma

    2016-08-01

    Disinfection process validation is mandatory before introduction of a new disinfectant into hospital services. Commercial disinfectant brands often question existing hospital policy, claiming greater efficacy and lack of toxicity of their products. Inadvertent inadequate disinfection leads to morbidity, economic burden on the patient, and the risk of mortality. To evaluate commercial disinfectants for high-, intermediate- and low-level disinfection so as to identify utility for our routine situations. This laboratory-based experiment was conducted at St. Stephen's Hospital, Delhi during July-September 2013. Twelve commercial disinfectants: Sanidex®, Sanocid®, Cidex®, SekuSept Aktiv®, BIB Forte®, Alprojet W®, Desnet®, Sanihygiene®, Incidin®, D125®, Lonzagard®, and Glutishield® were tested. A time-kill assay (suspension test) was performed against six indicator bacteria (Escherichia coli, Staphylococcus aureus, Pseudomonas aeruginosa, Salmonella Typhi, Bacillus cereus, and Mycobacterium fortuitum). Low and high inocula (final concentrations 1.5×10^6 and 9×10^6 cfu/ml) of the first five bacteria were tested, while only the low inoculum of M. fortuitum was tested. Cidex® (2.4% glutaraldehyde) performed best as a high-level disinfectant, while newer quaternary ammonium compounds (QACs) (Incidin®, D125®, and Lonzagard®) were good at low-level disinfection. Sanidex® (0.55% ortho-phthalaldehyde), though mycobactericidal, took 10 minutes to achieve sporicidal activity. The older QAC-containing BIB Forte® and Desnet® took 20 minutes to fully inhibit P. aeruginosa. All disinfectants effectively reduced S. Typhi to zero counts within 5 minutes. Cidex® is a good high-level disinfectant, while the newer QACs (Incidin®, D125®, and Lonzagard®) were capable low-level disinfectants.
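
    Suspension-test results of this kind are conventionally summarised as log10 reductions from the initial inoculum; the minimal sketch below shows the arithmetic. The surviving count is hypothetical, and in practice counts of zero are handled via the assay's detection limit.

      import math

      def log10_reduction(initial_cfu_per_ml, surviving_cfu_per_ml):
          """Log10 kill achieved at a given contact time."""
          return math.log10(initial_cfu_per_ml / surviving_cfu_per_ml)

      # Hypothetical example at the study's low inoculum (1.5 x 10^6 cfu/ml):
      print(f"{log10_reduction(1.5e6, 10):.1f} log10 reduction")   # ~5.2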

  18. KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis approaches the Vehicle Assembly Building (VAB). It is being towed from the Orbiter Processing Facility (OPF) to allow work to be performed in the bay that can only be accomplished while it is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.

    NASA Image and Video Library

    2003-12-05

  19. KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis is towed from the Orbiter Processing Facility (OPF) to the Vehicle Assembly Building (VAB). The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.

    NASA Image and Video Library

    2003-12-05

  20. KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis nears the Vehicle Assembly Building (VAB). It is being towed from the Orbiter Processing Facility (OPF) to allow work to be performed in the bay that can only be accomplished while it is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.

    NASA Image and Video Library

    2003-12-05

  1. KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis moves into high bay 4 of the Vehicle Assembly Building (VAB). It was towed from the Orbiter Processing Facility (OPF) to allow work to be performed in the bay that can only be accomplished while it is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.

    NASA Image and Video Library

    2003-12-05

  2. KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis awaits a tow from the Orbiter Processing Facility (OPF) to the Vehicle Assembly Building (VAB). The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.

    NASA Image and Video Library

    2003-12-05

  3. KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis awaits transport from the Orbiter Processing Facility (OPF) to the Vehicle Assembly Building (VAB). The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.

    NASA Image and Video Library

    2003-12-05

  4. KENNEDY SPACE CENTER, FLA. - Workers back the Space Shuttle orbiter Atlantis out of the Orbiter Processing Facility (OPF) for its move to the Vehicle Assembly Building (VAB). The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.

    NASA Image and Video Library

    2003-12-05

  5. KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis is moved into high bay 4 of the Vehicle Assembly Building (VAB). It was towed from the Orbiter Processing Facility (OPF) to allow work to be performed in the bay that can only be accomplished while it is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.

    NASA Image and Video Library

    2003-12-05

  6. KENNEDY SPACE CENTER, FLA. - Workers prepare to tow the Space Shuttle orbiter Atlantis from the Orbiter Processing Facility (OPF) to the Vehicle Assembly Building (VAB). The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.

    NASA Image and Video Library

    2003-12-05

  7. KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis is moments away from a tow from the Orbiter Processing Facility (OPF) to the Vehicle Assembly Building (VAB). The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.

    NASA Image and Video Library

    2003-12-05

  8. KENNEDY SPACE CENTER, FLA. - Workers monitor the Space Shuttle orbiter Atlantis as it is towed from the Orbiter Processing Facility (OPF) to the Vehicle Assembly Building (VAB). The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.

    NASA Image and Video Library

    2003-12-05

  9. KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis approaches the Vehicle Assembly Building (VAB) high bay 4. It is being towed from the Orbiter Processing Facility (OPF) to allow work to be performed in the bay that can only be accomplished while it is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.

    NASA Image and Video Library

    2003-12-05

  10. KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis approaches high bay 4 of the Vehicle Assembly Building (VAB). It was towed from the Orbiter Processing Facility (OPF) to allow work to be performed in the bay that can only be accomplished while it is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.

    NASA Image and Video Library

    2003-12-05

  11. KENNEDY SPACE CENTER, FLA. - Workers walk with Space Shuttle orbiter Atlantis from the Orbiter Processing Facility (OPF) to the Vehicle Assembly Building (VAB) high bay 4. The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.

    NASA Image and Video Library

    2003-12-05

  12. KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis backs out of the Orbiter Processing Facility (OPF) for its move to the Vehicle Assembly Building (VAB). The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.

    NASA Image and Video Library

    2003-12-05

  13. KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis arrives in high bay 4 of the Vehicle Assembly Building (VAB). It was towed from the Orbiter Processing Facility (OPF) to allow work to be performed in the bay that can only be accomplished while it is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.

    NASA Image and Video Library

    2003-12-05

  14. Interaction of preservation methods and radiation sterilization in human skin processing, with particular insight on the impact of the final water content and collagen disruption. Part I: process validation, water activity and collagen changes in tissues cryopreserved or processed using 50, 85 or 98% glycerol solutions.

    PubMed

    Herson, M R; Hamilton, K; White, J; Alexander, D; Poniatowski, S; O'Connor, A J; Werkmeister, J A

    2018-04-25

    Current regulatory requirements demand an in-depth understanding and validation of protocols used in tissue banking. The aim of this work was to characterize the quality of split thickness skin allografts cryopreserved or manufactured using highly concentrated solutions of glycerol (50, 85 or 98%), where tissue water activity (a_w), histology and birefringence changes were chosen as parameters. Consistent a_w outcomes validated the proposed processing protocols. While no significant changes in tissue quality were observed under bright-field microscopy or in collagen birefringence, in-process findings can be harnessed to fine-tune and optimize manufacturing outcomes, in particular when further radiation sterilization is considered. Furthermore, exposing the tissues to 85% glycerol seems to deliver the most efficient outcomes as far as a_w and control of microbiological growth are concerned.

  15. Validity as Process: A Construct Driven Measure of Fidelity of Implementation

    ERIC Educational Resources Information Center

    Jones, Ryan Seth

    2013-01-01

    Estimates of fidelity of implementation are essential to interpret the effects of educational interventions in randomized controlled trials (RCTs). While random assignment protects against many threats to validity, and therefore provides the best approximation to a true counterfactual condition, it does not ensure that the treatment condition…

  16. The validation by measurement theory of proposed object-oriented software metrics

    NASA Technical Reports Server (NTRS)

    Neal, Ralph D.

    1994-01-01

    Moving software development into the engineering arena requires controllability, and to control a process, it must be measurable. Measuring the process does no good if the product is not also measured, i.e., being the best at producing an inferior product does not define a quality process. Also, not every number extracted from software development is a valid measurement. A valid measurement only results when we are able to verify that the number is representative of the attribute that we wish to measure. Many proposed software metrics are used by practitioners without these metrics ever having been validated, leading to costly but often useless calculations. Several researchers have bemoaned the lack of scientific precision in much of the published software measurement work and have called for validation of software metrics by measurement theory. This dissertation applies measurement theory to validate fifty proposed object-oriented software metrics (Li and Henry, 1993; Chidamber and Kemerer, 1994; Lorenz and Kidd, 1994).

  17. Assessment of validity with polytrauma Veteran populations.

    PubMed

    Bush, Shane S; Bass, Carmela

    2015-01-01

    Veterans with polytrauma have suffered injuries to multiple body parts and organ systems, including the brain. The injuries can generate a triad of physical, neurologic/cognitive, and emotional symptoms. Accurate diagnosis is essential for the treatment of these conditions and for fair allocation of benefits. To accurately diagnose polytrauma disorders and their related problems, clinicians take into account the validity of reported history and symptoms, as well as clinical presentations. The purpose of this article is to describe the assessment of validity with polytrauma Veteran populations. Review of scholarly and other relevant literature and clinical experience are utilized. A multimethod approach to validity assessment that includes objective, standardized measures increases the confidence that can be placed in the accuracy of self-reported symptoms and physical, cognitive, and emotional test results. Due to the multivariate nature of polytrauma and the multiple disciplines that play a role in diagnosis and treatment, an ideal model of validity assessment with polytrauma Veteran populations utilizes neurocognitive, neurological, neuropsychiatric, and behavioral measures of validity. An overview of these validity assessment approaches as applied to polytrauma Veteran populations is presented. Veterans, the VA, and society are best served when accurate diagnoses are made.

  18. Innovative techniques for the production of energetic radicals for lunar processing including cold plasma processing of local planetary ores

    NASA Technical Reports Server (NTRS)

    Bullard, D.; Lynch, D. C.

    1992-01-01

    Hydrogen reduction of ilmenite has been studied by a number of investigators as a potential means for recovery of oxygen from lunar soil. Interest in this process has always rested with the simplicity of the flow diagram and the utilization of established technology. Effective utilization of hydrogen in the reduction process at temperatures of 1200 C and below has always been disappointing and, as such, has led other investigators to focus attention on other systems. Effective utilization of hydrogen in the reduction of ilmenite can be significantly enhanced in the presence of a non-equilibrium hydrogen plasma. Ilmenite specimens at solid temperatures of 600 C to 970 C were reacted in a hydrogen plasma. Those experiments revealed that hydrogen utilization can be significantly enhanced. At a specimen temperature of 850 C the fraction of H2 reacted was 24 percent, compared to the 7 percent theoretical limit calculated with thermodynamic theory for the same temperature. An added advantage of a hydrogen plasma is the further reduction of TiO2. Reduction of the iron oxide in ilmenite yields TiO2 and metallic iron as by-products. Titanium forms a number of oxides, including TiO, Ti2O3, Ti3O5 and the Magneli oxides (Ti4O7 to Ti50O99). In conventional processing of ilmenite with hydrogen it is possible to reduce TiO2 to Ti7O13 within approximately an hour, but with poor utilization of hydrogen, on the order of one mole of H2 reacted per thousand. In the cold or non-equilibrium plasma, TiO2 can be rapidly reduced to Ti2O3 with hydrogen utilization exceeding 10 percent. Based on design considerations of the plasma reactor, greater utilization of hydrogen in the reduction of TiO2 is possible.

  19. Cross-validated detection of crack initiation in aerospace materials

    NASA Astrophysics Data System (ADS)

    Vanniamparambil, Prashanth A.; Cuadra, Jefferson; Guclu, Utku; Bartoli, Ivan; Kontsos, Antonios

    2014-03-01

    A cross-validated nondestructive evaluation approach was employed to detect in situ the onset of damage in an aluminum alloy compact tension specimen. The approach consisted of the coordinated use of primarily acoustic emission, combined with the infrared thermography and digital image correlation methods. Tensile loads were applied and the specimen was continuously monitored using the nondestructive approach. Crack initiation was witnessed visually and was confirmed by the characteristic load drop accompanying the ductile fracture process. The full-field deformation map provided by the nondestructive approach validated the formation of a pronounced plasticity zone near the crack tip. At the time of crack initiation, a burst in the temperature field ahead of the crack tip as well as a sudden increase in the acoustic recordings were observed. Although such experiments have been attempted and reported before in the literature, the presented approach provides for the first time a cross-validated nondestructive dataset that can be used for quantitative analyses of the crack initiation information content. It further allows future development of automated procedures for real-time identification of damage precursors, including the rarely explored crack incubation stage in fatigue conditions.

  20. Measuring Children's Environmental Attitudes and Values in Northwest Mexico: Validating a Modified Version of Measures to Test the Model of Ecological Values (2-MEV)

    ERIC Educational Resources Information Center

    Schneller, A. J.; Johnson, B.; Bogner, F. X.

    2015-01-01

    This paper describes the validation process of measuring children's attitudes and values toward the environment within a Mexican sample. We applied the Model of Ecological Values (2-MEV), which has been shown to be valid and reliable in 20 countries, including one Spanish-speaking culture. Items were initially modified to fit the regional dialect,…

  1. Intended and Unintended Meanings of Validity: Some Clarifying Comments

    ERIC Educational Resources Information Center

    Geisinger, Kurt F.

    2016-01-01

    The six primary papers in this issue of "Assessment in Education" emphasise a single primary point: the concept of validity is a complex one. Essentially, validity is a collective noun. That is, just as a group of players may be called a team and a group of geese a flock, so too does validity represent a variety of processes and…

  2. Results from SMAP Validation Experiments 2015 and 2016

    NASA Astrophysics Data System (ADS)

    Colliander, A.; Jackson, T. J.; Cosh, M. H.; Misra, S.; Crow, W.; Powers, J.; Wood, E. F.; Mohanty, B.; Judge, J.; Drewry, D.; McNairn, H.; Bullock, P.; Berg, A. A.; Magagi, R.; O'Neill, P. E.; Yueh, S. H.

    2017-12-01

    NASA's Soil Moisture Active Passive (SMAP) mission was launched in January 2015. The objective of the mission is global mapping of soil moisture and freeze/thaw state. Well-characterized sites with calibrated in situ soil moisture measurements are used to determine the quality of the soil moisture data products; these sites are designated as core validation sites (CVS). To support the CVS-based validation, airborne field experiments are used to provide high-fidelity validation data and to improve the SMAP retrieval algorithms. The SMAP project and NASA coordinated airborne field experiments at three CVS locations in 2015 and 2016. SMAP Validation Experiment 2015 (SMAPVEX15) was conducted around the Walnut Gulch CVS in Arizona in August 2015. SMAPVEX16 was conducted at the South Fork CVS in Iowa and the Carman CVS in Manitoba, Canada, from May to August 2016. The airborne PALS (Passive Active L-band Sensor) instrument mapped all experiment areas several times, resulting in 30 measurements coincident with SMAP. The experiments included an intensive ground sampling regime consisting of manual sampling and augmentation of the CVS soil moisture measurements with temporary networks of soil moisture sensors. Analyses using the data from these experiments have produced various results regarding SMAP validation and related science questions. The SMAPVEX15 data set has been used for calibration of a hyper-resolution model for soil moisture product validation, development of a multi-scale parameterization approach for surface roughness, and validation of disaggregation of SMAP soil moisture with the optical thermal signal. The SMAPVEX16 data set has already been used for studying spatial upscaling within a pixel with a highly heterogeneous soil texture distribution; for understanding the process of radiative transfer at plot scale in relation to field scale and SMAP footprint scale over a highly heterogeneous vegetation distribution; and for testing a data fusion-based soil moisture…

  3. Causation and Validation of Nursing Diagnoses: A Middle Range Theory.

    PubMed

    de Oliveira Lopes, Marcos Venícios; da Silva, Viviane Martins; Herdman, T Heather

    2017-01-01

    To describe a predictive middle range theory (MRT) that provides a process for validation and incorporation of nursing diagnoses in clinical practice. Literature review. The MRT includes definitions, a pictorial scheme, propositions, causal relationships, and translation to nursing practice. The MRT can be a useful alternative for education, research, and translation of this knowledge into practice. This MRT can assist clinicians in understanding clinical reasoning, based on temporal logic and spectral interaction among elements of nursing classifications. In turn, this understanding will improve the use and accuracy of nursing diagnosis, which is a critical component of the nursing process that forms a basis for nursing practice standards worldwide. © 2015 NANDA International, Inc.

  4. Copenhagen Psychosocial Questionnaire - A validation study using the Job Demand-Resources model.

    PubMed

    Berthelsen, Hanne; Hakanen, Jari J; Westerlund, Hugo

    2018-01-01

    This study aims at investigating the nomological validity of the Copenhagen Psychosocial Questionnaire (COPSOQ II) by using an extension of the Job Demands-Resources (JD-R) model with aspects of work ability as the outcome. The study design is cross-sectional. All staff working at public dental organizations in four regions of Sweden were invited to complete an electronic questionnaire (75% response rate, n = 1345). The questionnaire was based on COPSOQ II scales, the Utrecht Work Engagement Scale, and the one-item Work Ability Score in combination with a proprietary item. The data was analysed by Structural Equation Modelling. This study contributed to the literature by showing that: A) the scale characteristics were satisfactory and the construct validity of the COPSOQ instrument could be integrated into the JD-R framework; B) job resources arising from leadership may be a driver of the two processes included in the JD-R model; and C) both the health impairment and motivational processes were associated with work ability (WA), and the results suggested that leadership may impact WA, in particular by securing task resources. In conclusion, the nomological validity of COPSOQ was supported, as the JD-R model can be operationalized by the instrument. This may be helpful for transferral of complex survey results and work-life theories to practitioners in the field.

  5. Improving Flight Software Module Validation Efforts : a Modular, Extendable Testbed Software Framework

    NASA Technical Reports Server (NTRS)

    Lange, R. Connor

    2012-01-01

    Ever since Explorer-1, the United States' first Earth satellite, was developed and launched in 1958, JPL has developed many more spacecraft, including landers and orbiters. While these spacecraft vary greatly in their missions, capabilities, and destinations, they all have something in common: all of their components had to be comprehensively tested. While thorough testing is important to mitigate risk, it is also a very expensive and time-consuming process. Thankfully, since virtually all of the software testing procedures for SMAP are computer controlled, these procedures can be automated. Most people testing SMAP flight software (FSW) would only need to write tests that exercise specific requirements and then check the filtered results to verify everything occurred as planned. This gives developers the ability to automatically launch tests on the testbed, distill the resulting logs into only the important information, generate validation documentation, and then deliver the documentation to management. With many of the steps in FSW testing automated, developers can use their limited time more effectively, validate SMAP FSW modules more quickly, and test them more rigorously. As a result of the various benefits of automating much of the testing process, management is considering the use of these automated tools in future FSW validation efforts.
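
    The distillation step described above reduces to a small amount of glue code: launch a testbed procedure, keep only the status lines a reviewer needs, and summarise failures. The sketch below is a minimal illustration, not JPL's tooling; the script name, flag, and PASS/FAIL log format are assumptions.

      import re
      import subprocess

      def run_and_distill(cmd, keep=re.compile(r"\b(PASS|FAIL|ERROR)\b")):
          """Run a testbed procedure and keep only status lines from its log."""
          result = subprocess.run(cmd, capture_output=True, text=True)
          hits = [ln for ln in result.stdout.splitlines() if keep.search(ln)]
          failures = [ln for ln in hits if "PASS" not in ln]
          return hits, failures

      # Hypothetical invocation; a report generator could turn `hits` into
      # validation documentation for delivery to management.
      hits, failures = run_and_distill(["./run_testbed.sh", "--suite", "fsw"])
      print(f"{len(hits)} checks, {len(failures)} problems")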

  6. Validation of a questionnaire to measure sexual health knowledge and understanding (Sexual Health Questionnaire) in Nepalese secondary school: A psychometric process

    PubMed Central

    Acharya, Dev Raj; Thomas, Malcolm; Cann, Rosemary

    2016-01-01

    Background: School-based sex education has the potential to prevent unwanted pregnancy and to promote positive sexual health at the individual, family and community level. Objectives: To develop and validate a sexual health questionnaire to measure young people's sexual health knowledge and understanding (SHQ) in Nepalese secondary schools. Materials and Methods: Secondary school students (n = 259, male = 43.63%, female = 56.37%) and local experts (n = 9, male = 90%, female = 10%) participated in this study. Evaluation processes were: content validity (>0.89), plausibility check (>95), item-total correlation (>0.3), factor loading (>0.4), principal component analysis (4 factors, Kaiser's criterion), Cronbach's alpha (>0.65), face validity and internal consistency using test-retest reliability (P > 0.05). Results: The principal component analysis revealed four factors to be extracted: sexual health norms and beliefs, source of sexual health information, sexual health knowledge and understanding, and level of sexual awareness. The Kaiser-Meyer-Olkin (KMO) measure of sampling adequacy demonstrated that the patterns of correlations are relatively compact (>0.80). Cronbach's alpha for each factor was above the cut-off point (0.65). Face validity indicated that the questions were clear to the majority of the respondents. Moreover, there were no significant differences (P > 0.05) in the responses to the items at two time points seven weeks apart. Conclusions: The finding suggests that the SHQ is a valid and reliable instrument to be used in schools to measure sexual health knowledge and understanding. Further analysis such as structural equation modelling (SEM) and confirmatory factor analysis could make the questionnaire more robust and applicable to the wider school population. PMID:27500171
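
    Two of the thresholds listed (corrected item-total correlation > 0.3 and Cronbach's alpha > 0.65) are straightforward to compute. The sketch below shows both on invented Likert-style responses; because the data are random, alpha will land near zero rather than above the cut-off.

      import numpy as np

      def cronbach_alpha(items):           # items: (respondents, items) matrix
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()
          total_var = items.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var / total_var)

      def corrected_item_total(items):
          # Correlation of each item with the rest-of-scale total.
          items = np.asarray(items, dtype=float)
          total = items.sum(axis=1)
          return np.array([np.corrcoef(items[:, j], total - items[:, j])[0, 1]
                           for j in range(items.shape[1])])

      rng = np.random.default_rng(1)
      data = rng.integers(1, 6, size=(259, 8))   # hypothetical 5-point items
      print(cronbach_alpha(data), corrected_item_total(data).round(2))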

  7. Evaluation and implementation of chemotherapy regimen validation in an electronic health record.

    PubMed

    Diaz, Amber H; Bubalo, Joseph S

    2014-12-01

    Computerized provider order entry of chemotherapy regimens is quickly becoming the standard for prescribing chemotherapy in both inpatient and ambulatory settings. One of the difficulties with implementation of chemotherapy regimen computerized provider order entry lies in verifying the accuracy and completeness of all regimens built in the system library. Our goal was to develop, implement, and evaluate a process for validating chemotherapy regimens in an electronic health record. We describe our experience developing and implementing a process for validating chemotherapy regimens in the setting of a standard, commercially available computerized provider order entry system. The pilot project focused on validating chemotherapy regimens in the adult inpatient oncology setting and adult ambulatory hematologic malignancy setting. A chemotherapy regimen validation process was defined as a result of the pilot project. Over a 27-week pilot period, 32 chemotherapy regimens were validated using the process we developed. Results of the study suggest that by validating chemotherapy regimens, the amount of time spent by pharmacists in daily chemotherapy review was decreased. In addition, the number of pharmacist modifications required to make regimens complete and accurate was decreased. Both physician and pharmacy disciplines showed improved satisfaction and confidence levels with chemotherapy regimens after implementation of the validation system. © The Author(s) 2014.

  8. Reliability and Construct Validity of the Psychopathic Personality Inventory-Revised in a Swedish Non-Criminal Sample – A Multimethod Approach including Psychophysiological Correlates of Empathy for Pain

    PubMed Central

    Sörman, Karolina; Nilsonne, Gustav; Howner, Katarina; Tamm, Sandra; Caman, Shilan; Wang, Hui-Xin; Ingvar, Martin; Edens, John F.; Gustavsson, Petter; Lilienfeld, Scott O; Petrovic, Predrag; Fischer, Håkan; Kristiansson, Marianne

    2016-01-01

    Cross-cultural investigation of psychopathy measures is important for clarifying the nomological network surrounding the psychopathy construct. The Psychopathic Personality Inventory-Revised (PPI-R) is one of the most extensively researched self-report measures of psychopathic traits in adults. To date however, it has been examined primarily in North American criminal or student samples. To address this gap in the literature, we examined PPI-R’s reliability, construct validity and factor structure in non-criminal individuals (N = 227) in Sweden, using a multimethod approach including psychophysiological correlates of empathy for pain. PPI-R construct validity was investigated in subgroups of participants by exploring its degree of overlap with (i) the Psychopathy Checklist: Screening Version (PCL:SV), (ii) self-rated empathy and behavioral and physiological responses in an experiment on empathy for pain, and (iii) additional self-report measures of alexithymia and trait anxiety. The PPI-R total score was significantly associated with PCL:SV total and factor scores. The PPI-R Coldheartedness scale demonstrated significant negative associations with all empathy subscales and with rated unpleasantness and skin conductance responses in the empathy experiment. The PPI-R higher order Self-Centered Impulsivity and Fearless Dominance dimensions were associated with trait anxiety in opposite directions (positively and negatively, respectively). Overall, the results demonstrated solid reliability (test-retest and internal consistency) and promising but somewhat mixed construct validity for the Swedish translation of the PPI-R. PMID:27300292

  9. Sterilization validation for medical compresses at IRASM multipurpose irradiation facility

    NASA Astrophysics Data System (ADS)

    Alexandru, Mioara; Ene, Mihaela

    2007-08-01

    In Romania, the IRASM Radiation Processing Center is the sole supplier of radiation sterilization services on an industrial scale (ISO 9001:2000 and ISO 13485:2003 certified). Its Laboratory of Microbiological Testing is the sole third-party competent laboratory (Good Laboratory Practice licence, ISO 17025 certification in progress) for pharmaceutics and medical devices as well. We refer here to medical compresses as a distinct category of sterile products, made from different kinds of hydrophilic materials (cotton, non-woven, polyurethane foam) with or without an impregnated ointment base (paraffin, plant extracts). These products are included in the class of medical devices, but for sterilization validation, from a microbiological point of view, there are important differences in the testing method compared to common medical devices (syringes, catheters, etc.). In this paper, we present some results and practical solutions chosen to perform a sterilization validation compliant with ISO 11137:2006.

  10. The Validation of a Case-Based, Cumulative Assessment and Progressions Examination

    PubMed Central

    Coker, Adeola O.; Copeland, Jeffrey T.; Gottlieb, Helmut B.; Horlen, Cheryl; Smith, Helen E.; Urteaga, Elizabeth M.; Ramsinghani, Sushma; Zertuche, Alejandra; Maize, David

    2016-01-01

    Objective. To assess the content and criterion validity, as well as the reliability, of an internally developed, case-based, cumulative, high-stakes third-year Annual Student Assessment and Progression Examination (P3 ASAP Exam). Methods. Content validity was assessed through the writing-reviewing process. Criterion validity was assessed by comparing student scores on the P3 ASAP Exam with those on the nationally validated Pharmacy Curriculum Outcomes Assessment (PCOA). Reliability was assessed with psychometric analysis comparing student performance over four years. Results. The P3 ASAP Exam showed content validity through representation of didactic courses and professional outcomes. Similar scores on the P3 ASAP Exam and the PCOA, assessed with the Pearson correlation coefficient, established criterion validity. Consistent student performance since 2012, measured with the Kuder-Richardson coefficient (KR-20), reflected the reliability of the examination. Conclusion. Pharmacy schools can implement internally developed, high-stakes, cumulative progression examinations that are valid and reliable using a robust writing-reviewing process and psychometric analyses. PMID:26941435
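
    The Kuder-Richardson coefficient (KR-20) used for reliability here is Cronbach's alpha specialised to dichotomous (right/wrong) items. A minimal sketch on invented response data follows; note that conventions differ on whether the total-score variance uses n or n - 1 in the denominator.

      import numpy as np

      def kr20(scores):                    # scores: (examinees, items), 0/1
          scores = np.asarray(scores, dtype=float)
          k = scores.shape[1]
          p = scores.mean(axis=0)          # proportion correct per item
          total_var = scores.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - (p * (1 - p)).sum() / total_var)

      rng = np.random.default_rng(2)
      answers = (rng.random((120, 50)) < 0.7).astype(int)   # hypothetical exam
      print(round(kr20(answers), 3))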

  11. Validating emotional attention regulation as a component of emotional intelligence: A Stroop approach to individual differences in tuning in to and out of nonverbal cues.

    PubMed

    Elfenbein, Hillary Anger; Jang, Daisung; Sharma, Sudeep; Sanchez-Burks, Jeffrey

    2017-03-01

    Emotional intelligence (EI) has captivated researchers and the public alike, but it has been challenging to establish its components as objective abilities. Self-report scales lack divergent validity from personality traits, and few ability tests have objectively correct answers. We adapt the Stroop task to introduce a new facet of EI called emotional attention regulation (EAR), which involves focusing emotion-related attention for the sake of information processing rather than for the sake of regulating one's own internal state. EAR includes two distinct components. First, tuning in to nonverbal cues involves identifying nonverbal cues while ignoring alternate content, that is, emotion recognition under conditions of distraction by competing stimuli. Second, tuning out of nonverbal cues involves ignoring nonverbal cues while identifying alternate content, that is, the ability to interrupt emotion recognition when needed to focus attention elsewhere. An auditory test of valence included positive and negative words spoken in positive and negative vocal tones. A visual test of approach-avoidance included green- and red-colored facial expressions depicting happiness and anger. The error rates for incongruent trials met the key criteria for establishing the validity of an EI test, in that the measure demonstrated test-retest reliability, convergent validity with other EI measures, divergent validity from factors such as general processing speed and, for the most part, personality, and predictive validity, in this case for well-being. By demonstrating that facets of EI can be validly theorized and empirically assessed, the results also speak to the validity of EI more generally. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  12. The analytical validation of the Oncotype DX Recurrence Score assay

    PubMed Central

    Baehner, Frederick L

    2016-01-01

    In vitro diagnostic multivariate index assays are highly complex molecular assays that can provide clinically actionable information regarding the underlying tumour biology and facilitate personalised treatment. These assays are only useful in clinical practice if all of the following are established: analytical validation (i.e., how accurately/reliably the assay measures the molecular characteristics), clinical validation (i.e., how consistently/accurately the test detects/predicts the outcomes of interest), and clinical utility (i.e., how likely the test is to significantly improve patient outcomes). In considering the use of these assays, clinicians often focus primarily on the clinical validity/utility; however, the analytical validity of an assay (e.g., its accuracy, reproducibility, and standardisation) should also be evaluated and carefully considered. This review focuses on the rigorous analytical validation and performance of the Oncotype DX® Breast Cancer Assay, which is performed at the Central Clinical Reference Laboratory of Genomic Health, Inc. The assay process includes tumour tissue enrichment (if needed), RNA extraction, gene expression quantitation (using a gene panel consisting of 16 cancer genes plus 5 reference genes and quantitative real-time RT-PCR), and an automated computer algorithm to produce a Recurrence Score® result (scale: 0–100). This review presents evidence showing that the Recurrence Score result reported for each patient falls within a tight clinically relevant confidence interval. Specifically, the review discusses how the development of the assay was designed to optimise assay performance, presents data supporting its analytical validity, and describes the quality control and assurance programmes that ensure optimal test performance over time. PMID:27729940
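
    The final step of the pipeline, an algorithm that turns normalised expression values into a bounded 0–100 score, has a generic shape worth sketching: reference-gene normalisation of Ct values, a weighted sum, and clamping to the reporting range. Every number below (Ct values, weights, scale, offset) is an invented placeholder; this is not the Oncotype DX algorithm, whose gene groupings and coefficients are specified elsewhere.

      import numpy as np

      def normalised_expression(ct_cancer, ct_reference):
          # Higher expression means lower Ct, so normalise against the
          # mean of the reference genes and flip the sign.
          return np.mean(ct_reference) - np.asarray(ct_cancer, dtype=float)

      def bounded_score(expr, weights, scale=10.0, offset=50.0):
          raw = float(np.dot(weights, expr)) * scale + offset
          return min(max(raw, 0.0), 100.0)   # clamp to the 0-100 range

      ct_cancer = [24.1, 26.7, 23.5]                  # hypothetical cancer-gene Cts
      ct_reference = [25.0, 25.4, 24.8, 25.1, 25.2]   # hypothetical reference genes
      weights = [0.5, -0.3, 0.2]                      # hypothetical coefficients
      print(bounded_score(normalised_expression(ct_cancer, ct_reference), weights))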

  13. The analytical validation of the Oncotype DX Recurrence Score assay.

    PubMed

    Baehner, Frederick L

    2016-01-01

    In vitro diagnostic multivariate index assays are highly complex molecular assays that can provide clinically actionable information regarding the underlying tumour biology and facilitate personalised treatment. These assays are only useful in clinical practice if all of the following are established: analytical validation (i.e., how accurately/reliably the assay measures the molecular characteristics), clinical validation (i.e., how consistently/accurately the test detects/predicts the outcomes of interest), and clinical utility (i.e., how likely the test is to significantly improve patient outcomes). In considering the use of these assays, clinicians often focus primarily on the clinical validity/utility; however, the analytical validity of an assay (e.g., its accuracy, reproducibility, and standardisation) should also be evaluated and carefully considered. This review focuses on the rigorous analytical validation and performance of the Oncotype DX® Breast Cancer Assay, which is performed at the Central Clinical Reference Laboratory of Genomic Health, Inc. The assay process includes tumour tissue enrichment (if needed), RNA extraction, gene expression quantitation (using a gene panel consisting of 16 cancer genes plus 5 reference genes and quantitative real-time RT-PCR), and an automated computer algorithm to produce a Recurrence Score® result (scale: 0–100). This review presents evidence showing that the Recurrence Score result reported for each patient falls within a tight clinically relevant confidence interval. Specifically, the review discusses how the development of the assay was designed to optimise assay performance, presents data supporting its analytical validity, and describes the quality control and assurance programmes that ensure optimal test performance over time.

  14. Identification and Validation of ESP Teacher Competencies: A Research Design

    ERIC Educational Resources Information Center

    Venkatraman, G.; Prema, P.

    2013-01-01

    The paper presents the research design used for identifying and validating a set of competencies required of ESP (English for Specific Purposes) teachers. The identification of the competencies and the three-stage validation process are also discussed. The observation of classes of ESP teachers for field-testing the validated competencies and…

  15. Remote observations of reentering spacecraft including the space shuttle orbiter

    NASA Astrophysics Data System (ADS)

    Horvath, Thomas J.; Cagle, Melinda F.; Grinstead, Jay H.; Gibson, David M.

    Flight measurement is a critical phase in development, validation and certification processes of technologies destined for future civilian and military operational capabilities. This paper focuses on several recent NASA-sponsored remote observations that have provided unique engineering and scientific insights of reentry vehicle flight phenomenology and performance that could not necessarily be obtained with more traditional instrumentation methods such as onboard discrete surface sensors. The missions highlighted include multiple spatially-resolved infrared observations of the NASA Space Shuttle Orbiter during hypersonic reentry from 2009 to 2011, and emission spectroscopy of comparatively small-sized sample return capsules returning from exploration missions. Emphasis has been placed upon identifying the challenges associated with these remote sensing missions with focus on end-to-end aspects that include the initial science objective, selection of the appropriate imaging platform and instrumentation suite, target flight path analysis and acquisition strategy, pre-mission simulations to optimize sensor configuration, logistics and communications during the actual observation. Explored are collaborative opportunities and technology investments required to develop a next-generation quantitative imaging system (i.e., an intelligent sensor and platform) with greater capability, which could more affordably support cross cutting civilian and military flight test needs.

  16. Remote Observations of Reentering Spacecraft Including the Space Shuttle Orbiter

    NASA Technical Reports Server (NTRS)

    Horvath, Thomas J.; Cagle, Melinda F.; Grinstead, Jay H.; Gibson, David

    2013-01-01

    Flight measurement is a critical phase in development, validation and certification processes of technologies destined for future civilian and military operational capabilities. This paper focuses on several recent NASA-sponsored remote observations that have provided unique engineering and scientific insights of reentry vehicle flight phenomenology and performance that could not necessarily be obtained with more traditional instrumentation methods such as onboard discrete surface sensors. The missions highlighted include multiple spatially-resolved infrared observations of the NASA Space Shuttle Orbiter during hypersonic reentry from 2009 to 2011, and emission spectroscopy of comparatively small-sized sample return capsules returning from exploration missions. Emphasis has been placed upon identifying the challenges associated with these remote sensing missions with focus on end-to-end aspects that include the initial science objective, selection of the appropriate imaging platform and instrumentation suite, target flight path analysis and acquisition strategy, pre-mission simulations to optimize sensor configuration, logistics and communications during the actual observation. Explored are collaborative opportunities and technology investments required to develop a next-generation quantitative imaging system (i.e., an intelligent sensor and platform) with greater capability, which could more affordably support cross cutting civilian and military flight test needs.

  17. A systematic review of validated methods for identifying pulmonary fibrosis and interstitial lung disease using administrative and claims data.

    PubMed

    Jones, Natalie; Schneider, Gary; Kachroo, Sumesh; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's Mini-Sentinel pilot program initially aimed to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of pulmonary fibrosis and interstitial lung disease. PubMed and Iowa Drug Information Service Web searches were conducted to identify citations applicable to the pulmonary fibrosis/interstitial lung disease HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify pulmonary fibrosis and interstitial lung disease, including validation estimates of the coding algorithms. Our search revealed a deficiency of literature focusing on pulmonary fibrosis and interstitial lung disease algorithms and validation estimates. Only five studies provided codes; none provided validation estimates. Because interstitial lung disease includes a broad spectrum of diseases, including pulmonary fibrosis, the scope of these studies varied, as did the corresponding diagnostic codes used. Research needs to be conducted on designing validation studies to test pulmonary fibrosis and interstitial lung disease algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
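
    For context, the validation estimates this review searched for (predictive power, sensitivity, and specificity) are simple functions of a confusion matrix comparing the claims-based algorithm to a gold standard such as chart review. A minimal sketch with made-up counts:

```python
# Validation estimates for a claims-based case-finding algorithm versus a
# gold standard (e.g., chart review). Counts below are fabricated.
def validation_estimates(tp: int, fp: int, fn: int, tn: int) -> dict:
    return {
        "ppv": tp / (tp + fp),          # P(true case | algorithm positive)
        "sensitivity": tp / (tp + fn),  # P(algorithm positive | true case)
        "specificity": tn / (tn + fp),  # P(algorithm negative | non-case)
    }

print(validation_estimates(tp=45, fp=15, fn=5, tn=935))
# {'ppv': 0.75, 'sensitivity': 0.9, 'specificity': 0.984...}
```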

  18. When Assessment Data Are Words: Validity Evidence for Qualitative Educational Assessments.

    PubMed

    Cook, David A; Kuper, Ayelet; Hatala, Rose; Ginsburg, Shiphra

    2016-10-01

    Quantitative scores fail to capture all important features of learner performance. This awareness has led to increased use of qualitative data when assessing health professionals. Yet the use of qualitative assessments is hampered by incomplete understanding of their role in forming judgments, and lack of consensus in how to appraise the rigor of judgments therein derived. The authors articulate the role of qualitative assessment as part of a comprehensive program of assessment, and translate the concept of validity to apply to judgments arising from qualitative assessments. They first identify standards for rigor in qualitative research, and then use two contemporary assessment validity frameworks to reorganize these standards for application to qualitative assessment.Standards for rigor in qualitative research include responsiveness, reflexivity, purposive sampling, thick description, triangulation, transparency, and transferability. These standards can be reframed using Messick's five sources of validity evidence (content, response process, internal structure, relationships with other variables, and consequences) and Kane's four inferences in validation (scoring, generalization, extrapolation, and implications). Evidence can be collected and evaluated for each evidence source or inference. The authors illustrate this approach using published research on learning portfolios.The authors advocate a "methods-neutral" approach to assessment, in which a clearly stated purpose determines the nature of and approach to data collection and analysis. Increased use of qualitative assessments will necessitate more rigorous judgments of the defensibility (validity) of inferences and decisions. Evidence should be strategically sought to inform a coherent validity argument.

  19. ASRM process development in aqueous cleaning

    NASA Technical Reports Server (NTRS)

    Swisher, Bill

    1992-01-01

    Viewgraphs are included on process development in aqueous cleaning, which is taking place at the Aerojet Advanced Solid Rocket Motor (ASRM) Division under a NASA Marshall Space Flight Center contract for design, development, test, and evaluation of the ASRM, including new production facilities. The ASRM will utilize aqueous cleaning in several manufacturing process steps to clean case segments, nozzle metal components, and igniter closures. ASRM manufacturing process development is underway, including agent selection, agent characterization, subscale process optimization, bonding verification, and scale-up validation. Process parameters are currently being tested for optimization utilizing a Taguchi matrix, including agent concentration, cleaning solution temperature, agitation and immersion time, rinse water amount and temperature, and use/non-use of drying air. Based on results of process development testing to date, several observations are offered: aqueous cleaning appears effective for steels and SermeTel-coated metals in ASRM processing; aqueous cleaning agents may stain and/or attack bare aluminum metals to various extents; aqueous cleaning appears unsuitable for thermal sprayed aluminum-coated steel; aqueous cleaning appears to adequately remove a wide range of contaminants from flat metal surfaces, but supplementary assistance may be needed to remove clumps of tenacious contaminants embedded in holes, etc.; and hot rinse water appears to be beneficial to aid in drying of bare steel and retarding oxidation rate.
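
    The parameter-optimization step described above lends itself to a small sketch. The factor names, levels, and response function below are hypothetical, and a real Taguchi study would run an orthogonal array (e.g., an L9) rather than the full factorial enumerated here.

```python
# Hypothetical screening of aqueous-cleaning parameters with a Taguchi-style
# larger-is-better signal-to-noise (S/N) ratio over replicate measurements.
import math
from itertools import product

FACTORS = {                     # hypothetical factors and levels
    "agent_conc_pct": [5, 10],
    "bath_temp_C": [50, 70],
    "immersion_min": [10, 20],
}

def cleanliness_pct(s: dict) -> list:
    """Placeholder replicate measurements of residue removal (%)."""
    base = 55 + s["agent_conc_pct"] + 0.2 * s["bath_temp_C"] + 0.5 * s["immersion_min"]
    return [min(100.0, base + r) for r in (-1.0, 0.0, 1.0)]  # three replicates

def sn_larger_is_better(ys: list) -> float:
    """Taguchi larger-is-better S/N ratio, in dB."""
    return -10.0 * math.log10(sum(1.0 / y**2 for y in ys) / len(ys))

settings = [dict(zip(FACTORS, combo)) for combo in product(*FACTORS.values())]
best = max(settings, key=lambda s: sn_larger_is_better(cleanliness_pct(s)))
print(best, round(sn_larger_is_better(cleanliness_pct(best)), 2))
```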

  20. EEG-neurofeedback for optimising performance. II: creativity, the performing arts and ecological validity.

    PubMed

    Gruzelier, John H

    2014-07-01

    As a continuation of a review of evidence of the validity of cognitive/affective gains following neurofeedback in healthy participants, including correlations in support of the gains being mediated by feedback learning (Gruzelier, 2014a), the focus here is on the impact on creativity, especially in the performing arts including music, dance and acting. The majority of research involves alpha/theta (A/T), sensory-motor rhythm (SMR) and heart rate variability (HRV) protocols. There is evidence of reliable benefits from A/T training with advanced musicians especially for creative performance, and reliable benefits from both A/T and SMR training for novice music performance in adults and in a school study with children with impact on creativity, communication/presentation and technique. Making the SMR ratio training context ecologically relevant for actors enhanced creativity in stage performance, with added benefits from the more immersive training context. A/T and HRV training have benefitted dancers. The neurofeedback evidence adds to the rapidly accumulating validation of neurofeedback, while performing arts studies offer an opportunity for ecological validity in creativity research for both creative process and product. Copyright © 2013 Elsevier Ltd. All rights reserved.
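
    The alpha/theta (A/T) training parameter referred to above is, at bottom, a ratio of EEG band powers. A minimal offline sketch, assuming a 256 Hz sampling rate and conventional 4-8 Hz theta and 8-12 Hz alpha bands (real neurofeedback systems compute this online over short sliding windows):

```python
# Theta/alpha power ratio from a raw EEG segment via the periodogram.
# The sampling rate, band edges, and synthetic signal are assumptions.
import numpy as np

FS = 256  # sampling rate in Hz (assumed)

def band_power(signal: np.ndarray, lo: float, hi: float) -> float:
    """Mean periodogram power in the [lo, hi) Hz band."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / FS)
    psd = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= lo) & (freqs < hi)
    return float(psd[mask].mean())

def theta_alpha_ratio(signal: np.ndarray) -> float:
    """A/T protocols reward raising theta (4-8 Hz) relative to alpha (8-12 Hz)."""
    return band_power(signal, 4, 8) / band_power(signal, 8, 12)

t = np.arange(0, 4, 1.0 / FS)  # a 4 s synthetic epoch
eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * np.sin(2 * np.pi * 10 * t)
print(theta_alpha_ratio(eeg))  # > 1: theta dominates this toy signal
```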

  1. The Metacognitive Awareness Listening Questionnaire: Development and Validation

    ERIC Educational Resources Information Center

    Vandergrift, Larry; Goh, Christine C. M.; Mareschal, Catherine J.; Tafaghodtari, Marzieh H.

    2006-01-01

    This article describes the development and validation of a listening questionnaire designed to assess second language (L2) listeners' metacognitive awareness and perceived use of strategies while listening to oral texts. The process of instrument development and validation is described, along with a review of the relevant literature related to…

  2. The individual therapy process questionnaire: development and validation of a revised measure to evaluate general change mechanisms in psychotherapy.

    PubMed

    Mander, Johannes

    2015-01-01

    There is a dearth of measures specifically designed to assess empirically validated mechanisms of therapeutic change. To fill this research gap, the aim of the current study was to develop a measure that covers a large variety of empirically validated mechanisms of change, with corresponding versions for the patient and therapist. To develop an instrument that is based on several important change process frameworks, we combined two established change mechanisms instruments: the Scale for the Multiperspective Assessment of General Change Mechanisms in Psychotherapy (SACiP) and the Scale of the Therapeutic Alliance-Revised (STA-R). In our study, 457 psychosomatic inpatients completed the SACiP, the STA-R, and diverse outcome measures in early, middle and late stages of psychotherapy. Data analyses were conducted using factor analyses and multilevel modelling. The psychometric properties of the resulting Individual Therapy Process Questionnaire were generally good to excellent, as demonstrated by (a) exploratory factor analyses on both patient and therapist ratings, (b) confirmatory factor analyses at later measurement points, (c) high internal consistencies and (d) significant outcome predictive effects. The parallel forms of the ITPQ deliver opportunities to compare the patient and therapist perspectives for a broader range of facets of change mechanisms than was hitherto possible. Consequently, the measure can be applied in future research to more specifically analyse different change mechanism profiles in session-to-session development and outcome prediction. Key Practitioner Message: This article describes the development of an instrument that measures general mechanisms of change in psychotherapy from both the patient and therapist perspectives. Post-session item ratings from both the patient and therapist can be used as feedback to optimize therapeutic processes. We provide a detailed discussion of measures developed to evaluate therapeutic change mechanisms. Copyright © 2014 John Wiley & Sons, Ltd.
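
    The internal consistencies reported for instruments like the ITPQ are conventionally Cronbach's alpha. A minimal sketch of that computation on fabricated toy ratings:

```python
# Cronbach's alpha for a respondents-by-items rating matrix.
# The ratings below are fabricated purely to make the formula concrete.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix."""
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()   # per-item variances
    total_var = items.sum(axis=1).var(ddof=1)        # variance of total scores
    return (k / (k - 1)) * (1 - item_var_sum / total_var)

ratings = np.array([[4, 5, 4], [3, 3, 4], [5, 5, 5], [2, 3, 2], [4, 4, 5]])
print(round(cronbach_alpha(ratings), 2))  # 0.92 for this toy matrix
```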

  3. Creation and validation of web-based food allergy audiovisual educational materials for caregivers.

    PubMed

    Rosen, Jamie; Albin, Stephanie; Sicherer, Scott H

    2014-01-01

    Studies reveal deficits in caregivers' ability to prevent and treat food-allergic reactions with epinephrine and a consumer preference for validated educational materials in audiovisual formats. This study was designed to create brief, validated educational videos on food allergen avoidance and emergency management of anaphylaxis for caregivers of children with food allergy. The study used a stepwise iterative process including creation of a needs assessment survey consisting of 25 queries administered to caregivers and food allergy experts to identify curriculum content. Preliminary videos were drafted, reviewed, and revised based on knowledge and satisfaction surveys given to another cohort of caregivers and health care professionals. The final materials were tested for validation of their educational impact and user satisfaction using pre- and postknowledge tests and satisfaction surveys administered to a convenience sample of 50 caretakers who had not participated in the development stages. The needs assessment identified topics of importance including treatment of allergic reactions and food allergen avoidance. Caregivers in the final validation included mothers (76%), fathers (22%), and other caregivers (2%). Race/ethnicity was white (66%), black (12%), Asian (12%), Hispanic (8%), and other (2%). Knowledge test scores (maximum = 18) increased from a mean of 12.4 preprogram to 16.7 postprogram (p < 0.0001). On a 7-point Likert scale, all satisfaction categories remained above a favorable mean score of 6, indicating participants were overall very satisfied, learned a lot, and found the materials to be informative, straightforward, helpful, and interesting. This web-based audiovisual curriculum on food allergy improved knowledge scores and was well received.
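
    The pre/post knowledge comparison is a standard paired design. A sketch with fabricated scores (the study itself reported means rising from 12.4 to 16.7 on an 18-point test, p < 0.0001):

```python
# Paired t-test on pre/post knowledge scores. Scores are fabricated toy data.
from scipy import stats

pre = [11, 13, 12, 14, 10, 13, 12, 15, 12, 13]
post = [16, 17, 16, 18, 15, 17, 16, 18, 17, 17]
t, p = stats.ttest_rel(post, pre)
gain = sum(post) / len(post) - sum(pre) / len(pre)
print(f"mean gain = {gain:.1f} points, p = {p:.2g}")
```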

  4. Survey of Verification and Validation Techniques for Small Satellite Software Development

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    2015-01-01

    The purpose of this paper is to provide an overview of the current trends and practices in small-satellite software verification and validation. This document is not intended to promote a specific software assurance method. Rather, it seeks to present an unbiased survey of software assurance methods used to verify and validate small satellite software and to make mention of the benefits and value of each approach. These methods include simulation and testing, verification and validation with model-based design, formal methods, and fault-tolerant software design with run-time monitoring. Although the literature reveals that simulation and testing has by far the longest legacy, model-based design methods are proving to be useful for software verification and validation. Some work in formal methods, though not widely used for any satellites, may offer new ways to improve small satellite software verification and validation. These methods need to be further advanced to deal with the state explosion problem and to make them more usable by small-satellite software engineers to be regularly applied to software verification. Last, it is explained how run-time monitoring, combined with fault-tolerant software design methods, provides an important means to detect and correct software errors that escape the verification process or those errors that are produced after launch through the effects of ionizing radiation.
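
    Of the approaches surveyed, run-time monitoring is the easiest to illustrate compactly: check invariants on live state and route violations to a recovery handler instead of letting the fault propagate. A toy sketch, with all names and limits hypothetical:

```python
# Minimal run-time monitor: an invariant over telemetry plus a recovery
# handler. The variable monitored and its limits are hypothetical.
from typing import Callable

class RuntimeMonitor:
    def __init__(self, invariant: Callable[[float], bool],
                 on_violation: Callable[[float], None]):
        self.invariant = invariant
        self.on_violation = on_violation

    def check(self, value: float) -> None:
        if not self.invariant(value):
            self.on_violation(value)

def enter_safe_mode(bad_value: float) -> None:
    print(f"invariant violated (value={bad_value}); entering safe mode")

# Assumed invariant: bus voltage stays within 22-34 V.
monitor = RuntimeMonitor(lambda v: 22.0 <= v <= 34.0, enter_safe_mode)
for sample in (28.1, 27.9, 51.3):  # last sample mimics a radiation-induced upset
    monitor.check(sample)
```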

  5. Shift Verification and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pandya, Tara M.; Evans, Thomas M.; Davidson, Gregory G

    2016-09-07

    This documentation outlines the verification and validation of Shift for the Consortium for Advanced Simulation of Light Water Reactors (CASL). Five main types of problems were used for validation: small criticality benchmark problems; full-core reactor benchmarks for light water reactors; fixed-source coupled neutron-photon dosimetry benchmarks; depletion/burnup benchmarks; and full-core reactor performance benchmarks. We compared Shift results to measured data and other simulated Monte Carlo radiation transport code results, and found very good agreement in a variety of comparison measures. These include prediction of the critical eigenvalue, radial and axial pin power distributions, rod worth, leakage spectra, and nuclide inventories over a burn cycle. Based on this validation, we are confident that Shift can provide reference results for CASL benchmarking.
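
    Two of the comparison measures named here are easy to make concrete: the eigenvalue difference, conventionally quoted in pcm (10^-5 in k-effective), and calculated-to-expected (C/E) ratios for pin powers. A sketch with invented numbers:

```python
# Toy comparison measures for a transport-code validation. Values invented.
calculated_keff, measured_keff = 1.00123, 1.00000
print(f"eigenvalue difference: {(calculated_keff - measured_keff) * 1e5:.0f} pcm")

calc_pin_powers = [1.02, 0.98, 1.05, 0.95]
meas_pin_powers = [1.00, 1.00, 1.04, 0.96]
ce_ratios = [c / e for c, e in zip(calc_pin_powers, meas_pin_powers)]
print("pin-power C/E ratios:", [round(x, 3) for x in ce_ratios])
```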

  6. KENNEDY SPACE CENTER, FLA. -- Endeavour begins rolling out of the Orbiter Processing Facility for temporary transfer to the Vehicle Assembly Building. The move allows work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the OPF includes annual validation of the bay’s cranes, work platforms, lifting mechanisms and jack stands. Endeavour will remain in the VAB for approximately 12 days, then return to the OPF.

    NASA Image and Video Library

    2004-01-09

    KENNEDY SPACE CENTER, FLA. -- Endeavour begins rolling out of the Orbiter Processing Facility for temporary transfer to the Vehicle Assembly Building. The move allows work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the OPF includes annual validation of the bay’s cranes, work platforms, lifting mechanisms and jack stands. Endeavour will remain in the VAB for approximately 12 days, then return to the OPF.

  7. KENNEDY SPACE CENTER, FLA. - The orbiter Atlantis rolls into the Orbiter Processing Facility after spending 10 days in the Vehicle Assembly Building. The hiatus in the VAB allowed work to be performed in the OPF that can only be accomplished while the bay is empty. Work included annual validation of the bay's cranes, work platforms, lifting mechanisms and jack stands. Work resumes to prepare Atlantis for launch in September 2004 on the first return-to-flight mission, STS-114.

    NASA Image and Video Library

    2003-12-16

    KENNEDY SPACE CENTER, FLA. - The orbiter Atlantis rolls into the Orbiter Processing Facility after spending 10 days in the Vehicle Assembly Building. The hiatus in the VAB allowed work to be performed in the OPF that can only be accomplished while the bay is empty. Work included annual validation of the bay's cranes, work platforms, lifting mechanisms and jack stands. Work resumes to prepare Atlantis for launch in September 2004 on the first return-to-flight mission, STS-114.

  8. KENNEDY SPACE CENTER, FLA. -- Endeavour is ready to be rolled out of the Orbiter Processing Facility for temporary transfer to the Vehicle Assembly Building. The move allows work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the OPF includes annual validation of the bay’s cranes, work platforms, lifting mechanisms and jack stands. Endeavour will remain in the VAB for approximately 12 days, then return to the OPF.

    NASA Image and Video Library

    2004-01-09

    KENNEDY SPACE CENTER, FLA. -- Endeavour is ready to be rolled out of the Orbiter Processing Facility for temporary transfer to the Vehicle Assembly Building. The move allows work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the OPF includes annual validation of the bay’s cranes, work platforms, lifting mechanisms and jack stands. Endeavour will remain in the VAB for approximately 12 days, then return to the OPF.

  9. KENNEDY SPACE CENTER, FLA. - The orbiter Atlantis rolls toward the Orbiter Processing Facility after spending 10 days in the Vehicle Assembly Building. The hiatus in the VAB allowed work to be performed in the OPF that can only be accomplished while the bay is empty. Work included annual validation of the bay's cranes, work platforms, lifting mechanisms and jack stands. Work resumes to prepare Atlantis for launch in September 2004 on the first return-to-flight mission, STS-114.

    NASA Image and Video Library

    2003-12-16

    KENNEDY SPACE CENTER, FLA. - The orbiter Atlantis rolls toward the Orbiter Processing Facility after spending 10 days in the Vehicle Assembly Building. The hiatus in the VAB allowed work to be performed in the OPF that can only be accomplished while the bay is empty. Work included annual validation of the bay's cranes, work platforms, lifting mechanisms and jack stands. Work resumes to prepare Atlantis for launch in September 2004 on the first return-to-flight mission, STS-114.

  10. Empirical evaluation of the Process Overview Measure for assessing situation awareness in process plants.

    PubMed

    Lau, Nathan; Jamieson, Greg A; Skraaning, Gyrd

    2016-03-01

    The Process Overview Measure is a query-based measure developed to assess operator situation awareness (SA) from monitoring process plants. A companion paper describes how the measure has been developed according to process plant properties and operator cognitive work. The Process Overview Measure demonstrated practicality, sensitivity, validity and reliability in two full-scope simulator experiments investigating dramatically different operational concepts. Practicality was assessed based on qualitative feedback of participants and researchers. The Process Overview Measure demonstrated sensitivity and validity by revealing significant effects of experimental manipulations that corroborated with other empirical results. The measure also demonstrated adequate inter-rater reliability and practicality for measuring SA in full-scope simulator settings based on data collected on process experts. Thus, full-scope simulator studies can employ the Process Overview Measure to reveal the impact of new control room technology and operational concepts on monitoring process plants. Practitioner Summary: The Process Overview Measure is a query-based measure that demonstrated practicality, sensitivity, validity and reliability for assessing operator situation awareness (SA) from monitoring process plants in representative settings.
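
    The abstract does not name the inter-rater statistic used, so the sketch below is an assumption: Cohen's kappa on two raters' correct/incorrect judgments of operators' query answers, computed on fabricated ratings.

```python
# Cohen's kappa: chance-corrected agreement between two raters.
from collections import Counter

def cohens_kappa(r1: list, r2: list) -> float:
    n = len(r1)
    observed = sum(a == b for a, b in zip(r1, r2)) / n
    c1, c2 = Counter(r1), Counter(r2)
    expected = sum(c1[k] * c2[k] for k in set(r1) | set(r2)) / n**2
    return (observed - expected) / (1 - expected)

rater1 = [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]  # 1 = query answered correctly
rater2 = [1, 1, 0, 1, 1, 1, 1, 0, 1, 0]
print(round(cohens_kappa(rater1, rater2), 2))  # 0.52 for these toy ratings
```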

  11. Development and validation of a scoring index to predict the presence of lesions in capsule endoscopy in patients with suspected Crohn's disease of the small bowel: a Spanish multicenter study.

    PubMed

    Egea-Valenzuela, Juan; González Suárez, Begoña; Sierra Bernal, Cristian; Juanmartiñena Fernández, José Francisco; Luján-Sanchís, Marisol; San Juan Acosta, Mileidis; Martínez Andrés, Blanca; Pons Beltrán, Vicente; Sastre Lozano, Violeta; Carretero Ribón, Cristina; de Vera Almenar, Félix; Sánchez Cuenca, Joaquín; Alberca de Las Parras, Fernando; Rodríguez de Miguel, Cristina; Valle Muñoz, Julio; Férnandez-Urién Sainz, Ignacio; Torres González, Carolina; Borque Barrera, Pilar; Pérez-Cuadrado Robles, Enrique; Alonso Lázaro, Noelia; Martínez García, Pilar; Prieto de Frías, César; Carballo Álvarez, Fernando

    2018-05-01

    Capsule endoscopy (CE) is the first-line investigation in cases of suspected Crohn's disease (CD) of the small bowel, but the factors associated with a higher diagnostic yield remain unclear. Our aim is to develop and validate a scoring index to assess the risk of the patients in this setting on the basis of biomarkers. Data on fecal calprotectin, C-reactive protein, and other biomarkers from a population of 124 patients with suspected CD of the small bowel studied by CE and included in a PhD study were used to build a scoring index. This was first used on this population (internal validation process) and after that on a different set of patients from a multicenter study (external validation process). An index was designed in which every biomarker is assigned a score. Three risk groups have been established (low, intermediate, and high). In the internal validation analysis (124 individuals), patients had a 10, 46.5, and 81% probability of showing inflammatory lesions in CE in the low-risk, intermediate-risk, and high-risk groups, respectively. In the external validation analysis, including 410 patients from 12 Spanish hospitals, this probability was 15.8, 49.7, and 80.6% for the low-risk, intermediate-risk, and high-risk groups, respectively. Results from the internal validation process show that the scoring index is coherent, and results from the external validation process confirm its reliability. This index can be a useful tool for selecting patients before CE studies in cases of suspected CD of the small bowel.
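
    The published point values and cutoffs are not given in the abstract, so the sketch below only mimics the shape of such an index: points per biomarker, summed, then cut into the three risk strata whose lesion probabilities come from the external validation above. All thresholds and point values are invented:

```python
# Hypothetical biomarker scoring index with three risk strata. The cutoffs
# and point values are invented; only the strata probabilities (15.8%, 49.7%,
# 80.6% in external validation) come from the abstract.
def biomarker_points(fecal_calprotectin: float, crp: float) -> int:
    points = 0
    if fecal_calprotectin > 100:     # ug/g, assumed cutoff
        points += 2
    elif fecal_calprotectin > 50:
        points += 1
    if crp > 5:                      # mg/L, assumed cutoff
        points += 1
    return points

def risk_group(points: int) -> str:
    if points == 0:
        return "low"                 # ~16% probability of lesions on CE
    if points <= 2:
        return "intermediate"        # ~50%
    return "high"                    # ~81%

print(risk_group(biomarker_points(fecal_calprotectin=180, crp=7)))  # "high"
```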

  12. Data quality and processing for decision making: divergence between corporate strategy and manufacturing processes

    NASA Astrophysics Data System (ADS)

    McNeil, Ronald D.; Miele, Renato; Shaul, Dennis

    2000-10-01

    Information technology is driving improvements in manufacturing systems, with higher productivity and quality as the results. However, corporate strategy is driven by a number of factors and includes data and pressure from multiple stakeholders: employees, managers, executives, stockholders, boards, suppliers and customers. It is also driven by information about competitors and emerging technology. Much information is based on the processing of data and the resulting biases of the processors; thus, stakeholders can base inputs on faulty perceptions that are not reality based. Prior to processing, the data used may be inaccurate. Sources of data and information may include demographic reports, statistical analyses, intelligence reports (e.g., marketing data), technology and primary data collection. The reliability and validity of data, as well as the management of sources and information, are critical elements of strategy formulation. The paper explores data collection, processing and analyses from secondary and primary sources, information generation, and report presentation for strategy formulation, and contrasts this with the data and information utilized to drive internal processes such as manufacturing. The hypothesis is that internal processes, such as manufacturing, are subordinate to corporate strategies. The impact of possible divergence in the quality of decisions at the corporate level on IT-driven, quality-manufacturing processes based on measurable outcomes is significant. Recommendations for IT improvements at the corporate strategy level are given.

  13. Remarks on CFD validation: A Boeing Commercial Airplane Company perspective

    NASA Technical Reports Server (NTRS)

    Rubbert, Paul E.

    1987-01-01

    Requirements and meaning of validation of computational fluid dynamics codes are discussed. Topics covered include: validating a code, validating a user, and calibrating a code. All results are presented in viewgraph format.

  14. 40 CFR 761.392 - Preparing validation study samples.

    Code of Federal Regulations, 2014 CFR

    2014-07-01

    ... establish a surface concentration to be included in the standard operating procedure. The surface levels of... Under § 761.79(d)(4) § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to... surfaces must be ≥20 µg/100 cm2. (2) To validate a procedure to decontaminate a specified surface...

  15. 40 CFR 761.392 - Preparing validation study samples.

    Code of Federal Regulations, 2012 CFR

    2012-07-01

    ... establish a surface concentration to be included in the standard operating procedure. The surface levels of... Under § 761.79(d)(4) § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to... surfaces must be ≥20 µg/100 cm2. (2) To validate a procedure to decontaminate a specified surface...

  16. 40 CFR 761.392 - Preparing validation study samples.

    Code of Federal Regulations, 2013 CFR

    2013-07-01

    ... establish a surface concentration to be included in the standard operating procedure. The surface levels of... Under § 761.79(d)(4) § 761.392 Preparing validation study samples. (a)(1) To validate a procedure to... surfaces must be ≥20 µg/100 cm2. (2) To validate a procedure to decontaminate a specified surface...

  17. Ground-water models: Validate or invalidate

    USGS Publications Warehouse

    Bredehoeft, J.D.; Konikow, Leonard F.

    1993-01-01

    The word validation has a clear meaning to both the scientific community and the general public. Within the scientific community the validation of scientific theory has been the subject of philosophical debate. The philosopher of science, Karl Popper, argued that scientific theory cannot be validated, only invalidated. Popper’s view is not the only opinion in this debate; however, many scientists today agree with Popper (including the authors). To the general public, proclaiming that a ground-water model is validated carries with it an aura of correctness that we do not believe many of us who model would claim. We can place all the caveats we wish, but the public has its own understanding of what the word implies. Using the word valid with respect to models misleads the public; verification carries with it similar connotations as far as the public is concerned. Our point is this: using the terms validation and verification are misleading, at best. These terms should be abandoned by the ground-water community.

  18. Development and validation of the Child Oral Health Impact Profile - Preschool version.

    PubMed

    Ruff, R R; Sischo, L; Chinn, C H; Broder, H L

    2017-09-01

    The Child Oral Health Impact Profile (COHIP) is a validated instrument created to measure the oral health-related quality of life of school-aged children. The purpose of this study was to develop and validate a preschool version of the COHIP (COHIP-PS) for children aged 2-5. The COHIP-PS was developed and validated using a multi-stage process consisting of item selection, face validity testing, item impact testing, reliability and validity testing, and factor analysis. A cross-sectional convenience sample of caregivers having children 2-5 years old from four groups completed item clarity and impact forms. Groups were recruited from pediatric health clinics or preschools/daycare centers, speech clinics, dental clinics, or cleft/craniofacial centers. Participants had a variety of oral health-related conditions, including caries, congenital orofacial anomalies, and speech/language deficiencies such as articulation and language disorders. The main outcome measure was the COHIP-PS. The COHIP-PS was found to have acceptable internal consistency (α = 0.71) and high test-retest reliability (0.87), though internal consistency was below the accepted threshold for the community sample. While discriminant validity results indicated significant differences across study groups, the overall magnitude of differences was modest. Results from confirmatory factor analyses support the use of a four-factor model consisting of 11 items across oral health, functional well-being, social-emotional well-being, and self-image domains. Quality of life is an integral factor in understanding and assessing children's well-being. The COHIP-PS is a validated oral health-related quality of life measure for preschool children with cleft or other oral conditions. Copyright © 2017 Dennis Barber Ltd.

  19. The influence of commenting validity, placement, and style on perceptions of computer code trustworthiness: A heuristic-systematic processing approach.

    PubMed

    Alarcon, Gene M; Gamble, Rose F; Ryan, Tyler J; Walter, Charles; Jessup, Sarah A; Wood, David W; Capiola, August

    2018-07-01

    Computer programs are a ubiquitous part of modern society, yet little is known about the psychological processes that underlie reviewing code. We applied the heuristic-systematic model (HSM) to investigate the influence of computer code comments on perceptions of code trustworthiness. The study explored the influence of validity, placement, and style of comments in code on trustworthiness perceptions and time spent on code. Results indicated valid comments led to higher trust assessments and more time spent on the code. Properly placed comments led to lower trust assessments and had a marginal effect on time spent on code; however, the effect was no longer significant after controlling for effects of the source code. Low style comments led to marginally higher trustworthiness assessments, but high style comments led to longer time spent on the code. Several interactions were also found. Our findings suggest the relationship between code comments and perceptions of code trustworthiness is not as straightforward as previously thought. Additionally, the current paper extends the HSM to the programming literature. Copyright © 2018 Elsevier Ltd. All rights reserved.

  20. Study on the Rationality and Validity of Probit Models of Domino Effect to Chemical Process Equipment caused by Overpressure

    NASA Astrophysics Data System (ADS)

    Sun, Dongliang; Huang, Guangtuan; Jiang, Juncheng; Zhang, Mingguang; Wang, Zhirong

    2013-04-01

    Overpressure is an important cause of the domino effect in accidents involving chemical process equipment. Previous studies have proposed models for the propagation probability and threshold values of the domino effect caused by overpressure. To test the rationality and validity of the models reported in the references, the two boundary values separating the three reported damage degrees were each treated as random variables in the interval [0, 100%]. Based on the overpressure data for equipment damage and damage state, and the calculation method reported in the references, the mean square errors of the four categories of overpressure damage-probability models were calculated over the random boundary values, yielding a relationship between mean square error and the two boundary values. At the minimum of that relationship, the mean square error decreases by only about 3% compared with the result obtained using the originally reported boundary values. This error is within the acceptable range for engineering applications, so the reported models can be considered reasonable and valid.
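
    The procedure described (sweeping the two damage-degree boundaries and locating the boundary pair that minimizes the model's mean square error) can be sketched as a grid search. The error function below is a stand-in placeholder; the real surface comes from the overpressure and damage-state data in the cited references.

```python
# Grid search for the boundary pair minimizing a (placeholder) MSE surface.
import numpy as np

def model_mse(b1: float, b2: float) -> float:
    """Stand-in for the damage-probability model's mean square error as a
    function of the two damage-degree boundaries (minimum near 0.3, 0.7)."""
    return (b1 - 0.3) ** 2 + (b2 - 0.7) ** 2 + 0.05

grid = np.linspace(0.0, 1.0, 101)
best = min((model_mse(b1, b2), b1, b2) for b1 in grid for b2 in grid if b1 < b2)
print(f"min MSE = {best[0]:.4f} at boundaries ({best[1]:.2f}, {best[2]:.2f})")
```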