Sample records for validation process based

  1. How Many Batches Are Needed for Process Validation under the New FDA Guidance?

    PubMed

    Yang, Harry

    2013-01-01

    The newly updated FDA Guidance for Industry on Process Validation: General Principles and Practices ushers in a life cycle approach to process validation. While the guidance no longer considers the use of traditional three-batch validation appropriate, it does not prescribe the number of validation batches for a prospective validation protocol, nor does it provide specific methods to determine it. This potentially could leave manufacturers in a quandary. In this paper, I develop a Bayesian method to address the issue. By combining process knowledge gained from Stage 1 Process Design (PD) with expected outcomes of Stage 2 Process Performance Qualification (PPQ), the number of validation batches for PPQ is determined to provide a high level of assurance that the process will consistently produce future batches meeting quality standards. Several examples based on simulated data are presented to illustrate the use of the Bayesian method in helping manufacturers make risk-based decisions for Stage 2 PPQ, and they highlight the advantages of the method over traditional Frequentist approaches. The discussions in the paper lend support for a life cycle and risk-based approach to process validation recommended in the new FDA guidance.
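
    As an illustration of the kind of calculation involved, the following minimal sketch assumes a simple beta-binomial model of per-batch pass/fail outcomes: a Beta prior standing in for Stage 1 process knowledge is updated with hypothetical passing PPQ batches, and the smallest number of batches giving the desired assurance is returned. The prior parameters, target rate, and assurance level are invented placeholders, not values from the paper.

      # Minimal sketch (not the paper's exact model): find the smallest number of
      # PPQ batches n such that, if all n batches pass, the posterior probability
      # that the per-batch success rate exceeds a target is at least the desired
      # level of assurance.
      from scipy.stats import beta

      def batches_needed(prior_a=18, prior_b=2, target_rate=0.85,
                         assurance=0.90, n_max=50):
          for n in range(1, n_max + 1):
              post_a, post_b = prior_a + n, prior_b        # posterior after n passes, 0 failures
              prob = beta.sf(target_rate, post_a, post_b)  # P(p >= target | data)
              if prob >= assurance:
                  return n
          return None

      print(batches_needed())   # -> 6 with these illustrative settings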

  2. Validation of column-based chromatography processes for the purification of proteins. Technical report No. 14.

    PubMed

    2008-01-01

    PDA Technical Report No. 14 has been written to provide current best practices, such as the application of risk-based decision making grounded in sound science, as a foundation for the validation of column-based chromatography processes and to expand upon the information provided in Technical Report No. 42, Process Validation of Protein Manufacturing. The intent of this technical report is to provide an integrated validation life-cycle approach that begins with the use of process development data for the definition of operational parameters as a basis for validation, confirmation, and/or minor adjustment to these parameters at manufacturing scale during production of conformance batches and maintenance of the validated state throughout the product's life cycle.

  3. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  4. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  5. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  6. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  7. 21 CFR 1271.230 - Process validation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...

  8. Validation of gamma irradiator controls for quality and regulatory compliance

    NASA Astrophysics Data System (ADS)

    Harding, Rorry B.; Pinteric, Francis J. A.

    1995-09-01

    Since 1978 the U.S. Food and Drug Administration (FDA) has had both the legal authority and the Current Good Manufacturing Practice (CGMP) regulations in place to require irradiator owners who process medical devices to produce evidence of Irradiation Process Validation. One of the key components of Irradiation Process Validation is the validation of the irradiator controls. However, it is only recently that FDA audits have focused on this component of the process validation. What is Irradiator Control System Validation? What constitutes evidence of control? How do owners obtain evidence? What is the irradiator supplier's role in validation? How does the ISO 9000 Quality Standard relate to the FDA's CGMP requirement for evidence of Control System Validation? This paper presents answers to these questions based on the recent experiences of Nordion's engineering and product management staff who have worked with several US-based irradiator owners. This topic — Validation of Irradiator Controls — is a significant regulatory compliance and operations issue within the irradiator suppliers' and users' community.

  9. Validation of a Performance Assessment Instrument in Problem-Based Learning Tutorials Using Two Cohorts of Medical Students

    ERIC Educational Resources Information Center

    Lee, Ming; Wimmers, Paul F.

    2016-01-01

    Although problem-based learning (PBL) has been widely used in medical schools, few studies have attended to the assessment of PBL processes using validated instruments. This study examined reliability and validity for an instrument assessing PBL performance in four domains: Problem Solving, Use of Information, Group Process, and Professionalism.…

  10. Risk-based Methodology for Validation of Pharmaceutical Batch Processes.

    PubMed

    Wiles, Frederick

    2013-01-01

    In January 2011, the U.S. Food and Drug Administration published new process validation guidance for pharmaceutical processes. The new guidance debunks the long-held industry notion that three consecutive validation batches or runs are all that are required to demonstrate that a process is operating in a validated state. Instead, the new guidance now emphasizes that the level of monitoring and testing performed during process performance qualification (PPQ) studies must be sufficient to demonstrate statistical confidence both within and between batches. In some cases, three qualification runs may not be enough. Nearly two years after the guidance was first published, little has been written defining a statistical methodology for determining the number of samples and qualification runs required to satisfy Stage 2 requirements of the new guidance. This article proposes using a combination of risk assessment, control charting, and capability statistics to define the monitoring and testing scheme required to show that a pharmaceutical batch process is operating in a validated state. In this methodology, an assessment of process risk is performed through application of a process failure mode, effects, and criticality analysis (PFMECA). The output of PFMECA is used to select appropriate levels of statistical confidence and coverage which, in turn, are used in capability calculations to determine when significant Stage 2 (PPQ) milestones have been met. The achievement of Stage 2 milestones signals the release of batches for commercial distribution and the reduction of monitoring and testing to commercial production levels. Individuals, moving range, and range/sigma charts are used in conjunction with capability statistics to demonstrate that the commercial process is operating in a state of statistical control. The new process validation guidance published by the U.S. Food and Drug Administration in January of 2011 indicates that the number of process validation batches or runs required to demonstrate that a pharmaceutical process is operating in a validated state should be based on sound statistical principles. The old rule of "three consecutive batches and you're done" is no longer sufficient. The guidance, however, does not provide any specific methodology for determining the number of runs required, and little has been published to augment this shortcoming. The paper titled "Risk-based Methodology for Validation of Pharmaceutical Batch Processes" describes a statistically sound methodology for determining when a statistically valid number of validation runs has been acquired based on risk assessment and calculation of process capability.
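
    One common way to turn such confidence-and-coverage requirements into a concrete Stage 2 acceptance check is a normal tolerance interval against the specification limit; the sketch below illustrates that idea with invented data and is not the article's exact procedure.

      # Minimal sketch: show, with a confidence level chosen from the risk assessment,
      # that at least `coverage` of individual results exceed a lower specification
      # limit, using the exact one-sided normal tolerance factor.
      import numpy as np
      from scipy.stats import norm, nct

      def passes_ppq(data, lsl, confidence=0.95, coverage=0.99):
          data = np.asarray(data, dtype=float)
          n, mean, sd = len(data), data.mean(), data.std(ddof=1)
          k = nct.ppf(confidence, df=n - 1, nc=norm.ppf(coverage) * np.sqrt(n)) / np.sqrt(n)
          return mean - k * sd >= lsl   # lower tolerance limit vs. specification

      # Hypothetical assay results (% label claim) from PPQ batches
      print(passes_ppq([99.1, 100.4, 98.7, 99.8, 100.9, 99.5, 100.2, 99.9, 100.6, 99.3],
                       lsl=95.0))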

  11. Screening tool for oropharyngeal dysphagia in stroke - Part I: evidence of validity based on the content and response processes.

    PubMed

    Almeida, Tatiana Magalhães de; Cola, Paula Cristina; Pernambuco, Leandro de Araújo; Magalhães, Hipólito Virgílio; Magnoni, Carlos Daniel; Silva, Roberta Gonçalves da

    2017-08-17

    The aim of the present study was to identify evidence of validity based on the content and response processes of the Rastreamento de Disfagia Orofaríngea no Acidente Vascular Encefálico (RADAVE; "Screening Tool for Oropharyngeal Dysphagia in Stroke"). The criteria used to elaborate the questions were based on a literature review. A group of judges consisting of 19 different health professionals evaluated the relevance and representativeness of the questions, and the results were analyzed using the Content Validity Index. To obtain validity evidence based on the response processes, 23 health professionals administered the screening tool and analyzed the questions using a structured scale and cognitive interview. The RADAVE was structured to be applied in two stages. The first version consisted of 18 questions in stage I and 11 questions in stage II. Eight questions in stage I and four in stage II did not reach the minimum Content Validity Index, requiring reformulation by the authors. The cognitive interview revealed some misconceptions. New adjustments were made and the final version was produced with 12 questions in stage I and six questions in stage II. It was possible to develop a screening tool for dysphagia in stroke with adequate evidence of validity based on content and response processes. Both sources of validity evidence obtained so far allowed the screening tool to be adjusted in relation to its construct. Subsequent studies will analyze other sources of validity evidence and measures of accuracy.
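
    The item-level Content Validity Index used in such studies is straightforward to compute: it is the proportion of judges rating an item as relevant. The sketch below assumes a 4-point relevance scale and the commonly cited 0.78 cut-off, with invented judge ratings rather than the RADAVE data.

      # Minimal sketch of the item-level Content Validity Index (I-CVI).
      def item_cvi(ratings, relevant=(3, 4)):
          return sum(r in relevant for r in ratings) / len(ratings)

      judge_ratings = [4, 4, 3, 2, 4, 3, 4, 4, 3, 4, 2, 4, 4, 3, 4, 4, 3, 4, 4]  # 19 judges
      cvi = item_cvi(judge_ratings)
      print(f"I-CVI = {cvi:.2f}:", "keep" if cvi >= 0.78 else "reformulate")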

  12. Model-based verification and validation of the SMAP uplink processes

    NASA Astrophysics Data System (ADS)

    Khan, M. O.; Dubos, G. F.; Tirona, J.; Standley, S.

    Model-Based Systems Engineering (MBSE) is being used increasingly within the spacecraft design community because of its benefits when compared to document-based approaches. As the complexity of projects expands dramatically with continually increasing computational power and technology infusion, the time and effort needed for verification and validation (V&V) increases geometrically. Using simulation to perform design validation with system-level models earlier in the life cycle stands to bridge the gap between design of the system (based on system-level requirements) and verifying those requirements/validating the system as a whole. This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V&V processes by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process. Also discussed are the advantages and disadvantages of the methods employed and the lessons learned, which are intended to benefit future model-based and simulation-based development efforts.

  13. Empirical evaluation of the Process Overview Measure for assessing situation awareness in process plants.

    PubMed

    Lau, Nathan; Jamieson, Greg A; Skraaning, Gyrd

    2016-03-01

    The Process Overview Measure is a query-based measure developed to assess operator situation awareness (SA) from monitoring process plants. A companion paper describes how the measure has been developed according to process plant properties and operator cognitive work. The Process Overview Measure demonstrated practicality, sensitivity, validity and reliability in two full-scope simulator experiments investigating dramatically different operational concepts. Practicality was assessed based on qualitative feedback of participants and researchers. The Process Overview Measure demonstrated sensitivity and validity by revealing significant effects of experimental manipulations that corroborated with other empirical results. The measure also demonstrated adequate inter-rater reliability and practicality for measuring SA in full-scope simulator settings based on data collected on process experts. Thus, full-scope simulator studies can employ the Process Overview Measure to reveal the impact of new control room technology and operational concepts on monitoring process plants. Practitioner Summary: The Process Overview Measure is a query-based measure that demonstrated practicality, sensitivity, validity and reliability for assessing operator situation awareness (SA) from monitoring process plants in representative settings.

  14. Validation of a DNA IQ-based extraction method for TECAN robotic liquid handling workstations for processing casework.

    PubMed

    Frégeau, Chantal J; Lett, C Marc; Fourney, Ron M

    2010-10-01

    A semi-automated DNA extraction process for casework samples based on the Promega DNA IQ™ system was optimized and validated on TECAN Genesis 150/8 and Freedom EVO robotic liquid handling stations configured with fixed tips and a TECAN TE-Shake™ unit. The use of an orbital shaker during the extraction process promoted efficiency with respect to DNA capture, magnetic bead/DNA complex washes and DNA elution. Validation studies determined the reliability and limitations of this shaker-based process. Reproducibility with regards to DNA yields for the tested robotic workstations proved to be excellent and not significantly different than that offered by the manual phenol/chloroform extraction. DNA extraction of animal:human blood mixtures contaminated with soil demonstrated that a human profile was detectable even in the presence of abundant animal blood. For exhibits containing small amounts of biological material, concordance studies confirmed that DNA yields for this shaker-based extraction process are equivalent or greater to those observed with phenol/chloroform extraction as well as our original validated automated magnetic bead percolation-based extraction process. Our data further supports the increasing use of robotics for the processing of casework samples. Crown Copyright © 2009. Published by Elsevier Ireland Ltd. All rights reserved.

  15. Validity in the hiring and evaluation process.

    PubMed

    Gregg, Robert E

    2006-01-01

    Validity means "based on sound principles." Hiring decisions, discharges, and layoffs are often challenged in court. Unfortunately the employer's defenses are too often found "invalid." The Americans With Disabilities Act requires the employer to show a "validated" hiring process. Defense of discharges or layoffs often focuses on validity of the employer's decision. This article explains the elements of validity needed for sound and defendable employment decisions.

  16. Review of surface steam sterilization for validation purposes.

    PubMed

    van Doornmalen, Joost; Kopinga, Klaas

    2008-03-01

    Sterilization is an essential step in the process of producing sterile medical devices. To guarantee sterility, the process of sterilization must be validated. Because there is no direct way to measure sterility, the techniques applied to validate the sterilization process are based on statistical principles. Steam sterilization is the most frequently applied sterilization method worldwide and can be validated either by indicators (chemical or biological) or physical measurements. The steam sterilization conditions are described in the literature. Starting from these conditions, criteria for the validation of steam sterilization are derived and can be described in terms of physical parameters. Physical validation of steam sterilization appears to be an adequate and efficient validation method that could be considered as an alternative for indicator validation. Moreover, physical validation can be used for effective troubleshooting in steam sterilizing processes.
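
    To illustrate what a physical (parameter-based) criterion can look like in practice, the sketch below computes the standard F0 accumulated-lethality value from a chamber temperature profile; the formula is the conventional one (reference temperature 121.1 degrees C, z = 10 degrees C) and the one-minute readings are invented, not taken from this paper.

      # Minimal sketch of the standard F0 physical lethality calculation.
      def f0(temps_c, dt_min=1.0, t_ref=121.1, z=10.0):
          """Accumulated lethality in equivalent minutes at 121.1 degC."""
          return sum(10 ** ((t - t_ref) / z) * dt_min for t in temps_c)

      plateau = [118.0, 120.5, 121.3, 121.4, 121.2, 121.3, 121.1, 120.9]
      print(f"F0 = {f0(plateau):.1f} min")   # compare against the cycle's acceptance criterion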

  17. Validation of the Evidence-Based Practice Process Assessment Scale

    ERIC Educational Resources Information Center

    Rubin, Allen; Parrish, Danielle E.

    2011-01-01

    Objective: This report describes the reliability, validity, and sensitivity of a scale that assesses practitioners' perceived familiarity with, attitudes of, and implementation of the evidence-based practice (EBP) process. Method: Social work practitioners and second-year master of social works (MSW) students (N = 511) were surveyed in four sites…

  18. Recommendations for elaboration, transcultural adaptation and validation process of tests in Speech, Hearing and Language Pathology.

    PubMed

    Pernambuco, Leandro; Espelt, Albert; Magalhães, Hipólito Virgílio; Lima, Kenio Costa de

    2017-06-08

    To present a guide with recommendations for translation, adaptation, elaboration and process of validation of tests in Speech and Language Pathology. The recommendations were based on international guidelines with a focus on the elaboration, translation, cross-cultural adaptation and validation process of tests. The recommendations were grouped into two charts, one of them with procedures for translation and transcultural adaptation and the other for obtaining evidence of validity, reliability and measures of accuracy of the tests. A guide with norms for the organization and systematization of the process of elaboration, translation, cross-cultural adaptation and validation process of tests in Speech and Language Pathology was created.

  19. Applying Kane's Validity Framework to a Simulation Based Assessment of Clinical Competence

    ERIC Educational Resources Information Center

    Tavares, Walter; Brydges, Ryan; Myre, Paul; Prpic, Jason; Turner, Linda; Yelle, Richard; Huiskamp, Maud

    2018-01-01

    Assessment of clinical competence is complex and inference based. Trustworthy and defensible assessment processes must have favourable evidence of validity, particularly where decisions are considered high stakes. We aimed to organize, collect and interpret validity evidence for a high stakes simulation based assessment strategy for certifying…

  20. Development and Validation of the Evidence-Based Practice Process Assessment Scale: Preliminary Findings

    ERIC Educational Resources Information Center

    Rubin, Allen; Parrish, Danielle E.

    2010-01-01

    Objective: This report describes the development and preliminary findings regarding the reliability, validity, and sensitivity of a scale that has been developed to assess practitioners' perceived familiarity with, attitudes about, and implementation of the phases of the evidence-based practice (EBP) process. Method: After a panel of national…

  1. Opportunity integrated assessment facilitating critical thinking and science process skills measurement on acid base matter

    NASA Astrophysics Data System (ADS)

    Sari, Anggi Ristiyana Puspita; Suyanta, LFX, Endang Widjajanti; Rohaeti, Eli

    2017-05-01

    Recognizing the importance of the development of critical thinking and science process skills, the instrument should give attention to the characteristics of chemistry. Therefore, constructing an accurate instrument for measuring those skills is important. However, integrated assessment instruments of this kind are limited in number. The purpose of this study is to validate an integrated assessment instrument for measuring students' critical thinking and science process skills on acid base matter. The development of the test instrument adapted the McIntire model. The sample consisted of 392 second grade high school students in the academic year of 2015/2016 in Yogyakarta. Exploratory Factor Analysis (EFA) was conducted to explore construct validity, whereas content validity was substantiated by Aiken's formula. The results show that the KMO statistic is 0.714, which indicates sufficient items for each factor, and the Bartlett test is significant (significance value of less than 0.05). Furthermore, the content validity coefficient, based on 8 experts, is 0.85. The findings support the use of the integrated assessment instrument to measure critical thinking and science process skills on acid base matter.
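
    Aiken's formula mentioned above reduces to V = sum(s_i) / (n * (c - 1)), where s_i is each rater's score minus the lowest scale point, n is the number of raters, and c is the number of scale categories. The sketch below uses invented ratings from 8 raters on a 5-point scale, not the study's data.

      # Minimal sketch of Aiken's V content validity coefficient.
      def aikens_v(ratings, lo=1, hi=5):
          n, categories = len(ratings), hi - lo + 1
          return sum(r - lo for r in ratings) / (n * (categories - 1))

      expert_ratings = [5, 4, 4, 5, 4, 5, 4, 4]
      print(f"Aiken's V = {aikens_v(expert_ratings):.2f}")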

  2. Quantification of construction waste prevented by BIM-based design validation: Case studies in South Korea.

    PubMed

    Won, Jongsung; Cheng, Jack C P; Lee, Ghang

    2016-03-01

    Waste generated in construction and demolition processes comprised around 50% of the solid waste in South Korea in 2013. Many cases show that design validation based on building information modeling (BIM) is an effective means to reduce the amount of construction waste since construction waste is mainly generated due to improper design and unexpected changes in the design and construction phases. However, the amount of construction waste that could be avoided by adopting BIM-based design validation has been unknown. This paper aims to estimate the amount of construction waste prevented by a BIM-based design validation process based on the amount of construction waste that might be generated due to design errors. Two project cases in South Korea were studied in this paper, with 381 and 136 design errors detected, respectively during the BIM-based design validation. Each design error was categorized according to its cause and the likelihood of detection before construction. The case studies show that BIM-based design validation could prevent 4.3-15.2% of construction waste that might have been generated without using BIM. Copyright © 2015 Elsevier Ltd. All rights reserved.

  3. Issues in developing valid assessments of speech pathology students' performance in the workplace.

    PubMed

    McAllister, Sue; Lincoln, Michelle; Ferguson, Alison; McAllister, Lindy

    2010-01-01

    Workplace-based learning is a critical component of professional preparation in speech pathology. A validated assessment of this learning is seen to be 'the gold standard', but it is difficult to develop because of design and validation issues. These issues include the role and nature of judgement in assessment, challenges in measuring quality, and the relationship between assessment and learning. Valid assessment of workplace-based performance needs to capture the development of competence over time and account for both occupation specific and generic competencies. This paper reviews important conceptual issues in the design of valid and reliable workplace-based assessments of competence including assessment content, process, impact on learning, measurement issues, and validation strategies. It then goes on to share what has been learned about quality assessment and validation of a workplace-based performance assessment using competency-based ratings. The outcomes of a four-year national development and validation of an assessment tool are described. A literature review of issues in conceptualizing, designing, and validating workplace-based assessments was conducted. Key factors to consider in the design of a new tool were identified and built into the cycle of design, trialling, and data analysis in the validation stages of the development process. This paper provides an accessible overview of factors to consider in the design and validation of workplace-based assessment tools. It presents strategies used in the development and national validation of a tool, COMPASS, used in every speech pathology programme in Australia, New Zealand, and Singapore. The paper also describes Rasch analysis, a model-based statistical approach which is useful for establishing validity and reliability of assessment tools. Through careful attention to conceptual and design issues in the development and trialling of workplace-based assessments, it has been possible to develop the world's first valid and reliable national assessment tool for the assessment of performance in speech pathology.

  4. The Validation of a Case-Based, Cumulative Assessment and Progressions Examination

    PubMed Central

    Coker, Adeola O.; Copeland, Jeffrey T.; Gottlieb, Helmut B.; Horlen, Cheryl; Smith, Helen E.; Urteaga, Elizabeth M.; Ramsinghani, Sushma; Zertuche, Alejandra; Maize, David

    2016-01-01

    Objective. To assess content and criterion validity, as well as reliability of an internally developed, case-based, cumulative, high-stakes third-year Annual Student Assessment and Progression Examination (P3 ASAP Exam). Methods. Content validity was assessed through the writing-reviewing process. Criterion validity was assessed by comparing student scores on the P3 ASAP Exam with the nationally validated Pharmacy Curriculum Outcomes Assessment (PCOA). Reliability was assessed with psychometric analysis comparing student performance over four years. Results. The P3 ASAP Exam showed content validity through representation of didactic courses and professional outcomes. Similar scores on the P3 ASAP Exam and PCOA with Pearson correlation coefficient established criterion validity. Consistent student performance using Kuder-Richardson coefficient (KR-20) since 2012 reflected reliability of the examination. Conclusion. Pharmacy schools can implement internally developed, high-stakes, cumulative progression examinations that are valid and reliable using a robust writing-reviewing process and psychometric analyses. PMID:26941435

  5. Refinement and Further Validation of the Decisional Process Inventory.

    ERIC Educational Resources Information Center

    Hartung, Paul J.; Marco, Cynthia D.

    1998-01-01

    The Decisional Process Inventory is a Gestalt theory-based measure of career decision-making and level of career indecision. Results from a sample of 183 undergraduates supported its content, construct, and concurrent validity. (SK)

  6. When fast logic meets slow belief: Evidence for a parallel-processing model of belief bias.

    PubMed

    Trippas, Dries; Thompson, Valerie A; Handley, Simon J

    2017-05-01

    Two experiments pitted the default-interventionist account of belief bias against a parallel-processing model. According to the former, belief bias occurs because a fast, belief-based evaluation of the conclusion pre-empts a working-memory demanding logical analysis. In contrast, according to the latter both belief-based and logic-based responding occur in parallel. Participants were given deductive reasoning problems of variable complexity and instructed to decide whether the conclusion was valid on half the trials or to decide whether the conclusion was believable on the other half. When belief and logic conflict, the default-interventionist view predicts that it should take less time to respond on the basis of belief than logic, and that the believability of a conclusion should interfere with judgments of validity, but not the reverse. The parallel-processing view predicts that beliefs should interfere with logic judgments only if the processing required to evaluate the logical structure exceeds that required to evaluate the knowledge necessary to make a belief-based judgment, and vice versa otherwise. Consistent with this latter view, for the simplest reasoning problems (modus ponens), judgments of belief resulted in lower accuracy than judgments of validity, and believability interfered more with judgments of validity than the converse. For problems of moderate complexity (modus tollens and single-model syllogisms), the interference was symmetrical, in that validity interfered with belief judgments to the same degree that believability interfered with validity judgments. For the most complex (three-term multiple-model syllogisms), conclusion believability interfered more with judgments of validity than vice versa, in spite of the significant interference from conclusion validity on judgments of belief.

  7. Examining Evidence for the Validity of PISA Learning Strategy Scales Based on Student Response Processes

    ERIC Educational Resources Information Center

    Hopfenbeck, Therese N.; Maul, Andrew

    2011-01-01

    The aim of this study was to investigate response-process based evidence for the validity of the Programme for International Student Assessment's (PISA) self-report questionnaire scales as measures of specific psychological constructs, with a focus on scales meant to measure inclination toward specific learning strategies. Cognitive interviews (N…

  8. Validation of Student and Parent Reported Data on the Basic Grant Application Form: Pre-Award Validation Analysis Study. Revised Final Report.

    ERIC Educational Resources Information Center

    Applied Management Sciences, Inc., Silver Spring, MD.

    The 1978-1979 pre-award institution validation process for the Basic Educational Opportunity Grant (BEOG) program was studied, based on applicant and grant recipient files as of the end of February 1979. The objective was to assess the impact of the validation process on the proper award of BEOGs, and to determine whether the criteria for…

  9. Validation of the Vanderbilt Holistic Face Processing Test.

    PubMed

    Wang, Chao-Chih; Ross, David A; Gauthier, Isabel; Richler, Jennifer J

    2016-01-01

    The Vanderbilt Holistic Face Processing Test (VHPT-F) is a new measure of holistic face processing with better psychometric properties relative to prior measures developed for group studies (Richler et al., 2014). In fields where psychologists study individual differences, validation studies are commonplace and the concurrent validity of a new measure is established by comparing it to an older measure with established validity. We follow this approach and test whether the VHPT-F measures the same construct as the composite task, which is a group-based measure at the center of the large literature on holistic face processing. In Experiment 1, we found a significant correlation between holistic processing measured in the VHPT-F and the composite task. Although this correlation was small, it was comparable to the correlation between holistic processing measured in the composite task with the same faces, but different target parts (top or bottom), which represents a reasonable upper limit for correlations between the composite task and another measure of holistic processing. These results confirm the validity of the VHPT-F by demonstrating shared variance with another measure of holistic processing based on the same operational definition. These results were replicated in Experiment 2, but only when the demographic profile of our sample matched that of Experiment 1.

  10. Validation of the Vanderbilt Holistic Face Processing Test

    PubMed Central

    Wang, Chao-Chih; Ross, David A.; Gauthier, Isabel; Richler, Jennifer J.

    2016-01-01

    The Vanderbilt Holistic Face Processing Test (VHPT-F) is a new measure of holistic face processing with better psychometric properties relative to prior measures developed for group studies (Richler et al., 2014). In fields where psychologists study individual differences, validation studies are commonplace and the concurrent validity of a new measure is established by comparing it to an older measure with established validity. We follow this approach and test whether the VHPT-F measures the same construct as the composite task, which is a group-based measure at the center of the large literature on holistic face processing. In Experiment 1, we found a significant correlation between holistic processing measured in the VHPT-F and the composite task. Although this correlation was small, it was comparable to the correlation between holistic processing measured in the composite task with the same faces, but different target parts (top or bottom), which represents a reasonable upper limit for correlations between the composite task and another measure of holistic processing. These results confirm the validity of the VHPT-F by demonstrating shared variance with another measure of holistic processing based on the same operational definition. These results were replicated in Experiment 2, but only when the demographic profile of our sample matched that of Experiment 1. PMID:27933014

  11. Model-Based Verification and Validation of the SMAP Uplink Processes

    NASA Technical Reports Server (NTRS)

    Khan, M. Omair; Dubos, Gregory F.; Tirona, Joseph; Standley, Shaun

    2013-01-01

    This case study stands as an example of how a project can validate a system-level design earlier in the project life cycle than traditional V&V processes by using simulation on a system model. Specifically, this paper describes how simulation was added to a system model of the Soil Moisture Active-Passive (SMAP) mission's uplink process. Also discussed are the advantages and disadvantages of the methods employed and the lessons learned, which are intended to benefit future model-based and simulation-based V&V development efforts.

  12. Validity of Scientific Based Chemistry Android Module to Empower Science Process Skills (SPS) in Solubility Equilibrium

    NASA Astrophysics Data System (ADS)

    Antrakusuma, B.; Masykuri, M.; Ulfa, M.

    2018-04-01

    The evolution of Android technology can be applied to chemistry learning; one of the complex chemistry concepts is solubility equilibrium, and this concept requires science process skills (SPS). This study aims to: 1) characterize a scientific-based chemistry Android module for empowering SPS, and 2) establish the validity of the module based on content validity and a feasibility test. This research uses a Research and Development approach (RnD). Research subjects were 135 students and three teachers at three high schools in Boyolali, Central Java. Content validity of the module was tested by seven experts using Aiken's V technique, and the module feasibility was tested with students and teachers in each school. The chemistry module can be accessed using an Android device. Validation of the module contents yielded V = 0.89 (valid), and the feasibility test obtained 81.63% (students) and 73.98% (teachers), indicating that the module meets good criteria.

  13. [Computerized system validation of clinical researches].

    PubMed

    Yan, Charles; Chen, Feng; Xia, Jia-lai; Zheng, Qing-shan; Liu, Daniel

    2015-11-01

    Validation is a documented process that provides a high degree of assurance that the computer system does exactly and consistently what it is designed to do in a controlled manner throughout its life. The validation process begins with the system proposal/requirements definition, and continues through application and maintenance until system retirement and retention of the e-records, based on regulatory rules. The objective is to clearly specify that each application of information technology fulfills its purpose. Computer system validation (CSV) is essential in clinical studies according to the GCP standard, meeting the product's pre-determined attributes of specifications, quality, safety and traceability. This paper describes how to perform the validation process and determine relevant stakeholders within an organization in the light of validation SOPs. Although specific accountability in the implementation of the validation process might be outsourced, the ultimate responsibility for CSV remains with the business process owner (the sponsor). In order to show that compliance of the system validation has been properly attained, it is essential to set up comprehensive validation procedures and maintain adequate documentation as well as training records. The quality of the system validation should be controlled using both QC and QA means.

  14. Comparing Thermal Process Validation Methods for Salmonella Inactivation on Almond Kernels.

    PubMed

    Jeong, Sanghyup; Marks, Bradley P; James, Michael K

    2017-01-01

    Ongoing regulatory changes are increasing the need for reliable process validation methods for pathogen reduction processes involving low-moisture products; however, the reliability of various validation methods has not been evaluated. Therefore, the objective was to quantify accuracy and repeatability of four validation methods (two biologically based and two based on time-temperature models) for thermal pasteurization of almonds. Almond kernels were inoculated with Salmonella Enteritidis phage type 30 or Enterococcus faecium (NRRL B-2354) at ~10^8 CFU/g, equilibrated to 0.24, 0.45, 0.58, or 0.78 water activity (a_w), and then heated in a pilot-scale, moist-air impingement oven (dry bulb 121, 149, or 177°C; dew point <33.0, 69.4, 81.6, or 90.6°C; v_air = 2.7 m/s) to a target lethality of ~4 log. Almond surface temperatures were measured in two ways, and those temperatures were used to calculate Salmonella inactivation using a traditional (D, z) model and a modified model accounting for process humidity. Among the process validation methods, both methods based on time-temperature models had better repeatability, with replication errors approximately half those of the surrogate (E. faecium). Additionally, the modified model yielded the lowest root mean squared error in predicting Salmonella inactivation (1.1 to 1.5 log CFU/g); in contrast, E. faecium yielded a root mean squared error of 1.2 to 1.6 log CFU/g, and the traditional model yielded an unacceptably high error (3.4 to 4.4 log CFU/g). Importantly, the surrogate and modified model both yielded lethality predictions that were statistically equivalent (α = 0.05) to actual Salmonella lethality. The results demonstrate the importance of methodology, a_w, and process humidity when validating thermal pasteurization processes for low-moisture foods, which should help processors select and interpret validation methods to ensure product safety.
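
    The traditional (D, z) calculation the authors compare against accumulates log reductions over the measured time-temperature history, with the D-value adjusted for temperature through the z-value. The sketch below uses invented D_ref, z, and profile values, not the Salmonella parameters from the study, and, like the traditional model, it ignores process humidity.

      # Minimal sketch of a traditional (D, z) log-reduction calculation.
      def log_reduction(profile, d_ref_min, z_c, t_ref_c=80.0):
          """profile: list of (duration_min, temperature_C) segments."""
          total = 0.0
          for dt, temp in profile:
              d_t = d_ref_min * 10 ** ((t_ref_c - temp) / z_c)   # D-value at this temperature
              total += dt / d_t
          return total

      history = [(0.5, 70.0), (1.0, 78.0), (2.0, 85.0), (2.0, 88.0)]
      print(f"Predicted reduction: {log_reduction(history, d_ref_min=2.0, z_c=15.0):.1f} log")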

  15. Toward a New Process-Based Indicator for Measuring Writing Fluency: Evidence from L2 Writers' Think-Aloud Protocols

    ERIC Educational Resources Information Center

    Abdel Latif, Muhammad M.

    2009-01-01

    This article reports on a study aimed at testing the hypothesis that, because of strategic and temporal variables, composing rate and text quantity may not be valid measures of writing fluency. A second objective was to validate the mean length of writers' translating episodes as a process-based indicator that mirrors their fluent written…

  16. Exploring Validity of Computer-Based Test Scores with Examinees' Response Behaviors and Response Times

    ERIC Educational Resources Information Center

    Sahin, Füsun

    2017-01-01

    Examining the testing processes, as well as the scores, is needed for a complete understanding of validity and fairness of computer-based assessments. Examinees' rapid-guessing and insufficient familiarity with computers have been found to be major issues that weaken the validity arguments of scores. This study has three goals: (a) improving…

  17. Global approach for the validation of an in-line Raman spectroscopic method to determine the API content in real-time during a hot-melt extrusion process.

    PubMed

    Netchacovitch, L; Thiry, J; De Bleye, C; Dumont, E; Cailletaud, J; Sacré, P-Y; Evrard, B; Hubert, Ph; Ziemons, E

    2017-08-15

    Since the Food and Drug Administration (FDA) published a guidance based on the Process Analytical Technology (PAT) approach, real-time analyses during manufacturing processes have been expanding rapidly. In this study, in-line Raman spectroscopic analyses were performed during a Hot-Melt Extrusion (HME) process to determine the Active Pharmaceutical Ingredient (API) content in real-time. The method was validated based on a univariate and a multivariate approach and the analytical performances of the obtained models were compared. Moreover, on one hand, in-line data were correlated with the real API concentration present in the sample quantified by a previously validated off-line confocal Raman microspectroscopic method. On the other hand, in-line data were also treated as a function of the concentration based on the weighing of the components in the prepared mixture. The importance of developing quantitative methods based on the use of a reference method was thus highlighted. The method was validated according to the total error approach, fixing the acceptance limits at ±15% and the α risk at 5%. This method meets the requirements of the European Pharmacopeia norms for the uniformity of content of single-dose preparations. The validation proves that future results will be within the acceptance limits with a previously defined probability. Finally, the in-line validated method was compared with the off-line one to demonstrate its ability to be used in routine analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
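
    In the total error (accuracy profile) approach referred to above, the beta-expectation tolerance interval of the relative error at each concentration level must fall inside the acceptance limits. The simplified sketch below ignores the between-series variance components a full validation would model; the back-calculated concentrations and the nominal value are invented.

      # Minimal sketch: does the 95% beta-expectation interval of relative error
      # stay within +/-15% acceptance limits at one concentration level?
      import numpy as np
      from scipy.stats import t

      def accuracy_profile_ok(measured, nominal, limit=0.15, beta=0.95):
          rel_err = np.asarray(measured, dtype=float) / nominal - 1.0
          n, mean, sd = len(rel_err), rel_err.mean(), rel_err.std(ddof=1)
          half_width = t.ppf((1 + beta) / 2, n - 1) * sd * np.sqrt(1 + 1 / n)
          return (mean - half_width > -limit) and (mean + half_width < limit)

      print(accuracy_profile_ok([10.2, 9.8, 10.1, 9.7, 10.4, 9.9], nominal=10.0))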

  18. USING CFD TO ANALYZE NUCLEAR SYSTEMS BEHAVIOR: DEFINING THE VALIDATION REQUIREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard Schultz

    2012-09-01

    A recommended protocol to formulate numeric tool specifications and validation needs in concert with practices accepted by regulatory agencies for advanced reactors is described. The protocol is based on the plant type and perceived transient and accident envelopes that translate to boundary conditions for a process that gives: (a) the key phenomena and figures-of-merit which must be analyzed to ensure that the advanced plant can be licensed, (b) specification of the numeric tool capabilities necessary to perform the required analyses, including bounding calculational uncertainties, and (c) specification of the validation matrices and experiments, including the desired validation data. The result of applying the process enables a complete program to be defined, including costs, for creating and benchmarking transient and accident analysis methods for advanced reactors. By following a process that is in concert with regulatory agency licensing requirements from start to finish, based on historical acceptance of past licensing submittals, the methods derived and validated have a high probability of regulatory agency acceptance.

  19. Web Based Semi-automatic Scientific Validation of Models of the Corona and Inner Heliosphere

    NASA Astrophysics Data System (ADS)

    MacNeice, P. J.; Chulaki, A.; Taktakishvili, A.; Kuznetsova, M. M.

    2013-12-01

    Validation is a critical step in preparing models of the corona and inner heliosphere for future roles supporting either or both the scientific research community and the operational space weather forecasting community. Validation of forecasting quality tends to focus on a short list of key features in the model solutions, with an unchanging order of priority. Scientific validation exposes a much larger range of physical processes and features, and as the models evolve to better represent features of interest, the research community tends to shift its focus to other areas which are less well understood and modeled. Given the more comprehensive and dynamic nature of scientific validation, and the limited resources available to the community to pursue this, it is imperative that the community establish a semi-automated process which engages the model developers directly into an ongoing and evolving validation process. In this presentation we describe the ongoing design and development of a web-based facility to enable this type of validation of models of the corona and inner heliosphere, on the growing list of model results being generated, and on strategies we have been developing to account for model results that incorporate adaptively refined numerical grids.

  20. An integrated assessment instrument: Developing and validating instrument for facilitating critical thinking abilities and science process skills on electrolyte and nonelectrolyte solution matter

    NASA Astrophysics Data System (ADS)

    Astuti, Sri Rejeki Dwi; Suyanta, LFX, Endang Widjajanti; Rohaeti, Eli

    2017-05-01

    The demands on assessment in the learning process have been affected by policy changes. Nowadays, assessment emphasizes not only knowledge but also skills and attitudes. However, in reality there are many obstacles to measuring them. This paper aimed to describe how to develop an integrated assessment instrument and to verify the instrument's validity, including content validity and construct validity. The instrument development used the test development model by McIntire. Development process data were acquired from the test development steps. The initial product was reviewed by three peer reviewers and six expert judges (two subject matter experts, two evaluation experts and two chemistry teachers) to establish content validity. This research involved 376 first grade students of two Senior High Schools in Bantul Regency to assess construct validity. Content validity was analyzed using Aiken's formula. Construct validity was verified by exploratory factor analysis using SPSS ver 16.0. The results show that all constructs in the integrated assessment instrument are valid in terms of both content validity and construct validity. Therefore, the integrated assessment instrument is suitable for measuring critical thinking abilities and science process skills of senior high school students on electrolyte solution matter.

  1. NASA GPM GV Science Implementation

    NASA Technical Reports Server (NTRS)

    Petersen, W. A.

    2009-01-01

    Pre-launch algorithm development & post-launch product evaluation: The GPM GV paradigm moves beyond traditional direct validation/comparison activities by incorporating improved algorithm physics & model applications (end-to-end validation) in the validation process. Three approaches: 1) National Network (surface): Operational networks to identify and resolve first order discrepancies (e.g., bias) between satellite and ground-based precipitation estimates. 2) Physical Process (vertical column): Cloud system and microphysical studies geared toward testing and refinement of physically-based retrieval algorithms. 3) Integrated (4-dimensional): Integration of satellite precipitation products into coupled prediction models to evaluate strengths/limitations of satellite precipitation products.

  2. Contemporary Test Validity in Theory and Practice: A Primer for Discipline-Based Education Researchers

    PubMed Central

    Reeves, Todd D.; Marbach-Ad, Gili

    2016-01-01

    Most discipline-based education researchers (DBERs) were formally trained in the methods of scientific disciplines such as biology, chemistry, and physics, rather than social science disciplines such as psychology and education. As a result, DBERs may have never taken specific courses in the social science research methodology—either quantitative or qualitative—on which their scholarship often relies so heavily. One particular aspect of (quantitative) social science research that differs markedly from disciplines such as biology and chemistry is the instrumentation used to quantify phenomena. In response, this Research Methods essay offers a contemporary social science perspective on test validity and the validation process. The instructional piece explores the concepts of test validity, the validation process, validity evidence, and key threats to validity. The essay also includes an in-depth example of a validity argument and validation approach for a test of student argument analysis. In addition to DBERs, this essay should benefit practitioners (e.g., lab directors, faculty members) in the development, evaluation, and/or selection of instruments for their work assessing students or evaluating pedagogical innovations. PMID:26903498

  3. Assessment of bachelor's theses in a nursing degree with a rubrics system: Development and validation study.

    PubMed

    González-Chordá, Víctor M; Mena-Tudela, Desirée; Salas-Medina, Pablo; Cervera-Gasch, Agueda; Orts-Cortés, Isabel; Maciá-Soler, Loreto

    2016-02-01

    Writing a bachelor thesis (BT) is the last step to obtain a nursing degree. In order to perform an effective assessment of a nursing BT, certain reliable and valid tools are required. To develop and validate a 3-rubric system (drafting process, dissertation, and viva) to assess final year nursing students' BT. A multi-disciplinary study of content validity and psychometric properties. The study was carried out between December 2014 and July 2015. Nursing Degree at Universitat Jaume I, Spain. Eleven experts (9 nursing professors and 2 education professors from 6 different universities) took part in the development and content validity stages. Fifty-two theses presented during the 2014-2015 academic year were included by consecutive sampling of cases in order to study the psychometric properties. First, a group of experts was created to validate the content of the assessment system based on three rubrics (drafting process, dissertation, and viva). Subsequently, a reliability and validity study of the rubrics was carried out on the 52 theses presented during the 2014-2015 academic year. The BT drafting process rubric has 8 criteria (S-CVI=0.93; α=0.837; ICC=0.614), the dissertation rubric has 7 criteria (S-CVI=0.9; α=0.893; ICC=0.74), and the viva rubric has 4 criteria (S-CVI=0.86; α=0.816; ICC=0.895). A nursing BT assessment system based on three rubrics (drafting process, dissertation, and viva) has been validated. This system may be transferred to other nursing degrees or degrees from other academic areas. It is necessary to continue with the validation process taking into account factors that may affect the results obtained. Copyright © 2015 Elsevier Ltd. All rights reserved.
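
    Cronbach's alpha, one of the reliability statistics reported for the rubrics, can be computed directly from a theses-by-criteria score matrix; the sketch below uses an invented matrix for illustration, not the study's ratings.

      # Minimal sketch of Cronbach's alpha: k/(k-1) * (1 - sum of item variances
      # / variance of total scores).
      import numpy as np

      def cronbach_alpha(scores):
          scores = np.asarray(scores, dtype=float)   # rows: theses, cols: rubric criteria
          k = scores.shape[1]
          item_var = scores.var(axis=0, ddof=1).sum()
          total_var = scores.sum(axis=1).var(ddof=1)
          return k / (k - 1) * (1 - item_var / total_var)

      ratings = [[3, 4, 3, 4], [2, 3, 3, 3], [4, 4, 4, 3], [3, 3, 2, 3], [4, 4, 4, 4]]
      print(f"alpha = {cronbach_alpha(ratings):.2f}")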

  4. You don't have to believe everything you read: background knowledge permits fast and efficient validation of information.

    PubMed

    Richter, Tobias; Schroeder, Sascha; Wöhrmann, Britta

    2009-03-01

    In social cognition, knowledge-based validation of information is usually regarded as relying on strategic and resource-demanding processes. Research on language comprehension, in contrast, suggests that validation processes are involved in the construction of a referential representation of the communicated information. This view implies that individuals can use their knowledge to validate incoming information in a routine and efficient manner. Consistent with this idea, Experiments 1 and 2 demonstrated that individuals are able to reject false assertions efficiently when they have validity-relevant beliefs. Validation processes were carried out routinely even when individuals were put under additional cognitive load during comprehension. Experiment 3 demonstrated that the rejection of false information occurs automatically and interferes with affirmative responses in a nonsemantic task (epistemic Stroop effect). Experiment 4 also revealed complementary interference effects of true information with negative responses in a nonsemantic task. These results suggest the existence of fast and efficient validation processes that protect mental representations from being contaminated by false and inaccurate information.

  5. Where Public Health Meets Human Rights: Integrating Human Rights into the Validation of the Elimination of Mother-to-Child Transmission of HIV and Syphilis.

    PubMed

    Kismödi, Eszter; Kiragu, Karusa; Sawicki, Olga; Smith, Sally; Brion, Sophie; Sharma, Aditi; Mworeko, Lilian; Iovita, Alexandrina

    2017-12-01

    In 2014, the World Health Organization (WHO) initiated a process for validation of the elimination of mother-to-child transmission (EMTCT) of HIV and syphilis by countries. For the first time in such a process for the validation of disease elimination, WHO introduced norms and approaches that are grounded in human rights, gender equality, and community engagement. This human rights-based validation process can serve as a key opportunity to enhance accountability for human rights protection by evaluating EMTCT programs against human rights norms and standards, including in relation to gender equality and by ensuring the provision of discrimination-free quality services. The rights-based validation process also involves the assessment of participation of affected communities in EMTCT program development, implementation, and monitoring and evaluation. It brings awareness to the types of human rights abuses and inequalities faced by women living with, at risk of, or affected by HIV and syphilis, and commits governments to eliminate those barriers. This process demonstrates the importance and feasibility of integrating human rights, gender, and community into key public health interventions in a manner that improves health outcomes, legitimizes the participation of affected communities, and advances the human rights of women living with HIV.

  6. Development and Validation of a Theory Based Screening Process for Suicide Risk

    DTIC Science & Technology

    2015-09-01

    ... not be delayed until all data have been collected. This is with particular respect to our data confirming that soldiers under-report suicide ideation, and that while they say that they would inform loved ones about suicidal thoughts, over 50% of soldiers who endorse ideation have not told anyone. (Award Number: W81XWH-11-1-0588)

  7. Design for validation: An approach to systems validation

    NASA Technical Reports Server (NTRS)

    Carter, William C.; Dunham, Janet R.; Laprie, Jean-Claude; Williams, Thomas; Howden, William; Smith, Brian; Lewis, Carl M. (Editor)

    1989-01-01

    Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of ad hoc methods by many individuals. As the cost of the changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in the future digital-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformation are presented. Basic working definitions of two pivotal ideas (validation and system life-cycle) are provided and show how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.

  8. An Internationally Consented Standard for Nursing Process-Clinical Decision Support Systems in Electronic Health Records.

    PubMed

    Müller-Staub, Maria; de Graaf-Waar, Helen; Paans, Wolter

    2016-11-01

    Nurses are accountable to apply the nursing process, which is key for patient care: it is a problem-solving process providing the structure for care plans and documentation. The state-of-the-art nursing process is based on classifications that contain standardized concepts, and therefore it is named the Advanced Nursing Process. It contains valid assessments, nursing diagnoses, interventions, and nursing-sensitive patient outcomes. Electronic decision support systems can assist nurses to apply the Advanced Nursing Process. However, nursing decision support systems are missing, and no "gold standard" is available. The study aim was to develop a valid Nursing Process-Clinical Decision Support System Standard to guide future developments of clinical decision support systems. In a multistep approach, a Nursing Process-Clinical Decision Support System Standard with 28 criteria was developed. After pilot testing (N = 29 nurses), the criteria were reduced to 25. The Nursing Process-Clinical Decision Support System Standard was then presented to eight internationally known experts, who performed qualitative interviews according to Mayring. Fourteen categories demonstrate expert consensus on the Nursing Process-Clinical Decision Support System Standard and its content validity. All experts agreed the Advanced Nursing Process should be the centerpiece for the Nursing Process-Clinical Decision Support System and should suggest research-based, predefined nursing diagnoses and correct linkages between diagnoses, evidence-based interventions, and patient outcomes.

  9. Creation of a Tool for Assessing Knowledge in Evidence-Based Decision-Making in Practicing Health Care Providers.

    PubMed

    Spurr, Kathy; Dechman, Gail; Lackie, Kelly; Gilbert, Robert

    2016-01-01

    Evidence-based decision-making (EBDM) is the process health care providers (HCPs) use to identify and appraise potential evidence. It supports the integration of best research evidence with clinical expertise and patient values into the decision-making process for patient care. Competence in this process is essential to delivery of optimal care. There is no objective tool that assesses EBDM across HCP groups. This research aimed to develop a content-valid tool to assess knowledge of the principles of evidence-based medicine and the EBDM process for use with all HCPs. A Delphi process was used in the creation of the tool. Pilot testing established its content validity with the added benefit of evaluating HCPs' knowledge of EBDM. Descriptive statistics and multivariate mixed models were used to evaluate individual survey responses in total, as well as within each EBDM component. The tool consisted of 26 multiple-choice questions. A total of 12,884 HCPs in Nova Scotia were invited to participate in the web-based validation study, yielding 818 (6.3%) participants, 471 of whom completed all questions. The mean overall score was 68%. Knowledge in one component, integration of evidence with clinical expertise and patient preferences, was identified as needing development across all HCPs surveyed. A content-valid tool for assessing HCP EBDM knowledge was created and can be used to support the development of continuing education programs to enhance EBDM competency.

  10. Application of process mining to assess the data quality of routinely collected time-based performance data sourced from electronic health records by validating process conformance.

    PubMed

    Perimal-Lewis, Lua; Teubner, David; Hakendorf, Paul; Horwood, Chris

    2016-12-01

    Effective and accurate use of routinely collected health data to produce Key Performance Indicator reporting is dependent on the underlying data quality. In this research, Process Mining methodology and tools were leveraged to assess the data quality of time-based Emergency Department data sourced from electronic health records. This research was done working closely with the domain experts to validate the process models. The hospital patient journey model was used to assess flow abnormalities which resulted from incorrect timestamp data used in time-based performance metrics. The research demonstrated process mining as a feasible methodology to assess data quality of time-based hospital performance metrics. The insight gained from this research enabled appropriate corrective actions to be put in place to address the data quality issues. © The Author(s) 2015.
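
    The conformance-checking step described above can be reproduced with open-source process mining tooling. Below is a minimal sketch using the pm4py library; the event log file name is hypothetical, and the paper does not name its tooling, so this illustrates the general technique rather than the authors' exact pipeline.

```python
# A minimal conformance-checking sketch with the open-source pm4py library
# (pip install pm4py). The file name and discovered model are hypothetical.
import pm4py

# Load an event log of ED patient journeys exported from the EHR system.
log = pm4py.read_xes("ed_patient_journeys.xes")

# Discover a Petri-net model of the expected patient journey.
net, initial_marking, final_marking = pm4py.discover_petri_net_inductive(log)

# Token-based replay flags traces whose event order cannot be replayed on the
# model, e.g. a disposition timestamp recorded before triage.
diagnostics = pm4py.conformance_diagnostics_token_based_replay(
    log, net, initial_marking, final_marking)
non_conforming = [d for d in diagnostics if not d["trace_is_fit"]]
print(f"{len(non_conforming)} of {len(diagnostics)} traces deviate from the model")
```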

  11. Implementing the Science Assessment Standards: Developing and validating a set of laboratory assessment tasks in high school biology

    NASA Astrophysics Data System (ADS)

    Saha, Gouranga Chandra

    Very often a number of factors, especially time, space, and money, deter many science educators from using inquiry-based, hands-on, laboratory practical tasks as alternative assessment instruments in science. A shortage of valid inquiry-based laboratory tasks for high school biology has been cited. Driven by this need, this study addressed the following three research questions: (1) How can laboratory-based performance tasks be designed and developed that are doable by students for whom they are designed/written? (2) Do student responses to the laboratory-based performance tasks validly represent at least some of the intended process skills that new biology learning goals want students to acquire? (3) Are the laboratory-based performance tasks psychometrically consistent as individual tasks and as a set? To answer these questions, three tasks were used from the six biology tasks initially designed and developed by an iterative process of trial testing. Analyses of data from 224 students showed that performance-based laboratory tasks that are doable by all students require a careful and iterative process of development. Although the students demonstrated more skill in performing than planning and reasoning, their performances at the item level were very poor for some items. Possible reasons for the poor performances are discussed, and suggestions on how to remediate the deficiencies are made. Empirical evidence for the validity and reliability of the instrument is presented from both the classical and the modern validity criteria points of view. Limitations of the study are identified. Finally, implications of the study and directions for further research are discussed.

  12. Reliability and validity of the revised Gibson Test of Cognitive Skills, a computer-based test battery for assessing cognition across the lifespan.

    PubMed

    Moore, Amy Lawson; Miller, Terissa M

    2018-01-01

    The purpose of the current study is to evaluate the validity and reliability of the revised Gibson Test of Cognitive Skills, a computer-based battery of tests measuring short-term memory, long-term memory, processing speed, logic and reasoning, visual processing, as well as auditory processing and word attack skills. This study included 2,737 participants aged 5-85 years. A series of studies was conducted to examine the validity and reliability using the test performance of the entire norming group and several subgroups. The evaluation of the technical properties of the test battery included content validation by subject matter experts, item analysis and coefficient alpha, test-retest reliability, split-half reliability, and analysis of concurrent validity with the Woodcock Johnson III Tests of Cognitive Abilities and Tests of Achievement. Results indicated strong sources of evidence of validity and reliability for the test, including internal consistency reliability coefficients ranging from 0.87 to 0.98, test-retest reliability coefficients ranging from 0.69 to 0.91, split-half reliability coefficients ranging from 0.87 to 0.91, and concurrent validity coefficients ranging from 0.53 to 0.93. The Gibson Test of Cognitive Skills-2 is a reliable and valid tool for assessing cognition in the general population across the lifespan.
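
    For readers unfamiliar with the internal consistency figures quoted above, coefficient alpha is straightforward to compute from an item-score matrix. The sketch below uses simulated data purely for illustration; it is not the Gibson test data.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents, k_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

# Simulated scores for 100 test-takers on a 20-item subtest (illustration only).
rng = np.random.default_rng(0)
ability = rng.normal(size=(100, 1))
scores = ability + rng.normal(scale=0.5, size=(100, 20))
print(f"alpha = {cronbach_alpha(scores):.2f}")
```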

  13. Validation of the process criteria for assessment of a hospital nursing service.

    PubMed

    Feldman, Liliane Bauer; Cunha, Isabel Cristina Kowal Olm; D'Innocenzo, Maria

    2013-01-01

    to validate an instrument containing process criteria for assessment of a hospital nursing service based on the National Accreditation Organization program. A descriptive, quantitative methodological study performed in stages. An instrument constructed with 69 process criteria was assessed by 49 nurses from accredited hospitals in 2009, according to a Likert scale, and validated by 16 judges through Delphi rounds in 2010. The original instrument with 69 process criteria, assessed by the nurses and judged by degree of importance, was reduced to 39 criteria. In the first Delphi round, the 39 criteria reached consensus among the judges, with medium reliability by Cronbach's alpha. In the second round, 40 converging criteria were validated by 16 judges, with high reliability. The criteria addressed management, costs, teaching, education, indicators, protocols, human resources, and communication, among others. The 40 process criteria formed a validated instrument to assess the hospital nursing service which, when measured, can better direct interventions by nurses in reaching and strengthening outcomes.

  14. Where Public Health Meets Human Rights

    PubMed Central

    Kiragu, Karusa; Sawicki, Olga; Smith, Sally; Brion, Sophie; Sharma, Aditi; Mworeko, Lilian; Iovita, Alexandrina

    2017-01-01

    In 2014, the World Health Organization (WHO) initiated a process for validation of the elimination of mother-to-child transmission (EMTCT) of HIV and syphilis by countries. For the first time in such a process for the validation of disease elimination, WHO introduced norms and approaches that are grounded in human rights, gender equality, and community engagement. This human rights-based validation process can serve as a key opportunity to enhance accountability for human rights protection by evaluating EMTCT programs against human rights norms and standards, including in relation to gender equality and by ensuring the provision of discrimination-free quality services. The rights-based validation process also involves the assessment of participation of affected communities in EMTCT program development, implementation, and monitoring and evaluation. It brings awareness to the types of human rights abuses and inequalities faced by women living with, at risk of, or affected by HIV and syphilis, and commits governments to eliminate those barriers. This process demonstrates the importance and feasibility of integrating human rights, gender, and community into key public health interventions in a manner that improves health outcomes, legitimizes the participation of affected communities, and advances the human rights of women living with HIV. PMID:29302179

  15. Mashup Model and Verification Using Mashup Processing Network

    NASA Astrophysics Data System (ADS)

    Zahoor, Ehtesham; Perrin, Olivier; Godart, Claude

    Mashups are defined to be lightweight Web applications that aggregate data from different Web services, are built using ad hoc composition, and are not concerned with long-term stability and robustness. In this paper we present a pattern-based approach, called Mashup Processing Network (MPN). The idea is based on Event Processing Networks and is intended to facilitate the creation, modeling, and verification of mashups. MPN provides a view of how different actors interact in mashup development, namely the producer, consumer, mashup processing agent, and the communication channels. It also supports modeling transformations and validations of data and offers validation of both functional and non-functional requirements, such as reliable messaging and security, that are key issues within the enterprise context. We have enriched the model with a set of processing operations and categorized them into data composition, transformation, and validation categories. These processing operations can be seen as a set of patterns for facilitating the mashup development process. MPN also paves the way for realizing a Mashup Oriented Architecture, where mashups along with services are used as building blocks for application development.

  16. "La Clave Profesional": Validation of a Vocational Guidance Instrument

    ERIC Educational Resources Information Center

    Mudarra, Maria J.; Lázaro Martínez, Ángel

    2014-01-01

    Introduction: The current study demonstrates the empirical and cultural validity of "La Clave Profesional" (a Spanish adaptation of the Career Key, Jones's test based on Holland's RIASEC model). The process of providing validity evidence also includes a reflection on personal and career development and examines the relationships between RIASEC…

  17. Validation of a Videoconferenced Speaking Test

    ERIC Educational Resources Information Center

    Kim, Jungtae; Craig, Daniel A.

    2012-01-01

    Videoconferencing offers new opportunities for language testers to assess speaking ability in low-stakes diagnostic tests. To be considered a trusted testing tool in language testing, a test should be examined employing appropriate validation processes [Chapelle, C.A., Jamieson, J., & Hegelheimer, V. (2003). "Validation of a web-based ESL…

  18. Individual Differences in Base Rate Neglect: A Fuzzy Processing Preference Index

    PubMed Central

    Wolfe, Christopher R.; Fisher, Christopher R.

    2013-01-01

    Little is known about individual differences in integrating numeric base-rates and qualitative text in making probability judgments. Fuzzy-Trace Theory predicts a preference for fuzzy processing. We conducted six studies to develop the FPPI, a reliable and valid instrument assessing individual differences in this fuzzy processing preference. It consists of 19 probability estimation items plus 4 "M-Scale" items that distinguish simple pattern matching from "base rate respect." Cronbach's alpha was consistently above 0.90. Validity is suggested by significant correlations between FPPI scores and three other measures: "rule-based" Process Dissociation Procedure scores; the number of conjunction fallacies in joint probability estimation; and logic index scores on syllogistic reasoning. Replicating norms collected in a university study with a web-based study produced negligible differences in FPPI scores, indicating robustness. The predicted relationships between individual differences in base rate respect and both conjunction fallacies and syllogistic reasoning were partially replicated in two web-based studies. PMID:23935255

  19. Groundwater Model Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ahmed E. Hassan

    2006-01-24

    Models have an inherent uncertainty. The difficulty in fully characterizing the subsurface environment makes uncertainty an integral component of groundwater flow and transport models, which dictates the need for continuous monitoring and improvement. Building and sustaining confidence in closure decisions and monitoring networks based on models of subsurface conditions require developing confidence in the models through an iterative process. The definition of model validation is postulated as a confidence building and long-term iterative process (Hassan, 2004a). Model validation should be viewed as a process, not an end result. Following Hassan (2004b), an approach is proposed for the validation process of stochastic groundwater models. The approach is briefly summarized herein, and detailed analyses of acceptance criteria for stochastic realizations and of using validation data to reduce input parameter uncertainty are presented and applied to two case studies. During the validation process for stochastic models, a question arises as to the sufficiency of the number of acceptable model realizations (in terms of conformity with validation data). Using a hierarchical approach to make this determination is proposed. This approach is based on computing five measures or metrics and following a decision tree to determine if a sufficient number of realizations attain satisfactory scores regarding how they represent the field data used for calibration (old) and used for validation (new). The first two of these measures are applied to hypothetical scenarios using the first case study and assuming field data consistent with the model or significantly different from the model results. In both cases it is shown how the two measures would lead to the appropriate decision about the model performance. Standard statistical tests are used to evaluate these measures, with the results indicating they are appropriate measures for evaluating model realizations. The use of validation data to constrain model input parameters is shown for the second case study using a Bayesian approach known as Markov Chain Monte Carlo. The approach shows a great potential to be helpful in the validation process and in incorporating prior knowledge with new field data to derive posterior distributions for both model input and output.
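
    The Markov Chain Monte Carlo step described above can be illustrated with a toy example. The sketch below uses a one-parameter stand-in for a groundwater flow model and a single hypothetical validation datum; it shows only the generic random-walk Metropolis mechanics, not the case-study models from the report.

```python
import numpy as np

rng = np.random.default_rng(1)

def forward_model(log_k: float) -> float:
    """Stand-in for a groundwater flow model: predicted head as a linear
    function of log-conductivity (purely illustrative, not from the report)."""
    return 10.0 - 2.0 * log_k

observed_head, sigma = 6.5, 0.3   # one hypothetical validation datum
prior_mean, prior_sd = 1.0, 1.0   # prior belief about log-conductivity

def log_posterior(log_k: float) -> float:
    log_prior = -0.5 * ((log_k - prior_mean) / prior_sd) ** 2
    log_like = -0.5 * ((observed_head - forward_model(log_k)) / sigma) ** 2
    return log_prior + log_like

# Random-walk Metropolis: propose a step, accept with the usual log ratio.
samples, current = [], prior_mean
for _ in range(20000):
    proposal = current + rng.normal(scale=0.2)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(current):
        current = proposal
    samples.append(current)

post = np.array(samples[5000:])   # discard burn-in
print(f"posterior log-K: {post.mean():.2f} +/- {post.std():.2f}")
```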

  20. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.

  1. A Metric-Based Validation Process to Assess the Realism of Synthetic Power Grids

    DOE PAGES

    Birchfield, Adam; Schweitzer, Eran; Athari, Mir; ...

    2017-08-19

    Public power system test cases that are of high quality benefit the power systems research community with expanded resources for testing, demonstrating, and cross-validating new innovations. Building synthetic grid models for this purpose is a relatively new problem, for which a challenge is to show that created cases are sufficiently realistic. This paper puts forth a validation process based on a set of metrics observed from actual power system cases. These metrics follow the structure, proportions, and parameters of key power system elements, which can be used in assessing and validating the quality of synthetic power grids. Though wide diversity exists in the characteristics of power systems, the paper focuses on an initial set of common quantitative metrics to capture the distribution of typical values from real power systems. The process is applied to two new public test cases, which are shown to meet the criteria specified in the metrics of this paper.
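
    In code, the metric-based screening such a process implies reduces to checking each observed statistic of a synthetic case against ranges derived from actual systems. The sketch below is schematic: the metric names and ranges are invented placeholders, not the criteria from the paper.

```python
# Hypothetical metric ranges distilled from actual power system cases.
REALISM_CRITERIA = {
    "generator_capacity_factor": (0.3, 0.7),
    "transformer_branch_fraction": (0.1, 0.4),
    "mean_nodal_degree": (2.0, 3.5),
}

def validate_case(metrics: dict) -> dict:
    """Flag each observed metric as inside or outside its expected range."""
    return {name: lo <= metrics[name] <= hi
            for name, (lo, hi) in REALISM_CRITERIA.items()}

# Metrics computed from a candidate synthetic case (illustrative values).
synthetic_case = {"generator_capacity_factor": 0.55,
                  "transformer_branch_fraction": 0.22,
                  "mean_nodal_degree": 2.8}
print(validate_case(synthetic_case))
```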

  2. LACO-Wiki: A land cover validation tool and a new, innovative teaching resource for remote sensing and the geosciences

    NASA Astrophysics Data System (ADS)

    See, Linda; Perger, Christoph; Dresel, Christopher; Hofer, Martin; Weichselbaum, Juergen; Mondel, Thomas; Fritz, Steffen

    2016-04-01

    The validation of land cover products is an important step in the workflow of generating a land cover map from remotely-sensed imagery. Many students of remote sensing will be given exercises on classifying a land cover map followed by the validation process. Many algorithms exist for classification, embedded within proprietary image processing software or increasingly as open source tools. However, there is little standardization for land cover validation, nor a set of open tools available for implementing this process. The LACO-Wiki tool was developed as a way of filling this gap, bringing together standardized land cover validation methods and workflows into a single portal. This includes the storage and management of land cover maps and validation data; step-by-step instructions to guide users through the validation process; sound sampling designs; an easy-to-use environment for validation sample interpretation; and the generation of accuracy reports based on the validation process. The tool was developed for a range of users including producers of land cover maps, researchers, teachers and students. The use of such a tool could be embedded within the curriculum of remote sensing courses at a university level but is simple enough for use by students aged 13-18. A beta version of the tool is available for testing at: http://www.laco-wiki.net.
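
    The accuracy reports such a validation workflow produces are typically derived from a confusion matrix over the interpreted samples. A minimal sketch follows, with a toy sample set; it mirrors the standard overall/producer's/user's accuracy calculations rather than LACO-Wiki's internal implementation.

```python
import numpy as np

def accuracy_report(map_labels, reference_labels, classes):
    """Confusion matrix plus overall, producer's, and user's accuracy."""
    idx = {c: i for i, c in enumerate(classes)}
    cm = np.zeros((len(classes), len(classes)), dtype=int)
    for m, r in zip(map_labels, reference_labels):
        cm[idx[r], idx[m]] += 1           # rows: reference, columns: map
    overall = np.trace(cm) / cm.sum()
    producers = np.diag(cm) / cm.sum(axis=1)   # omission-error view
    users = np.diag(cm) / cm.sum(axis=0)       # commission-error view
    return cm, overall, producers, users

# Toy validation samples (illustrative labels only).
classes = ["forest", "cropland", "urban"]
mapped = ["forest", "forest", "urban", "cropland", "forest", "urban"]
reference = ["forest", "cropland", "urban", "cropland", "forest", "forest"]
cm, oa, pa, ua = accuracy_report(mapped, reference, classes)
print(cm)
print(f"overall accuracy = {oa:.2f}")
```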

  3. Validation and Comprehension of Text Information: Two Sides of the Same Coin

    ERIC Educational Resources Information Center

    Richter, Tobias

    2015-01-01

    In psychological research, the comprehension of linguistic information and the knowledge-based assessment of its validity are often regarded as two separate stages of information processing. Recent findings in psycholinguistics and text comprehension research call this two-stage model into question. In particular, validation can affect…

  4. What Counts as Validity Evidence? Examples and Prevalence in a Systematic Review of Simulation-Based Assessment

    ERIC Educational Resources Information Center

    Cook, David A.; Zendejas, Benjamin; Hamstra, Stanley J.; Hatala, Rose; Brydges, Ryan

    2014-01-01

    Ongoing transformations in health professions education underscore the need for valid and reliable assessment. The current standard for assessment validation requires evidence from five sources: content, response process, internal structure, relations with other variables, and consequences. However, researchers remain uncertain regarding the types…

  5. Factors Influencing Pursuit of Higher Education: Validating a Questionnaire.

    ERIC Educational Resources Information Center

    Harris, Sandra M.

    This paper explains the process used to validate the construct validity of the Factors Influencing Pursuit of Higher Education Questionnaire. This questionnaire is a literature-based, researcher-developed instrument which gathers information on the factors thought to affect a person's decision to pursue higher education. The questionnaire includes…

  6. Validating workplace performance assessments in health sciences students: a case study from speech pathology.

    PubMed

    McAllister, Sue; Lincoln, Michelle; Ferguson, Allison; McAllister, Lindy

    2013-01-01

    Valid assessment of health science students' ability to perform in the real world of workplace practice is critical for promoting quality learning and ultimately certifying students as fit to enter the world of professional practice. Current practice in performance assessment in the health sciences field has been hampered by multiple issues regarding assessment content and process. Evidence for the validity of scores derived from assessment tools is usually evaluated against traditional validity categories, with reliability evidence privileged over validity, resulting in the paradoxical effect of compromising the assessment validity and learning processes the assessments seek to promote. Furthermore, the dominant statistical approaches used to validate scores from these assessments fall under the umbrella of classical test theory approaches. This paper reports on the successful national development and validation of measures derived from an assessment of Australian speech pathology students' performance in the workplace. Validation of these measures considered each of Messick's interrelated validity evidence categories and included using evidence generated through Rasch analyses to support score interpretation and related action. This research demonstrated that it is possible to develop an assessment of real, complex, work-based performance of speech pathology students that generates valid measures without compromising the learning processes the assessment seeks to promote. The process described provides a model for other health professional education programs to trial.
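
    As background to the Rasch analyses mentioned above, the dichotomous Rasch model expresses the probability of success as a logistic function of the gap between person ability and item difficulty, and abilities can be estimated by maximum likelihood. The sketch below uses invented item difficulties and responses; operational Rasch work on performance ratings would use dedicated software and the polytomous (rating-scale or partial-credit) variants.

```python
import numpy as np

def rasch_prob(theta: float, b: np.ndarray) -> np.ndarray:
    """P(success) under the Rasch model for ability theta, item difficulties b."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def estimate_ability(responses: np.ndarray, b: np.ndarray,
                     iters: int = 20) -> float:
    """Maximum-likelihood ability estimate via Newton-Raphson."""
    theta = 0.0
    for _ in range(iters):
        p = rasch_prob(theta, b)
        grad = np.sum(responses - p)    # score function
        hess = -np.sum(p * (1 - p))     # second derivative of log-likelihood
        theta -= grad / hess
    return theta

difficulties = np.array([-1.0, -0.3, 0.2, 0.8, 1.5])  # illustrative items
responses = np.array([1, 1, 1, 0, 0])                  # one student's record
print(f"estimated ability = {estimate_ability(responses, difficulties):.2f}")
```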

  7. Improving the Residency Admissions Process by Integrating a Professionalism Assessment: A Validity and Feasibility Study

    ERIC Educational Resources Information Center

    Bajwa, Nadia M.; Yudkowsky, Rachel; Belli, Dominique; Vu, Nu Viet; Park, Yoon Soo

    2017-01-01

    The purpose of this study was to provide validity and feasibility evidence in measuring professionalism using the Professionalism Mini-Evaluation Exercise (P-MEX) scores as part of a residency admissions process. In 2012 and 2013, three standardized-patient-based P-MEX encounters were administered to applicants invited for an interview at the…

  8. Performing Verification and Validation in Reuse-Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward A.

    1999-01-01

    The implementation of reuse-based software engineering not only introduces new activities to the software development process, such as domain analysis and domain modeling, it also impacts other aspects of software engineering. Other areas of software engineering that are affected include Configuration Management, Testing, Quality Control, and Verification and Validation (V&V). Activities in each of these areas must be adapted to address the entire domain or product line rather than a specific application system. This paper discusses changes and enhancements to the V&V process, in order to adapt V&V to reuse-based software engineering.

  9. Examining students' views about validity of experiments: From introductory to Ph.D. students

    NASA Astrophysics Data System (ADS)

    Hu, Dehui; Zwickl, Benjamin M.

    2018-06-01

    We investigated physics students' epistemological views on measurements and validity of experimental results. The roles of experiments in physics have been underemphasized in previous research on students' personal epistemology, and there is a need for a broader view of personal epistemology that incorporates experiments. An epistemological framework incorporating the structure, methodology, and validity of scientific knowledge guided the development of an open-ended survey. The survey was administered to students in algebra-based and calculus-based introductory physics courses, upper-division physics labs, and physics Ph.D. students. Within our sample, we identified several differences in students' ideas about validity and uncertainty in measurement. The majority of introductory students justified the validity of results through agreement with theory or with results from others. Alternatively, Ph.D. students frequently justified the validity of results based on the quality of the experimental process and repeatability of results. When asked about the role of uncertainty analysis, introductory students tended to focus on the representational roles (e.g., describing imperfections, data variability, and human mistakes). However, advanced students focused on the inferential roles of uncertainty analysis (e.g., quantifying reliability, making comparisons, and guiding refinements). The findings suggest that lab courses could emphasize a variety of approaches to establish validity, such as by valuing documentation of the experimental process when evaluating the quality of student work. In order to emphasize the role of uncertainty in an authentic way, labs could provide opportunities to iterate, make repeated comparisons, and make decisions based on those comparisons.

  10. Process Evaluation in Corrections-Based Substance Abuse Treatment.

    ERIC Educational Resources Information Center

    Wolk, James L.; Hartmann, David J.

    1996-01-01

    Argues that process evaluation is needed to validate prison-based substance abuse treatment effectiveness. Five groups--inmates, treatment staff, prison staff, prison administration, and the parole board--should be a part of this process evaluation. Discusses these five groups relative to three stages of development of substance abuse treatment in…

  11. An adaptive management process for forest soil conservation.

    Treesearch

    Michael P. Curran; Douglas G. Maynard; Ronald L. Heninger; Thomas A. Terry; Steven W. Howes; Douglas M. Stone; Thomas Niemann; Richard E. Miller; Robert F. Powers

    2005-01-01

    Soil disturbance guidelines should be based on comparable disturbance categories adapted to specific local soil conditions, validated by monitoring and research. Guidelines, standards, and practices should be continually improved based on an adaptive management process, which is presented in this paper. Core components of this process include: reliable monitoring...

  12. Aristotle Meets Zeno: Psychophysiological Evidence

    PubMed Central

    Papageorgiou, Charalabos; Stachtea, Xanthi; Papageorgiou, Panos; Alexandridis, Antonio T.

    2016-01-01

    This study, a tribute to Aristotle's 2400 years, used a juxtaposition of valid Aristotelian arguments to the paradoxes formulated by Zeno the Eleatic, in order to investigate the electrophysiological correlates of attentional and/or memory processing effects in the course of deductive reasoning. Participants undertook reasoning tasks based on visually presented arguments which were either (a) valid (Aristotelian) statements or (b) paradoxes. We compared brain activation patterns while participants maintained the premises/conclusions of either the valid statements or the paradoxes in working memory (WM). Event-related brain potentials (ERPs), specifically the P300 component of ERPs, were recorded during the WM phase, during which participants were required to draw a logical conclusion regarding the correctness of the valid syllogisms or the paradoxes. During the processing of paradoxes, results demonstrated a more positive event-related potential deflection (P300) across frontal regions, whereas processing of valid statements was associated with noticeable P300 amplitudes across parieto-occipital regions. These findings suggest that paradoxes mobilize frontal attention mechanisms, while valid deduction promotes parieto-occipital activity associated with attention and/or subsequent memory processing. PMID:28033333

  13. Aristotle Meets Zeno: Psychophysiological Evidence.

    PubMed

    Papageorgiou, Charalabos; Stachtea, Xanthi; Papageorgiou, Panos; Alexandridis, Antonio T; Tsaltas, Eleftheria; Angelopoulos, Elias

    2016-01-01

    This study, a tribute to Aristotle's 2400 years, used a juxtaposition of valid Aristotelian arguments to the paradoxes formulated by Zeno the Eleatic, in order to investigate the electrophysiological correlates of attentional and/or memory processing effects in the course of deductive reasoning. Participants undertook reasoning tasks based on visually presented arguments which were either (a) valid (Aristotelian) statements or (b) paradoxes. We compared brain activation patterns while participants maintained the premises/conclusions of either the valid statements or the paradoxes in working memory (WM). Event-related brain potentials (ERPs), specifically the P300 component of ERPs, were recorded during the WM phase, during which participants were required to draw a logical conclusion regarding the correctness of the valid syllogisms or the paradoxes. During the processing of paradoxes, results demonstrated a more positive event-related potential deflection (P300) across frontal regions, whereas processing of valid statements was associated with noticeable P300 amplitudes across parieto-occipital regions. These findings suggest that paradoxes mobilize frontal attention mechanisms, while valid deduction promotes parieto-occipital activity associated with attention and/or subsequent memory processing.

  14. Development and validation of instrument for ergonomic evaluation of tablet arm chairs

    PubMed Central

    Tirloni, Adriana Seára; dos Reis, Diogo Cunha; Bornia, Antonio Cezar; de Andrade, Dalton Francisco; Borgatto, Adriano Ferreti; Moro, Antônio Renato Pereira

    2016-01-01

    The purpose of this study was to develop and validate an evaluation instrument for tablet arm chairs based on ergonomic requirements, focused on user perceptions and using Item Response Theory (IRT). This exploratory study involved 1,633 participants (university students and professors) in four steps: a pilot study (n=26), semantic validation (n=430), content validation (n=11) and construct validation (n=1,166). Samejima's graded response model was applied to validate the instrument. The results showed that all the steps (theoretical and practical) of the instrument's development and validation processes were successful and that the group of remaining items (n=45) had a high consistency (0.95). This instrument can be used in the furniture industry by engineers and product designers and in the purchasing process of tablet arm chairs for schools, universities and auditoriums. PMID:28337099
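
    Samejima's graded response model, used above, defines the probability of each ordered response category through a set of boundary curves. A small sketch of the category-probability computation follows; the item parameters shown are invented for illustration.

```python
import numpy as np

def grm_category_probs(theta, a, thresholds):
    """Samejima graded response model: probability of each ordered response
    category for latent trait theta, discrimination a, and ordered
    thresholds b_1 < ... < b_{K-1}."""
    # Boundary curves P*(X >= k), with P*(X >= 0) = 1 and P*(X >= K) = 0.
    star = [1.0] + [1 / (1 + np.exp(-a * (theta - b))) for b in thresholds] + [0.0]
    return np.diff(-np.array(star))  # P(X = k) = P*(X >= k) - P*(X >= k+1)

# Illustrative 4-category ergonomics item (invented parameters).
probs = grm_category_probs(theta=0.5, a=1.3, thresholds=[-1.0, 0.0, 1.2])
print(probs, probs.sum())  # category probabilities; they sum to 1
```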

  15. The effectiveness of physics learning material based on South Kalimantan local wisdom

    NASA Astrophysics Data System (ADS)

    Hartini, Sri; Misbah; Helda; Dewantara, Dewi

    2017-08-01

    Local wisdom is an essential element to incorporate into the learning process. However, existing Physics learning materials do not incorporate South Kalimantan local wisdom. Therefore, it is necessary to develop Physics learning materials based on South Kalimantan local wisdom. The objective of this research is to produce learning materials based on South Kalimantan local wisdom that are feasible and effective, judged by the validity, practicality, and effectiveness of the learning materials and the achievement of waja sampai kaputing (wasaka) character. This is a research and development study following the ADDIE model. Data were obtained through learning material validation sheets, questionnaires, tests of learning outcomes, and character assessment sheets. The research results showed that (1) the validity of the learning material was in the very valid category, (2) its practicality was in the very practical category, (3) its effectiveness was in the very effective category, and (4) the achievement of wasaka characters was very good. In conclusion, the Physics learning materials based on South Kalimantan local wisdom are feasible and effective for use in learning activities.

  16. Validity, Reliability, and Equity Issues in an Observational Talent Assessment Process in the Performing Arts

    ERIC Educational Resources Information Center

    Oreck, Barry A.; Owen, Steven V.; Baum, Susan M.

    2003-01-01

    The lack of valid, research-based methods to identify potential artistic talent hampers the inclusion of the arts in programs for the gifted and talented. The Talent Assessment Process in Dance, Music, and Theater (D/M/T TAP) was designed to identify potential performing arts talent in diverse populations, including bilingual and special education…

  17. NASA Ocean Altimeter Pathfinder Project. Report 2; Data Set Validation

    NASA Technical Reports Server (NTRS)

    Koblinsky, C. J.; Ray, Richard D.; Beckley, Brian D.; Bremmer, Anita; Tsaoussi, Lucia S.; Wang, Yan-Ming

    1999-01-01

    The NOAA/NASA Pathfinder program was created by the Earth Observing System (EOS) Program Office to determine how existing satellite-based data sets can be processed and used to study global change. The data sets are designed to be long time-series data processed with stable calibration and community consensus algorithms to better assist the research community. The Ocean Altimeter Pathfinder Project involves the reprocessing of all altimeter observations with a consistent set of improved algorithms, based on the results from TOPEX/POSEIDON (T/P), into easy-to-use data sets for the oceanographic community for climate research. Details are currently presented in two technical reports: Report #1, Data Processing Handbook, and Report #2, Data Set Validation. This report describes the validation of the data sets against a global network of high-quality tide gauge measurements and provides an estimate of the error budget. The first report describes the processing schemes used to produce the geodetically consistent data set comprised of SEASAT, GEOSAT, ERS-1, TOPEX/POSEIDON, and ERS-2 satellite observations.
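
    Validation against tide gauges of the kind described here usually comes down to collocating the altimeter series with each gauge record and computing agreement statistics. The sketch below uses synthetic series; real validation also handles datum offsets, tides, and inverse-barometer corrections, which are omitted.

```python
import numpy as np

def validate_against_gauge(alt_time, alt_ssh, gauge_time, gauge_level):
    """Interpolate altimeter sea-surface heights to tide-gauge times and
    report the usual agreement statistics (bias, RMS difference, correlation)."""
    ssh_at_gauge = np.interp(gauge_time, alt_time, alt_ssh)
    diff = ssh_at_gauge - gauge_level
    corr = np.corrcoef(ssh_at_gauge, gauge_level)[0, 1]
    return diff.mean(), np.sqrt((diff ** 2).mean()), corr

# Synthetic annual signal sampled by a 10-day-repeat altimeter and a daily gauge.
def annual_signal(t):
    return 0.1 * np.sin(2 * np.pi * t / 365.0)

t_alt = np.arange(0.0, 365.0, 10.0)
t_gauge = np.arange(0.0, 365.0)
bias, rms, corr = validate_against_gauge(
    t_alt, annual_signal(t_alt) + 0.02, t_gauge, annual_signal(t_gauge))
print(f"bias = {bias:.3f} m, rms = {rms:.3f} m, r = {corr:.2f}")
```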

  18. A Framework for Performing Verification and Validation in Reuse Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward A.

    1997-01-01

    Verification and Validation (V&V) is currently performed during application development for many systems, especially safety-critical and mission- critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. The system application provides the context under which the software artifacts are validated. This paper describes a framework that extends V&V from an individual application system to a product line of systems that are developed within an architecture-based software engineering environment. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.

  19. Heuristic and analytic processes in reasoning: an event-related potential study of belief bias.

    PubMed

    Banks, Adrian P; Hope, Christopher

    2014-03-01

    Human reasoning involves both heuristic and analytic processes. This study of belief bias in relational reasoning investigated whether the two processes occur serially or in parallel. Participants evaluated the validity of problems in which the conclusions were either logically valid or invalid and either believable or unbelievable. Problems in which the conclusions presented a conflict between the logically valid response and the believable response elicited a more positive P3 than problems in which there was no conflict. This shows that P3 is influenced by the interaction of belief and logic rather than either of these factors on its own. These findings indicate that belief and logic influence reasoning at the same time, supporting models in which belief-based and logical evaluations occur in parallel but not theories in which belief-based heuristic evaluations precede logical analysis.

  20. Verification and Validation Methodology of Real-Time Adaptive Neural Networks for Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Gupta, Pramod; Loparo, Kenneth; Mackall, Dale; Schumann, Johann; Soares, Fola

    2004-01-01

    Recent research has shown that adaptive neural-network-based control systems are very effective in restoring stability and control of an aircraft in the presence of damage or failures. The application of an adaptive neural network within a flight-critical control system requires a thorough and proven process to ensure safe and proper flight operation. Unique testing tools have been developed as part of a process to perform verification and validation (V&V) of the real-time adaptive neural networks used in recent adaptive flight control systems and to evaluate the performance of the online-trained neural networks. The tools will help in certification from the FAA and in the successful deployment of neural-network-based adaptive controllers in safety-critical applications. The process to perform verification and validation is evaluated against a typical neural adaptive controller, and the results are discussed.

  1. Alternative validation practice of an automated faulting measurement method.

    DOT National Transportation Integrated Search

    2010-03-08

    A number of states have adopted profiler-based systems to automatically measure faulting in jointed concrete pavements. However, little published work exists which documents the validation process used for such automated faulting systems. This p...

  2. The development of thematic materials using project based learning for elementary school

    NASA Astrophysics Data System (ADS)

    Yuliana, M.; Wiryawan, S. A.; Riyadi

    2018-05-01

    Teaching materials are one of the important factors supporting the learning process. This paper discusses the development of thematic materials using project-based learning. Thematic materials are designed to make students active, creative, cooperative, and able to think through problem solving. The purpose of the research was to develop valid thematic materials using project-based learning. The method was the four-stage research and development model proposed by Thiagarajan, consisting of: (1) the definition stage, (2) the design stage, (3) the development stage, and (4) the dissemination stage. The first stage, research and information collection, took the form of a needs analysis using questionnaires, observation, interviews, and document analysis. The design stage was based on the competencies and indicators. The third, development, stage was used for product validation by experts. The validation of this development research involved a media validator, a material validator, and a linguistic validator. The expert validation of the thematic materials showed very good overall ratings on a 1-to-5 Likert scale: media validation had a mean score of 4.83, material validation 4.68, and linguistic validation 4.74. This shows that the thematic materials using project-based learning were valid and feasible to implement in thematic learning.

  3. Content validation of terms and definitions in a wound glossary.

    PubMed

    Milne, Catherine T; Paine, Tim; Sullivan, Valerie; Sawyer, Allen

    2011-12-01

    A common language and lexicon provide the easiest means of mutual understanding. Inconsistency in terminology makes effective information exchange difficult. Previous studies identified the need to determine standard, accepted definitions for the vocabulary frequently used in wound care. The objective of this study was to establish content validation for these terms and develop an evidence-based glossary for this specialty. Members of the Association for the Advancement of Wound Care Quality of Care Task Force reviewed literature to determine glossary content generation and the associated literature-based definitions. Thirty-nine wound care professionals from wound care stakeholder professional organizations in the United States and Canada participated in the content validation process. Participants were asked to quantify the degree of validity using a 367-item, 4-point Likert-type scale. On a scale of 1 to 4, the mean score of the entire instrument was 3.84. The instrument's overall scale content validity index was 0.96. Terms with an item content validity index of less than 0.70 were removed from the glossary, leaving 365 items with established content validity. Qualitative data analysis revealed themes suggesting that enhanced communication between providers improves patient outcomes. The need for ongoing updates of the glossary was also identified. The wound care glossary in its finalized form proved valid. An evidence-based glossary bridges the chasm of miscommunication and nonstandardization so that wound care, as an emerging specialized medical science field, can move forward to optimize both process and clinical outcomes.
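
    The content validity indices reported above follow a standard recipe: the item-level CVI is the share of experts rating an item 3 or 4 on the 4-point scale, and the scale-level index averages these across items. A small sketch with invented ratings follows.

```python
import numpy as np

def content_validity(ratings: np.ndarray, cutoff: float = 0.70):
    """Item-level CVI (share of experts rating 3 or 4 on the 4-point scale),
    the scale-average S-CVI, and which items fall below the retention cutoff."""
    relevant = ratings >= 3
    i_cvi = relevant.mean(axis=0)   # one value per item (column)
    s_cvi = i_cvi.mean()
    return i_cvi, s_cvi, np.where(i_cvi < cutoff)[0]

# 5 experts rating 4 glossary terms (illustrative values only).
ratings = np.array([[4, 4, 2, 4],
                    [3, 4, 2, 4],
                    [4, 3, 3, 4],
                    [4, 4, 1, 3],
                    [3, 4, 2, 4]])
i_cvi, s_cvi, dropped = content_validity(ratings)
print(i_cvi, f"S-CVI = {s_cvi:.2f}", f"items below cutoff: {dropped}")
```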

  4. Guiding Development Based Approach Practicum Vertebrates Taxonomy Scientific Study Program for Students of Biology Education

    NASA Astrophysics Data System (ADS)

    Arieska, M.; Syamsurizal, S.; Sumarmin, R.

    2018-04-01

    Students have difficulty identifying and describing vertebrate animals and are less skilled in the science process during practicum. Expertise in scientific skills can be increased through practicum activities using a practicum guide based on the scientific approach. This study aims to produce a valid vertebrate taxonomy practicum guide for biology education students at STKIP PGRI West Sumatra. The study uses the Plomp development model, consisting of three phases: the preliminary investigation, the prototyping stage, and the assessment stage. The data collection instrument used in this study was a practicum guide validation sheet. Data were analyzed descriptively based on data obtained from the field. The developed vertebrate taxonomy practicum guide obtained a validity value of 3.22, in the very valid category. The research and development thus produced a very valid vertebrate taxonomy practicum guide based on the scientific approach.

  5. An Engineering Method of Civil Jet Requirements Validation Based on Requirements Project Principle

    NASA Astrophysics Data System (ADS)

    Wang, Yue; Gao, Dan; Mao, Xuming

    2018-03-01

    A method of requirements validation is developed and defined to meet the needs of civil jet requirements validation in product development. Based on the requirements project principle, this method does not affect the conventional design elements and can effectively connect the requirements with the design. It realizes the modern civil jet development concept that “requirement is the origin, design is the basis”. So far, the method has been successfully applied in civil jet aircraft development in China. Taking takeoff field length as an example, the validation process and the validation method for the requirements are introduced in detail, with the hope of providing this experience to other civil jet product design efforts.

  6. Experimental validation of thermo-chemical algorithm for a simulation of pultrusion processes

    NASA Astrophysics Data System (ADS)

    Barkanov, E.; Akishin, P.; Miazza, N. L.; Galvez, S.; Pantelelis, N.

    2018-04-01

    To provide a better understanding of pultrusion processes with or without temperature control and to support pultrusion tooling design, an algorithm based on a mixed time integration scheme and the nodal control volumes method has been developed. In the present study, its experimental validation is carried out with the developed cure sensors, which measure electrical resistivity and temperature on the profile surface. Through this verification process, the set of initial data used for simulating the pultrusion of a rod profile has been successfully corrected and finally defined.

  7. A Comprehensive Plan for the Long-Term Calibration and Validation of Oceanic Biogeochemical Satellite Data

    NASA Technical Reports Server (NTRS)

    Hooker, Stanford B.; McClain, Charles R.; Mannino, Antonio

    2007-01-01

    The primary objective of this planning document is to establish a long-term capability for calibrating and validating oceanic biogeochemical satellite data. It is a pragmatic solution to a practical problem based primarily on the lessons learned from prior satellite missions. All of the plan's elements are seen to be interdependent, so a horizontal organizational scheme is anticipated wherein the overall leadership comes from the NASA Ocean Biology and Biogeochemistry (OBB) Program Manager and the entire enterprise is split into two components of equal stature: calibration and validation plus satellite data processing. The detailed elements of the activity are based on the basic tasks of the two main components plus the current objectives of the Carbon Cycle and Ecosystems Roadmap. The former is distinguished by an internal core set of responsibilities and the latter is facilitated through an external connecting-core ring of competed or contracted activities. The core elements for the calibration and validation component include a) publish protocols and performance metrics; b) verify uncertainty budgets; c) manage the development and evaluation of instrumentation; and d) coordinate international partnerships. The core elements for the satellite data processing component are e) process and reprocess multisensor data; f) acquire, distribute, and archive data products; and g) implement new data products. Both components have shared responsibilities for initializing and temporally monitoring satellite calibration. Connecting-core elements include (but are not restricted to) atmospheric correction and characterization, standards and traceability, instrument and analysis round robins, field campaigns and vicarious calibration sites, in situ database, bio-optical algorithm (and product) validation, satellite characterization and vicarious calibration, and image processing software. The plan also includes an accountability process, the creation of a Calibration and Validation Team (to help manage the activity), and a discussion of issues associated with the plan's scientific focus.

  8. Exploring geo-tagged photos for land cover validation with deep learning

    NASA Astrophysics Data System (ADS)

    Xing, Hanfa; Meng, Yuan; Wang, Zixuan; Fan, Kaixuan; Hou, Dongyang

    2018-07-01

    Land cover validation plays an important role in the process of generating and distributing land cover thematic maps, and is usually implemented through costly sample interpretation with remotely sensed images or field survey. With the increasing availability of geo-tagged landscape photos, automatic photo recognition methodologies, e.g., deep learning, can be effectively utilised for land cover applications. However, they have hardly been utilised in validation processes, as challenges remain in sample selection and classification for highly heterogeneous photos. This study proposed an approach to employ geo-tagged photos for land cover validation by using deep learning technology. The approach first identified photos automatically based on the VGG-16 network. Then, samples for validation were selected and further classified by considering photo distribution and classification probabilities. The implementation was conducted for the validation of the GlobeLand30 land cover product in a heterogeneous area, western California. Experimental results showed promise for land cover validation, given that GlobeLand30 achieved an overall accuracy of 83.80% with classified samples, which was close to the validation result of 80.45% based on visual interpretation. Additionally, the performance of deep learning based on ResNet-50 and AlexNet was also quantified, revealing no substantial differences in final validation results. The proposed approach ensures geo-tagged photo quality and supports the sample classification strategy by considering photo distribution, with accuracy improvement from 72.07% to 79.33% compared with solely considering the single nearest photo. Consequently, the presented approach proves the feasibility of deep learning technology for land cover information identification in geo-tagged photos, and has great potential to support and improve the efficiency of land cover validation.
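
    The photo-identification step can be sketched with an off-the-shelf VGG-16. The snippet below runs ImageNet-pretrained inference on a single hypothetical photo; the study would instead fine-tune the network on land cover classes and use the resulting class probabilities in sample selection, which is omitted here.

```python
import torch
from PIL import Image
from torchvision import models

# ImageNet-pretrained VGG-16; fine-tuning on land cover classes is omitted.
weights = models.VGG16_Weights.IMAGENET1K_V1
model = models.vgg16(weights=weights).eval()
preprocess = weights.transforms()  # resize, crop, normalize

img = Image.open("geotagged_photo.jpg")  # hypothetical photo file
with torch.no_grad():
    logits = model(preprocess(img).unsqueeze(0))
probs = torch.softmax(logits, dim=1).squeeze()
conf, idx = probs.max(dim=0)
# The class probability can feed the sample-selection step described above.
print(weights.meta["categories"][int(idx)], f"confidence = {conf.item():.2f}")
```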

  9. Model-Based Thermal System Design Optimization for the James Webb Space Telescope

    NASA Technical Reports Server (NTRS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-01-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.

  10. Model-based thermal system design optimization for the James Webb Space Telescope

    NASA Astrophysics Data System (ADS)

    Cataldo, Giuseppe; Niedner, Malcolm B.; Fixsen, Dale J.; Moseley, Samuel H.

    2017-10-01

    Spacecraft thermal model validation is normally performed by comparing model predictions with thermal test data and reducing their discrepancies to meet the mission requirements. Based on thermal engineering expertise, the model input parameters are adjusted to tune the model output response to the test data. The end result is not guaranteed to be the best solution in terms of reduced discrepancy and the process requires months to complete. A model-based methodology was developed to perform the validation process in a fully automated fashion and provide mathematical bases to the search for the optimal parameter set that minimizes the discrepancies between model and data. The methodology was successfully applied to several thermal subsystems of the James Webb Space Telescope (JWST). Global or quasiglobal optimal solutions were found and the total execution time of the model validation process was reduced to about two weeks. The model sensitivities to the parameters, which are required to solve the optimization problem, can be calculated automatically before the test begins and provide a library for sensitivity studies. This methodology represents a crucial commodity when testing complex, large-scale systems under time and budget constraints. Here, results for the JWST Core thermal system will be presented in detail.
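
    A scaled-down version of this tuning problem can be expressed as a least-squares fit of model parameters to test data. The sketch below uses a two-parameter stand-in for the thermal model; the actual work seeks global or quasiglobal optima over many more parameters, so this shows only the local-fit core of the idea.

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical reduced thermal model: a node temperature as a function of two
# conductance-like parameters (a stand-in, not the JWST model itself).
def model_temps(params, heater_powers):
    g1, g2 = params
    return 40.0 + heater_powers / g1 + np.sqrt(heater_powers) / g2

heater_powers = np.array([5.0, 10.0, 20.0, 40.0])
test_temps = np.array([48.1, 55.9, 69.8, 95.2])  # mock thermal-vacuum data

def residuals(params):
    """Model-minus-test discrepancies the tuning seeks to minimize."""
    return model_temps(params, heater_powers) - test_temps

fit = least_squares(residuals, x0=[1.0, 1.0], bounds=([0.1, 0.1], [10, 10]))
print("tuned parameters:", fit.x,
      "RMS error:", np.sqrt(np.mean(fit.fun ** 2)))
```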

  11. Multiple Imputation based Clustering Validation (MIV) for Big Longitudinal Trial Data with Missing Values in eHealth.

    PubMed

    Zhang, Zhaoyang; Fang, Hua; Wang, Honggang

    2016-06-01

    Web-delivered trials are an important component in eHealth services. These trials, mostly behavior-based, generate big heterogeneous data that are longitudinal, high dimensional with missing values. Unsupervised learning methods have been widely applied in this area, however, validating the optimal number of clusters has been challenging. Built upon our multiple imputation (MI) based fuzzy clustering, MIfuzzy, we proposed a new multiple imputation based validation (MIV) framework and corresponding MIV algorithms for clustering big longitudinal eHealth data with missing values, more generally for fuzzy-logic based clustering methods. Specifically, we detect the optimal number of clusters by auto-searching and -synthesizing a suite of MI-based validation methods and indices, including conventional (bootstrap or cross-validation based) and emerging (modularity-based) validation indices for general clustering methods as well as the specific one (Xie and Beni) for fuzzy clustering. The MIV performance was demonstrated on a big longitudinal dataset from a real web-delivered trial and using simulation. The results indicate the MI-based Xie and Beni index for fuzzy clustering is more appropriate for detecting the optimal number of clusters for such complex data. The MIV concept and algorithms could be easily adapted to different types of clustering that could process big incomplete longitudinal trial data in eHealth services.

  12. Multiple Imputation based Clustering Validation (MIV) for Big Longitudinal Trial Data with Missing Values in eHealth

    PubMed Central

    Zhang, Zhaoyang; Wang, Honggang

    2016-01-01

    Web-delivered trials are an important component of eHealth services. These trials, mostly behavior-based, generate big heterogeneous data that are longitudinal, high dimensional, and contain missing values. Unsupervised learning methods have been widely applied in this area; however, validating the optimal number of clusters has been challenging. Built upon our multiple imputation (MI) based fuzzy clustering, MIfuzzy, we proposed a new multiple imputation based validation (MIV) framework and corresponding MIV algorithms for clustering big longitudinal eHealth data with missing values and, more generally, for fuzzy-logic based clustering methods. Specifically, we detect the optimal number of clusters by auto-searching and -synthesizing a suite of MI-based validation methods and indices, including conventional (bootstrap or cross-validation based) and emerging (modularity-based) validation indices for general clustering methods as well as the specific one (Xie and Beni) for fuzzy clustering. The MIV performance was demonstrated on a big longitudinal dataset from a real web-delivered trial and using simulation. The results indicate that the MI-based Xie and Beni index for fuzzy clustering is more appropriate for detecting the optimal number of clusters for such complex data. The MIV concept and algorithms could be easily adapted to different types of clustering that could process big incomplete longitudinal trial data in eHealth services. PMID:27126063

  13. Assessment Methodology for Process Validation Lifecycle Stage 3A.

    PubMed

    Sayeed-Desta, Naheed; Pazhayattil, Ajay Babu; Collins, Jordan; Chen, Shu; Ingram, Marzena; Spes, Jana

    2017-07-01

    The paper introduces evaluation methodologies and associated statistical approaches for process validation lifecycle Stage 3A. The assessment tools proposed can be applied to newly developed and launched small molecule as well as bio-pharma products, where substantial process and product knowledge has been gathered. The following elements may be included in Stage 3A: determination of the number of Stage 3A batches; evaluation of critical material attributes, critical process parameters, and critical quality attributes; in vivo-in vitro correlation; estimation of inherent process variability (IPV) and the PaCS index; the process capability and quality dashboard (PCQd); and an enhanced control strategy. The US FDA guidance on Process Validation: General Principles and Practices (January 2011) encourages applying previous credible experience with suitably similar products and processes. A complete Stage 3A evaluation is a valuable resource for product development and future risk mitigation of similar products and processes. The elements of the Stage 3A assessment were developed to address industry and regulatory guidance requirements. The conclusions provide sufficient information to support a scientific, risk-based decision on product robustness.
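
    As a hedged illustration of the kind of statistic involved (not the paper's specific IPV or PaCS calculations), the snippet below estimates a capability index for one critical quality attribute from a set of Stage 3A batch results; the assay values and specification limits are invented.

    ```python
    # Illustrative process-capability check for a critical quality attribute (hypothetical data).
    import numpy as np

    assay = np.array([99.1, 100.4, 98.7, 99.8, 100.9, 99.5, 100.2, 98.9])  # % label claim, 8 batches
    LSL, USL = 95.0, 105.0                                                 # specification limits

    mean, sd = assay.mean(), assay.std(ddof=1)
    cpk = min(USL - mean, mean - LSL) / (3 * sd)                           # distance to nearest limit in 3-sigma units
    print(f"mean = {mean:.2f}, sd = {sd:.2f}, Cpk = {cpk:.2f}")
    ```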

  14. Generic Raman-based calibration models enabling real-time monitoring of cell culture bioreactors.

    PubMed

    Mehdizadeh, Hamidreza; Lauri, David; Karry, Krizia M; Moshgbar, Mojgan; Procopio-Melino, Renee; Drapeau, Denis

    2015-01-01

    Raman-based multivariate calibration models have been developed for real-time in situ monitoring of multiple process parameters within cell culture bioreactors. The developed models are generic, in the sense that they are applicable to various products, media, and cell lines based on Chinese Hamster Ovary (CHO) host cells, and are scalable to large pilot and manufacturing scales. Several batches using different CHO-based cell lines and corresponding proprietary media and process conditions have been used to generate calibration datasets, and models have been validated using independent datasets from separate batch runs. All models have been validated to be generic and capable of predicting process parameters with acceptable accuracy. The developed models allow monitoring of multiple key bioprocess metabolic variables, and hence can be utilized as an important enabling tool for Quality by Design approaches, which are strongly supported by the U.S. Food and Drug Administration. © 2015 American Institute of Chemical Engineers.
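
    A minimal sketch of the general technique, assuming synthetic spectra rather than the authors' proprietary calibration data: a partial least squares model relates Raman spectra to an offline reference measurement and is checked by cross-validation (here, simple k-fold splits standing in for independent batch runs).

    ```python
    # Hedged sketch of a PLS calibration from Raman spectra to an offline metabolite value.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(1)
    glucose = rng.uniform(0.5, 6.0, size=60)               # offline reference values (g/L)
    shifts = np.linspace(400, 1800, 500)                   # Raman shift axis (cm^-1)
    band = np.exp(-0.5 * ((shifts - 1125) / 15) ** 2)      # synthetic glucose-associated band
    spectra = glucose[:, None] * band + rng.normal(0, 0.05, (60, 500))

    pls = PLSRegression(n_components=3)
    r2 = cross_val_score(pls, spectra, glucose, cv=5)      # default scorer is R^2 for regressors
    print("cross-validated R^2 per fold:", np.round(r2, 3))
    ```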

  15. Validation of vision-based obstacle detection algorithms for low-altitude helicopter flight

    NASA Technical Reports Server (NTRS)

    Suorsa, Raymond; Sridhar, Banavar

    1991-01-01

    A validation facility in use at the NASA Ames Research Center is described that is aimed at testing vision-based obstacle detection and range estimation algorithms suitable for low-altitude helicopter flight. The facility is capable of processing hundreds of frames of calibrated multicamera six-degree-of-freedom motion image sequences, generating calibrated multicamera laboratory images using convenient window-based software, and viewing range estimation results from different algorithms along with truth data using powerful window-based visualization software.

  16. Meeting Report: Validation of Toxicogenomics-Based Test Systems: ECVAM–ICCVAM/NICEATM Considerations for Regulatory Use

    PubMed Central

    Corvi, Raffaella; Ahr, Hans-Jürgen; Albertini, Silvio; Blakey, David H.; Clerici, Libero; Coecke, Sandra; Douglas, George R.; Gribaldo, Laura; Groten, John P.; Haase, Bernd; Hamernik, Karen; Hartung, Thomas; Inoue, Tohru; Indans, Ian; Maurici, Daniela; Orphanides, George; Rembges, Diana; Sansone, Susanna-Assunta; Snape, Jason R.; Toda, Eisaku; Tong, Weida; van Delft, Joost H.; Weis, Brenda; Schechtman, Leonard M.

    2006-01-01

    This is the report of the first workshop “Validation of Toxicogenomics-Based Test Systems” held 11–12 December 2003 in Ispra, Italy. The workshop was hosted by the European Centre for the Validation of Alternative Methods (ECVAM) and organized jointly by ECVAM, the U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), and the National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM). The primary aim of the workshop was for participants to discuss and define principles applicable to the validation of toxicogenomics platforms as well as validation of specific toxicologic test methods that incorporate toxicogenomics technologies. The workshop was viewed as an opportunity for initiating a dialogue between technologic experts, regulators, and the principal validation bodies and for identifying those factors to which the validation process would be applicable. It was felt that to do so now, as the technology is evolving and associated challenges are identified, would be a basis for the future validation of the technology when it reaches the appropriate stage. Because of the complexity of the issue, different aspects of the validation of toxicogenomics-based test methods were covered. The three focus areas include a) biologic validation of toxicogenomics-based test methods for regulatory decision making, b) technical and bioinformatics aspects related to validation, and c) validation issues as they relate to regulatory acceptance and use of toxicogenomics-based test methods. In this report we summarize the discussions and describe in detail the recommendations for future direction and priorities. PMID:16507466

  17. Meeting report: Validation of toxicogenomics-based test systems: ECVAM-ICCVAM/NICEATM considerations for regulatory use.

    PubMed

    Corvi, Raffaella; Ahr, Hans-Jürgen; Albertini, Silvio; Blakey, David H; Clerici, Libero; Coecke, Sandra; Douglas, George R; Gribaldo, Laura; Groten, John P; Haase, Bernd; Hamernik, Karen; Hartung, Thomas; Inoue, Tohru; Indans, Ian; Maurici, Daniela; Orphanides, George; Rembges, Diana; Sansone, Susanna-Assunta; Snape, Jason R; Toda, Eisaku; Tong, Weida; van Delft, Joost H; Weis, Brenda; Schechtman, Leonard M

    2006-03-01

    This is the report of the first workshop "Validation of Toxicogenomics-Based Test Systems" held 11-12 December 2003 in Ispra, Italy. The workshop was hosted by the European Centre for the Validation of Alternative Methods (ECVAM) and organized jointly by ECVAM, the U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), and the National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM). The primary aim of the workshop was for participants to discuss and define principles applicable to the validation of toxicogenomics platforms as well as validation of specific toxicologic test methods that incorporate toxicogenomics technologies. The workshop was viewed as an opportunity for initiating a dialogue between technologic experts, regulators, and the principal validation bodies and for identifying those factors to which the validation process would be applicable. It was felt that to do so now, as the technology is evolving and associated challenges are identified, would be a basis for the future validation of the technology when it reaches the appropriate stage. Because of the complexity of the issue, different aspects of the validation of toxicogenomics-based test methods were covered. The three focus areas include a) biologic validation of toxicogenomics-based test methods for regulatory decision making, b) technical and bioinformatics aspects related to validation, and c) validation issues as they relate to regulatory acceptance and use of toxicogenomics-based test methods. In this report we summarize the discussions and describe in detail the recommendations for future direction and priorities.

  18. Ship Detection in SAR Image Based on the Alpha-stable Distribution

    PubMed Central

    Wang, Changcheng; Liao, Mingsheng; Li, Xiaofeng

    2008-01-01

    This paper describes an improved Constant False Alarm Rate (CFAR) ship detection algorithm for spaceborne synthetic aperture radar (SAR) images based on the Alpha-stable distribution model. Typically, the CFAR algorithm uses the Gaussian distribution model to describe the statistical characteristics of a SAR image's background clutter. However, the Gaussian distribution is only valid for multilook SAR images when several radar looks are averaged. As sea clutter in SAR images shows spiky or heavy-tailed characteristics, the Gaussian distribution often fails to describe background sea clutter. In this study, we replace the Gaussian distribution with the Alpha-stable distribution, which is widely used in impulsive or spiky signal processing, to describe the background sea clutter in SAR images. In our proposed algorithm, an initial step for detecting possible ship targets is employed. Then, similar to the typical two-parameter CFAR algorithm, a local process is applied to each pixel identified as a possible target. A RADARSAT-1 image is used to validate this Alpha-stable-distribution-based algorithm, and ship location data known for the time of the RADARSAT-1 SAR image acquisition are used to validate the ship detection results. The validation shows that the new CFAR algorithm based on the Alpha-stable distribution improves on the CFAR algorithm based on the Gaussian distribution. PMID:27873794
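
    The essence of the threshold-setting step can be sketched as follows (synthetic clutter, not the RADARSAT-1 data): fit the background window with an alpha-stable law and take the detection threshold from its upper quantile at the desired false-alarm probability, instead of from a Gaussian fit.

    ```python
    # Hedged sketch of alpha-stable CFAR thresholding; the clutter sample is synthetic.
    import numpy as np
    from scipy.stats import levy_stable, norm

    rng = np.random.default_rng(0)
    clutter = np.abs(rng.standard_cauchy(500))      # stand-in for spiky sea-clutter amplitudes
    pfa = 1e-3                                      # desired probability of false alarm

    # MLE fit of the four alpha-stable parameters (can be slow on large background windows)
    alpha, beta, loc, scale = levy_stable.fit(clutter)
    t_stable = levy_stable.ppf(1 - pfa, alpha, beta, loc=loc, scale=scale)
    t_gauss = norm.ppf(1 - pfa, loc=clutter.mean(), scale=clutter.std())

    pixel = 75.0                                    # amplitude of a candidate ship pixel
    print("alpha-stable threshold:", t_stable, "| Gaussian threshold:", t_gauss)
    print("declared as target:", pixel > t_stable)
    ```

    The heavier-tailed fit yields a higher threshold for the same false-alarm rate, which is what reduces false alarms on spiky clutter relative to the Gaussian-based CFAR.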

  19. The quality of instruments to assess the process of shared decision making: A systematic review.

    PubMed

    Gärtner, Fania R; Bomhof-Roordink, Hanna; Smith, Ian P; Scholl, Isabelle; Stiggelbout, Anne M; Pieterse, Arwen H

    2018-01-01

    To inventory instruments assessing the process of shared decision making and appraise their measurement quality, taking into account the methodological quality of their validation studies. In a systematic review we searched seven databases (PubMed, Embase, Emcare, Cochrane, PsycINFO, Web of Science, Academic Search Premier) for studies investigating instruments measuring the process of shared decision making. Per identified instrument, we assessed the level of evidence separately for 10 measurement properties following a three-step procedure: 1) appraisal of the methodological quality using the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) checklist, 2) appraisal of the psychometric quality of the measurement property using three possible quality scores, 3) best-evidence synthesis based on the number of studies, their methodological and psychometric quality, and the direction and consistency of the results. The study protocol was registered at PROSPERO: CRD42015023397. We included 51 articles describing the development and/or evaluation of 40 shared decision-making process instruments: 16 patient questionnaires, 4 provider questionnaires, 18 coding schemes, and 2 instruments measuring multiple perspectives. There is an overall lack of evidence for their measurement quality, either because validation is missing or because methods are poor. The best-evidence synthesis indicated positive results for a majority of instruments for content validity (50%) and structural validity (53%) where these were evaluated, but negative results for a majority of instruments where inter-rater reliability (47%) and hypotheses testing (59%) were evaluated. Due to the lack of evidence on measurement quality, the choice of the most appropriate instrument can best be based on the instrument's content and characteristics such as the perspective that it assesses. We recommend refinement and validation of existing instruments, and the use of the COSMIN guidelines to help guarantee high-quality evaluations.

  20. Scrutinizing A Survey-Based Measure of Science and Mathematics Teacher Knowledge: Relationship to Observations of Teaching Practice

    ERIC Educational Resources Information Center

    Talbot, Robert M., III

    2017-01-01

    There is a clear need for valid and reliable instrumentation that measures teacher knowledge. However, the process of investigating and making a case for instrument validity is not a simple undertaking; rather, it is a complex endeavor. This paper presents the empirical case of one aspect of such an instrument validation effort. The particular…

  1. Measuring adverse events in helicopter emergency medical services: establishing content validity.

    PubMed

    Patterson, P Daniel; Lave, Judith R; Martin-Gill, Christian; Weaver, Matthew D; Wadas, Richard J; Arnold, Robert M; Roth, Ronald N; Mosesso, Vincent N; Guyette, Francis X; Rittenberger, Jon C; Yealy, Donald M

    2014-01-01

    We sought to create a valid framework for detecting adverse events (AEs) in the high-risk setting of helicopter emergency medical services (HEMS). We assembled a panel of 10 expert clinicians (n = 6 emergency medicine physicians and n = 4 prehospital nurses and flight paramedics) affiliated with a large multistate HEMS organization in the Northeast US. We used a modified Delphi technique to develop a framework for detecting AEs associated with the treatment of critically ill or injured patients. We used a widely applied measure, the content validity index (CVI), to quantify the validity of the framework's content. The expert panel of 10 clinicians reached consensus on a common AE definition and four-step protocol/process for AE detection in HEMS. The consensus-based framework is composed of three main components: (1) a trigger tool, (2) a method for rating proximal cause, and (3) a method for rating AE severity. The CVI findings isolate components of the framework considered content valid. We demonstrate a standardized process for the development of a content-valid framework for AE detection. The framework is a model for the development of a method for AE identification in other settings, including ground-based EMS.
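
    For readers unfamiliar with the content validity index, the sketch below shows the usual calculation on hypothetical panel ratings (4-point relevance scale, 10 raters): the item-level CVI is the proportion of experts rating the item 3 or 4, and the scale-level CVI is the average across items. The data are invented and do not come from the study.

    ```python
    # Illustrative content validity index (CVI) calculation on hypothetical expert ratings.
    import numpy as np

    # rows = framework items (e.g., trigger-tool elements), columns = 10 expert raters
    ratings = np.array([
        [4, 4, 3, 4, 4, 3, 4, 4, 4, 3],
        [3, 4, 4, 2, 4, 4, 3, 4, 4, 4],
        [4, 3, 4, 4, 2, 4, 4, 3, 4, 4],
    ])

    i_cvi = (ratings >= 3).mean(axis=1)    # proportion of experts rating each item relevant
    s_cvi_ave = i_cvi.mean()               # scale-level CVI, averaging method
    print("I-CVI per item:", i_cvi, "| S-CVI/Ave:", round(s_cvi_ave, 2))
    ```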

  2. A process improvement model for software verification and validation

    NASA Technical Reports Server (NTRS)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and space station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.

  3. A process improvement model for software verification and validation

    NASA Technical Reports Server (NTRS)

    Callahan, John; Sabolish, George

    1994-01-01

    We describe ongoing work at the NASA Independent Verification and Validation (IV&V) Facility to establish a process improvement model for software verification and validation (V&V) organizations. This model, similar to those used by some software development organizations, uses measurement-based techniques to identify problem areas and introduce incremental improvements. We seek to replicate this model for organizations involved in V&V on large-scale software development projects such as EOS and Space Station. At the IV&V Facility, a university research group and V&V contractors are working together to collect metrics across projects in order to determine the effectiveness of V&V and improve its application. Since V&V processes are intimately tied to development processes, this paper also examines the repercussions for development organizations in large-scale efforts.

  4. Validation in the Absence of Observed Events.

    PubMed

    Lathrop, John; Ezell, Barry

    2016-04-01

    This article addresses the problem of validating models in the absence of observed events, in the area of weapons of mass destruction terrorism risk assessment. We address that problem with a broadened definition of "validation," based on stepping "up" a level to considering the reason why decisionmakers seek validation, and from that basis redefine validation as testing how well the model can advise decisionmakers in terrorism risk management decisions. We develop that into two conditions: validation must be based on cues available in the observable world; and it must focus on what can be done to affect that observable world, i.e., risk management. That leads to two foci: (1) the real-world risk generating process, and (2) best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three best use of available data pitfalls: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests--Does the model: … capture initiation? … capture the sequence of events by which attack scenarios unfold? … consider unanticipated scenarios? … consider alternative causal chains? Finally, we corroborate our approach against three validation tests from the DOD literature: Is the model a correct representation of the process to be simulated? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful? © 2015 Society for Risk Analysis.

  5. The Chemical Validation and Standardization Platform (CVSP): large-scale automated validation of chemical structure datasets.

    PubMed

    Karapetyan, Karen; Batchelor, Colin; Sharpe, David; Tkachenko, Valery; Williams, Antony J

    2015-01-01

    There are presently hundreds of online databases hosting millions of chemical compounds and associated data. As a result of the number of cheminformatics software tools that can be used to produce the data, subtle differences between the various cheminformatics platforms, as well as the naivety of the software users, there are a myriad of issues that can exist with chemical structure representations online. In order to help facilitate validation and standardization of chemical structure datasets from various sources we have delivered a freely available internet-based platform to the community for the processing of chemical compound datasets. The chemical validation and standardization platform (CVSP) both validates and standardizes chemical structure representations according to sets of systematic rules. The chemical validation algorithms detect issues with submitted molecular representations using pre-defined or user-defined dictionary-based molecular patterns that are chemically suspicious or potentially requiring manual review. Each identified issue is assigned one of three levels of severity - Information, Warning, and Error - in order to conveniently inform the user of the need to browse and review subsets of their data. The validation process includes validation of atoms and bonds (e.g., flagging query atoms and bonds), valences, and stereochemistry. The standard form of submission of collections of data, the SDF file, allows the user to map the data fields to predefined CVSP fields for the purpose of cross-validating associated SMILES and InChIs with the connection tables contained within the SDF file. This platform has been applied to the analysis of a large number of data sets prepared for deposition to our ChemSpider database and in preparation of data for the Open PHACTS project. In this work we review the results of the automated validation of the DrugBank dataset, a popular drug and drug target database utilized by the community, and the ChEMBL 17 dataset. The CVSP web site is located at http://cvsp.chemspider.com/. A platform for the validation and standardization of chemical structure representations of various formats has been developed and made available to the community to assist and encourage the processing of chemical structure files to produce more homogeneous compound representations for exchange and interchange between online databases. While the CVSP platform is designed with flexibility inherent to the rules that can be used for processing the data, we have produced a recommended rule set based on our own experience with large data sets such as DrugBank, ChEMBL, and data sets from ChemSpider.
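
    As a rough illustration of the cross-validation of SMILES and InChI fields against the SDF connection table (using RDKit as a stand-in for CVSP's own rule engine; the file name and property names are hypothetical):

    ```python
    # Hedged sketch: cross-checking SMILES/InChI data fields against the SDF connection table.
    from rdkit import Chem

    issues = []
    for mol in Chem.SDMolSupplier("dataset.sdf"):                  # hypothetical input file
        if mol is None:
            issues.append(("unparsable record", "Error"))
            continue
        inchi_ct = Chem.MolToInchi(mol)                            # InChI derived from the connection table
        name = mol.GetProp("_Name") if mol.HasProp("_Name") else "unnamed"
        if mol.HasProp("InChI") and mol.GetProp("InChI") != inchi_ct:
            issues.append((name, "Warning: InChI field disagrees with connection table"))
        if mol.HasProp("SMILES"):
            smi_mol = Chem.MolFromSmiles(mol.GetProp("SMILES"))
            if smi_mol is None or Chem.MolToInchi(smi_mol) != inchi_ct:
                issues.append((name, "Warning: SMILES field disagrees with connection table"))

    print(f"{len(issues)} flagged records")
    ```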

  6. Extending the Strategy Based Risk Model Using the Delphi Method: An Application to the Validation Process for Research and Developmental (R&D) Satellites

    DTIC Science & Technology

    2009-12-01

    correctly Risk before validation step: 41-60% - Is this too high/low? Why? Risk 8: Operational or data latency impacts based on relationship between...too high, too low, or correct. We also asked them to comment on why they felt this way. Finally, we left additional space on the survey for any...cost of each validation effort was too high, too low, or acceptable. They then gave us rationale for their beliefs. The second cost associated with

  7. Validity evidence as a key marker of quality of technical skill assessment in OTL-HNS.

    PubMed

    Labbé, Mathilde; Young, Meredith; Nguyen, Lily H P

    2018-01-13

    Quality monitoring of assessment practices should be a priority in all residency programs. Validity evidence is one of the main hallmarks of assessment quality and should be collected to support the interpretation and use of assessment data. Our objective was to identify, synthesize, and present the validity evidence reported supporting different technical skill assessment tools in otolaryngology-head and neck surgery (OTL-HNS). We performed a secondary analysis of data generated through a systematic review of all published tools for assessing technical skills in OTL-HNS (n = 16). For each tool, we coded validity evidence according to the five types of evidence described by the American Educational Research Association's interpretation of Messick's validity framework. Descriptive statistical analyses were conducted. All 16 tools included in our analysis were supported by internal structure and relationship to variables validity evidence. Eleven articles presented evidence supporting content. Response process was discussed only in one article, and no study reported on evidence exploring consequences. We present the validity evidence reported for 16 rater-based tools that could be used for work-based assessment of OTL-HNS residents in the operating room. The articles included in our review were consistently deficient in evidence for response process and consequences. Rater-based assessment tools that support high-stakes decisions that impact the learner and programs should include several sources of validity evidence. Thus, use of any assessment should be done with careful consideration of the context-specific validity evidence supporting score interpretation, and we encourage deliberate continual assessment quality-monitoring. NA. Laryngoscope, 2018. © 2018 The American Laryngological, Rhinological and Otological Society, Inc.

  8. Validity and reliability of the Turkish version of the Optimality Index-US (OI-US) to assess maternity care outcomes.

    PubMed

    Yucel, Cigdem; Taskin, Lale; Low, Lisa Kane

    2015-12-01

    Although obstetrical interventions are used commonly in Turkey, there is no standardized evidence-based assessment tool to evaluate maternity care outcomes. The Optimality Index-US (OI-US) is an evidence-based tool that was developed for the purpose of measuring aggregate perinatal care processes and outcomes against an optimal or best possible standard. This index has been validated and used in the Netherlands, the USA, and the UK to date. The objective of this study was to adapt the OI-US to assess maternity care outcomes in Turkey. Translation and back translation were used to develop the Optimality Index-Turkey (OI-TR) version. To evaluate the content validity of the OI-TR, an expert panel group (n=10) reviewed the items and the evidence-based quality of the OI-TR for application in Turkey. Following the content validity process, the OI-TR was used to assess 150 healthy and 150 high-risk pregnant women who gave birth at a high-volume urban maternity hospital in Turkey. The scores of the two groups were compared to assess the discriminant validity of the OI-TR. The percentage of agreement between two raters and the Kappa statistic were calculated to evaluate the reliability. Content validity was established for the OI-TR by an expert group. Discriminant validity was confirmed by comparing the OI scores of healthy pregnant women (mean OI score=77.65%) and those of high-risk pregnant women (mean OI score=78.60%). The percentage of agreement between the two raters was 96.19%, and inter-rater agreement was achieved for each item in the OI-TR. The OI-TR is a valid and reliable tool that can be used to assess maternity care outcomes in Turkey. The results of this study indicate that although the risk statuses of the women differed, the type of care they received was essentially the same, as measured by the OI-TR. Care was not individualised based on risk and, for a majority of items, was inconsistent with evidence-based practice, which is not optimal. Use of the OI-TR will help to provide a standardized way to assess maternity care processes and outcomes in Turkey and can inform future research aimed at improving maternity care outcomes. Copyright © 2015 Elsevier Ltd. All rights reserved.
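
    For illustration, inter-rater reliability of the kind reported here can be computed as below; the item-level ratings are hypothetical, not the study data.

    ```python
    # Hedged sketch: percent agreement and Cohen's kappa for two raters scoring items
    # as optimal (1) / non-optimal (0). Ratings are invented.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    rater_a = np.array([1, 1, 0, 1, 0, 1, 1, 1, 0, 1])
    rater_b = np.array([1, 1, 0, 1, 1, 1, 1, 1, 0, 1])

    agreement = (rater_a == rater_b).mean() * 100          # raw percent agreement
    kappa = cohen_kappa_score(rater_a, rater_b)             # chance-corrected agreement
    print(f"percent agreement: {agreement:.1f}%, kappa: {kappa:.2f}")
    ```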

  9. Perspectives on Validation of High-Throughput Assays Supporting 21st Century Toxicity Testing

    PubMed Central

    Judson, Richard; Kavlock, Robert; Martin, Matt; Reif, David; Houck, Keith; Knudsen, Thomas; Richard, Ann; Tice, Raymond R.; Whelan, Maurice; Xia, Menghang; Huang, Ruili; Austin, Christopher; Daston, George; Hartung, Thomas; Fowle, John R.; Wooge, William; Tong, Weida; Dix, David

    2014-01-01

    Summary In vitro high-throughput screening (HTS) assays are seeing increasing use in toxicity testing. HTS assays can simultaneously test many chemicals, but have seen limited use in the regulatory arena, in part because of the need to undergo rigorous, time-consuming formal validation. Here we discuss streamlining the validation process, specifically for prioritization applications in which HTS assays are used to identify a high-concern subset of a collection of chemicals. The high-concern chemicals could then be tested sooner rather than later in standard guideline bioassays. The streamlined validation process would continue to ensure the reliability and relevance of assays for this application. We discuss the following practical guidelines: (1) follow current validation practice to the extent possible and practical; (2) make increased use of reference compounds to better demonstrate assay reliability and relevance; (3) deemphasize the need for cross-laboratory testing; and (4) implement a web-based, transparent, and expedited peer review process. PMID:23338806

  10. Development and Validation of Pathogen Environmental Monitoring Programs for Small Cheese Processing Facilities.

    PubMed

    Beno, Sarah M; Stasiewicz, Matthew J; Andrus, Alexis D; Ralyea, Robert D; Kent, David J; Martin, Nicole H; Wiedmann, Martin; Boor, Kathryn J

    2016-12-01

    Pathogen environmental monitoring programs (EMPs) are essential for food processing facilities of all sizes that produce ready-to-eat food products exposed to the processing environment. We developed, implemented, and evaluated EMPs targeting Listeria spp. and Salmonella in nine small cheese processing facilities, including seven farmstead facilities. Individual EMPs with monthly sample collection protocols were designed specifically for each facility. Salmonella was detected in only one facility, with likely introduction from the adjacent farm indicated by pulsed-field gel electrophoresis data. Listeria spp. were isolated from all nine facilities during routine sampling. The overall prevalences of Listeria spp. (other than Listeria monocytogenes) and of L. monocytogenes in the 4,430 environmental samples collected were 6.03% and 1.35%, respectively. Molecular characterization and subtyping data suggested persistence of a given Listeria spp. strain in seven facilities and persistence of L. monocytogenes in four facilities. To assess the routine sampling plans, validation sampling for Listeria spp. was performed in seven facilities after at least 6 months of routine sampling. This validation sampling was performed by independent individuals and included collection of 50 to 150 samples per facility, based on statistical sample size calculations. Two of the facilities had a significantly higher frequency of detection of Listeria spp. during the validation sampling than during routine sampling, whereas two other facilities had significantly lower frequencies of detection. This study provides a model for a science- and statistics-based approach to developing and validating pathogen EMPs.
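
    One way to make the routine-versus-validation comparison described above concrete (the counts below are invented, not the study's data) is a simple 2x2 test of detection frequencies:

    ```python
    # Hedged sketch: comparing Listeria spp. detection frequency in routine vs. validation sampling.
    from scipy.stats import fisher_exact

    # rows: routine vs. validation sampling; columns: [positive swabs, negative swabs]
    table = [[24, 376],     # routine: 24 positives out of 400 swabs
             [14, 86]]      # validation: 14 positives out of 100 swabs

    odds_ratio, p_value = fisher_exact(table)
    print(f"odds ratio = {odds_ratio:.2f}, p = {p_value:.4f}")
    ```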

  11. Quality Risk Management: Putting GMP Controls First.

    PubMed

    O'Donnell, Kevin; Greene, Anne; Zwitkovits, Michael; Calnan, Nuala

    2012-01-01

    This paper presents a practical way in which current approaches to quality risk management (QRM) may be improved, such that they better support qualification, validation programs, and change control proposals at manufacturing sites. The paper is focused on the treatment of good manufacturing practice (GMP) controls during QRM exercises. It specifically addresses why it is important to evaluate and classify such controls in terms of how they affect the severity, probability of occurrence, and detection ratings that may be assigned to potential failure modes or negative events. It also presents a QRM process that is designed to directly link the outputs of risk assessments and risk control activities with qualification and validation protocols in the GMP environment. This paper concerns the need for improvement in the use of risk-based principles and tools when working to ensure that the manufacturing processes used to produce medicines, and their related equipment, are appropriate. Manufacturing processes need to be validated (or proven) to demonstrate that they can produce a medicine of the required quality. The items of equipment used in such processes need to be qualified, in order to prove that they are fit for their intended use. Quality risk management (QRM) tools can be used to support such qualification and validation activities, but their use should be science-based and subject to as little subjectivity and uncertainty as possible. When changes are proposed to manufacturing processes, equipment, or related activities, they also need careful evaluation to ensure that any risks present are managed effectively. This paper presents a practical approach to how QRM may be improved so that it better supports qualification, validation programs, and change control proposals in a more scientific way. This improved approach is based on the treatment of what are called good manufacturing practice (GMP) controls during those QRM exercises. A GMP control can be considered to be any control that is put in place to assure product quality and regulatory compliance. This improved approach is also based on how the detectability of risks is assessed. This is important because when producing medicines, it is not always good practice to place a high reliance upon detection-type controls in the absence of an adequate level of assurance in the manufacturing process that leads to the finished medicine.
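
    To illustrate the severity/occurrence/detection framing in the simplest possible terms (an FMEA-style risk priority number with invented ratings; this is not the paper's specific QRM process):

    ```python
    # Hedged sketch: scoring a failure mode before and after GMP controls are credited.
    def rpn(severity, occurrence, detection):
        """FMEA-style risk priority number; 1-10 scales assumed for each rating."""
        return severity * occurrence * detection

    before = rpn(severity=8, occurrence=5, detection=6)   # no GMP control credited
    after  = rpn(severity=8, occurrence=2, detection=2)   # preventive + detection controls credited
    print("RPN before:", before, "| after crediting GMP controls:", after)
    ```

    Consistent with the abstract, occurrence and detection ratings should only be improved when a specific control can be credited, and heavy reliance on detection-type controls alone is discouraged.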

  12. Computer aided manual validation of mass spectrometry-based proteomic data.

    PubMed

    Curran, Timothy G; Bryson, Bryan D; Reigelhaupt, Michael; Johnson, Hannah; White, Forest M

    2013-06-15

    Advances in mass spectrometry-based proteomic technologies have increased the speed of analysis and the depth provided by a single analysis. Computational tools to evaluate the accuracy of peptide identifications from these high-throughput analyses have not kept pace with technological advances; currently the most common quality evaluation methods are based on statistical analysis of the likelihood of false positive identifications in large-scale data sets. While helpful, these calculations do not consider the accuracy of each identification, thus creating a precarious situation for biologists relying on the data to inform experimental design. Manual validation is the gold standard approach to confirm accuracy of database identifications, but is extremely time-intensive. To palliate the increasing time required to manually validate large proteomic datasets, we provide computer aided manual validation software (CAMV) to expedite the process. Relevant spectra are collected, catalogued, and pre-labeled, allowing users to efficiently judge the quality of each identification and summarize applicable quantitative information. CAMV significantly reduces the burden associated with manual validation and will hopefully encourage broader adoption of manual validation in mass spectrometry-based proteomics. Copyright © 2013 Elsevier Inc. All rights reserved.

  13. Process-based modeling of temperature and water profiles in the seedling recruitment zone: Part I. Model validation

    USDA-ARS?s Scientific Manuscript database

    Process-based modeling provides detailed spatial and temporal information of the soil environment in the shallow seedling recruitment zone across field topography where measurements of soil temperature and water may not sufficiently describe the zone. Hourly temperature and water profiles within the...

  14. Validation of electronic medical record-based phenotyping algorithms: results and lessons learned from the eMERGE network.

    PubMed

    Newton, Katherine M; Peissig, Peggy L; Kho, Abel Ngo; Bielinski, Suzette J; Berg, Richard L; Choudhary, Vidhu; Basford, Melissa; Chute, Christopher G; Kullo, Iftikhar J; Li, Rongling; Pacheco, Jennifer A; Rasmussen, Luke V; Spangler, Leslie; Denny, Joshua C

    2013-06-01

    Genetic studies require precise phenotype definitions, but electronic medical record (EMR) phenotype data are recorded inconsistently and in a variety of formats. To present lessons learned about validation of EMR-based phenotypes from the Electronic Medical Records and Genomics (eMERGE) studies. The eMERGE network created and validated 13 EMR-derived phenotype algorithms. Network sites are Group Health, Marshfield Clinic, Mayo Clinic, Northwestern University, and Vanderbilt University. By validating EMR-derived phenotypes we learned that: (1) multisite validation improves phenotype algorithm accuracy; (2) targets for validation should be carefully considered and defined; (3) specifying time frames for review of variables eases validation time and improves accuracy; (4) using repeated measures requires defining the relevant time period and specifying the most meaningful value to be studied; (5) patient movement in and out of the health plan (transience) can result in incomplete or fragmented data; (6) the review scope should be defined carefully; (7) particular care is required in combining EMR and research data; (8) medication data can be assessed using claims, medications dispensed, or medications prescribed; (9) algorithm development and validation work best as an iterative process; and (10) validation by content experts or structured chart review can provide accurate results. Despite the diverse structure of the five EMRs of the eMERGE sites, we developed, validated, and successfully deployed 13 electronic phenotype algorithms. Validation is a worthwhile process that not only measures phenotype performance but also strengthens phenotype algorithm definitions and enhances their inter-institutional sharing.

  15. Validating a Geographical Image Retrieval System.

    ERIC Educational Resources Information Center

    Zhu, Bin; Chen, Hsinchun

    2000-01-01

    Summarizes a prototype geographical image retrieval system that demonstrates how to integrate image processing and information analysis techniques to support large-scale content-based image retrieval. Describes an experiment to validate the performance of this image retrieval system against that of human subjects by examining similarity analysis…

  16. Media fill for validation of a good manufacturing practice-compliant cell production process.

    PubMed

    Serra, Marta; Roseti, Livia; Bassi, Alessandra

    2015-01-01

    According to the European Regulation EC 1394/2007, the clinical use of Advanced Therapy Medicinal Products, such as Human Bone Marrow Mesenchymal Stem Cells expanded for the regeneration of bone tissue or Chondrocytes for Autologous Implantation, requires the development of a process in compliance with the Good Manufacturing Practices. The Media Fill test, consisting of a simulation of the expansion process by using a microbial growth medium instead of the cells, is considered one of the most effective ways to validate a cell production process. Such simulation, in fact, allows to identify any weakness in production that can lead to microbiological contamination of the final cell product as well as qualifying operators. Here, we report the critical aspects concerning the design of a Media Fill test to be used as a tool for the further validation of the sterility of a cell-based Good Manufacturing Practice-compliant production process.

  17. Development and Validation of the Social Information Processing Application: A Web-Based Measure of Social Information Processing Patterns in Elementary School-Age Boys

    ERIC Educational Resources Information Center

    Kupersmidt, Janis B.; Stelter, Rebecca; Dodge, Kenneth A.

    2011-01-01

    The purpose of this study was to evaluate the psychometric properties of an audio computer-assisted self-interviewing Web-based software application called the Social Information Processing Application (SIP-AP) that was designed to assess social information processing skills in boys in 3rd through 5th grades. This study included a racially and…

  18. Statistical Calibration and Validation of a Homogeneous Ventilated Wall-Interference Correction Method for the National Transonic Facility

    NASA Technical Reports Server (NTRS)

    Walker, Eric L.

    2005-01-01

    Wind tunnel experiments will continue to be a primary source of validation data for many types of mathematical and computational models in the aerospace industry. The increased emphasis on accuracy of data acquired from these facilities requires understanding of the uncertainty of not only the measurement data but also any correction applied to the data. One of the largest and most critical corrections made to these data is due to wall interference. In an effort to understand the accuracy and suitability of these corrections, a statistical validation process for wall interference correction methods has been developed. This process is based on the use of independent cases which, after correction, are expected to produce the same result. Comparison of these independent cases with respect to the uncertainty in the correction process establishes a domain of applicability based on the capability of the method to provide reasonable corrections with respect to customer accuracy requirements. The statistical validation method was applied to the version of the Transonic Wall Interference Correction System (TWICS) recently implemented in the National Transonic Facility at NASA Langley Research Center. The TWICS code generates corrections for solid and slotted wall interference in the model pitch plane based on boundary pressure measurements. Before validation could be performed on this method, it was necessary to calibrate the ventilated wall boundary condition parameters. Discrimination comparisons are used to determine the most representative of three linear boundary condition models which have historically been used to represent longitudinally slotted test section walls. Of the three linear boundary condition models implemented for ventilated walls, the general slotted wall model was the most representative of the data. The TWICS code using the calibrated general slotted wall model was found to be valid to within the process uncertainty for test section Mach numbers less than or equal to 0.60. The scatter among the mean corrected results of the bodies of revolution validation cases was within one count of drag on a typical transport aircraft configuration for Mach numbers at or below 0.80 and two counts of drag for Mach numbers at or below 0.90.

  19. FDA 2011 process validation guidance: lifecycle compliance model.

    PubMed

    Campbell, Cliff

    2014-01-01

    This article has been written as a contribution to the industry's efforts in migrating from a document-driven to a data-driven compliance mindset. A combination of target product profile, control engineering, and general sum principle techniques is presented as the basis of a simple but scalable lifecycle compliance model in support of modernized process validation. Unit operations and significant variables occupy pole position within the model, documentation requirements being treated as a derivative or consequence of the modeling process. The quality system is repositioned as a subordinate of system quality, this being defined as the integral of related "system qualities". The article represents a structured interpretation of the U.S. Food and Drug Administration's 2011 Guidance for Industry on Process Validation and is based on the author's educational background and his manufacturing/consulting experience in the validation field. The U.S. Food and Drug Administration's Guidance for Industry on Process Validation (2011) provides a wide-ranging and rigorous outline of compliant drug manufacturing requirements relative to its 20th-century predecessor (1987). Its declared focus is patient safety, and it identifies three inter-related (and obvious) stages of the compliance lifecycle. Firstly, processes must be designed, both from a technical and quality perspective. Secondly, processes must be qualified, providing evidence that the manufacturing facility is fully "roadworthy" and fit for its intended purpose. Thirdly, processes must be verified, meaning that commercial batches must be monitored to ensure that processes remain in a state of control throughout their lifetime.

  20. Risk Management Approach & Progress in Cd and Cr6+ Elimination

    DTIC Science & Technology

    2014-11-18

    Documentation Available by 2015? Gaps Conversion Coating- Aluminum Avionics/Electrical- Class 3 9 7 Medium yes- joint service/OEM/NASA effort to...Optimized conditions validated by NASA. – FRC validation: immersion process – Based on data from the lab Surtec 650V optimization, an 1800-gallon tank...acting similarly, 650V not • Plans: scale up to 80-gallon process line; assess Metalast TCP/HF- EPA and Henkel products; further study 650V

  1. Contemporary Test Validity in Theory and Practice: A Primer for Discipline-Based Education Researchers.

    PubMed

    Reeves, Todd D; Marbach-Ad, Gili

    2016-01-01

    Most discipline-based education researchers (DBERs) were formally trained in the methods of scientific disciplines such as biology, chemistry, and physics, rather than social science disciplines such as psychology and education. As a result, DBERs may have never taken specific courses in the social science research methodology--either quantitative or qualitative--on which their scholarship often relies so heavily. One particular aspect of (quantitative) social science research that differs markedly from disciplines such as biology and chemistry is the instrumentation used to quantify phenomena. In response, this Research Methods essay offers a contemporary social science perspective on test validity and the validation process. The instructional piece explores the concepts of test validity, the validation process, validity evidence, and key threats to validity. The essay also includes an in-depth example of a validity argument and validation approach for a test of student argument analysis. In addition to DBERs, this essay should benefit practitioners (e.g., lab directors, faculty members) in the development, evaluation, and/or selection of instruments for their work assessing students or evaluating pedagogical innovations. © 2016 T. D. Reeves and G. Marbach-Ad. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  2. Validating archetypes for the Multiple Sclerosis Functional Composite.

    PubMed

    Braun, Michael; Brandt, Alexander Ulrich; Schulz, Stefan; Boeker, Martin

    2014-08-03

    Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects have not yet received sufficient attention. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. A standard archetype development approach was applied to a case set of three clinical tests for multiple sclerosis assessment: after an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal of validating archetypes pragmatically. The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. The validation process is a practical way to better harmonise models that diverge due to the necessary flexibility left open by the underlying formal reference model definitions. This case study provides evidence that both community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic but feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model.

  3. Validating archetypes for the Multiple Sclerosis Functional Composite

    PubMed Central

    2014-01-01

    Background Numerous information models for electronic health records, such as openEHR archetypes, are available. The quality of such clinical models is important to guarantee standardised semantics and to facilitate their interoperability. However, validation aspects have not yet received sufficient attention. The objective of this report is to investigate the feasibility of archetype development and its community-based validation process, presuming that this review process is a practical way to ensure high-quality information models amending the formal reference model definitions. Methods A standard archetype development approach was applied to a case set of three clinical tests for multiple sclerosis assessment: after an analysis of the tests, the obtained data elements were organised and structured. The appropriate archetype class was selected and the data elements were implemented in an iterative refinement process. Clinical and information modelling experts validated the models in a structured review process. Results Four new archetypes were developed and publicly deployed in the openEHR Clinical Knowledge Manager, an online platform provided by the openEHR Foundation. Afterwards, these four archetypes were validated by domain experts in a team review. The review was a formalised process, organised in the Clinical Knowledge Manager. Both the development and the review process turned out to be time-consuming tasks, mostly due to difficult selection processes between alternative modelling approaches. The archetype review was a straightforward team process with the goal of validating archetypes pragmatically. Conclusions The quality of medical information models is crucial to guarantee standardised semantic representation in order to improve interoperability. The validation process is a practical way to better harmonise models that diverge due to the necessary flexibility left open by the underlying formal reference model definitions. This case study provides evidence that both community- and tool-enabled review processes, structured in the Clinical Knowledge Manager, ensure archetype quality. It offers a pragmatic but feasible way to reduce variation in the representation of clinical information models towards a more unified and interoperable model. PMID:25087081

  4. Differentiating location- and distance-based processes in memory for time: an ERP study.

    PubMed

    Curran, Tim; Friedman, William J

    2003-09-01

    Memory for the time of events may benefit from reconstructive, location-based, and distance-based processes, but these processes are difficult to dissociate with behavioral methods. Neuropsychological research has emphasized the contribution of prefrontal brain mechanisms to memory for time but has not clearly differentiated location- from distance-based processing. The present experiment recorded event-related brain potentials (ERPs) while subjects completed two different temporal memory tests, designed to emphasize either location- or distance-based processing. The subjects' reports of location-based versus distance-based strategies and the reaction time pattern validated our experimental manipulation. Late (800-1,800 msec) frontal ERP effects were related to location-based processing. The results provide support for a two-process theory of memory for time and suggest that frontal memory mechanisms are specifically related to reconstructive, location-based processing.

  5. Validating Performance Level Descriptors (PLDs) for the AP® Environmental Science Exam

    ERIC Educational Resources Information Center

    Reshetar, Rosemary; Kaliski, Pamela; Chajewski, Michael; Lionberger, Karen

    2012-01-01

    This presentation summarizes a pilot study conducted after the May 2011 administration of the AP Environmental Science Exam. The study used analytical methods based on scaled anchoring as input to a Performance Level Descriptor validation process that solicited systematic input from subject matter experts.

  6. Optimization of critical quality attributes in continuous twin-screw wet granulation via design space validated with pilot scale experimental data.

    PubMed

    Liu, Huolong; Galbraith, S C; Ricart, Brendon; Stanton, Courtney; Smith-Goettler, Brandye; Verdi, Luke; O'Connor, Thomas; Lee, Sau; Yoon, Seongkyu

    2017-06-15

    In this study, the influence of key process variables (screw speed, throughput, and liquid-to-solid (L/S) ratio) of a continuous twin-screw wet granulation (TSWG) process was investigated using a central composite face-centered (CCF) experimental design. Regression models were developed to predict the process responses (motor torque, granule residence time), granule properties (size distribution, volume average diameter, yield, relative width, flowability), and tablet properties (tensile strength). The effects of the three key process variables were analyzed via contour and interaction plots. The experimental results demonstrated that all the process responses, granule properties, and tablet properties are influenced by changing the screw speed, throughput, and L/S ratio. The TSWG process was optimized to produce granules with a volume average diameter of 150 μm and a yield of 95% based on the developed regression models. A design space (DS) was built based on a volume average granule diameter between 90 and 200 μm and a granule yield larger than 75%, with a failure probability analysis using Monte Carlo simulations. Validation experiments confirmed the robustness and accuracy of the DS generated using the CCF experimental design in optimizing a continuous TSWG process. Copyright © 2017 Elsevier B.V. All rights reserved.
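
    A hedged sketch of the failure-probability step follows; the response-surface coefficients and uncertainties are invented placeholders, not the fitted models from this study. The idea is to propagate input and model uncertainty at a candidate operating point and estimate the probability that the granule criteria are missed.

    ```python
    # Hedged sketch: Monte Carlo failure probability at one candidate operating point.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000

    def granule_diameter(speed, tput, ls):
        """Assumed response surface for volume average diameter (um)."""
        return 20 + 0.05 * speed + 20 * tput + 320 * ls - 900 * (ls - 0.25) ** 2

    def granule_yield(speed, tput, ls):
        """Assumed response surface for granule yield (%)."""
        return 70 + 0.02 * speed + 5 * tput + 40 * ls

    # candidate operating point with assumed variability in the inputs
    speed = rng.normal(500, 20, n)      # screw speed (rpm)
    tput  = rng.normal(1.0, 0.05, n)    # throughput (kg/h)
    ls    = rng.normal(0.25, 0.01, n)   # liquid-to-solid ratio

    d = granule_diameter(speed, tput, ls) + rng.normal(0, 8, n)   # + assumed model error
    y = granule_yield(speed, tput, ls) + rng.normal(0, 2, n)

    inside = (d >= 90) & (d <= 200) & (y >= 75)                   # design space criteria
    print("estimated probability of failure:", 1 - inside.mean())
    ```

    Repeating this calculation over a grid of operating points and keeping those whose failure probability stays below a chosen limit traces out the design space.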

  7. A Study on the Data Compression Technology-Based Intelligent Data Acquisition (IDAQ) System for Structural Health Monitoring of Civil Structures

    PubMed Central

    Jeon, Joonryong

    2017-01-01

    In this paper, a data compression technology-based intelligent data acquisition (IDAQ) system was developed for structural health monitoring of civil structures, and its validity was tested using random signals (El-Centro seismic waveform). The IDAQ system was structured to include a high-performance CPU with large dynamic memory for multi-input and output in a radio frequency (RF) manner. In addition, the embedded software technology (EST) has been applied to it to implement diverse logics needed in the process of acquiring, processing and transmitting data. In order to utilize the IDAQ system for the structural health monitoring of civil structures, this study developed an artificial filter bank by which structural dynamic responses (acceleration) were efficiently acquired, and also optimized it on the random El-Centro seismic waveform. All techniques developed in this study have been embedded in our system. The data compression technology-based IDAQ system was proven valid in acquiring valid signals in a compressed size. PMID:28704945

  8. A Study on the Data Compression Technology-Based Intelligent Data Acquisition (IDAQ) System for Structural Health Monitoring of Civil Structures.

    PubMed

    Heo, Gwanghee; Jeon, Joonryong

    2017-07-12

    In this paper, a data compression technology-based intelligent data acquisition (IDAQ) system was developed for structural health monitoring of civil structures, and its validity was tested using random signals (El-Centro seismic waveform). The IDAQ system was structured to include a high-performance CPU with large dynamic memory for multi-input and output in a radio frequency (RF) manner. In addition, the embedded software technology (EST) has been applied to it to implement diverse logics needed in the process of acquiring, processing and transmitting data. In order to utilize the IDAQ system for the structural health monitoring of civil structures, this study developed an artificial filter bank by which structural dynamic responses (acceleration) were efficiently acquired, and also optimized it on the random El-Centro seismic waveform. All techniques developed in this study have been embedded in our system. The data compression technology-based IDAQ system was proven valid in acquiring valid signals in a compressed size.

  9. 7 CFR 25.404 - Validation of designation.

    Code of Federal Regulations, 2010 CFR

    2010-01-01

    ... maintain a process for ensuring ongoing broad-based participation by community residents consistent with the approved application and planning process outlined in the strategic plan. (1) Continuous... benchmarks, the process it will use for reviewing goals and benchmarks and revising its strategic plan. (2...

  10. Scaling the Information Processing Demands of Occupations

    ERIC Educational Resources Information Center

    Haase, Richard F.; Jome, LaRae M.; Ferreira, Joaquim Armando; Santos, Eduardo J. R.; Connacher, Christopher C.; Sendrowitz, Kerrin

    2011-01-01

    The purpose of this study was to provide additional validity evidence for a model of person-environment fit based on polychronicity, stimulus load, and information processing capacities. In this line of research the confluence of polychronicity and information processing (e.g., the ability of individuals to process stimuli from the environment…

  11. Enhancing students' learning in problem based learning: validation of a self-assessment scale for active learning and critical thinking.

    PubMed

    Khoiriyah, Umatul; Roberts, Chris; Jorm, Christine; Van der Vleuten, C P M

    2015-08-26

    Problem based learning (PBL) is a powerful learning activity, but fidelity to intended models may slip and student engagement wane, negatively impacting learning processes and outcomes. One potential solution to this degradation is to encourage self-assessment in the PBL tutorial. Self-assessment is a central component of the self-regulation of student learning behaviours. There are few measures to investigate self-assessment relevant to PBL processes. We developed a Self-assessment Scale on Active Learning and Critical Thinking (SSACT) to address this gap. We wished to demonstrate evidence of its validity in the context of PBL by exploring its internal structure. We used a mixed methods approach to scale development. We developed scale items from a qualitative investigation, a literature review, and consideration of existing tools used to study the PBL process. Expert review panels evaluated its content; a process of validation subsequently reduced the pool of items. We used structural equation modelling to undertake a confirmatory factor analysis (CFA) of the SSACT and computed coefficient alpha. The 14-item SSACT consisted of two domains, "active learning" and "critical thinking." The factorial validity of the SSACT was evidenced by all items loading significantly on their expected factors, a good model fit for the data, and good stability across two independent samples. Each subscale had good internal reliability (>0.8) and the subscales were strongly correlated with each other. The SSACT has sufficient evidence of its validity to support its use in the PBL process to encourage students to self-assess. The implementation of the SSACT may assist students to improve the quality of their learning in achieving PBL goals such as critical thinking and self-directed learning.
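
    The reported internal reliability (coefficient alpha > 0.8 per subscale) is computed from an item-score matrix by a standard formula; a minimal numpy sketch, using simulated rather than SSACT data:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Coefficient alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=(200, 1))                # shared latent trait
items = ability + rng.normal(0.0, 0.8, (200, 7))   # 7 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")
```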

  12. Process-based costing.

    PubMed

    Lee, Robert H; Bott, Marjorie J; Forbes, Sarah; Redford, Linda; Swagerty, Daniel L; Taunton, Roma Lee

    2003-01-01

    Understanding how quality improvement affects costs is important. Unfortunately, low-cost, reliable ways of measuring direct costs are scarce. This article builds on the principles of process improvement to develop a costing strategy that meets both criteria. Process-based costing has 4 steps: developing a flowchart, estimating resource use, valuing resources, and calculating direct costs. To illustrate the technique, this article uses it to cost the care planning process in 3 long-term care facilities. We conclude that process-based costing is easy to implement; generates reliable, valid data; and allows nursing managers to assess the costs of new or modified processes.
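
    The four steps translate almost directly into code. A minimal sketch with invented activities, hours, and wages; the structure (flowchart steps, resource use per step, resource valuation, summed direct cost) mirrors the strategy described above:

```python
# Step 1: flowchart of the care-planning process (illustrative steps only).
# Step 2: estimated resource use per step (hours by staff type).
# Step 3: resource valuation (hourly wage by staff type, hypothetical).
# Step 4: direct cost = sum over steps of hours * wage.

wages = {"RN": 38.0, "LPN": 26.0, "aide": 18.0}  # $/hour, assumed

process = [
    {"step": "assess resident", "RN": 1.0, "aide": 0.5},
    {"step": "draft care plan", "RN": 1.5},
    {"step": "team conference", "RN": 0.5, "LPN": 0.5, "aide": 0.5},
    {"step": "document plan",   "LPN": 0.75},
]

total = 0.0
for step in process:
    cost = sum(hours * wages[role]
               for role, hours in step.items() if role in wages)
    print(f"{step['step']:<18} ${cost:7.2f}")
    total += cost
print(f"{'direct cost':<18} ${total:7.2f}")
```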

  13. Development and validation of an instrument for evaluating inquiry-based tasks in science textbooks

    NASA Astrophysics Data System (ADS)

    Yang, Wenyuan; Liu, Enshan

    2016-12-01

    This article describes the development and validation of an instrument that can be used for content analysis of inquiry-based tasks. According to the theories of educational evaluation and qualities of inquiry, four essential functions that inquiry-based tasks should serve are defined: (1) assisting in the construction of understandings about scientific concepts, (2) providing students opportunities to use inquiry process skills, (3) being conducive to establishing understandings about scientific inquiry, and (4) giving students opportunities to develop higher order thinking skills. An instrument - the Inquiry-Based Tasks Analysis Inventory (ITAI) - was developed to judge whether inquiry-based tasks perform these functions well. To test the reliability and validity of the ITAI, 4 faculty members were invited to use the ITAI to collect data from 53 inquiry-based tasks in the 3 most widely adopted senior secondary biology textbooks in Mainland China. The results indicate that (1) the inter-rater reliability reached 87.7%, (2) the grading criteria have high discriminant validity, (3) the items possess high convergent validity, and (4) the Cronbach's alpha reliability coefficient reached 0.792. The study concludes that the ITAI is valid and reliable. Because of its solid foundations in theoretical and empirical argumentation, the ITAI is trustworthy.
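
    The 87.7% inter-rater figure is a percent-agreement statistic; a chance-corrected coefficient such as Cohen's kappa is often reported alongside it. A minimal sketch with hypothetical ratings from two raters (not the study's data):

```python
import numpy as np
from sklearn.metrics import cohen_kappa_score

# Hypothetical item-level judgments from two raters over 53 tasks,
# coded 0-3 (not the study's actual data).
rng = np.random.default_rng(0)
rater_a = rng.integers(0, 4, 53)
rater_b = np.where(rng.random(53) < 0.85, rater_a, rng.integers(0, 4, 53))

agreement = np.mean(rater_a == rater_b)   # raw percent agreement
kappa = cohen_kappa_score(rater_a, rater_b)  # chance-corrected
print(f"percent agreement = {agreement:.1%}, kappa = {kappa:.2f}")
```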

  14. Developing authentic clinical simulations for effective listening and communication in pediatric rehabilitation service delivery.

    PubMed

    King, Gillian; Shepherd, Tracy A; Servais, Michelle; Willoughby, Colleen; Bolack, Linda; Strachan, Deborah; Moodie, Sheila; Baldwin, Patricia; Knickle, Kerry; Parker, Kathryn; Savage, Diane; McNaughton, Nancy

    2016-10-01

    To describe the creation and validation of six simulations concerned with effective listening and interpersonal communication in pediatric rehabilitation. The simulations involved clinicians from various disciplines, were based on clinical scenarios related to client issues, and reflected core aspects of listening/communication. Each simulation had a key learning objective, thus focusing clinicians on specific listening skills. The article outlines the process used to turn written scenarios into digital video simulations, including steps taken to establish content validity and authenticity, and to establish a series of videos based on the complexity of their learning objectives, given contextual factors and associated macrocognitive processes that influence the ability to listen. A complexity rating scale was developed and used to establish a gradient of easy/simple, intermediate, and hard/complex simulations. The development process exemplifies an evidence-based, integrated knowledge translation approach to the teaching and learning of listening and communication skills.

  15. A Bayesian Approach to Determination of F, D, and Z Values Used in Steam Sterilization Validation.

    PubMed

    Faya, Paul; Stamey, James D; Seaman, John W

    2017-01-01

    For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the well-known D_T, z, and F_0 values that are used in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these values to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. LAY ABSTRACT: For manufacturers of sterile drug products, steam sterilization is a common method used to provide assurance of the sterility of manufacturing equipment and products. The validation of sterilization processes is a regulatory requirement and relies upon the estimation of key resistance parameters of microorganisms. Traditional methods have relied upon point estimates for the resistance parameters. In this paper, we propose a Bayesian method for estimation of the critical process parameters that are evaluated in the development and validation of sterilization processes. A Bayesian approach allows the uncertainty about these parameters to be modeled using probability distributions, thereby providing a fully risk-based approach to measures of sterility assurance. An example is given using the survivor curve and fraction negative methods for estimation of resistance parameters, and we present a means by which a probabilistic conclusion can be made regarding the ability of a process to achieve a specified sterility criterion. © PDA, Inc. 2017.
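
    The point-estimate versions of these parameters are standard: D_T is the negative reciprocal slope of log10 survivors versus time, z is the negative reciprocal slope of log10 D versus temperature, and F_0 accumulates lethality referenced to 121.1 °C. A minimal sketch with invented data; the paper's Bayesian treatment would replace these point estimates with posterior distributions.

```python
import numpy as np

# Survivor curve at a fixed temperature: time (min) vs. viable count.
t = np.array([0.0, 2.0, 4.0, 6.0, 8.0])
counts = np.array([1e6, 1.1e5, 1.2e4, 1.0e3, 1.2e2])   # invented data
slope, _ = np.polyfit(t, np.log10(counts), 1)
D = -1.0 / slope                       # min per log reduction
print(f"D = {D:.2f} min")

# z-value from D-values at several temperatures (invented data).
temps = np.array([110.0, 115.0, 121.1])
D_vals = np.array([10.0, 3.2, 1.0])
slope_z, _ = np.polyfit(temps, np.log10(D_vals), 1)
z = -1.0 / slope_z
print(f"z = {z:.1f} degC")

# F0: equivalent minutes at 121.1 degC for a measured temperature profile.
profile = np.array([115.0, 118.0, 121.0, 121.5, 120.0])  # degC, 1-min steps
F0 = np.sum(10 ** ((profile - 121.1) / z)) * 1.0          # dt = 1 min
print(f"F0 = {F0:.2f} min")
```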

  16. A Supervised Learning Process to Validate Online Disease Reports for Use in Predictive Models.

    PubMed

    Patching, Helena M M; Hudson, Laurence M; Cooke, Warrick; Garcia, Andres J; Hay, Simon I; Roberts, Mark; Moyes, Catherine L

    2015-12-01

    Pathogen distribution models that predict spatial variation in disease occurrence require data from a large number of geographic locations to generate disease risk maps. Traditionally, this process has used data from public health reporting systems; however, using online reports of new infections could speed up the process dramatically. Data from both public health systems and online sources must be validated before they can be used, but no mechanisms exist to validate data from online media reports. We have developed a supervised learning process to validate geolocated disease outbreak data in a timely manner. The process uses three input features: the data source and two metrics derived from the location of each disease occurrence. The location of disease occurrence provides information on the probability of disease occurrence at that location, based on environmental and socioeconomic factors, and on the distance within or outside the current known disease extent. The process also uses validation scores, generated by disease experts who review a subset of the data, to build a training data set. The aim of the supervised learning process is to generate validation scores that can be used as weights going into the pathogen distribution model. After analyzing the three input features and testing the performance of alternative processes, we selected a cascade of ensembles comprising logistic regressors. Parameter values for the training data subset size, number of predictors, and number of layers in the cascade were tested before the process was deployed. The final configuration was tested using data for two contrasting diseases (dengue and cholera), and 66%-79% of data points were assigned a validation score. The remaining data points are scored by the experts, and the results both inform the training data set for the next set of predictors and feed into the pathogen distribution model. The new supervised learning process has been implemented within our live site and is being used to validate the data that our system uses to produce updated predictive disease maps on a weekly basis.
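
    One stage of such a scoring pipeline can be sketched with scikit-learn. The three features follow those named in the abstract (data source, environmental suitability at the location, distance relative to the known disease extent), but the data are simulated and a single logistic regressor stands in for the paper's cascade of ensembles.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 500

# Features per report: source indicator (1 = online media), environmental
# suitability (0-1), signed distance to known extent (km; < 0 = inside).
source_online = rng.integers(0, 2, n)
suitability = rng.random(n)
distance_km = rng.normal(0, 50, n)
X = np.column_stack([source_online, suitability, distance_km])

# Expert validation scores on a reviewed subset (1 = valid), simulated
# so that suitable, in-extent reports are more often valid.
logit = -0.5 + 2.5 * suitability - 0.02 * distance_km - 0.3 * source_online
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

reviewed = rng.random(n) < 0.3            # expert-scored training subset
clf = LogisticRegression().fit(X[reviewed], y[reviewed])

# Scores for unreviewed reports become weights for the distribution model.
weights = clf.predict_proba(X[~reviewed])[:, 1]
print(f"scored {weights.size} reports, mean weight {weights.mean():.2f}")
```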

  17. A multi-site cognitive task analysis for biomedical query mediation.

    PubMed

    Hruby, Gregory W; Rasmussen, Luke V; Hanauer, David; Patel, Vimla L; Cimino, James J; Weng, Chunhua

    2016-09-01

    To apply cognitive task analyses of the Biomedical query mediation (BQM) processes for EHR data retrieval at multiple sites towards the development of a generic BQM process model. We conducted semi-structured interviews with eleven data analysts from five academic institutions and one government agency, and performed cognitive task analyses on their BQM processes. A coding schema was developed through iterative refinement and used to annotate the interview transcripts. The annotated dataset was used to reconstruct and verify each BQM process and to develop a harmonized BQM process model. A survey was conducted to evaluate the face and content validity of this harmonized model. The harmonized process model is hierarchical, encompassing tasks, activities, and steps. The face validity evaluation concluded the model to be representative of the BQM process. In the content validity evaluation, out of the 27 tasks for BQM, 19 meet the threshold for semi-valid, including 3 fully valid: "Identify potential index phenotype," "If needed, request EHR database access rights," and "Perform query and present output to medical researcher", and 8 are invalid. We aligned the goals of the tasks within the BQM model with the five components of the reference interview. The similarity between the process of BQM and the reference interview is promising and suggests the BQM tasks are powerful for eliciting implicit information needs. We contribute a BQM process model based on a multi-site study. This model promises to inform the standardization of the BQM process towards improved communication efficiency and accuracy. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  18. A Multi-Site Cognitive Task Analysis for Biomedical Query Mediation

    PubMed Central

    Hruby, Gregory W.; Rasmussen, Luke V.; Hanauer, David; Patel, Vimla; Cimino, James J.; Weng, Chunhua

    2016-01-01

    Objective To apply cognitive task analyses of the Biomedical query mediation (BQM) processes for EHR data retrieval at multiple sites towards the development of a generic BQM process model. Materials and Methods We conducted semi-structured interviews with eleven data analysts from five academic institutions and one government agency, and performed cognitive task analyses on their BQM processes. A coding schema was developed through iterative refinement and used to annotate the interview transcripts. The annotated dataset was used to reconstruct and verify each BQM process and to develop a harmonized BQM process model. A survey was conducted to evaluate the face and content validity of this harmonized model. Results The harmonized process model is hierarchical, encompassing tasks, activities, and steps. The face validity evaluation concluded the model to be representative of the BQM process. In the content validity evaluation, out of the 27 tasks for BQM, 19 meet the threshold for semi-valid, including 3 fully valid: “Identify potential index phenotype,” “If needed, request EHR database access rights,” and “Perform query and present output to medical researcher”, and 8 are invalid. Discussion We aligned the goals of the tasks within the BQM model with the five components of the reference interview. The similarity between the process of BQM and the reference interview is promising and suggests the BQM tasks are powerful for eliciting implicit information needs. Conclusions We contribute a BQM process model based on a multi-site study. This model promises to inform the standardization of the BQM process towards improved communication efficiency and accuracy. PMID:27435950

  19. NCI-FDA Interagency Oncology Task Force Workshop Provides Guidance for Analytical Validation of Protein-based Multiplex Assays | Office of Cancer Clinical Proteomics Research

    Cancer.gov

    An NCI-FDA Interagency Oncology Task Force (IOTF) Molecular Diagnostics Workshop was held on October 30, 2008 in Cambridge, MA, to discuss requirements for the analytical validation of protein-based multiplex technologies in the context of their intended use. This workshop, developed through NCI's Clinical Proteomic Technologies for Cancer initiative and the FDA, focused on technology-specific analytical validation processes to be addressed prior to use in clinical settings. Making this workshop unique, a case study approach was used to discuss issues related to

  20. Determining e-Portfolio Elements in Learning Process Using Fuzzy Delphi Analysis

    ERIC Educational Resources Information Center

    Mohamad, Syamsul Nor Azlan; Embi, Mohamad Amin; Nordin, Norazah

    2015-01-01

    The present article introduces the Fuzzy Delphi method results obtained in the study on determining e-Portfolio elements in the learning process for an art and design context. This method is based on qualified experts who assure the validity of the collected information. In particular, the confirmation of elements is based on experts' opinion and…

  1. Inhibitory mechanism of the matching heuristic in syllogistic reasoning.

    PubMed

    Tse, Ping Ping; Moreno Ríos, Sergio; García-Madruga, Juan Antonio; Bajo Molina, María Teresa

    2014-11-01

    A number of heuristic-based hypotheses have been proposed to explain how people solve syllogisms with automatic processes. In particular, the matching heuristic employs the congruency of the quantifiers in a syllogism—by matching the quantifier of the conclusion with those of the two premises. When the heuristic leads to an invalid conclusion, successful solving of these conflict problems requires the inhibition of automatic heuristic processing. Accordingly, if the automatic processing were based on processing the set of quantifiers, no semantic contents would be inhibited. The mental model theory, however, suggests that people reason using mental models, which always involves semantic processing. Therefore, whatever inhibition occurs in the processing implies the inhibition of the semantic contents. We manipulated the validity of the syllogism and the congruency of the quantifier of its conclusion with those of the two premises according to the matching heuristic. A subsequent lexical decision task (LDT) with related words in the conclusion was used to test any inhibition of the semantic contents after each syllogistic evaluation trial. In the LDT, the facilitation effect of semantic priming diminished after correctly solved conflict syllogisms (match-invalid or mismatch-valid), but was intact after no-conflict syllogisms. The results suggest the involvement of an inhibitory mechanism of semantic contents in syllogistic reasoning when there is a conflict between the output of the syntactic heuristic and actual validity. Our results do not support a uniquely syntactic process of syllogistic reasoning but fit with the predictions based on mental model theory. Copyright © 2014 Elsevier B.V. All rights reserved.

  2. Development and Validation of Cognitive Screening Instruments.

    ERIC Educational Resources Information Center

    Jarman, Ronald F.

    The author suggests that most research on the early detection of learning disabilities is characterized by an ineffective and atheoretical method of selecting and validating tasks. An alternative technique is proposed, based on a neurological theory of cognitive processes, whereby task analysis is a first step, with empirical analyses as…

  3. Students' Self-Evaluation and Reflection (Part 1): "Measurement"

    ERIC Educational Resources Information Center

    Cambra-Fierro, Jesus; Cambra-Berdun, Jesus

    2007-01-01

    Purpose: The objective of the paper is the development and validation of scales to assess reflective learning. Design/methodology/approach: The research is based on a literature review plus in-classroom experience. For the scale validation process, exploratory and confirmatory analyses were conducted, following proposals made by Anderson and…

  4. Codification and Validation of Professional Development Questionnaire of Teachers

    ERIC Educational Resources Information Center

    Ayyoobi, Fatemah; Pourshafei, Hadi; Asgari, Ali

    2016-01-01

    Teachers, as the main leaders in the educational system and the teaching-learning process, need knowledge and professional skills. Therefore, the evaluation of professional development is important. This study aims to design and validate a professional development questionnaire for teachers. This research is based on…

  5. Measuring Adverse Events in Helicopter Emergency Medical Services: Establishing Content Validity

    PubMed Central

    Patterson, P. Daniel; Lave, Judith R.; Martin-Gill, Christian; Weaver, Matthew D.; Wadas, Richard J.; Arnold, Robert M.; Roth, Ronald N.; Mosesso, Vincent N.; Guyette, Francis X.; Rittenberger, Jon C.; Yealy, Donald M.

    2015-01-01

    Introduction We sought to create a valid framework for detecting Adverse Events (AEs) in the high-risk setting of Helicopter Emergency Medical Services (HEMS). Methods We assembled a panel of 10 expert clinicians (n=6 emergency medicine physicians and n=4 prehospital nurses and flight paramedics) affiliated with a large multi-state HEMS organization in the Northeast U.S. We used a modified Delphi technique to develop a framework for detecting AEs associated with the treatment of critically ill or injured patients. We used a widely applied measure, the Content Validity Index (CVI), to quantify the validity of the framework’s content. Results The expert panel of 10 clinicians reached consensus on a common AE definition and four-step protocol/process for AE detection in HEMS. The consensus-based framework is composed of three main components: 1) a trigger tool, 2) a method for rating proximal cause, and 3) a method for rating AE severity. The CVI findings isolate components of the framework considered content valid. Conclusions We demonstrate a standardized process for the development of a content valid framework for AE detection. The framework is a model for the development of a method for AE identification in other settings, including ground-based EMS. PMID:24003951
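
    The CVI itself is simple arithmetic: an item-level I-CVI is the proportion of experts rating the item relevant (3 or 4 on a 4-point scale), and a scale-level S-CVI/Ave is the mean of the item values. A minimal sketch with hypothetical ratings:

```python
import numpy as np

# Hypothetical relevance ratings: 10 experts x 5 trigger-tool items,
# each on a 1-4 scale (3 or 4 = relevant). Not the study's data.
rng = np.random.default_rng(0)
ratings = rng.integers(1, 5, size=(10, 5))

i_cvi = (ratings >= 3).mean(axis=0)   # item-level CVI
s_cvi_ave = i_cvi.mean()              # scale-level CVI, averaging method
for idx, v in enumerate(i_cvi, start=1):
    print(f"item {idx}: I-CVI = {v:.2f}")
print(f"S-CVI/Ave = {s_cvi_ave:.2f}")
```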

  6. A new test for the assessment of working memory in clinical settings: Validation and norming of a month ordering task.

    PubMed

    Buekenhout, Imke; Leitão, José; Gomes, Ana A

    2018-05-24

    Month ordering tasks have been used in experimental settings to obtain measures of working memory (WM) capacity in older/clinical groups based solely on their face validity. We sought to assess the appropriateness of using a month ordering task in other contexts, including clinical settings, as a psychometrically sound WM assessment. To this end, we constructed a month ordering task (ucMOT), studied its reliability (internal consistency and temporal stability), and gathered construct-related and criterion-related validity evidence for its use as a WM assessment. The ucMOT proved to be internally consistent and temporally stable, and analyses of the criterion-related validity evidence revealed that its scores predicted the efficiency of language comprehension processes known to depend crucially on WM resources, namely, processes involved in pronoun interpretation. Furthermore, all ucMOT items discriminated between younger and older age groups; the global scores were significantly correlated with scores on well-established WM tasks and presented lower correlations with instruments that evaluate different (although related) processes, namely, inhibition and processing speed. We conclude that the ucMOT possesses solid psychometric properties. Accordingly, we acquired normative data for the Portuguese population, which we present as a regression-based algorithm that yields z scores adjusted for age, gender, and years of formal education. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
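
    A regression-based norming algorithm of the kind described can be sketched as follows: fit raw scores on the demographic covariates in the normative sample, then express a new examinee's score as a z score of the residual. The data and coefficients below are invented, not the Portuguese norms.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 400

# Simulated normative sample: age (years), gender (0/1), education (years).
age = rng.uniform(18, 85, n)
gender = rng.integers(0, 2, n)
education = rng.uniform(4, 20, n)
X = np.column_stack([age, gender, education])

# Simulated raw task scores declining with age, rising with education.
raw = 30 - 0.15 * age + 0.5 * education + rng.normal(0, 3, n)

model = LinearRegression().fit(X, raw)
resid_sd = np.std(raw - model.predict(X), ddof=X.shape[1] + 1)

def norm_z(score, age, gender, education):
    """Demographically adjusted z score for one examinee."""
    expected = model.predict([[age, gender, education]])[0]
    return (score - expected) / resid_sd

print(f"z = {norm_z(22.0, age=70, gender=1, education=9):.2f}")
```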

  7. A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems.

    PubMed

    Silva, Lenardo C; Almeida, Hyggo O; Perkusich, Angelo; Perkusich, Mirko

    2015-10-30

    Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage.

  8. A Model-Based Approach to Support Validation of Medical Cyber-Physical Systems

    PubMed Central

    Silva, Lenardo C.; Almeida, Hyggo O.; Perkusich, Angelo; Perkusich, Mirko

    2015-01-01

    Medical Cyber-Physical Systems (MCPS) are context-aware, life-critical systems with patient safety as the main concern, demanding rigorous processes for validation to guarantee user requirement compliance and specification-oriented correctness. In this article, we propose a model-based approach for early validation of MCPS, focusing on promoting reusability and productivity. It enables system developers to build MCPS formal models based on a library of patient and medical device models, and simulate the MCPS to identify undesirable behaviors at design time. Our approach has been applied to three different clinical scenarios to evaluate its reusability potential for different contexts. We have also validated our approach through an empirical evaluation with developers to assess productivity and reusability. Finally, our models have been formally verified considering functional and safety requirements and model coverage. PMID:26528982

  9. Near infrared and Raman spectroscopy as Process Analytical Technology tools for the manufacturing of silicone-based drug reservoirs.

    PubMed

    Mantanus, J; Rozet, E; Van Butsele, K; De Bleye, C; Ceccato, A; Evrard, B; Hubert, Ph; Ziémons, E

    2011-08-05

    Using near infrared (NIR) and Raman spectroscopy as PAT tools, 3 critical quality attributes of a silicone-based drug reservoir were studied. First, the Active Pharmaceutical Ingredient (API) homogeneity in the reservoir was evaluated using Raman spectroscopy (mapping): the API distribution within the industrial drug reservoirs was found to be homogeneous, while API aggregates were detected in laboratory-scale samples manufactured with a non-optimal mixing process. Second, the crosslinking process of the reservoirs was monitored at different temperatures with NIR spectroscopy. Conformity tests and Principal Component Analysis (PCA) were performed on the collected data to determine the relation between the temperature and the time necessary to reach the crosslinking endpoints. An agreement was found between the conformity test results and the PCA results. Compared to the conformity test method, PCA had the advantage of discriminating the heating effect from the crosslinking effect, which occur together during the monitored process. The 2 approaches were therefore found to be complementary. Third, based on the HPLC reference method, a NIR model able to quantify the API in the drug reservoir was developed and thoroughly validated. Partial Least Squares (PLS) regression on the calibration set was performed to build prediction models, whose ability to quantify accurately was tested with the external validation set. The 1.2% Root Mean Squared Error of Prediction (RMSEP) of the NIR model indicated the global accuracy of the model. The accuracy profile based on tolerance intervals was used to generate a complete validation report. The 95% tolerance interval calculated on the validation results indicated that each future result will have a relative error below ±5% with a probability of at least 95%. In conclusion, 3 critical quality attributes of silicone-based drug reservoirs were quickly and efficiently evaluated by NIR and Raman spectroscopy. Copyright © 2011 Elsevier B.V. All rights reserved.
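
    The quantification workflow (PLS on calibration spectra, RMSEP on an external validation set) maps directly onto scikit-learn's PLSRegression. The spectra below are simulated; only the workflow, not the data or the 1.2% figure, reflects the paper.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
n_cal, n_val, n_wavelengths = 60, 20, 200

# Simulated NIR spectra: API content drives a broad Gaussian feature.
def make_set(n):
    api = rng.uniform(8, 12, n)                       # % w/w, assumed range
    band = np.exp(-0.5 * ((np.arange(n_wavelengths) - 100) / 15) ** 2)
    spectra = np.outer(api, band) + rng.normal(0, 0.02, (n, n_wavelengths))
    return spectra, api

X_cal, y_cal = make_set(n_cal)    # calibration set
X_val, y_val = make_set(n_val)    # external validation set

pls = PLSRegression(n_components=3).fit(X_cal, y_cal)
y_pred = pls.predict(X_val).ravel()

rmsep = np.sqrt(np.mean((y_val - y_pred) ** 2))
print(f"RMSEP = {rmsep:.3f} (% w/w), relative = {rmsep / y_val.mean():.2%}")
```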

  10. Measuring team-based interprofessional education outcomes in clinical dentistry: psychometric evaluation of a new scale at an Australian dental school.

    PubMed

    Storrs, Mark J; Alexander, Heather; Sun, Jing; Kroon, Jeroen; Evans, Jane L

    2015-03-01

    Previous research on interprofessional education (IPE) assessment has shown the need to evaluate the influence of team-based processes on the quality of clinical education. This study aimed to develop a valid and reliable instrument to evaluate the effectiveness of interprofessional team-based treatment planning (TBTP) on the quality of clinical education at the Griffith University School of Dentistry and Oral Health, Queensland, Australia. A scale was developed and evaluated to measure interprofessional student team processes and their effect on the quality of clinical education for dental, oral health therapy, and dental technology students (known more frequently as intraprofessional education). A face validity analysis by IPE experts confirmed that items on the scale reflected the meaning of relevant concepts. After piloting, 158 students (61% response rate) involved with TBTP participated in a survey. An exploratory factor analysis using the principal component method retained 23 items with a total variance of 64.6%, suggesting high content validity. Three subscales accounted for 45.7%, 11.4%, and 7.5% of the variance. Internal consistency of the scale (α=0.943) and subscales 1 (α=0.953), 2 (α=0.897), and 3 (α=0.813) was high. A reliability analysis yielded moderate (rs=0.43) to high correlations (0.81) with the remaining scale items. Confirmatory factor analyses verified convergent validity and confirmed that this structure had a good model fit. This study suggests that the instrument might be useful in evaluating interprofessional or intraprofessional team-based processes and their influence on the quality of clinical education in academic dental institutions.

  11. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates.

    PubMed

    LeDell, Erin; Petersen, Maya; van der Laan, Mark

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC.
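
    For a single sample, the influence-curve variance estimate follows from the projection of the AUC as a two-sample U-statistic; the cross-validated version averages the same quantities within validation folds (the authors provide an R implementation in the cvAUC package). A minimal single-split sketch, under the assumption that the influence curve takes the standard two-sample form:

```python
import numpy as np

def auc_ic_variance(y, scores):
    """AUC with an influence-curve-based variance estimate.

    One IC term per observation from the two-sample U-statistic
    projection; variance of the estimator ~= mean(IC^2) / n.
    """
    y = np.asarray(y)
    s = np.asarray(scores, dtype=float)
    pos, neg = s[y == 1], s[y == 0]
    n, p1 = y.size, (y == 1).mean()
    p0 = 1 - p1

    # Empirical AUC with tie correction.
    diff = pos[:, None] - neg[None, :]
    auc = np.mean((diff > 0) + 0.5 * (diff == 0))

    # g1(s) = P(S0 < s) per positive, g0(s) = P(S1 > s) per negative.
    g1 = ((diff > 0) + 0.5 * (diff == 0)).mean(axis=1)
    g0 = ((diff > 0) + 0.5 * (diff == 0)).mean(axis=0)

    ic = np.zeros(n)
    ic[y == 1] = (g1 - auc) / p1
    ic[y == 0] = (g0 - auc) / p0
    return auc, np.mean(ic ** 2) / n

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 1000)
scores = y + rng.normal(0, 1, 1000)       # informative but noisy scores
auc, var = auc_ic_variance(y, scores)
half = 1.96 * np.sqrt(var)
print(f"AUC = {auc:.3f}, 95% CI = ({auc - half:.3f}, {auc + half:.3f})")
```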

  12. Computationally efficient confidence intervals for cross-validated area under the ROC curve estimates

    PubMed Central

    Petersen, Maya; van der Laan, Mark

    2015-01-01

    In binary classification problems, the area under the ROC curve (AUC) is commonly used to evaluate the performance of a prediction model. Often, it is combined with cross-validation in order to assess how the results will generalize to an independent data set. In order to evaluate the quality of an estimate for cross-validated AUC, we obtain an estimate of its variance. For massive data sets, the process of generating a single performance estimate can be computationally expensive. Additionally, when using a complex prediction method, the process of cross-validating a predictive model on even a relatively small data set can still require a large amount of computation time. Thus, in many practical settings, the bootstrap is a computationally intractable approach to variance estimation. As an alternative to the bootstrap, we demonstrate a computationally efficient influence curve based approach to obtaining a variance estimate for cross-validated AUC. PMID:26279737

  13. HESS Opinions: The need for process-based evaluation of large-domain hyper-resolution models

    NASA Astrophysics Data System (ADS)

    Melsen, Lieke A.; Teuling, Adriaan J.; Torfs, Paul J. J. F.; Uijlenhoet, Remko; Mizukami, Naoki; Clark, Martyn P.

    2016-03-01

    A meta-analysis on 192 peer-reviewed articles reporting on applications of the variable infiltration capacity (VIC) model in a distributed way reveals that the spatial resolution at which the model is applied has increased over the years, while the calibration and validation time interval has remained unchanged. We argue that the calibration and validation time interval should keep pace with the increase in spatial resolution in order to resolve the processes that are relevant at the applied spatial resolution. We identified six time concepts in hydrological models, which all impact the model results and conclusions. Process-based model evaluation is particularly relevant when models are applied at hyper-resolution, where stakeholders expect credible results both at a high spatial and temporal resolution.

  14. HESS Opinions: The need for process-based evaluation of large-domain hyper-resolution models

    NASA Astrophysics Data System (ADS)

    Melsen, L. A.; Teuling, A. J.; Torfs, P. J. J. F.; Uijlenhoet, R.; Mizukami, N.; Clark, M. P.

    2015-12-01

    A meta-analysis on 192 peer-reviewed articles reporting applications of the Variable Infiltration Capacity (VIC) model in a distributed way reveals that the spatial resolution at which the model is applied has increased over the years, while the calibration and validation time interval has remained unchanged. We argue that the calibration and validation time interval should keep pace with the increase in spatial resolution in order to resolve the processes that are relevant at the applied spatial resolution. We identified six time concepts in hydrological models, which all impact the model results and conclusions. Process-based model evaluation is particularly relevant when models are applied at hyper-resolution, where stakeholders expect credible results both at a high spatial and temporal resolution.

  15. Validation of educational assessments: a primer for simulation and beyond.

    PubMed

    Cook, David A; Hatala, Rose

    2016-01-01

    Simulation plays a vital role in health professions assessment. This review provides a primer on assessment validation for educators and education researchers. We focus on simulation-based assessment of health professionals, but the principles apply broadly to other assessment approaches and topics. Validation refers to the process of collecting validity evidence to evaluate the appropriateness of the interpretations, uses, and decisions based on assessment results. Contemporary frameworks view validity as a hypothesis, and validity evidence is collected to support or refute the validity hypothesis (i.e., that the proposed interpretations and decisions are defensible). In validation, the educator or researcher defines the proposed interpretations and decisions, identifies and prioritizes the most questionable assumptions in making these interpretations and decisions (the "interpretation-use argument"), empirically tests those assumptions using existing or newly-collected evidence, and then summarizes the evidence as a coherent "validity argument." A framework proposed by Messick identifies potential evidence sources: content, response process, internal structure, relationships with other variables, and consequences. Another framework proposed by Kane identifies key inferences in generating useful interpretations: scoring, generalization, extrapolation, and implications/decision. We propose an eight-step approach to validation that applies to either framework: Define the construct and proposed interpretation, make explicit the intended decision(s), define the interpretation-use argument and prioritize needed validity evidence, identify candidate instruments and/or create/adapt a new instrument, appraise existing evidence and collect new evidence as needed, keep track of practical issues, formulate the validity argument, and make a judgment: does the evidence support the intended use? Rigorous validation first prioritizes and then empirically evaluates key assumptions in the interpretation and use of assessment scores. Validation science would be improved by more explicit articulation and prioritization of the interpretation-use argument, greater use of formal validation frameworks, and more evidence informing the consequences and implications of assessment.

  16. Validation of the TTM processes of change measure for physical activity in an adult French sample.

    PubMed

    Bernard, Paquito; Romain, Ahmed-Jérôme; Trouillet, Raphael; Gernigon, Christophe; Nigg, Claudio; Ninot, Gregory

    2014-04-01

    Processes of change (POC) are constructs from the transtheoretical model that propose to examine how people engage in a behavior. However, there is no consensus about a leading model explaining POC, and there is no validated French POC scale for physical activity. This study aimed to compare the different existing models in order to validate a French POC scale. Three studies, with 748 subjects included, were carried out to translate the items and evaluate their clarity (study 1, n = 77), to assess the factorial validity (n = 200) and invariance/equivalence (study 2, n = 471), and to analyze the concurrent validity by stage × process analyses (study 3, n = 671). Two models displayed adequate fit to the data; however, based on the Akaike information criterion, the fully correlated five-factor model appeared as the most appropriate to measure POC in physical activity. The invariance/equivalence was also confirmed across genders and student status. Four of the five existing factors discriminated pre-action and post-action stages. These data support the validation of the POC questionnaire in physical activity among a French sample. More research is needed to explore the longitudinal properties of this scale.

  17. Validating the ACE Model for Evaluating Student Performance Using a Teaching-Learning Process Based on Computational Modeling Systems

    ERIC Educational Resources Information Center

    Louzada, Alexandre Neves; Elia, Marcos da Fonseca; Sampaio, Fábio Ferrentini; Vidal, Andre Luiz Pestana

    2014-01-01

    The aim of this work is to adapt and test, in a Brazilian public school, the ACE model proposed by Borkulo for evaluating student performance as a teaching-learning process based on computational modeling systems. The ACE model is based on different types of reasoning involving three dimensions. In addition to adapting the model and introducing…

  18. The logic-bias effect: The role of effortful processing in the resolution of belief-logic conflict.

    PubMed

    Howarth, Stephanie; Handley, Simon J; Walsh, Clare

    2016-02-01

    According to the default interventionist dual-process account of reasoning, belief-based responses to reasoning tasks are based on Type 1 processes generated by default, which must be inhibited in order to produce an effortful, Type 2 output based on the validity of an argument. However, recent research has indicated that reasoning on the basis of beliefs may not be as fast and automatic as this account claims. In three experiments, we presented participants with a reasoning task that was to be completed while they were generating random numbers (RNG). We used the novel methodology introduced by Handley, Newstead & Trippas (Journal of Experimental Psychology: Learning, Memory, and Cognition, 37, 28-43, 2011), which required participants to make judgments based upon either the validity of a conditional argument or the believability of its conclusion. The results showed that belief-based judgments produced lower rates of accuracy overall and were influenced to a greater extent than validity judgments by the presence of a conflict between belief and logic for both simple and complex arguments. These findings were replicated in Experiment 3, in which we controlled for switching demands in a blocked design. Across all three experiments, we found a main effect of RNG, implying that both instructional sets require some effortful processing. However, in the blocked design RNG had its greatest impact on logic judgments, suggesting that distinct executive resources may be required for each type of judgment. We discuss the implications of our findings for the default interventionist account and offer a parallel competitive model as an alternative interpretation for our findings.

  19. Incremental checking of Master Data Management model based on contextual graphs

    NASA Astrophysics Data System (ADS)

    Lamolle, Myriam; Menet, Ludovic; Le Duc, Chan

    2015-10-01

    The validation of models is a crucial step in distributed heterogeneous systems. In this paper, an incremental validation method is proposed in the scope of a Model Driven Engineering (MDE) approach, applied to the Master Data Management (MDM) field, where models are represented by XML Schemas. The MDE approach presented in this paper is based on the definition of an abstraction layer using UML class diagrams. The validation method aims to minimise model errors and to optimise the process of model checking. Therefore, the notion of validation contexts is introduced, allowing the verification of data model views. Description logics specify the constraints that the models have to satisfy. An experimentation of the approach is presented through an application developed in the ArgoUML IDE.
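
    The incremental, context-aware part of the method cannot be reconstructed from the abstract, but the baseline operation, checking an XML instance (a model view) against an XML Schema model, is a few lines with lxml. The file names here are placeholders.

```python
from lxml import etree

# Placeholder paths: an MDM data model expressed as XML Schema,
# and one instance document (a "view") to check against it.
schema = etree.XMLSchema(etree.parse("mdm_model.xsd"))
doc = etree.parse("customer_view.xml")

if schema.validate(doc):
    print("view conforms to the MDM model")
else:
    # Report each constraint violation with its location.
    for err in schema.error_log:
        print(f"line {err.line}: {err.message}")
```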

  20. Development and preliminary validation of a behavioral task of negative reinforcement underlying risk taking and its relation to problem alcohol use in college freshmen

    PubMed Central

    MacPherson, Laura; Calvin, Nicholas T.; Richards, Jessica M.; Guller, Leila; Mayes, Linda C.; Crowley, Michael J.; Daughters, Stacey B.; Lejuez, C.W.

    2011-01-01

    Background A long line of theoretical and empirical evidence implicates negative reinforcement as a process underlying the etiology and maintenance of risky alcohol use behaviors from adolescence through emerging adulthood. However, the bulk of this literature has relied on self-report measures, and there is a notable absence of behavioral assessments of negative reinforcement-based alcohol-related risk taking. To address this clear gap in the literature, the current study presents the first published data on the reliability and validity of the Maryland Resource for the Behavioral Utilization of the Reinforcement of Negative Stimuli (MRBURNS), a modified version of the positive reinforcement-based Balloon Analogue Risk Task (BART). Methods Participants included a convenience sample of 116 college freshmen (aged 18–19) who had ever been regular drinkers. They completed both behavioral tasks; self-report measures of negative reinforcement/avoidance constructs and of positive reinforcement/appetitive constructs to examine convergent validity and discriminant validity, respectively; and self-report measures of alcohol use, problems, and motives to examine criterion validity. Results The MRBURNS evidenced sound experimental properties and reliability across task trials. In support of convergent validity, risk taking on the MRBURNS correlated significantly with negative urgency, difficulties in emotion regulation, and depressive and anxiety-related symptoms. In support of discriminant validity, performance on the MRBURNS was unrelated to risk taking on the BART, sensation seeking, and trait impulsivity. Finally, pertaining to criterion validity, risk taking on the MRBURNS was related to alcohol-related problems but not heavy episodic alcohol use. Notably, risk taking on the MRBURNS was associated with negative reinforcement-based but not positive reinforcement-based drinking motives. Conclusions Data from this initial investigation suggest the utility of the MRBURNS as a behavioral measure of negative reinforcement-based risk taking that can provide a useful complement to existing self-report measures to improve our understanding of the relationship between avoidant reinforcement processes and risky alcohol use. PMID:22309846

  1. Development and Validation of Methodology to Model Flow in Ventilation Systems Commonly Found in Nuclear Facilities. Phase I

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Strons, Philip; Bailey, James L.; Davis, John

    2016-03-01

    In this work, we apply CFD to the modeling of airflow and particulate transport. This modeling is then compared against field validation studies to both inform and validate the modeling assumptions. Based on the results of the field tests, modeling assumptions and boundary conditions are refined, and the process is repeated until the results are found to be reliable with a high level of confidence.

  2. Dynamic Rod Worth Measurement

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chao, Y.A.; Chapman, D.M.; Hill, D.J.

    2000-12-15

    The dynamic rod worth measurement (DRWM) technique is a method of quickly validating the predicted bank worth of control rods and shutdown rods. The DRWM analytic method is based on three-dimensional, space-time kinetic simulations of the rapid rod movements. Its measurement data is processed with an advanced digital reactivity computer. DRWM has been used as the method of bank worth validation at numerous plant startups with excellent results. The process and methodology of DRWM are described, and the measurement results of using DRWM are presented.

  3. Interprofessional partnerships in chronic illness care: a conceptual model for measuring partnership effectiveness

    PubMed Central

    Butt, Gail; Markle-Reid, Maureen; Browne, Gina

    2008-01-01

    Introduction Interprofessional health and social service partnerships (IHSSP) are internationally acknowledged as integral for comprehensive chronic illness care. However, the evidence-base for partnership effectiveness is lacking. This paper aims to clarify partnership measurement issues, conceptualize IHSSP at the front-line staff level, and identify tools valid for group process measurement. Theory and methods A systematic literature review utilizing three interrelated searches was conducted. Thematic analysis techniques were supported by NVivo 7 software. Complexity theory was used to guide the analysis, ground the new conceptualization and validate the selected measures. Other properties of the measures were critiqued using established criteria. Results There is a need for a convergent view of what constitutes a partnership and its measurement. The salient attributes of IHSSP and their interorganizational context were described and grounded within complexity theory. Two measures were selected and validated for measurement of proximal group outcomes. Conclusion This paper depicts a novel complexity theory-based conceptual model for IHSSP of front-line staff who provide chronic illness care. The conceptualization provides the underpinnings for a comprehensive evaluative framework for partnerships. Two partnership process measurement tools, the PSAT and TCI are valid for IHSSP process measurement with consideration of their strengths and limitations. PMID:18493591

  4. Onboard FPGA-based SAR processing for future spaceborne systems

    NASA Technical Reports Server (NTRS)

    Le, Charles; Chan, Samuel; Cheng, Frank; Fang, Winston; Fischman, Mark; Hensley, Scott; Johnson, Robert; Jourdan, Michael; Marina, Miguel; Parham, Bruce

    2004-01-01

    We present a real-time high-performance and fault-tolerant FPGA-based hardware architecture for the processing of synthetic aperture radar (SAR) images in future spaceborne system. In particular, we will discuss the integrated design approach, from top-level algorithm specifications and system requirements, design methodology, functional verification and performance validation, down to hardware design and implementation.

  5. The quality of instruments to assess the process of shared decision making: A systematic review

    PubMed Central

    Bomhof-Roordink, Hanna; Smith, Ian P.; Scholl, Isabelle; Stiggelbout, Anne M.; Pieterse, Arwen H.

    2018-01-01

    Objective To inventory instruments assessing the process of shared decision making and appraise their measurement quality, taking into account the methodological quality of their validation studies. Methods In a systematic review we searched seven databases (PubMed, Embase, Emcare, Cochrane, PsycINFO, Web of Science, Academic Search Premier) for studies investigating instruments measuring the process of shared decision making. Per identified instrument, we assessed the level of evidence separately for 10 measurement properties following a three-step procedure: 1) appraisal of the methodological quality using the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) checklist, 2) appraisal of the psychometric quality of the measurement property using three possible quality scores, 3) best-evidence synthesis based on the number of studies, their methodological and psychometrical quality, and the direction and consistency of the results. The study protocol was registered at PROSPERO: CRD42015023397. Results We included 51 articles describing the development and/or evaluation of 40 shared decision-making process instruments: 16 patient questionnaires, 4 provider questionnaires, 18 coding schemes and 2 instruments measuring multiple perspectives. There is an overall lack of evidence for their measurement quality, either because validation is missing or methods are poor. The best-evidence synthesis indicated positive results for a major part of instruments for content validity (50%) and structural validity (53%) if these were evaluated, but negative results for a major part of instruments when inter-rater reliability (47%) and hypotheses testing (59%) were evaluated. Conclusions Due to the lack of evidence on measurement quality, the choice for the most appropriate instrument can best be based on the instrument’s content and characteristics such as the perspective that they assess. We recommend refinement and validation of existing instruments, and the use of COSMIN-guidelines to help guarantee high-quality evaluations. PMID:29447193

  6. Exploring the Role of Assessment Criteria during Teachers' Collaborative Judgement Processes of Students' Portfolios

    ERIC Educational Resources Information Center

    Van der Schaaf, Marieke; Baartman, Liesbeth; Prins, Frans

    2012-01-01

    Student portfolios are increasingly used for assessing student competences in higher education, but results about the construct validity of portfolio assessment are mixed. A prerequisite for construct validity is that the portfolio assessment is based on relevant portfolio content. Assessment criteria are often used to enhance this condition.…

  7. Establishing the Validity of the Task-Based English Speaking Test (TBEST) for International Teaching Assistants

    ERIC Educational Resources Information Center

    Witt, Autumn Song

    2010-01-01

    This dissertation follows an oral language assessment tool from initial design and implementation to validity analysis. The specialized variables of this study are the population: international teaching assistants and the purpose: spoken assessment as a hiring prerequisite. However, the process can easily be applied to other populations and…

  8. 48 CFR 1852.245-73 - Financial reporting of NASA property in the custody of contractors.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... due. However, contractors' procedures must document the process for developing these estimates based... shall have formal policies and procedures, which address the validation of NF 1018 data, including data... validation is to ensure that information reported is accurate and in compliance with the NASA FAR Supplement...

  9. 48 CFR 1852.245-73 - Financial reporting of NASA property in the custody of contractors.

    Code of Federal Regulations, 2012 CFR

    2012-10-01

    ... due. However, contractors' procedures must document the process for developing these estimates based... shall have formal policies and procedures, which address the validation of NF 1018 data, including data... validation is to ensure that information reported is accurate and in compliance with the NASA FAR Supplement...

  10. Validation of GOES-10 Satellite-derived Cloud and Radiative Properties for the MASRAD ARM Mobile Facility Deployment

    NASA Technical Reports Server (NTRS)

    Khaiyer, M. M.; Doelling, D. R.; Palikonda, R.; Mordeen, M. L.; Minnis, P.

    2007-01-01

    This poster presentation reviews the process used to validate the GOES-10 satellite-derived cloud and radiative properties. The ARM Mobile Facility (AMF) deployment at Pt Reyes, CA as part of the Marine Stratus Radiation Aerosol and Drizzle experiment (MASRAD), 14 March - 14 September 2005, provided an excellent chance to validate satellite cloud-property retrievals with the AMF's flexible suite of ground-based remote sensing instruments. For this comparison, NASA LaRC GOES-10 satellite retrievals covering this region and period were re-processed using an updated version of the Visible Infrared Solar-Infrared Split-Window Technique (VISST), which uses data taken at 4 wavelengths (0.65, 3.9, 11, and 12 µm) and computes broadband fluxes using improved CERES (Clouds and Earth's Radiant Energy System)-GOES-10 narrowband-to-broadband flux conversion coefficients. To validate MASRAD GOES-10 satellite-derived cloud property data, VISST-derived cloud amounts, heights, and liquid water paths are compared with similar quantities derived from available ARM ground-based instrumentation and with CERES fluxes from Terra.

  11. Design and validation of the INICIARE instrument, for the assessment of dependency level in acutely ill hospitalised patients.

    PubMed

    Morales-Asencio, José Miguel; Porcel-Gálvez, Ana María; Oliveros-Valenzuela, Rosa; Rodríguez-Gómez, Susana; Sánchez-Extremera, Lucrecia; Serrano-López, Francisco Andrés; Aranda-Gallardo, Marta; Canca-Sánchez, José Carlos; Barrientos-Trigo, Sergio

    2015-03-01

    The aim of this study was to establish the validity and reliability of an instrument (Inventario del NIvel de Cuidados mediante IndicAdores de clasificación de Resultados de Enfermería) used to assess the dependency level in acutely hospitalised patients. This instrument is novel, and it is based on the Nursing Outcomes Classification. Multiple existing instruments for needs assessment have been poorly validated and based predominantly on interventions. Standardised Nursing Languages offer an ideal framework to develop nursing-sensitive instruments. A cross-sectional validation study in two acute care hospitals in Spain. This study was implemented in two phases. First, the research team developed the instrument to be validated. In the second phase, the validation process was performed by experts, and the data analysis was conducted to establish the psychometric properties of the instrument. Seven hundred and sixty-one patient ratings performed by nurses were collected during the course of the research study. Data analysis yielded a Cronbach's alpha of 0.91. An exploratory factorial analysis identified three factors (Physiological, Instrumental and Cognitive-behavioural), which explained 74% of the variance. Inventario del NIvel de Cuidados mediante IndicAdores de clasificación de Resultados de Enfermería was demonstrated to be a valid and reliable instrument based on its use in acutely hospitalised patients to assess the level of dependency. Inventario del NIvel de Cuidados mediante IndicAdores de clasificación de Resultados de Enfermería can be used as an assessment tool in hospitalised patients during the nursing process throughout the entire hospitalisation period. It contributes information to support decisions on nursing diagnoses, interventions and outcomes. It also enables data codification in large databases. © 2014 John Wiley & Sons Ltd.

  12. A formal approach to validation and verification for knowledge-based control systems

    NASA Technical Reports Server (NTRS)

    Castore, Glen

    1987-01-01

    As control systems become more complex in response to desires for greater system flexibility, performance and reliability, the promise is held out that artificial intelligence might provide the means for building such systems. An obstacle to the use of symbolic processing constructs in this domain is the need for verification and validation (V and V) of the systems. Techniques currently in use do not seem appropriate for knowledge-based software. An outline of a formal approach to V and V for knowledge-based control systems is presented.

  13. Community-Based Participatory Research Conceptual Model: Community Partner Consultation and Face Validity.

    PubMed

    Belone, Lorenda; Lucero, Julie E; Duran, Bonnie; Tafoya, Greg; Baker, Elizabeth A; Chan, Domin; Chang, Charlotte; Greene-Moton, Ella; Kelley, Michele A; Wallerstein, Nina

    2016-01-01

    A national community-based participatory research (CBPR) team developed a conceptual model of CBPR partnerships to understand the contribution of partnership processes to improved community capacity and health outcomes. With the model primarily developed through academic literature and expert consensus building, we sought community input to assess face validity and acceptability. Our research team conducted semi-structured focus groups with six partnerships nationwide. Participants validated and expanded on existing model constructs and identified new constructs based on "real-world" praxis, resulting in a revised model. Four cross-cutting constructs were identified: trust development, capacity, mutual learning, and power dynamics. By empirically testing the model, we found community face validity and capacity to adapt the model to diverse contexts. We recommend partnerships use and adapt the CBPR model and its constructs, for collective reflection and evaluation, to enhance their partnering practices and achieve their health and research goals. © The Author(s) 2014.

  14. An initial investigation into the validity of a computer-based auditory processing assessment (Feather Squadron).

    PubMed

    Barker, Matthew D; Purdy, Suzanne C

    2016-01-01

    This research investigates a novel method for identifying and measuring school-aged children with poor auditory processing through a tablet computer. Feasibility and test-retest reliability are investigated by examining the percentage of Group 1 participants able to complete the tasks and developmental effects on performance. Concurrent validity was investigated against traditional tests of auditory processing using Group 2. There were 847 students aged 5 to 13 years in group 1, and 46 aged 5 to 14 years in group 2. Some tasks could not be completed by the youngest participants. Significant correlations were found between results of most auditory processing areas assessed by the Feather Squadron test and traditional auditory processing tests. Test-retest comparisons indicated good reliability for most of the Feather Squadron assessments and some of the traditional tests. The results indicate the Feather Squadron assessment is a time-efficient, feasible, concurrently valid, and reliable approach for measuring auditory processing in school-aged children. Clinically, this may be a useful option for audiologists when performing auditory processing assessments as it is a relatively fast, engaging, and easy way to assess auditory processing abilities. Research is needed to investigate further the construct validity of this new assessment by examining the association between performance on Feather Squadron and objective evoked potential, lesion studies, and/or functional imaging measures of auditory function.

  15. Validating and determining the weight of items used for evaluating clinical governance implementation based on analytic hierarchy process model.

    PubMed

    Hooshmand, Elaheh; Tourani, Sogand; Ravaghi, Hamid; Vafaee Najar, Ali; Meraji, Marziye; Ebrahimipour, Hossein

    2015-04-08

    The purpose of implementing a system such as Clinical Governance (CG) is to integrate, establish and globalize distinct policies in order to improve quality through increasing professional knowledge and the accountability of healthcare professionals toward providing clinical excellence. Since CG is related to change, and change requires money and time, CG implementation has to be focused on priority areas that are in more dire need of change. The purpose of the present study was to validate and determine the significance of items used for evaluating CG implementation. The present study was descriptive-quantitative in method and design. Items used for evaluating CG implementation were first validated by the Delphi method and then compared with one another and ranked based on the Analytical Hierarchy Process (AHP) model. The items that were validated for evaluating CG implementation in Iran include performance evaluation, training and development, personnel motivation, clinical audit, clinical effectiveness, risk management, resource allocation, policies and strategies, external audit, information system management, research and development, CG structure, implementation prerequisites, the management of patients' non-medical needs, complaints and patients' participation in the treatment process. The most important items based on their degree of significance were training and development, performance evaluation, and risk management. The least important items included the management of patients' non-medical needs, patients' participation in the treatment process and research and development. The fundamental requirements of CG implementation included having an effective policy at national level, avoiding perfectionism, using the expertise and potentials of the entire country and the coordination of this model with other models of quality improvement such as accreditation and patient safety. © 2015 by Kerman University of Medical Sciences.
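
    The abstract names the Analytic Hierarchy Process as the ranking step. As a minimal sketch of how AHP derives item weights (not the study's actual expert judgments), the snippet below takes a small pairwise comparison matrix, extracts the principal eigenvector as the weight vector, and reports a consistency ratio using Saaty's random index for a 3x3 matrix.

```python
import numpy as np

def ahp_weights(pairwise, random_index=0.58):
    """Priority weights and consistency ratio from a reciprocal pairwise matrix
    (random_index defaults to Saaty's value for a 3x3 matrix)."""
    A = np.asarray(pairwise, dtype=float)
    eigvals, eigvecs = np.linalg.eig(A)
    k = eigvals.real.argmax()                          # principal (Perron) eigenvalue
    w = eigvecs[:, k].real
    w = w / w.sum()                                    # normalized priority weights
    n = A.shape[0]
    ci = (eigvals.real.max() - n) / (n - 1)            # consistency index
    return w, ci / random_index                        # weights, consistency ratio

# Illustrative comparison of three items: training & development, performance
# evaluation, risk management (values are hypothetical expert judgments)
M = [[1, 2, 3],
     [1 / 2, 1, 2],
     [1 / 3, 1 / 2, 1]]
weights, cr = ahp_weights(M)
print(weights.round(3), f"CR = {cr:.3f}")
```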

  16. Integrated Process Modeling-A Process Validation Life Cycle Companion.

    PubMed

    Zahel, Thomas; Hauer, Stefan; Mueller, Eric M; Murphy, Patrick; Abad, Sandra; Vasilieva, Elena; Maurer, Daniel; Brocard, Cécile; Reinisch, Daniela; Sagmeister, Patrick; Herwig, Christoph

    2017-10-17

    During the regulatory requested process validation of pharmaceutical manufacturing processes, companies aim to identify, control, and continuously monitor process variation and its impact on critical quality attributes (CQAs) of the final product. It is difficult to directly connect the impact of single process parameters (PPs) to final product CQAs, especially in biopharmaceutical process development and production, where multiple unit operations are stacked together and interact with each other. Therefore, we want to present the application of Monte Carlo (MC) simulation using an integrated process model (IPM) that enables estimation of process capability even in early stages of process validation. Once the IPM is established, its capability in risk and criticality assessment is furthermore demonstrated. IPMs can be used to enable holistic production control strategies that take interactions of process parameters of multiple unit operations into account. Moreover, IPMs can be trained with development data, refined with qualification runs, and maintained with routine manufacturing data, which underlines the lifecycle concept. These applications will be shown by means of a process characterization study recently conducted at a world-leading contract manufacturing organization (CMO). The new IPM methodology therefore allows anticipation of out-of-specification (OOS) events, identification of critical process parameters, and risk-based decisions on counteractions that increase process robustness and decrease the likelihood of OOS events.
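
    A minimal sketch of the Monte Carlo idea described above, under entirely hypothetical transfer functions and parameter distributions: variation in process parameters is propagated through two stacked unit-operation models, and the fraction of simulated batches whose final impurity exceeds a hypothetical specification estimates the probability of an out-of-specification event.

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000

# Hypothetical process-parameter distributions taken from development (Stage 1) data
load_density = rng.normal(30.0, 2.0, N)   # capture step: resin load density (g/L)
elution_ph = rng.normal(7.2, 0.1, N)      # polishing step: elution pH

# Hypothetical unit-operation transfer functions; the impurity leaving the capture
# step feeds the polishing step, so the two operations interact.
impurity_after_capture = 50.0 + 1.5 * (load_density - 30.0)          # ppm
log_reduction = 1.5 - 3.0 * np.abs(elution_ph - 7.2)                 # polishing LRV
impurity_final = impurity_after_capture * 10.0 ** (-log_reduction)   # ppm

spec_limit = 3.0  # hypothetical CQA specification (ppm)
print(f"Estimated probability of an OOS batch: {np.mean(impurity_final > spec_limit):.4f}")
```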

  17. Occurrence analysis of daily rainfalls through non-homogeneous Poissonian processes

    NASA Astrophysics Data System (ADS)

    Sirangelo, B.; Ferrari, E.; de Luca, D. L.

    2011-06-01

    A stochastic model based on a non-homogeneous Poisson process, characterised by a time-dependent intensity of rainfall occurrence, is employed to explain seasonal effects of daily rainfalls exceeding prefixed threshold values. The data modelling has been performed with a partition of the observed daily rainfall data into a calibration period for parameter estimation and a validation period for checking for changes in the occurrence process. The model has been applied to a set of rain gauges located in different geographical areas of Southern Italy. The results show a good fit of the time-varying intensity of the rainfall occurrence process by a 2-harmonic Fourier law and no statistically significant evidence of changes in the validation period for different threshold values.
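
    A minimal sketch of the occurrence model described above, assuming illustrative (not fitted) parameter values: a non-homogeneous Poisson process whose daily intensity follows a 2-harmonic Fourier law, simulated with the Lewis-Shedler thinning algorithm.

```python
import numpy as np

def intensity(t, mu=0.15, a=(0.6, 0.2), phi=(0.3, 1.1), period=365.25):
    """Daily occurrence intensity with a 2-harmonic Fourier seasonal modulation."""
    w = 2.0 * np.pi * t / period
    return mu * (1.0 + a[0] * np.cos(w - phi[0]) + a[1] * np.cos(2.0 * w - phi[1]))

def simulate_nhpp(t_max, lam_max, seed=1):
    """Lewis-Shedler thinning: simulate exceedance days on [0, t_max]."""
    rng = np.random.default_rng(seed)
    t, events = 0.0, []
    while True:
        t += rng.exponential(1.0 / lam_max)
        if t > t_max:
            return np.array(events)
        if rng.uniform() < intensity(t) / lam_max:
            events.append(t)

# lam_max bounds the intensity: mu * (1 + a1 + a2)
days = simulate_nhpp(t_max=3 * 365.25, lam_max=0.15 * (1.0 + 0.6 + 0.2))
print(f"{days.size} exceedance days simulated over three years")
```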

  18. Validation of Alternative In Vitro Methods to Animal Testing: Concepts, Challenges, Processes and Tools.

    PubMed

    Griesinger, Claudius; Desprez, Bertrand; Coecke, Sandra; Casey, Warren; Zuang, Valérie

    This chapter explores the concepts, processes, tools and challenges relating to the validation of alternative methods for toxicity and safety testing. In general terms, validation is the process of assessing the appropriateness and usefulness of a tool for its intended purpose. Validation is routinely used in various contexts in science, technology, the manufacturing and services sectors. It serves to assess the fitness-for-purpose of devices, systems, software up to entire methodologies. In the area of toxicity testing, validation plays an indispensable role: "alternative approaches" are increasingly replacing animal models as predictive tools and it needs to be demonstrated that these novel methods are fit for purpose. Alternative approaches include in vitro test methods, non-testing approaches such as predictive computer models up to entire testing and assessment strategies composed of method suites, data sources and decision-aiding tools. Data generated with alternative approaches are ultimately used for decision-making on public health and the protection of the environment. It is therefore essential that the underlying methods and methodologies are thoroughly characterised, assessed and transparently documented through validation studies involving impartial actors. Importantly, validation serves as a filter to ensure that only test methods able to produce data that help to address legislative requirements (e.g. EU's REACH legislation) are accepted as official testing tools and, owing to the globalisation of markets, recognised on international level (e.g. through inclusion in OECD test guidelines). Since validation creates a credible and transparent evidence base on test methods, it provides a quality stamp, supporting companies developing and marketing alternative methods and creating considerable business opportunities. Validation of alternative methods is conducted through scientific studies assessing two key hypotheses, reliability and relevance of the test method for a given purpose. Relevance encapsulates the scientific basis of the test method, its capacity to predict adverse effects in the "target system" (i.e. human health or the environment) as well as its applicability for the intended purpose. In this chapter we focus on the validation of non-animal in vitro alternative testing methods and review the concepts, challenges, processes and tools fundamental to the validation of in vitro methods intended for hazard testing of chemicals. We explore major challenges and peculiarities of validation in this area. Based on the notion that validation per se is a scientific endeavour that needs to adhere to key scientific principles, namely objectivity and appropriate choice of methodology, we examine basic aspects of study design and management, and provide illustrations of statistical approaches to describe predictive performance of validated test methods as well as their reliability.

  19. The specification-based validation of reliable multicast protocol: Problem Report. M.S. Thesis

    NASA Technical Reports Server (NTRS)

    Wu, Yunqing

    1995-01-01

    Reliable Multicast Protocol (RMP) is a communication protocol that provides an atomic, totally ordered, reliable multicast service on top of unreliable IP multicasting. In this report, we develop formal models for RMP using existing automated verification systems, and perform validation on the formal RMP specifications. The validation analysis helped identify some minor specification and design problems. We also use the formal models of RMP to generate a test suite for conformance testing of the implementation. Throughout the process of RMP development, we follow an iterative, interactive approach that emphasizes concurrent and parallel progress of implementation and verification processes. Through this approach, we incorporate formal techniques into our development process, promote a common understanding for the protocol, increase the reliability of our software, and maintain high fidelity between the specifications of RMP and its implementation.

  20. Increasing the efficiency of designing hemming processes by using an element-based metamodel approach

    NASA Astrophysics Data System (ADS)

    Kaiser, C.; Roll, K.; Volk, W.

    2017-09-01

    In the automotive industry, the manufacturing of automotive outer panels requires hemming processes in which two sheet metal parts are joined together by bending the flange of the outer part over the inner part. Because of decreasing development times and the steadily growing number of vehicle derivatives, an efficient digital product and process validation is necessary. Commonly used simulations, which are based on the finite element method, demand significant modelling effort, which results in disadvantages especially in the early product development phase. To increase the efficiency of designing hemming processes, this paper presents a hemming-specific metamodel approach. The approach includes a part analysis in which the outline of the automotive outer panels is initially split into individual segments. By doing a parametrization of each of the segments and assigning basic geometric shapes, the outline of the part is approximated. Based on this, the hemming parameters such as flange length, roll-in, wrinkling and plastic strains are calculated for each of the geometric basic shapes by performing a metamodel-based segmental product validation. The metamodel is based on an element-similar formulation that includes a reference dataset of various geometric basic shapes. A random automotive outer panel can now be analysed and optimized based on the hemming-specific database. By implementing this approach into a planning system, an efficient optimization of designing hemming processes will be enabled. Furthermore, valuable time and cost benefits can be realized in a vehicle's development process.

  1. Validating a model that predicts daily growth and feed quality of New Zealand dairy pastures.

    PubMed

    Woodward, S J

    2001-09-01

    The Pasture Quality (PQ) model is a simple, mechanistic, dynamical system model that was designed to capture the essential biological processes in grazed grass-clover pasture, and to be optimised to derive improved grazing strategies for New Zealand dairy farms. While the individual processes represented in the model (photosynthesis, tissue growth, flowering, leaf death, decomposition, worms) were based on experimental data, this did not guarantee that the assembled model would accurately predict the behaviour of the system as a whole (i.e., pasture growth and quality). Validation of the whole model was thus a priority, since any strategy derived from the model could impact a farm business in the order of thousands of dollars per annum if adopted. This paper describes the process of defining performance criteria for the model, obtaining suitable data to test the model, and carrying out the validation analysis. The validation process highlighted a number of weaknesses in the model, which will lead to the model being improved. As a result, the model's utility will be enhanced. Furthermore, validation was found to have an unexpected additional benefit, in that despite the model's poor initial performance, support was generated for the model among field scientists involved in the wider project.

  2. [Support of the nursing process through electronic nursing documentation systems (UEPD) – Initial validation of an instrument].

    PubMed

    Hediger, Hannele; Müller-Staub, Maria; Petry, Heidi

    2016-01-01

    Electronic nursing documentation systems, with standardized nursing terminology, are IT-based systems for recording the nursing processes. These systems have the potential to improve the documentation of the nursing process and to support nurses in care delivery. This article describes the development and initial validation of an instrument (known by its German acronym UEPD) to measure the subjectively-perceived benefits of an electronic nursing documentation system in care delivery. The validity of the UEPD was examined by means of an evaluation study carried out in an acute care hospital (n = 94 nurses) in German-speaking Switzerland. Construct validity was analyzed by principal components analysis. Initial evidence of the validity of the UEPD could be established. The analysis showed a stable four-factor model (FS = 0.89) comprising 25 items. All factors loaded ≥ 0.50 and the scales demonstrated high internal consistency (Cronbach's α = 0.73 – 0.90). Principal component analysis revealed four dimensions of support: establishing nursing diagnoses and goals; recording a case history/an assessment and documenting the nursing process; implementation and evaluation; and information exchange. Further testing with larger control samples and with different electronic documentation systems is needed. Another potential direction would be to employ the UEPD in a comparison of various electronic documentation systems.

  3. Funding for the 2ND IAEA technical meeting on fusion data processing, validation and analysis

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Greenwald, Martin

    The International Atomic Energy Agency (IAEA) will organize the second Technical Meeting on Fusion Data Processing, Validation and Analysis from 30 May to 02 June, 2017, in Cambridge, MA, USA. The meeting will be hosted by the MIT Plasma Science and Fusion Center (PSFC). The objective of the meeting is to provide a platform where a set of topics relevant to fusion data processing, validation and analysis are discussed with the view of extrapolating needs to next-step fusion devices such as ITER. The validation and analysis of experimental data obtained from diagnostics used to characterize fusion plasmas are crucial for a knowledge-based understanding of the physical processes governing the dynamics of these plasmas. The meeting will aim at fostering, in particular, discussions of research and development results that set out or underline trends observed in the current major fusion confinement devices. General information on the IAEA, including its mission and organization, can be found at the IAEA website. Meeting topics include: uncertainty quantification (UQ); model selection, validation, and verification (V&V); probability theory and statistical analysis; inverse problems and equilibrium reconstruction; integrated data analysis; real-time data analysis; machine learning; signal/image processing and pattern recognition; experimental design and synthetic diagnostics; and data management.

  4. Is There a Critical Distance for Fickian Transport? - a Statistical Approach to Sub-Fickian Transport Modelling in Porous Media

    NASA Astrophysics Data System (ADS)

    Most, S.; Nowak, W.; Bijeljic, B.

    2014-12-01

    Transport processes in porous media are frequently simulated as particle movement. This process can be formulated as a stochastic process of particle position increments. At the pore scale, the geometry and micro-heterogeneities prohibit the commonly made assumption of independent and normally distributed increments to represent dispersion. Many recent particle methods seek to loosen this assumption. Recent experimental data suggest that we have not yet reached the end of the need to generalize, because particle increments show statistical dependency beyond linear correlation and over many time steps. The goal of this work is to better understand the validity regions of commonly made assumptions. We investigate after what transport distances we can observe: (1) a statistical dependence between increments that can be modelled as an order-k Markov process reducing to order 1 (this would be the Markovian distance for the process, where the validity of yet-unexplored non-Gaussian-but-Markovian random walks would start); (2) a bivariate statistical dependence that simplifies to a multi-Gaussian dependence based on simple linear correlation (validity of correlated PTRW); and (3) complete absence of statistical dependence (validity of classical PTRW/CTRW). The approach is to derive a statistical model for pore-scale transport from a powerful experimental data set via copula analysis. The model is formulated as a non-Gaussian, mutually dependent Markov process of higher order, which allows us to investigate the validity ranges of simpler models.
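
    One diagnostic implied by the abstract, sketched below on synthetic data (not the pore-scale experimental set): comparing the autocorrelation of particle increments with that of their magnitudes reveals dependence beyond linear correlation, since the raw increments can be uncorrelated while their magnitudes remain strongly persistent.

```python
import numpy as np

def autocorr(x, lag):
    """Sample autocorrelation of a series at the given lag."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    return np.dot(x[:-lag], x[lag:]) / np.dot(x, x)

rng = np.random.default_rng(7)
# Synthetic increments: linearly uncorrelated, but with persistent magnitudes
scale = np.exp(np.cumsum(rng.normal(0.0, 0.05, 20_000)))
increments = rng.normal(0.0, 1.0, 20_000) * scale

for lag in (1, 5, 20):
    print(f"lag {lag:2d}: corr of increments = {autocorr(increments, lag):+.3f}, "
          f"corr of |increments| = {autocorr(np.abs(increments), lag):+.3f}")
```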

  5. Statistical validation and an empirical model of hydrogen production enhancement found by utilizing passive flow disturbance in the steam-reformation process

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Erickson, Paul A.; Liao, Chang-hsien

    2007-11-15

    A passive flow disturbance has been proven to enhance the conversion of fuel in a methanol-steam reformer. This study presents a statistical validation of the experiment based on a standard 2^k factorial experiment design and the resulting empirical model of the enhanced hydrogen-producing process. A factorial experiment design was used to statistically analyze the effects and interactions of various input factors in the experiment. Three input factors, including the number of flow disturbers, catalyst size, and reactant flow rate, were investigated for their effects on the fuel conversion in the steam-reformation process. Based on the experimental results, an empirical model was developed and further evaluated with an uncertainty analysis and interior point data. (author)
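
    A minimal sketch of effect estimation in a 2^k factorial design of the kind referenced above; the coded factors and the eight conversion responses are hypothetical, not the reformer data.

```python
import itertools
import numpy as np

# Coded 2^3 design: A = number of flow disturbers, B = catalyst size, C = flow rate
runs = np.array(list(itertools.product([-1, 1], repeat=3)), dtype=float)
# Hypothetical fuel-conversion responses for the eight runs (fractions)
y = np.array([0.62, 0.70, 0.60, 0.69, 0.55, 0.66, 0.54, 0.65])

for i, name in enumerate(["A", "B", "C"]):
    effect = y[runs[:, i] == 1].mean() - y[runs[:, i] == -1].mean()
    print(f"main effect {name}: {effect:+.3f}")

# Example two-factor interaction: A x C
ac = runs[:, 0] * runs[:, 2]
print(f"interaction AxC: {y[ac == 1].mean() - y[ac == -1].mean():+.3f}")
```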

  6. Empirical agreement in model validation.

    PubMed

    Jebeile, Julie; Barberousse, Anouk

    2016-04-01

    Empirical agreement is often used as an important criterion when assessing the validity of scientific models. However, it is by no means a sufficient criterion as a model can be so adjusted as to fit available data even though it is based on hypotheses whose plausibility is known to be questionable. Our aim in this paper is to investigate the uses of empirical agreement within the process of model validation. Copyright © 2015 Elsevier Ltd. All rights reserved.

  7. A comparison between the original and Tablet-based Symbol Digit Modalities Test in patients with schizophrenia: Test-retest agreement, random measurement error, practice effect, and ecological validity.

    PubMed

    Tang, Shih-Fen; Chen, I-Hui; Chiang, Hsin-Yu; Wu, Chien-Te; Hsueh, I-Ping; Yu, Wan-Hui; Hsieh, Ching-Lin

    2017-11-27

    We aimed to compare the test-retest agreement, random measurement error, practice effect, and ecological validity of the original and Tablet-based Symbol Digit Modalities Test (T-SDMT) over five serial assessments, and to examine the concurrent validity of the T-SDMT in patients with schizophrenia. Sixty patients with chronic schizophrenia completed five serial assessments (one week apart) of the SDMT and T-SDMT and one assessment of the Activities of Daily Living Rating Scale III at the first time point. Both measures showed high test-retest agreement and similar levels of random measurement error over five serial assessments. Moreover, the practice effects of the two measures did not reach a plateau phase after five serial assessments in young and middle-aged participants. Nevertheless, only the practice effect of the T-SDMT became trivial after the first assessment. Like the SDMT, the T-SDMT had good ecological validity. The T-SDMT also had good concurrent validity with the SDMT. In addition, only the T-SDMT had the discriminative validity to distinguish processing speed between young and middle-aged participants. Compared to the SDMT, the T-SDMT had overall slightly better psychometric properties, so it can be an alternative measure to the SDMT for assessing processing speed in patients with schizophrenia. Copyright © 2017 Elsevier B.V. All rights reserved.
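
    A minimal sketch of one test-retest agreement statistic of the kind reported above, a two-way random-effects intraclass correlation for absolute agreement, ICC(2,1); the paired session scores are synthetic, not the study sample.

```python
import numpy as np

def icc_2_1(scores):
    """Two-way random-effects ICC for absolute agreement, single measurements.
    scores: 2-D array, rows = subjects, columns = repeated sessions."""
    Y = np.asarray(scores, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ms_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    resid = Y - Y.mean(axis=1, keepdims=True) - Y.mean(axis=0, keepdims=True) + grand
    ms_err = (resid ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

rng = np.random.default_rng(3)
true_score = rng.normal(50.0, 10.0, 60)                       # 60 hypothetical participants
sessions = np.column_stack([true_score + rng.normal(0.0, 3.0, 60) for _ in range(2)])
print(f"ICC(2,1) = {icc_2_1(sessions):.2f}")
```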

  8. Brazilian Center for the Validation of Alternative Methods (BraCVAM) and the process of validation in Brazil.

    PubMed

    Presgrave, Octavio; Moura, Wlamir; Caldeira, Cristiane; Pereira, Elisabete; Bôas, Maria H Villas; Eskes, Chantra

    2016-03-01

    The need for the creation of a Brazilian centre for the validation of alternative methods was recognised in 2008, and members of academia, industry and existing international validation centres immediately engaged with the idea. In 2012, co-operation between the Oswaldo Cruz Foundation (FIOCRUZ) and the Brazilian Health Surveillance Agency (ANVISA) instigated the establishment of the Brazilian Center for the Validation of Alternative Methods (BraCVAM), which was officially launched in 2013. The Brazilian validation process follows OECD Guidance Document No. 34, where BraCVAM functions as the focal point to identify and/or receive requests from parties interested in submitting tests for validation. BraCVAM then informs the Brazilian National Network on Alternative Methods (RENaMA) of promising assays, which helps with prioritisation and contributes to the validation studies of selected assays. A Validation Management Group supervises the validation study, and the results obtained are peer-reviewed by an ad hoc Scientific Review Committee, organised under the auspices of BraCVAM. Based on the peer-review outcome, BraCVAM will prepare recommendations on the validated test method, which will be sent to the National Council for the Control of Animal Experimentation (CONCEA). CONCEA is in charge of the regulatory adoption of all validated test methods in Brazil, following an open public consultation. 2016 FRAME.

  9. Three Dimensional Vapor Intrusion Modeling: Model Validation and Uncertainty Analysis

    NASA Astrophysics Data System (ADS)

    Akbariyeh, S.; Patterson, B.; Rakoczy, A.; Li, Y.

    2013-12-01

    Volatile organic chemicals (VOCs), such as chlorinated solvents and petroleum hydrocarbons, are prevalent groundwater contaminants due to their improper disposal and accidental spillage. In addition to contaminating groundwater, VOCs may partition into the overlying vadose zone and enter buildings through gaps and cracks in foundation slabs or basement walls, a process termed vapor intrusion. Vapor intrusion of VOCs has been recognized as a detrimental source of human exposure to potentially carcinogenic or toxic compounds. The simulation of vapor intrusion from a subsurface source has been the focus of many studies to better understand the process and guide field investigation. While multiple analytical and numerical models have been developed to simulate the vapor intrusion process, detailed validation of these models against well-controlled experiments is still lacking, due to the complexity and uncertainties associated with site characterization and soil gas flux and indoor air concentration measurement. In this work, we present an effort to validate a three-dimensional vapor intrusion model based on a well-controlled experimental quantification of the vapor intrusion pathways into a slab-on-ground building under varying environmental conditions. Finally, a probabilistic approach based on Monte Carlo simulations is implemented to determine the probability distribution of indoor air concentration based on the most uncertain input parameters.
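
    A minimal sketch of the probabilistic step described above, using a deliberately simplified attenuation-factor relation and illustrative lognormal input distributions (not the study's model or parameters): Monte Carlo sampling of the uncertain inputs yields a distribution of indoor air concentration.

```python
import numpy as np

rng = np.random.default_rng(11)
N = 50_000

# Illustrative lognormal distributions for the most uncertain inputs
c_soil_gas = rng.lognormal(mean=np.log(500.0), sigma=0.4, size=N)   # ug/m3 at the source
alpha = rng.lognormal(mean=np.log(5e-4), sigma=0.7, size=N)         # attenuation factor

c_indoor = alpha * c_soil_gas  # simplified attenuation-factor relation

print(f"median indoor concentration : {np.median(c_indoor):.3f} ug/m3")
print(f"95th percentile             : {np.percentile(c_indoor, 95):.3f} ug/m3")
```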

  10. Plan delivery quality assurance for CyberKnife: Statistical process control analysis of 350 film-based patient-specific QAs.

    PubMed

    Bellec, J; Delaby, N; Jouyaux, F; Perdrieux, M; Bouvier, J; Sorel, S; Henry, O; Lafond, C

    2017-07-01

    Robotic radiosurgery requires plan delivery quality assurance (DQA) but there has never been a published comprehensive analysis of a patient-specific DQA process in a clinic. We proposed to evaluate 350 consecutive film-based patient-specific DQAs using statistical process control. We evaluated the performance of the process to propose achievable tolerance criteria for DQA validation and we sought to identify suboptimal DQA using control charts. DQAs were performed on a CyberKnife-M6 using Gafchromic-EBT3 films. The signal-to-dose conversion was performed using a multichannel-correction and a scanning protocol that combined measurement and calibration in a single scan. The DQA analysis comprised a gamma-index analysis at 3%/1.5mm and a separate evaluation of spatial and dosimetric accuracy of the plan delivery. Each parameter was plotted on a control chart and control limits were calculated. A capability index (Cpm) was calculated to evaluate the ability of the process to produce results within specifications. The analysis of capability showed that a gamma pass rate of 85% at 3%/1.5mm was highly achievable as acceptance criteria for DQA validation using a film-based protocol (Cpm>1.33). 3.4% of DQA were outside a control limit of 88% for gamma pass-rate. The analysis of the out-of-control DQA helped identify a dosimetric error in our institute for a specific treatment type. We have defined initial tolerance criteria for DQA validations. We have shown that the implementation of a film-based patient-specific DQA protocol with the use of control charts is an effective method to improve patient treatment safety on CyberKnife. Copyright © 2017 Associazione Italiana di Fisica Medica. Published by Elsevier Ltd. All rights reserved.
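
    A minimal sketch of the statistical-process-control step described above: an individuals control chart on the gamma pass rate plus a Taguchi-style capability index against the lower specification. The simulated pass-rate series, the 2.66 moving-range constant, and the one-sided Cpm variant are illustrative assumptions, not the institution's data or exact formulas.

```python
import numpy as np

rng = np.random.default_rng(5)
pass_rate = np.clip(rng.normal(96.0, 2.5, 350), 0.0, 100.0)  # % gamma pass, 3%/1.5 mm

# Individuals chart: lower control limit from the mean moving range (constant 2.66)
moving_range = np.abs(np.diff(pass_rate))
lcl = pass_rate.mean() - 2.66 * moving_range.mean()
n_out = int(np.sum(pass_rate < lcl))

# One-sided Taguchi-style capability index against a lower spec of 85%, target 100%
lsl, target = 85.0, 100.0
tau = np.sqrt(pass_rate.var(ddof=1) + (pass_rate.mean() - target) ** 2)
cpm = (target - lsl) / (3.0 * tau)

print(f"LCL = {lcl:.1f}%, DQAs below LCL: {n_out}, Cpm = {cpm:.2f}")
```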

  11. LinkEHR-Ed: a multi-reference model archetype editor based on formal semantics.

    PubMed

    Maldonado, José A; Moner, David; Boscá, Diego; Fernández-Breis, Jesualdo T; Angulo, Carlos; Robles, Montserrat

    2009-08-01

    To develop a powerful archetype editing framework capable of handling multiple reference models and oriented towards the semantic description and standardization of legacy data. The main prerequisite for implementing tools providing enhanced support for archetypes is the clear specification of archetype semantics. We propose a formalization of the definition section of archetypes based on types over tree-structured data. It covers the specialization of archetypes, the relationship between reference models and archetypes and conformance of data instances to archetypes. LinkEHR-Ed, a visual archetype editor based on the former formalization with advanced processing capabilities that supports multiple reference models, the editing and semantic validation of archetypes, the specification of mappings to data sources, and the automatic generation of data transformation scripts, is developed. LinkEHR-Ed is a useful tool for building, processing and validating archetypes based on any reference model.

  12. PREDICTING SUBSURFACE CONTAMINANT TRANSPORT AND TRANSFORMATION: CONSIDERATIONS FOR MODEL SELECTION AND FIELD VALIDATION

    EPA Science Inventory

    Predicting subsurface contaminant transport and transformation requires mathematical models based on a variety of physical, chemical, and biological processes. The mathematical model is an attempt to quantitatively describe observed processes in order to permit systematic forecas...

  13. Directed Design of Experiments for Validating Probability of Detection Capability of a Testing System

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R. (Inventor)

    2012-01-01

    A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
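
    A minimal sketch of a hit/miss probability-of-detection analysis of the general kind this apparatus automates (not the patented directed-DOE procedure itself): a logistic POD-versus-flaw-size curve is fitted by Newton-Raphson and the flaw size with 90% detection probability (a90) is read off. The hit/miss data are synthetic.

```python
import numpy as np

rng = np.random.default_rng(9)
size = rng.uniform(0.2, 3.0, 400)                      # flaw size (mm)
p_true = 1 / (1 + np.exp(-(np.log(size) - np.log(1.0)) / 0.25))
hit = (rng.uniform(size=400) < p_true).astype(float)   # 1 = detected, 0 = missed

# Logistic regression of hit/miss on log(flaw size), fitted by Newton-Raphson (IRLS)
X = np.column_stack([np.ones_like(size), np.log(size)])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    w = p * (1 - p)
    beta += np.linalg.solve(X.T @ (X * w[:, None]), X.T @ (hit - p))

# Flaw size detected with 90% probability under the fitted model
a90 = np.exp((np.log(0.9 / 0.1) - beta[0]) / beta[1])
print(f"a90 = {a90:.2f} mm")
```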

  14. MotiveValidator: interactive web-based validation of ligand and residue structure in biomolecular complexes.

    PubMed

    Vařeková, Radka Svobodová; Jaiswal, Deepti; Sehnal, David; Ionescu, Crina-Maria; Geidl, Stanislav; Pravda, Lukáš; Horský, Vladimír; Wimmerová, Michaela; Koča, Jaroslav

    2014-07-01

    Structure validation has become a major issue in the structural biology community, and an essential step is checking the ligand structure. This paper introduces MotiveValidator, a web-based application for the validation of ligands and residues in PDB or PDBx/mmCIF format files provided by the user. Specifically, MotiveValidator is able to evaluate in a straightforward manner whether the ligand or residue being studied has a correct annotation (3-letter code), i.e. if it has the same topology and stereochemistry as the model ligand or residue with this annotation. If not, MotiveValidator explicitly describes the differences. MotiveValidator offers a user-friendly, interactive and platform-independent environment for validating structures obtained by any type of experiment. The results of the validation are presented in both tabular and graphical form, facilitating their interpretation. MotiveValidator can process thousands of ligands or residues in a single validation run that takes no more than a few minutes. MotiveValidator can be used for testing single structures, or the analysis of large sets of ligands or fragments prepared for binding site analysis, docking or virtual screening. MotiveValidator is freely available via the Internet at http://ncbr.muni.cz/MotiveValidator. © The Author(s) 2014. Published by Oxford University Press on behalf of Nucleic Acids Research.

  15. Rationality versus reality: the challenges of evidence-based decision making for health policy makers

    PubMed Central

    2010-01-01

    Background Current healthcare systems have extended the evidence-based medicine (EBM) approach to health policy and delivery decisions, such as access-to-care, healthcare funding and health program continuance, through attempts to integrate valid and reliable evidence into the decision making process. These policy decisions have major impacts on society and have high personal and financial costs associated with those decisions. Decision models such as these function under a shared assumption of rational choice and utility maximization in the decision-making process. Discussion We contend that health policy decision makers are generally unable to attain the basic goals of evidence-based decision making (EBDM) and evidence-based policy making (EBPM) because humans make decisions with their naturally limited, faulty, and biased decision-making processes. A cognitive information processing framework is presented to support this argument, and subtle cognitive processing mechanisms are introduced to support the focal thesis: health policy makers' decisions are influenced by the subjective manner in which they individually process decision-relevant information rather than on the objective merits of the evidence alone. As such, subsequent health policy decisions do not necessarily achieve the goals of evidence-based policy making, such as maximizing health outcomes for society based on valid and reliable research evidence. Summary In this era of increasing adoption of evidence-based healthcare models, the rational choice, utility maximizing assumptions in EBDM and EBPM, must be critically evaluated to ensure effective and high-quality health policy decisions. The cognitive information processing framework presented here will aid health policy decision makers by identifying how their decisions might be subtly influenced by non-rational factors. In this paper, we identify some of the biases and potential intervention points and provide some initial suggestions about how the EBDM/EBPM process can be improved. PMID:20504357

  16. Rationality versus reality: the challenges of evidence-based decision making for health policy makers.

    PubMed

    McCaughey, Deirdre; Bruning, Nealia S

    2010-05-26

    Current healthcare systems have extended the evidence-based medicine (EBM) approach to health policy and delivery decisions, such as access-to-care, healthcare funding and health program continuance, through attempts to integrate valid and reliable evidence into the decision making process. These policy decisions have major impacts on society and have high personal and financial costs associated with those decisions. Decision models such as these function under a shared assumption of rational choice and utility maximization in the decision-making process. We contend that health policy decision makers are generally unable to attain the basic goals of evidence-based decision making (EBDM) and evidence-based policy making (EBPM) because humans make decisions with their naturally limited, faulty, and biased decision-making processes. A cognitive information processing framework is presented to support this argument, and subtle cognitive processing mechanisms are introduced to support the focal thesis: health policy makers' decisions are influenced by the subjective manner in which they individually process decision-relevant information rather than on the objective merits of the evidence alone. As such, subsequent health policy decisions do not necessarily achieve the goals of evidence-based policy making, such as maximizing health outcomes for society based on valid and reliable research evidence. In this era of increasing adoption of evidence-based healthcare models, the rational choice, utility maximizing assumptions in EBDM and EBPM, must be critically evaluated to ensure effective and high-quality health policy decisions. The cognitive information processing framework presented here will aid health policy decision makers by identifying how their decisions might be subtly influenced by non-rational factors. In this paper, we identify some of the biases and potential intervention points and provide some initial suggestions about how the EBDM/EBPM process can be improved.

  17. A Java-based fMRI processing pipeline evaluation system for assessment of univariate general linear model and multivariate canonical variate analysis-based pipelines.

    PubMed

    Zhang, Jing; Liang, Lichen; Anderson, Jon R; Gatewood, Lael; Rottenberg, David A; Strother, Stephen C

    2008-01-01

    As functional magnetic resonance imaging (fMRI) becomes widely used, the demands for evaluation of fMRI processing pipelines and validation of fMRI analysis results are increasing rapidly. The current NPAIRS package, an IDL-based fMRI processing pipeline evaluation framework, lacks system interoperability and the ability to evaluate general linear model (GLM)-based pipelines using prediction metrics. Thus, it cannot fully evaluate fMRI analytical software modules such as FSL.FEAT and NPAIRS.GLM. In order to overcome these limitations, a Java-based fMRI processing pipeline evaluation system was developed. It integrated YALE (a machine learning environment) into Fiswidgets (an fMRI software environment) to obtain system interoperability and applied an algorithm to measure GLM prediction accuracy. The results demonstrated that the system can evaluate fMRI processing pipelines with univariate GLM and multivariate canonical variates analysis (CVA)-based models on real fMRI data based on prediction accuracy (classification accuracy) and statistical parametric image (SPI) reproducibility. In addition, a preliminary study was performed where four fMRI processing pipelines with GLM and CVA modules such as FSL.FEAT and NPAIRS.CVA were evaluated with the system. The results indicated that (1) the system can compare different fMRI processing pipelines with heterogeneous models (NPAIRS.GLM, NPAIRS.CVA and FSL.FEAT) and rank their performance by automatic performance scoring, and (2) the rank of pipeline performance is highly dependent on the preprocessing operations. These results suggest that the system will be of value for the comparison, validation, standardization and optimization of functional neuroimaging software packages and fMRI processing pipelines.

  18. At-line process analytical technology (PAT) for more efficient scale up of biopharmaceutical microfiltration unit operations.

    PubMed

    Watson, Douglas S; Kerchner, Kristi R; Gant, Sean S; Pedersen, Joseph W; Hamburger, James B; Ortigosa, Allison D; Potgieter, Thomas I

    2016-01-01

    Tangential flow microfiltration (MF) is a cost-effective and robust bioprocess separation technique, but successful full scale implementation is hindered by the empirical, trial-and-error nature of scale-up. We present an integrated approach leveraging at-line process analytical technology (PAT) and mass balance based modeling to de-risk MF scale-up. Chromatography-based PAT was employed to improve the consistency of an MF step that had been a bottleneck in the process used to manufacture a therapeutic protein. A 10-min reverse phase ultra high performance liquid chromatography (RP-UPLC) assay was developed to provide at-line monitoring of protein concentration. The method was successfully validated and method performance was comparable to previously validated methods. The PAT tool revealed areas of divergence from a mass balance-based model, highlighting specific opportunities for process improvement. Adjustment of appropriate process controls led to improved operability and significantly increased yield, providing a successful example of PAT deployment in the downstream purification of a therapeutic protein. The general approach presented here should be broadly applicable to reduce risk during scale-up of filtration processes and should be suitable for feed-forward and feed-back process control. © 2015 American Institute of Chemical Engineers.

  19. Situation awareness acquired from monitoring process plants - the Process Overview concept and measure.

    PubMed

    Lau, Nathan; Jamieson, Greg A; Skraaning, Gyrd

    2016-07-01

    We introduce Process Overview, a situation awareness characterisation of the knowledge derived from monitoring process plants. Process Overview is based on observational studies of process control work in the literature. The characterisation is applied to develop a query-based measure called the Process Overview Measure. The goal of the measure is to improve coupling between situation and awareness according to process plant properties and operator cognitive work. A companion article presents the empirical evaluation of the Process Overview Measure in a realistic process control setting. The Process Overview Measure demonstrated sensitivity and validity by revealing significant effects of experimental manipulations that corroborated with other empirical results. The measure also demonstrated adequate inter-rater reliability and practicality for measuring SA based on data collected by process experts. Practitioner Summary: The Process Overview Measure is a query-based measure for assessing operator situation awareness from monitoring process plants in representative settings.

  20. Constructing and Validating the Foreign Language Attitudes and Goals Survey (FLAGS)

    ERIC Educational Resources Information Center

    Cid, Eva; Granena, Gisela; Tragant, Elsa

    2009-01-01

    The present study describes the process that was followed in the construction and validation of the foreign language attitudes and goals survey (FLAGS), a new questionnaire based on qualitative data from Tragant and Munoz [Tragant, Munoz, C., 2000. "La motivacion y su relacion con la edad en un contexto escolar de aprendizaje de una lengua…

  1. The Japanese version of the questionnaire about the process of recovery: development and validity and reliability testing.

    PubMed

    Kanehara, Akiko; Kotake, Risa; Miyamoto, Yuki; Kumakura, Yousuke; Morita, Kentaro; Ishiura, Tomoko; Shimizu, Kimiko; Fujieda, Yumiko; Ando, Shuntaro; Kondo, Shinsuke; Kasai, Kiyoto

    2017-11-07

    Personal recovery is increasingly recognised as an important outcome measure in mental health services. This study aimed to develop a Japanese version of the Questionnaire about the Process of Recovery (QPR-J) and test its validity and reliability. The study comprised two stages that employed cross-sectional and prospective cohort designs, respectively. We translated the questionnaire using a standard translation/back-translation method. Convergent validity was examined by calculating Pearson's correlation coefficients with scores on the Recovery Assessment Scale (RAS) and the Short-Form-8 Health Survey (SF-8). An exploratory factor analysis (EFA) was conducted to examine factorial validity. We used intraclass correlation and Cronbach's alpha to examine the test-retest and internal consistency reliability of the QPR-J's 22-item full scale, 17-item intrapersonal and 5-item interpersonal subscales. We conducted an EFA along with a confirmatory factor analysis (CFA). Data were obtained from 197 users of mental health services (mean age: 42.0 years; 61.9% female; 49.2% diagnosed with schizophrenia). The QPR-J showed adequate convergent validity, exhibiting significant, positive correlations with the RAS and SF-8 scores. The QPR-J's full scale and subscales showed excellent test-retest and internal consistency reliability, with the exception of acceptable but relatively low internal consistency reliability for the interpersonal subscale. Based on the results of the CFA and EFA, we adopted the factor structure of the original two-factor model, as supported by the present CFA. The QPR-J is an adequately valid and reliable measure of the process of recovery among Japanese users of mental health services.

  2. Development of the Quality of Australian Nursing Documentation in Aged Care (QANDAC) instrument to assess paper-based and electronic resident records.

    PubMed

    Wang, Ning; Björvell, Catrin; Hailey, David; Yu, Ping

    2014-12-01

    To develop the Quality of Australian Nursing Documentation in Aged Care (QANDAC) instrument to measure the quality of paper-based and electronic resident records. The instrument was based on the nursing process model and on three attributes of documentation quality identified in a systematic review. The development process involved five phases following approaches to designing criterion-referenced measures. The face and content validities and the inter-rater reliability of the instrument were estimated using a focus group approach and consensus model. The instrument contains 34 questions in three sections: completion of nursing history and assessment, description of the care process and meeting the requirements of data entry. Estimates of the validity and inter-rater reliability of the instrument gave satisfactory results. The QANDAC instrument may be a useful audit tool for quality improvement and research in aged care documentation. © 2013 ACOTA.

  3. Expert system for web based collaborative CAE

    NASA Astrophysics Data System (ADS)

    Hou, Liang; Lin, Zusheng

    2006-11-01

    An expert system for web-based collaborative CAE was developed based on knowledge engineering, a relational database and commercial FEA (finite element analysis) software. The architecture of the system is illustrated. In this system, the experts' experiences, theories, typical examples and other related knowledge, which are used in the pre-processing stage of FEA, were categorized into analysis-process knowledge and object knowledge. Then, the integrated knowledge model, based on object-oriented and rule-based methods, is described. The integrated reasoning process, based on CBR (case-based reasoning) and rule-based reasoning, is presented. Finally, the analysis process of this expert system in a web-based CAE application is illustrated, and an analysis example of a machine tool column is presented to demonstrate the validity of the system.

  4. Mapping Perinatal Nursing Process Measurement Concepts to Standard Terminologies.

    PubMed

    Ivory, Catherine H

    2016-07-01

    The use of standard terminologies is an essential component for using data to inform practice and conduct research; perinatal nursing data standardization is needed. This study explored whether 76 distinct process elements important for perinatal nursing were present in four American Nurses Association-recognized standard terminologies. The 76 process elements were taken from a valid paper-based perinatal nursing process measurement tool. Using terminology-supported browsers, the elements were manually mapped to the selected terminologies by the researcher. A five-member expert panel validated 100% of the mapping findings. The majority of the process elements (n = 63, 83%) were present in SNOMED-CT, 28% (n = 21) in LOINC, 34% (n = 26) in ICNP, and 15% (n = 11) in CCC. SNOMED-CT and LOINC are terminologies currently recommended for use to facilitate interoperability in the capture of assessment and problem data in certified electronic medical records. Study results suggest that SNOMED-CT and LOINC contain perinatal nursing process elements and are useful standard terminologies to support perinatal nursing practice in electronic health records. Terminology mapping is the first step toward incorporating traditional paper-based tools into electronic systems.

  5. The effect of feature-based attention on flanker interference processing: An fMRI-constrained source analysis.

    PubMed

    Siemann, Julia; Herrmann, Manfred; Galashan, Daniela

    2018-01-25

    The present study examined whether feature-based cueing affects early or late stages of flanker conflict processing using EEG and fMRI. Feature cues either directed participants' attention to the upcoming colour of the target or were neutral. Validity-specific modulations during interference processing were investigated using the N200 event-related potential (ERP) component and BOLD signal differences. Additionally, both data sets were integrated using an fMRI-constrained source analysis. Finally, the results were compared with a previous study in which spatial instead of feature-based cueing was applied to an otherwise identical flanker task. Feature-based and spatial attention recruited a common fronto-parietal network during conflict processing. Irrespective of attention type (feature-based; spatial), this network responded to focussed attention (valid cueing) as well as context updating (invalid cueing), hinting at domain-general mechanisms. However, spatially and non-spatially directed attention also demonstrated domain-specific activation patterns for conflict processing that were observable in distinct EEG and fMRI data patterns as well as in the respective source analyses. Conflict-specific activity in visual brain regions was comparable between both attention types. We assume that the distinction between spatially and non-spatially directed attention types primarily applies to temporal differences (domain-specific dynamics) between signals originating in the same brain regions (domain-general localization).

  6. SLS Navigation Model-Based Design Approach

    NASA Technical Reports Server (NTRS)

    Oliver, T. Emerson; Anzalone, Evan; Geohagan, Kevin; Bernard, Bill; Park, Thomas

    2018-01-01

    The SLS Program chose to implement a Model-based Design and Model-based Requirements approach for managing component design information and system requirements. This approach differs from previous large-scale design efforts at Marshall Space Flight Center where design documentation alone conveyed information required for vehicle design and analysis and where extensive requirements sets were used to scope and constrain the design. The SLS Navigation Team has been responsible for the Program-controlled Design Math Models (DMMs) which describe and represent the performance of the Inertial Navigation System (INS) and the Rate Gyro Assemblies (RGAs) used by Guidance, Navigation, and Controls (GN&C). The SLS Navigation Team is also responsible for the navigation algorithms. The navigation algorithms are delivered for implementation on the flight hardware as a DMM. For the SLS Block 1-B design, the additional GPS Receiver hardware is managed as a DMM at the vehicle design level. This paper provides a discussion of the processes and methods used to engineer, design, and coordinate engineering trades and performance assessments using SLS practices as applied to the GN&C system, with a particular focus on the Navigation components. These include composing system requirements, requirements verification, model development, model verification and validation, and modeling and analysis approaches. The Model-based Design and Requirements approach does not reduce the effort associated with the design process versus previous processes used at Marshall Space Flight Center. Instead, the approach takes advantage of overlap between the requirements development and management process, and the design and analysis process by efficiently combining the control (i.e. the requirement) and the design mechanisms. The design mechanism is the representation of the component behavior and performance in design and analysis tools. The focus in the early design process shifts from the development and management of design requirements to the development of usable models, model requirements, and model verification and validation efforts. The models themselves are represented in C/C++ code and accompanying data files. Under the idealized process, potential ambiguity in specification is reduced because the model must be implementable versus a requirement which is not necessarily subject to this constraint. Further, the models are shown to emulate the hardware during validation. For models developed by the Navigation Team, a common interface/standalone environment was developed. The common environment allows for easy implementation in design and analysis tools. Mechanisms such as unit test cases ensure implementation as the developer intended. The model verification and validation process provides a very high level of component design insight. The origin and implementation of the SLS variant of Model-based Design is described from the perspective of the SLS Navigation Team. The format of the models and the requirements are described. The Model-based Design approach has many benefits but is not without potential complications. Key lessons learned associated with the implementation of the Model-based Design approach and process from infancy to verification and certification are discussed.

  7. Towards Automatic Validation and Healing of Citygml Models for Geometric and Semantic Consistency

    NASA Astrophysics Data System (ADS)

    Alam, N.; Wagner, D.; Wewetzer, M.; von Falkenhausen, J.; Coors, V.; Pries, M.

    2013-09-01

    A steadily growing number of application fields for large 3D city models have emerged in recent years. Like in many other domains, data quality is recognized as a key factor for successful business. Quality management is mandatory in the production chain nowadays. Automated domain-specific tools are widely used for validation of business-critical data, but common standards defining correct geometric modelling are still not precise enough to define a sound basis for data validation of 3D city models. Although the workflow for 3D city models is well-established from data acquisition to processing, analysis and visualization, quality management is not yet a standard during this workflow. Processing data sets with an unclear specification leads to erroneous results and application defects. We show that this problem persists even if data are standard compliant. Validation results of real-world city models are presented to demonstrate the potential of the approach. A tool to repair the errors detected during the validation process is under development; first results are presented and discussed. The goal is to heal defects of the models automatically and export a corrected CityGML model.
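
    Two elementary geometric checks of the kind such validation tools apply to CityGML surfaces are sketched below: whether a boundary ring is closed and whether it is approximately planar. The ring coordinates and tolerances are toy values, not taken from a real CityGML model.

```python
import numpy as np

def ring_is_closed(ring, tol=1e-6):
    """A linear ring must end on its starting vertex."""
    ring = np.asarray(ring, dtype=float)
    return bool(np.allclose(ring[0], ring[-1], atol=tol))

def ring_is_planar(ring, tol=1e-3):
    """All vertices must lie close to a common best-fit plane."""
    pts = np.asarray(ring, dtype=float)[:-1]          # drop the repeated closing vertex
    centered = pts - pts.mean(axis=0)
    _, _, vt = np.linalg.svd(centered)                # last right-singular vector = plane normal
    return bool(np.abs(centered @ vt[-1]).max() < tol)

# Toy wall polygon (closed, planar)
wall = [(0, 0, 0), (4, 0, 0), (4, 0, 3), (0, 0, 3), (0, 0, 0)]
print(ring_is_closed(wall), ring_is_planar(wall))
```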

  8. Development Instrument’s Learning of Physics Through Scientific Inquiry Model Based Batak Culture to Improve Science Process Skill and Student’s Curiosity

    NASA Astrophysics Data System (ADS)

    Nasution, Derlina; Syahreni Harahap, Putri; Harahap, Marabangun

    2018-03-01

    This research aims to: (1) develop physics learning instruments (lesson plan, worksheet, student's book, teacher's guide book, and test instrument) for a scientific inquiry learning model based on Batak culture, intended to improve students' science process skills and curiosity; and (2) describe the quality of the developed instruments for use in high school with the scientific inquiry learning model based on Batak culture. This is development research: the physics learning instruments were developed using a model adapted from Thiagarajan, Semmel, and Semmel, whose stages are traversed until a valid, practical, and effective set of instruments is obtained: (1) the definition phase, (2) the planning phase, and (3) the development phase. Testing included expert validation, small-group trials, and limited class trials; the limited class trials were conducted at SMAN 1 Padang Bolak in a class X MIA. The research produced: 1) physics learning instruments on static fluid material for grade 10 of senior high school, consisting of a lesson plan, worksheet, student's book, teacher's guide book, and test instrument, of a quality worthy of use in the learning process; and 2) instruments whose every component meets the criteria of validity, practicality, and effectiveness for improving students' science process skills and curiosity.

  9. Design and Evaluation Process of a Personal and Motive-Based Competencies Questionnaire in Spanish-Speaking Contexts.

    PubMed

    Batista-Foguet, Joan; Sipahi-Dantas, Alaide; Guillén, Laura; Martínez Arias, Rosario; Serlavós, Ricard

    2016-03-22

    Most questionnaires used for managerial purposes have been developed in Anglo-Saxon countries and then adapted for other cultures, a process that is controversial. This paper fills the need for more culturally sensitive assessment instruments in the specific field of human resources while also addressing the methodological issues that scientists and practitioners face in the development of questionnaires. First, we present the development process of a Personal and Motive-based competencies questionnaire targeted to Spanish-speaking countries. Second, we address the validation process by guiding the reader through testing the questionnaire's construct validity. We performed two studies: a first study with 274 experts and practitioners of competency development and a definitive study with 482 members of the general public. Our results support a model of nineteen competencies grouped into four higher-order factors. To ensure valid construct comparisons, we tested factorial invariance across gender and work experience. Subsequent analyses found that women rate themselves significantly higher than men on only two of the nineteen competencies: empathy (p < .001) and service orientation (p < .05). The effect of work experience was significant in twelve competencies (p < .001), with less experienced workers rating themselves higher than experienced workers. Finally, we derive theoretical and practical implications.

  10. Measuring organizational readiness for knowledge translation in chronic care.

    PubMed

    Gagnon, Marie-Pierre; Labarthe, Jenni; Légaré, France; Ouimet, Mathieu; Estabrooks, Carole A; Roch, Geneviève; Ghandour, El Kebir; Grimshaw, Jeremy

    2011-07-13

    Knowledge translation (KT) is an imperative in order to implement research-based and contextualized practices that can answer the numerous challenges of complex health problems. The Chronic Care Model (CCM) provides a conceptual framework to guide the implementation process in chronic care. Yet, organizations aiming to improve chronic care require an adequate level of organizational readiness (OR) for KT. Available instruments on organizational readiness for change (ORC) have shown limited validity, and are not tailored or adapted to specific phases of the knowledge-to-action (KTA) process. We aim to develop an evidence-based, comprehensive, and valid instrument to measure OR for KT in healthcare. The OR for KT instrument will be based on core concepts retrieved from existing literature and validated by a Delphi study. We will specifically test the instrument in chronic care, which is of increasing importance for the health system. Phase one: We will conduct a systematic review of the theories and instruments assessing ORC in healthcare. The retained theoretical information will be synthesized in a conceptual map. A bibliography and database of ORC instruments will be prepared after appraisal of their psychometric properties according to the standards for educational and psychological testing. An online Delphi study will be carried out among decision makers and knowledge users across Canada to assess the importance of these concepts and measures at different steps in the KTA process in chronic care. Phase two: A final OR for KT instrument will be developed and validated, both in French and in English, and tested in chronic disease management to measure OR for KT regarding the adoption of comprehensive, patient-centered, and system-based CCMs. This study provides a comprehensive synthesis of current knowledge on explanatory models and instruments assessing OR for KT. Moreover, this project aims to create more consensus on the theoretical underpinnings and the instrumentation of OR for KT in chronic care. The final product, a comprehensive and valid OR for KT instrument, will provide chronic care settings with an instrument to assess their readiness to implement evidence-based chronic care.

  11. Measuring organizational readiness for knowledge translation in chronic care

    PubMed Central

    2011-01-01

    Background Knowledge translation (KT) is an imperative in order to implement research-based and contextualized practices that can answer the numerous challenges of complex health problems. The Chronic Care Model (CCM) provides a conceptual framework to guide the implementation process in chronic care. Yet, organizations aiming to improve chronic care require an adequate level of organizational readiness (OR) for KT. Available instruments on organizational readiness for change (ORC) have shown limited validity, and are not tailored or adapted to specific phases of the knowledge-to-action (KTA) process. We aim to develop an evidence-based, comprehensive, and valid instrument to measure OR for KT in healthcare. The OR for KT instrument will be based on core concepts retrieved from existing literature and validated by a Delphi study. We will specifically test the instrument in chronic care, which is of increasing importance for the health system. Methods Phase one: We will conduct a systematic review of the theories and instruments assessing ORC in healthcare. The retained theoretical information will be synthesized in a conceptual map. A bibliography and database of ORC instruments will be prepared after appraisal of their psychometric properties according to the standards for educational and psychological testing. An online Delphi study will be carried out among decision makers and knowledge users across Canada to assess the importance of these concepts and measures at different steps in the KTA process in chronic care. Phase two: A final OR for KT instrument will be developed and validated, both in French and in English, and tested in chronic disease management to measure OR for KT regarding the adoption of comprehensive, patient-centered, and system-based CCMs. Discussion This study provides a comprehensive synthesis of current knowledge on explanatory models and instruments assessing OR for KT. Moreover, this project aims to create more consensus on the theoretical underpinnings and the instrumentation of OR for KT in chronic care. The final product, a comprehensive and valid OR for KT instrument, will provide chronic care settings with an instrument to assess their readiness to implement evidence-based chronic care. PMID:21752264

  12. A rational approach to legacy data validation when transitioning between electronic health record systems.

    PubMed

    Pageler, Natalie M; Grazier G'Sell, Max Jacob; Chandler, Warren; Mailes, Emily; Yang, Christine; Longhurst, Christopher A

    2016-09-01

    The objective of this project was to use statistical techniques to determine the completeness and accuracy of data migrated during an electronic health record conversion. Data validation during migration consists of mapped record testing and validation of a sample of the data for completeness and accuracy. We statistically determined a randomized sample size for each data type based on the desired confidence level and error limits. The only error identified in the post go-live period was a failure to migrate some clinical notes, which was unrelated to the validation process. No errors in the migrated data were found during the 12-month post-implementation period. Compared to the typical industry approach, we have demonstrated that a statistical approach to sample size for data validation can ensure consistent confidence levels while maximizing the efficiency of the validation process during a major electronic health record conversion. © The Author 2016. Published by Oxford University Press on behalf of the American Medical Informatics Association. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
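    A sketch of the kind of calculation involved (the record count below is hypothetical, and the paper does not publish its exact formula): the sample size needed to estimate the proportion of correctly migrated records within a chosen error limit at a chosen confidence level, with a finite-population correction.

        import math

        def sample_size(population: int, z: float = 1.96,
                        error: float = 0.05, p: float = 0.5) -> int:
            """Records to inspect so the error proportion is estimated within
            +/- error at confidence z, with finite-population correction."""
            n0 = z ** 2 * p * (1 - p) / error ** 2        # infinite-population size
            return math.ceil(n0 / (1 + (n0 - 1) / population))

        # e.g. 50,000 migrated records of one data type, 95% confidence, 5% error
        print(sample_size(50_000))   # -> 382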

  13. Assessing Technology in the Absence of Proof: Trust Based on the Interplay of Others' Opinions and the Interaction Process.

    PubMed

    de Vries, Peter W; van den Berg, Stéphanie M; Midden, Cees

    2015-12-01

    The present research addresses the question of how trust in systems is formed when unequivocal information about system accuracy and reliability is absent, and focuses on the interaction of indirect information (others' evaluations) and direct (experiential) information stemming from the interaction process. Trust in decision-supporting technology, such as route planners, is important for satisfactory user interactions. Little is known, however, about trust formation in the absence of outcome feedback, that is, when users have not yet had the opportunity to verify actual outcomes. Three experiments manipulated others' evaluations ("endorsement cues") and various forms of experience-based information ("process feedback") in interactions with a route planner and measured resulting trust using rating scales and credits staked on the outcome. Subsequently, an overall analysis was conducted. Study 1 showed that the effectiveness of endorsement cues on trust is moderated by mere process feedback. In Study 2, consistent (i.e., nonrandom) process feedback overruled the effect of endorsement cues on trust, whereas inconsistent process feedback did not. Study 3 showed that although the effects of consistent and inconsistent process feedback largely remained regardless of face validity, process feedback with high face validity produced higher trust than process feedback with low face validity. An overall analysis confirmed these findings. Experiential information impacts trust even if outcome feedback is not available and, moreover, overrules indirect trust cues, depending on the nature of the former. Designing systems so that they allow novice users to make inferences about their inner workings may foster initial trust. © 2015, Human Factors and Ergonomics Society.

  14. Creating a flipbook as a medium of instruction based on the research on activity test of kencur extract

    NASA Astrophysics Data System (ADS)

    Monika, Icha; Yeni, Laili Fitri; Ariyati, Eka

    2016-02-01

    This research aimed to establish the validity of a flipbook as a learning medium for the environmental pollution sub-material in the tenth grade, based on the results of an activity test of kencur (Kaempferia galanga) extract in controlling the growth of the fungus Fusarium oxysporum. The research consisted of two stages. First, the validity of the flipbook medium was tested through validation by seven assessors and analyzed based on the total average score across all aspects. Second, the activity of the kencur extract against the growth of Fusarium oxysporum was tested using an experimental method with 10 treatments and 3 repetitions, analyzed with a one-way analysis of variance (ANOVA) test. The flipbook medium was produced through the stages of analysis of potential and problems, data collection, design, validation, and revision. The validation analysis of the flipbook yielded an average score of 3.7, falling in the valid category, so the flipbook can be used in the teaching and learning process, especially for the environmental pollution sub-material in the tenth grade of senior high school.

  15. Sensitivity-Uncertainty Based Nuclear Criticality Safety Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Brown, Forrest B.

    2016-09-20

    These are slides from a seminar given to the University of New Mexico Nuclear Engineering Department. Whisper is a statistical analysis package developed to support nuclear criticality safety (NCS) validation. It uses the sensitivity profile data for an application, as computed by MCNP6, along with covariance files for the nuclear data to determine a baseline upper subcritical limit for the application. Whisper and its associated benchmark files are developed and maintained as part of MCNP6 and will be distributed with all future releases of MCNP6. Although sensitivity-uncertainty methods for NCS validation have been under development for 20 years, continuous-energy Monte Carlo codes such as MCNP could not determine the required adjoint-weighted tallies for sensitivity profiles. The recent introduction of the iterated fission probability method into MCNP led to the rapid development of sensitivity analysis capabilities for MCNP6 and the development of Whisper. Sensitivity-uncertainty based methods represent the future for NCS validation, making full use of today's computer power to codify past approaches based largely on expert judgment. Validation results are defensible, auditable, and repeatable as needed with different assumptions and process models. The new methods can supplement, support, and extend traditional validation approaches.
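    At the heart of such methods is first-order uncertainty propagation: given a sensitivity profile S (relative change in k-eff per relative change in nuclear data) and a nuclear-data covariance matrix C, the data-induced variance of k-eff is the "sandwich rule" S^T C S. The toy numbers below are hypothetical, and Whisper's full machinery (benchmark similarity weighting, extreme-value statistics) goes well beyond this sketch.

        import numpy as np

        # hypothetical 3-group sensitivity profile S (relative dk/k per relative
        # change in a cross-section) and nuclear-data relative covariance matrix C
        S = np.array([0.12, -0.05, 0.30])
        C = np.array([[4e-4, 1e-4, 0.0],
                      [1e-4, 9e-4, 0.0],
                      [0.0,  0.0,  1e-3]])

        var_k = S @ C @ S                      # first-order "sandwich rule"
        print(f"nuclear-data-induced uncertainty in k_eff: {np.sqrt(var_k):.5f}")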

  16. A Global Lake Ecological Observatory Network (GLEON) for synthesising high-frequency sensor data for validation of deterministic ecological models

    USGS Publications Warehouse

    David, Hamilton P; Carey, Cayelan C.; Arvola, Lauri; Arzberger, Peter; Brewer, Carol A.; Cole, Jon J; Gaiser, Evelyn; Hanson, Paul C.; Ibelings, Bas W; Jennings, Eleanor; Kratz, Tim K; Lin, Fang-Pang; McBride, Christopher G.; de Motta Marques, David; Muraoka, Kohji; Nishri, Ami; Qin, Boqiang; Read, Jordan S.; Rose, Kevin C.; Ryder, Elizabeth; Weathers, Kathleen C.; Zhu, Guangwei; Trolle, Dennis; Brookes, Justin D

    2014-01-01

    A Global Lake Ecological Observatory Network (GLEON; www.gleon.org) has formed to provide a coordinated response to the need for scientific understanding of lake processes, utilising technological advances available from autonomous sensors. The organisation embraces a grassroots approach to engage researchers from varying disciplines, sites spanning geographic and ecological gradients, and novel sensor and cyberinfrastructure to synthesise high-frequency lake data at scales ranging from local to global. The high-frequency data provide a platform to rigorously validate process-based ecological models because model simulation time steps are better aligned with sensor measurements than with lower-frequency, manual samples. Two case studies from Trout Bog, Wisconsin, USA, and Lake Rotoehu, North Island, New Zealand, are presented to demonstrate that in the past, ecological model outputs (e.g., temperature, chlorophyll) have been relatively poorly validated based on a limited number of directly comparable measurements, both in time and space. The case studies demonstrate some of the difficulties of mapping sensor measurements directly to model state variable outputs as well as the opportunities to use deviations between sensor measurements and model simulations to better inform process understanding. Well-validated ecological models provide a mechanism to extrapolate high-frequency sensor data in space and time, thereby potentially creating a fully 3-dimensional simulation of key variables of interest.
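    A minimal sketch of the alignment step this enables (all data below are synthetic and hypothetical): aggregate a high-frequency sensor record to the model's time step, then score the model output against directly comparable values.

        import numpy as np
        import pandas as pd

        # hypothetical hourly sensor record and daily model output of water temperature
        hours = pd.date_range("2014-01-01", periods=24 * 30, freq="h")
        sensor = pd.Series(20 + 3 * np.sin(2 * np.pi * np.arange(len(hours)) / 24),
                           index=hours)
        days = pd.date_range("2014-01-01", periods=30, freq="D")
        model = pd.Series(20.5, index=days)          # a deliberately biased toy model

        # aggregate the high-frequency record to the model time step, then compare
        daily_sensor = sensor.resample("D").mean()
        residual = model - daily_sensor
        print(f"bias = {residual.mean():+.2f} degC, "
              f"RMSE = {np.sqrt((residual ** 2).mean()):.2f} degC")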

  17. Analytical Method Development and Validation for the Quantification of Acetone and Isopropyl Alcohol in the Tartaric Acid Base Pellets of Dipyridamole Modified Release Capsules by Using Headspace Gas Chromatographic Technique

    PubMed Central

    2018-01-01

    A simple, sensitive, accurate, and robust headspace gas chromatographic method was developed for the quantitative determination of acetone and isopropyl alcohol in tartaric acid-based pellets of dipyridamole modified release capsules. The residual solvents acetone and isopropyl alcohol are used in the manufacturing process of the tartaric acid-based pellets, chosen in view of the solubility of dipyridamole and the excipients at the different manufacturing stages. The method was developed and optimized using a fused silica DB-624 (30 m × 0.32 mm × 1.8 µm) column with flame ionization detection. Method validation was carried out according to guideline Q2 on validation of analytical procedures of the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH). All validation characteristics met the acceptance criteria; hence, the developed and validated method can be applied for the intended routine analysis. PMID:29686931

  18. Analytical Method Development and Validation for the Quantification of Acetone and Isopropyl Alcohol in the Tartaric Acid Base Pellets of Dipyridamole Modified Release Capsules by Using Headspace Gas Chromatographic Technique.

    PubMed

    Valavala, Sriram; Seelam, Nareshvarma; Tondepu, Subbaiah; Jagarlapudi, V Shanmukha Kumar; Sundarmurthy, Vivekanandan

    2018-01-01

    A simple, sensitive, accurate, and robust headspace gas chromatographic method was developed for the quantitative determination of acetone and isopropyl alcohol in tartaric acid-based pellets of dipyridamole modified release capsules. The residual solvents acetone and isopropyl alcohol are used in the manufacturing process of the tartaric acid-based pellets, chosen in view of the solubility of dipyridamole and the excipients at the different manufacturing stages. The method was developed and optimized using a fused silica DB-624 (30 m × 0.32 mm × 1.8 µm) column with flame ionization detection. Method validation was carried out according to guideline Q2 on validation of analytical procedures of the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH). All validation characteristics met the acceptance criteria; hence, the developed and validated method can be applied for the intended routine analysis.

  19. Providing a Science Base for the Evaluation of Tobacco Products

    PubMed Central

    Berman, Micah L.; Connolly, Greg; Cummings, K. Michael; Djordjevic, Mirjana V.; Hatsukami, Dorothy K.; Henningfield, Jack E.; Myers, Matthew; O'Connor, Richard J.; Parascandola, Mark; Rees, Vaughan; Rice, Jerry M.

    2015-01-01

    Objective Evidence-based tobacco regulation requires a comprehensive scientific framework to guide the evaluation of new tobacco products and of health-related claims made by product manufacturers. Methods The Tobacco Product Assessment Consortium (TobPRAC) employed an iterative process involving consortium investigators, consultants, a workshop of independent scientists and public health experts, and written reviews in order to develop a conceptual framework for evaluating tobacco products. Results The consortium developed a four-phased framework for the scientific evaluation of tobacco products. The four phases addressed by the framework are: (1) pre-market evaluation, (2) pre-claims evaluation, (3) post-market activities, and (4) monitoring and re-evaluation. For each phase, the framework proposes the use of validated testing procedures that evaluate potential harms at both the individual and the population level. Conclusions While the validation of methods for evaluating tobacco products is an ongoing and necessary process, the proposed framework need not wait for fully validated methods to guide tobacco product regulation today. PMID:26665160

  20. Perception SoC Based on an Ultrasonic Array of Sensors: Efficient DSP Core Implementation and Subsequent Experimental Results

    NASA Astrophysics Data System (ADS)

    Kassem, A.; Sawan, M.; Boukadoum, M.; Haidar, A.

    2005-12-01

    We are concerned with the design, implementation, and validation of a perception SoC based on an ultrasonic array of sensors. The proposed SoC is dedicated to ultrasonic echography applications. A rapid prototyping platform is used to implement and validate the new architecture of the digital signal processing (DSP) core. The proposed DSP core efficiently integrates all of the necessary ultrasonic B-mode processing modules. It includes digital beamforming, quadrature demodulation of RF signals, digital filtering, and envelope detection of the received signals. This system handles 128 scan lines and 6400 samples per scan line. The design uses a minimum-size lookup memory to store the initial scan information. Rapid prototyping using an ARM/FPGA combination is used to validate the operation of the described system. This system offers significant advantages of portability and a rapid time to market.

  1. A high-fidelity weather time series generator using the Markov Chain process on a piecewise level

    NASA Astrophysics Data System (ADS)

    Hersvik, K.; Endrerud, O.-E. V.

    2017-12-01

    A method is developed for generating a set of unique weather time series based on an existing weather series. The method allows statistically valid weather variations to take place within repeated simulations of offshore operations. The generated time series need to share the same statistical qualities as the original time series; statistical qualities here refer mainly to the distribution of weather windows available for work, including the durations and frequencies of such windows, and to seasonal characteristics. The method is based on the Markov chain process. The core new development lies in how the Markov process is used: small pieces of random-length time series are joined together, rather than individual weather states, each from a single time step, which is the common solution found in the literature. This new Markov model shows favorable characteristics with respect to the requirements set forth, across all aspects of the validation performed.
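    A minimal sketch of that splicing idea (illustrative only; the piece-length distribution, the matching tolerance, and the use of a single scalar state such as significant wave height are all assumptions, not the paper's exact scheme): pieces of random length are drawn from the historical record and joined only where the seam states nearly match.

        import random

        def generate_series(history, length, min_piece=6, max_piece=48, tol=0.25):
            """Splice random-length pieces of the historical record, joining
            only where the weather state roughly matches the current seam."""
            start = random.randrange(len(history) - max_piece)
            out = list(history[start:start + random.randint(min_piece, max_piece)])
            while len(out) < length:
                seam = out[-1]
                candidates = [i for i in range(len(history) - max_piece)
                              if abs(history[i] - seam) <= tol]
                # fall back to the closest state if nothing lies within tolerance
                i = random.choice(candidates) if candidates else \
                    min(range(len(history) - max_piece),
                        key=lambda j: abs(history[j] - seam))
                piece = history[i:i + random.randint(min_piece, max_piece)]
                out.extend(piece[1:])            # drop the duplicated seam state
            return out[:length]

        # hypothetical hourly significant-wave-height record
        history = [1.0 + 0.5 * random.random() for _ in range(5000)]
        synthetic = generate_series(history, 24 * 365)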

  2. Experimental Validation of Hybrid Distillation-Vapor Permeation Process for Energy Efficient Ethanol-Water Separation

    EPA Science Inventory

    The energy demand of distillation-based systems for ethanol recovery and dehydration can be significant, particularly for dilute solutions. An alternative separation process integrating vapor stripping with a vapor compression step and a vapor permeation membrane separation step...

  3. Experimental Validation of Hybrid Distillation-Vapor Permeation Process for Energy Efficient Ethanol-Water Separation

    EPA Science Inventory

    The energy demand of distillation-based systems for ethanol recovery and dehydration can be significant, particularly for dilute solutions. An alternative separation process integrating vapor stripping with a vapor compression step and a vapor permeation membrane separation step,...

  4. Measurement of Functional Cognition and Complex Everyday Activities in Older Adults with Mild Cognitive Impairment and Mild Dementia: Validity of the Large Allen's Cognitive Level Screen.

    PubMed

    Wesson, Jacqueline; Clemson, Lindy; Crawford, John D; Kochan, Nicole A; Brodaty, Henry; Reppermund, Simone

    2017-05-01

    To explore the validity of the Large Allen's Cognitive Level Screen-5 (LACLS-5) as a performance-based measure of functional cognition, representing the ability to perform complex everyday activities, in older adults with mild cognitive impairment (MCI) and mild dementia living in the community. Using cross-sectional data from the Sydney Memory and Ageing Study, 160 community-dwelling older adults with normal cognition (CN; N = 87), MCI (N = 43), or dementia (N = 30) were studied. Measures of functional cognition (LACLS-5), complex everyday activities (Disability Assessment for Dementia [DAD], Assessment of Motor and Process Skills [AMPS]), and neuropsychological performance were used. Participants with dementia performed worse than CN participants on all clinical measures, with MCI participants intermediate. Correlational analyses showed that the LACLS-5 was most strongly related to AMPS Process scores, the DAD instrumental activities of daily living subscale, the Mini-Mental State Exam, Block Design, Logical Memory, and Trail Making Test B. Multiple regression analysis indicated that both cognitive (Block Design) and functional measures (AMPS Process score), as well as sex, predicted LACLS-5 performance. Finally, the LACLS-5 was able to adequately discriminate between CN and dementia and between MCI and dementia but was unable to reliably distinguish between CN and MCI. Construct validity, including convergent and discriminative validity, was supported. The LACLS-5 is a valid performance-based measure for evaluating functional cognition. Discriminative validity is acceptable for identifying mild dementia but requires further refinement for detecting MCI. Copyright © 2017 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.

  5. Validation and detection of vessel landmarks by using anatomical knowledge

    NASA Astrophysics Data System (ADS)

    Beck, Thomas; Bernhardt, Dominik; Biermann, Christina; Dillmann, Rüdiger

    2010-03-01

    The detection of anatomical landmarks is an important prerequisite for fully automatic analysis of medical images. Several machine learning approaches have been proposed to parse 3D CT datasets and to determine the location of landmarks with associated uncertainty. However, it is a challenging task to incorporate high-level anatomical knowledge to improve these classification results. We propose a new approach to validate candidates for vessel bifurcation landmarks, which is also applied to systematically search for missed landmarks and to validate ambiguous ones. A knowledge base is trained that provides human-readable geometric information about the vascular system, mainly vessel lengths, radii, and curvature information, to validate landmarks and to guide the search process. To analyze the bifurcation area surrounding a vessel landmark of interest, a new approach is proposed that is based on Fast Marching and incorporates anatomical information from the knowledge base. Using the proposed algorithms, an anatomical knowledge base has been generated from 90 manually annotated CT images containing different parts of the body. To evaluate the landmark validation, a set of 50 carotid datasets has been tested in combination with a state-of-the-art landmark detector, with excellent results. Beside the carotid bifurcation, the algorithm is designed to handle a wide range of vascular landmarks, e.g., the celiac, superior mesenteric, renal, aortic, iliac, and femoral bifurcations.

  6. Neutron residual stress measurement and numerical modeling in a curved thin-walled structure by laser powder bed fusion additive manufacturing

    DOE PAGES

    An, Ke; Yuan, Lang; Dial, Laura; ...

    2017-09-11

    Severe residual stresses in metal parts made by laser powder bed fusion additive manufacturing processes (LPBFAM) can cause both distortion and cracking during the fabrication processes. Limited data is currently available for both iterating through process conditions and design, and in particular, for validating numerical models to accelerate process certification. In this work, residual stresses of a curved thin-walled structure, made of Ni-based superalloy Inconel 625™ and fabricated by LPBFAM, were resolved by neutron diffraction without measuring the stress-free lattices along both the build and the transverse directions. The stresses of the entire part during fabrication and after cooling down were predicted by a simplified layer-by-layer finite element based numerical model. The simulated and measured stresses were found in good quantitative agreement. The validated simplified simulation methodology will make it possible to assess residual stresses in more complex structures and to significantly reduce manufacturing cycle time.

  7. Neutron residual stress measurement and numerical modeling in a curved thin-walled structure by laser powder bed fusion additive manufacturing

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    An, Ke; Yuan, Lang; Dial, Laura

    Severe residual stresses in metal parts made by laser powder bed fusion additive manufacturing processes (LPBFAM) can cause both distortion and cracking during the fabrication processes. Limited data is currently available for both iterating through process conditions and design, and in particular, for validating numerical models to accelerate process certification. In this work, residual stresses of a curved thin-walled structure, made of Ni-based superalloy Inconel 625™ and fabricated by LPBFAM, were resolved by neutron diffraction without measuring the stress-free lattices along both the build and the transverse directions. The stresses of the entire part during fabrication and after cooling down were predicted by a simplified layer-by-layer finite element based numerical model. The simulated and measured stresses were found in good quantitative agreement. The validated simplified simulation methodology will make it possible to assess residual stresses in more complex structures and to significantly reduce manufacturing cycle time.

  8. Multibody dynamical modeling for spacecraft docking process with spring-damper buffering device: A new validation approach

    NASA Astrophysics Data System (ADS)

    Daneshjou, Kamran; Alibakhshi, Reza

    2018-01-01

    In the current manuscript, the process of spacecraft docking, one of the main risky operations in an on-orbit servicing mission, is modeled based on unconstrained multibody dynamics. A spring-damper buffering device is utilized here in the docking probe-cone system for micro-satellites. Because impact inevitably occurs during the docking process and remarkably affects the motion characteristics of multibody systems, a continuous contact force model needs to be considered. The spring-damper buffering device, which keeps the spacecraft stable in orbit when impact occurs, connects a base (cylinder) inserted in the chaser satellite to the end of the docking probe. Furthermore, by considering a revolute joint equipped with a torsional shock absorber between the base and the chaser satellite, the docking probe can experience both translational and rotational motions simultaneously. Although the spacecraft docking process with buffering mechanisms may be modeled by constrained multibody dynamics, this paper deals with a simple and efficient formulation that eliminates the surplus generalized coordinates and solves the impact docking problem based on unconstrained Lagrangian mechanics. In an example problem, model verification is first accomplished by comparing the computed results with those recently reported in the literature. Second, according to a new alternative validation approach based on the constrained multibody problem, the accuracy of the presented model can also be evaluated; this proposed verification approach can be applied to solve constrained multibody problems indirectly with minimum effort. The time history of the impact force, the influence of system flexibility, and the physical interaction between the shock absorber and the penetration depth caused by impact are the issues followed in this paper. Third, the MATLAB/SIMULINK multibody dynamic analysis software is applied to build an impact docking model to validate the computed results and then investigate the trajectories of both satellites required for a successful capture process.
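    For a sense of what a continuous contact force model looks like, the sketch below uses the Lankarani-Nikravesh form, one common choice in the multibody literature; the paper does not specify its exact model here, and the stiffness, restitution, mass, and impact speed are all hypothetical.

        def contact_force(delta, delta_dot, k=1e6, n=1.5, e=0.65, v0=0.1):
            """Lankarani-Nikravesh continuous contact force: a Hertzian stiffness
            term plus a hysteresis damping term tied to the restitution coefficient.
            delta = penetration depth (m), delta_dot = its rate (m/s),
            v0 = initial normal impact speed (m/s). Parameter values hypothetical."""
            if delta <= 0.0:
                return 0.0
            return k * delta ** n * (1.0 + 3.0 * (1.0 - e ** 2) * delta_dot / (4.0 * v0))

        # crude semi-implicit Euler integration of a 150 kg probe impacting at 0.1 m/s
        m, v, x, dt = 150.0, 0.1, 0.0, 1e-6
        while v > 0.0 or x > 0.0:
            v -= contact_force(x, v) / m * dt
            x += v * dt
        print(f"rebound speed ~ {-v:.3f} m/s (< 0.1 m/s: energy dissipated in contact)")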

  9. Validation of Western North America Models based on finite-frequency and ray theory imaging methods

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Larmat, Carene; Maceira, Monica; Porritt, Robert W.

    2015-02-02

    We validate seismic models developed for western North America with a focus on the effect of imaging methods on data fit. We use the DNA09 models, for which our collaborators provide models built with both the body-wave finite-frequency (FF) approach and the ray-theory (RT) approach, with the same data selection, processing, and reference models.

  10. Multilevel Factor Structure, Concurrent Validity, and Test-Retest Reliability of the High School Teacher Version of the Authoritative School Climate Survey

    ERIC Educational Resources Information Center

    Huang, Francis L.; Cornell, Dewey G.

    2016-01-01

    Although school climate has long been recognized as an important factor in the school improvement process, there are few psychometrically supported measures based on teacher perspectives. The current study replicated and extended the factor structure, concurrent validity, and test-retest reliability of the teacher version of the Authoritative…

  11. Calibration and validation of the SWAT model for a forested watershed in coastal South Carolina

    Treesearch

    Devendra M. Amatya; Elizabeth B. Haley; Norman S. Levine; Timothy J. Callahan; Artur Radecki-Pawlik; Manoj K. Jha

    2008-01-01

    Modeling the hydrology of low-gradient coastal watersheds on shallow, poorly drained soils is a challenging task due to the complexities in watershed delineation, runoff generation processes and pathways, flooding, and submergence caused by tropical storms. The objective of the study is to calibrate and validate a GIS-based spatially-distributed hydrologic model, SWAT...

  12. A long-term validation of the modernised DC-ARC-OES solid-sample method.

    PubMed

    Flórián, K; Hassler, J; Förster, O

    2001-12-01

    A validation procedure based on the ISO 17025 standard has been used to study and illustrate both the long-term stability of the calibration process of the DC-ARC solid-sample spectrometric method and the main validation criteria of the method. In calculating the validation characteristics that depend on linearity (calibration), the fulfilment of prerequisite criteria such as normality and homoscedasticity was also checked. To decide whether there are any trends in the time variation of the analytical signal, the Neumann trend test was also applied and evaluated. Finally, a comparison with similar validation data for the ETV-ICP-OES method was carried out.
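    The Neumann trend test is based on the von Neumann ratio, the mean squared successive difference divided by the variance: close to 2 for a trend-free random sequence, and well below 2 when a trend (positive serial correlation) is present. A minimal sketch on synthetic data (the signal levels are invented):

        import numpy as np

        def von_neumann_ratio(x):
            """~2 for a random sequence; values well below 2 suggest a trend."""
            x = np.asarray(x, dtype=float)
            return np.sum(np.diff(x) ** 2) / np.sum((x - x.mean()) ** 2)

        rng = np.random.default_rng(1)
        stable = rng.normal(100.0, 2.0, size=40)           # stable calibration signal
        drifting = stable + np.linspace(0.0, 10.0, 40)     # slow upward drift added
        print(von_neumann_ratio(stable), von_neumann_ratio(drifting))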

  13. Validation of biomarkers of food intake-critical assessment of candidate biomarkers.

    PubMed

    Dragsted, L O; Gao, Q; Scalbert, A; Vergères, G; Kolehmainen, M; Manach, C; Brennan, L; Afman, L A; Wishart, D S; Andres Lacueva, C; Garcia-Aloy, M; Verhagen, H; Feskens, E J M; Praticò, G

    2018-01-01

    Biomarkers of food intake (BFIs) are a promising tool for limiting misclassification in nutrition research where more subjective dietary assessment instruments are used. They may also be used to assess compliance with dietary guidelines or with a dietary intervention. Biomarkers therefore hold promise for direct and objective measurement of food intake. However, the number of comprehensively validated biomarkers of food intake is limited to just a few. Many new candidate biomarkers emerge from metabolic profiling studies and from advances in food chemistry. Furthermore, candidate food intake biomarkers may also be identified based on extensive literature reviews such as those described in the guidelines for Biomarker of Food Intake Reviews (BFIRev). To systematically and critically assess the validity of candidate biomarkers of food intake, it is necessary to outline and streamline an optimal and reproducible validation process. A consensus-based procedure was used to provide and evaluate a set of the most important criteria for systematic validation of BFIs. As a result, a validation procedure was developed comprising eight criteria: plausibility, dose-response, time-response, robustness, reliability, stability, analytical performance, and inter-laboratory reproducibility. The validation has a dual purpose: (1) to estimate the current level of validation of candidate biomarkers of food intake based on an objective and systematic approach, and (2) to pinpoint which additional studies are needed to provide full validation of each candidate biomarker of food intake. This position paper on biomarker of food intake validation outlines the second step of the BFIRev procedure but may also be used as such for validation of new candidate biomarkers identified, e.g., in food metabolomic studies.

  14. Standing wave design and experimental validation of a tandem simulated moving bed process for insulin purification.

    PubMed

    Xie, Yi; Mun, Sungyong; Kim, Jinhyun; Wang, Nien-Hwa Linda

    2002-01-01

    A tandem simulated moving bed (SMB) process for insulin purification has been proposed and validated experimentally. The mixture to be separated consists of insulin, high molecular weight proteins, and zinc chloride. A systematic approach based on the standing wave design, rate model simulations, and experiments was used to develop this multicomponent separation process. The standing wave design was applied to specify the SMB operating conditions of a lab-scale unit with 10 columns. The design was validated with rate model simulations prior to experiments. The experimental results show 99.9% purity and 99% yield, which closely agree with the model predictions and the standing wave design targets. The agreement proves that the standing wave design can ensure high purity and high yield for the tandem SMB process. Compared to a conventional batch SEC process, the tandem SMB has 10% higher yield, 400% higher throughput, and 72% lower eluant consumption. In contrast, a design that ignores the effects of mass transfer and nonideal flow cannot meet the purity requirement and gives less than 96% yield.

  15. A Process Analytical Technology (PAT) approach to control a new API manufacturing process: development, validation and implementation.

    PubMed

    Schaefer, Cédric; Clicq, David; Lecomte, Clémence; Merschaert, Alain; Norrant, Edith; Fotiadu, Frédéric

    2014-03-01

    Pharmaceutical companies are progressively adopting and introducing the Process Analytical Technology (PAT) and Quality-by-Design (QbD) concepts promoted by the regulatory agencies, aiming to build quality directly into the product by combining thorough scientific understanding with quality risk management. An analytical method based on near infrared (NIR) spectroscopy was developed as a PAT tool to control on-line an API (active pharmaceutical ingredient) manufacturing crystallization step, during which the API and residual solvent contents need to be precisely determined to reach the predefined seeding point. An original methodology based on the QbD principles was designed to conduct the development and validation of the NIR method and to ensure that it is fit for its intended use. On this basis, partial least squares (PLS) models were developed and optimized using chemometric methods. The method was fully validated according to the ICH Q2(R1) guideline, using the accuracy profile approach. The dosing ranges were evaluated as 9.0-12.0% w/w for the API and 0.18-1.50% w/w for residual methanol. Because the variability of the sampling method and of the reference method is by nature included in the variability obtained for the NIR method during the validation phase, a real-time process monitoring exercise was performed to prove its fitness for purpose. The implementation of this in-process control (IPC) method on the industrial plant, from the launch of the new API synthesis process, will enable automatic control of the final crystallization step in order to ensure a predefined quality level of the API. In addition, several valuable benefits are expected, including reduction of the process time and elimination of a rather difficult sampling step and tedious off-line analyses. © 2013 Published by Elsevier B.V.
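    A minimal sketch of building and cross-validating such a PLS calibration (synthetic stand-in data; the real model would be trained on measured NIR spectra against reference-method values, with preprocessing and component selection the abstract does not detail):

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression
        from sklearn.model_selection import cross_val_predict

        # hypothetical stand-in: 60 "spectra" (500 wavelengths) whose signal
        # scales with the known API content from the reference method
        rng = np.random.default_rng(0)
        api = rng.uniform(9.0, 12.0, size=60)            # % w/w, the dosing range
        spectra = (np.outer(api, rng.normal(size=500))
                   + rng.normal(scale=0.5, size=(60, 500)))

        pls = PLSRegression(n_components=3)
        pred = cross_val_predict(pls, spectra, api, cv=10).ravel()
        rmsecv = np.sqrt(np.mean((pred - api) ** 2))
        print(f"RMSECV = {rmsecv:.3f} % w/w")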

  16. Evacuation performance evaluation tool.

    PubMed

    Farra, Sharon; Miller, Elaine T; Gneuhs, Matthew; Timm, Nathan; Li, Gengxin; Simon, Ashley; Brady, Whittney

    2016-01-01

    Hospitals conduct evacuation exercises to improve performance during emergency events. An essential aspect of this process is the creation of reliable and valid evaluation tools. The objective of this article is to describe the development and implications of a disaster evacuation performance tool that measures one portion of the very complex process of evacuation. Through the application of the Delphi technique and DeVellis's framework, disaster and neonatal experts provided input in developing this performance evaluation tool. Following development, the content validity and reliability of the tool were assessed. The setting was a large pediatric hospital and medical center in the Midwest. The tool was pilot tested with an administrative, medical, and nursing leadership group and then implemented with a group of 68 healthcare workers during a disaster exercise of a neonatal intensive care unit (NICU). The tool has demonstrated high content validity, with a scale validity index of 0.979, and inter-rater reliability, with a G coefficient of 0.984 (95% CI: 0.948-0.9952). The Delphi process based on the conceptual framework of DeVellis yielded a psychometrically sound evacuation performance evaluation tool for a NICU.

  17. An object-based approach to weather analysis and its applications

    NASA Astrophysics Data System (ADS)

    Troemel, Silke; Diederich, Malte; Horvath, Akos; Simmer, Clemens; Kumjian, Matthew

    2013-04-01

    The research group 'Object-based Analysis and SEamless prediction' (OASE) within the Hans Ertel Centre for Weather Research programme (HErZ) pursues an object-based approach to weather analysis. The object-based tracking approach adopts the Lagrangian perspective by identifying and following the development of convective events over the course of their lifetime. Prerequisites of the object-based analysis are a highly resolved observational data base and a tracking algorithm. A near-real-time radar- and satellite-driven 3D observation-microphysics composite covering Germany, currently under development, contains gridded observations and estimated microphysical quantities. A 3D scale-space tracking identifies convective rain events in the dual composite and monitors their development over the course of their lifetime. The OASE group exploits the object-based approach in several fields of application: (1) for a better understanding and analysis of the precipitation processes responsible for extreme weather events, (2) in nowcasting, (3) as a novel approach for validating meso-γ atmospheric models, and (4) in data assimilation. Results from the different fields of application will be presented. The basic idea of the object-based approach is to identify a small set of radar- and satellite-derived descriptors which characterize the temporal development of the precipitation systems that constitute the objects. So-called proxies of the precipitation process are, e.g., the temporal change of the bright band, vertically extensive columns of enhanced differential reflectivity ZDR, or the cloud top temperatures and heights identified in the 4D field of ground-based radar reflectivities and satellite retrievals generated by a cell during its lifetime. They quantify (micro-)physical differences among rain events and relate to the precipitation yield. Analyses of the informative content of ZDR columns as precursors of storm evolution, for example, will be presented to demonstrate the use of such system-oriented predictors for nowcasting. Columns of differential reflectivity ZDR measured by polarimetric weather radars are prominent signatures associated with thunderstorm updrafts: since greater vertical velocities can loft larger drops and water-coated ice particles to higher altitudes above the environmental freezing level, the integrated ZDR column above the freezing level increases with increasing updraft intensity. Validation of atmospheric models concerning precipitation representation or prediction is usually confined to comparisons of precipitation fields or their temporal and spatial statistics. A comparison of the rain rates alone, however, does not immediately explain discrepancies between models and observations, because similar rain rates might be produced by different processes. Within the event-based approach to model validation, both observed and modeled rain events are analyzed by means of proxies of the precipitation process; the two sets of descriptors form the basis for model validation, since different leading descriptors, in a statistical sense, hint at process formulations potentially responsible for model failures.

  18. Relationships between Language Background, Secondary School Scores, Tutorial Group Processes, and Students' Academic Achievement in PBL: Testing a Causal Model

    ERIC Educational Resources Information Center

    Singaram, Veena S.; van der Vleuten, Cees P. M; Muijtjens, Arno M. M.; Dolmans, Diana H. J. M

    2012-01-01

    Little is known about the influence of language background in problem-based learning (PBL) tutorial groups on group processes and students' academic achievement. This study investigated the relationship between language background, secondary school score, tutorial group processes, and students' academic achievement in PBL. A validated tutorial…

  19. Computer Modeling and Simulation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Pronskikh, V. S.

    2014-05-09

    Verification and validation of computer codes and models used in simulation are two aspects of scientific practice of high importance that have recently been discussed by philosophers of science. While verification is predominantly associated with the correctness of the way a model is represented by a computer code or algorithm, validation more often refers to the model's relation to the real world and its intended use. It has been argued that because complex simulations are generally not transparent to a practitioner, the Duhem problem can arise for verification and validation due to their entanglement; such an entanglement makes it impossible to distinguish whether a coding error or the model's general inadequacy to its target should be blamed in the case of model failure. I argue that in order to disentangle verification and validation, a clear distinction between computer modeling (construction of mathematical computer models of elementary processes) and simulation (construction of models of composite objects and processes by means of numerical experimenting with them) needs to be made. Holding on to that distinction, I propose to relate verification (based on theoretical strategies such as inferences) to modeling, and validation, which shares a common epistemology with experimentation, to simulation. To explain the reasons for their intermittent entanglement, I propose a Weberian ideal-typical model of modeling and simulation as roles in practice. I suggest an approach to alleviate the Duhem problem for verification and validation that is generally applicable in practice and based on differences in epistemic strategies and scopes.

  20. Simulation-based assessment in anesthesiology: requirements for practical implementation.

    PubMed

    Boulet, John R; Murray, David J

    2010-04-01

    Simulations have taken a central role in the education and assessment of medical students, residents, and practicing physicians. The introduction of simulation-based assessments in anesthesiology, especially those used to establish various competencies, has demanded fairly rigorous studies concerning the psychometric properties of the scores. Most important, major efforts have been directed at identifying, and addressing, potential threats to the validity of simulation-based assessment scores. As a result, organizations that wish to incorporate simulation-based assessments into their evaluation practices can access information regarding effective test development practices, the selection of appropriate metrics, the minimization of measurement errors, and test score validation processes. The purpose of this article is to provide a broad overview of the use of simulation for measuring physician skills and competencies. For simulations used in anesthesiology, studies that describe advances in scenario development, the development of scoring rubrics, and the validation of assessment results are synthesized. Based on the summary of relevant research, psychometric requirements for practical implementation of simulation-based assessments in anesthesiology are put forward. As technology expands and simulation-based education and evaluation take on a larger role in patient safety initiatives, the groundbreaking work conducted to date can serve as a model for those individuals and organizations that are responsible for developing, scoring, or validating simulation-based education and assessment programs in anesthesiology.

  1. Assessing Attachment in Psychotherapy: Validation of the Patient Attachment Coding System (PACS).

    PubMed

    Talia, Alessandro; Miller-Bottome, Madeleine; Daniel, Sarah I F

    2017-01-01

    The authors present and validate the Patient Attachment Coding System (PACS), a transcript-based instrument that assesses clients' in-session attachment based on any session of psychotherapy, in multiple treatment modalities. One hundred and sixty clients in different types of psychotherapy (cognitive-behavioural, cognitive-behavioural-enhanced, psychodynamic, relational, supportive) and from three different countries were administered the Adult Attachment Interview (AAI) prior to treatment, and one session for each client was rated with the PACS by independent coders. Results indicate strong inter-rater reliability and high convergent validity of the PACS scales and classifications with the AAI. These results present the PACS as a practical alternative to the AAI in psychotherapy research and suggest that clinicians using the PACS can assess clients' attachment status on an ongoing basis by monitoring clients' verbal activity. They also provide information regarding the ways in which differences in attachment status play out in therapy sessions and further the study of attachment in psychotherapy from a pre-treatment client factor to a process variable. Key practitioner points: the Patient Attachment Coding System is a valid measure of attachment that can classify clients' attachment based on any single psychotherapy transcript, in many therapeutic modalities; client differences in attachment manifest in part independently of the therapist's contributions; and client adult attachment patterns are likely to affect psychotherapeutic processes. Copyright © 2015 John Wiley & Sons, Ltd.

  2. A Framework for Text Mining in Scientometric Study: A Case Study in Biomedicine Publications

    NASA Astrophysics Data System (ADS)

    Silalahi, V. M. M.; Hardiyati, R.; Nadhiroh, I. M.; Handayani, T.; Rahmaida, R.; Amelia, M.

    2018-04-01

    Data on Indonesian research publications in the domain of biomedicine have been collected and text mined for the purposes of a scientometric study. The goal is to build a predictive model that classifies research publications by their potency for downstreaming. The model is based on the drug development process as adapted from the literature. An effort is described to build the conceptual model and to develop a corpus of research publications in the domain of Indonesian biomedicine. An investigation is then conducted into the problems associated with building the corpus and validating the model. Based on our experience, a framework is proposed to manage scientometric studies based on text mining. Our method shows the effectiveness of conducting a scientometric study based on text mining in order to obtain a valid classification model. This validity is mainly supported by iterative and close interactions with the domain experts, from identifying the issues and building a conceptual model through to labelling, validation, and interpretation of results.
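    A minimal sketch of such a text classification pipeline (the labels and abstract snippets below are invented stand-ins for the expert-curated corpus):

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # invented expert-labelled abstracts: 1 = potential for downstreaming, 0 = not
        abstracts = [
            "in vivo efficacy of a novel antimalarial compound in murine models",
            "phase I pharmacokinetics and safety of an oral candidate drug",
            "taxonomic survey of medicinal plants in West Java",
            "epidemiological mapping of dengue incidence across provinces",
        ]
        labels = [1, 1, 0, 0]

        clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
        clf.fit(abstracts, labels)
        print(clf.predict(["preclinical toxicity screening of a lead compound"]))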

  3. Rapid prediction of ochratoxin A-producing strains of Penicillium on dry-cured meat by MOS-based electronic nose.

    PubMed

    Lippolis, Vincenzo; Ferrara, Massimo; Cervellieri, Salvatore; Damascelli, Anna; Epifani, Filomena; Pascale, Michelangelo; Perrone, Giancarlo

    2016-02-02

    The availability of rapid diagnostic methods for monitoring ochratoxigenic species during the seasoning processes for dry-cured meats is crucial and constitutes a key step in preventing the risk of ochratoxin A (OTA) contamination. A rapid, easy-to-perform, and non-invasive method using an electronic nose (e-nose) based on metal oxide semiconductors (MOS) was developed to discriminate dry-cured meat samples into two classes based on fungal contamination: class P (samples contaminated by OTA-producing Penicillium strains) and class NP (samples contaminated by OTA non-producing Penicillium strains). Two OTA-producing strains of Penicillium nordicum and two OTA non-producing strains of Penicillium nalgiovense and Penicillium salamii were tested. The feasibility of this approach was initially evaluated by e-nose analysis of 480 samples of both yeast extract sucrose (YES) and meat-based agar media inoculated with the tested Penicillium strains and incubated for up to 14 days. The high recognition percentages (above 82%) obtained by Discriminant Function Analysis (DFA), both in calibration and in cross-validation (leave-more-out approach), for both YES and meat-based samples demonstrated the validity of the approach. The e-nose method was subsequently developed and validated for the analysis of dry-cured meat samples. A total of 240 e-nose analyses were carried out using inoculated sausages, seasoned by a laboratory-scale process and sampled at 5, 7, 10, and 14 days. DFA provided calibration models that permitted discrimination of dry-cured meat samples after only 5 days of seasoning, with mean recognition percentages in calibration and cross-validation of 98% and 88%, respectively. A further validation of the developed e-nose method was performed using 60 dry-cured meat samples produced by an industrial-scale seasoning process, showing a total recognition percentage of 73%. The pattern of volatile compounds of the dry-cured meat samples was identified and characterized by a purpose-developed HS-SPME/GC-MS method. Seven volatile compounds (2-methyl-1-butanol, octane, 1R-α-pinene, d-limonene, undecane, tetradecanal, 9-(Z)-octadecenoic acid methyl ester) allowed discrimination between dry-cured meat samples of classes P and NP. These results demonstrate that a MOS-based electronic nose can be a useful tool for rapid screening to prevent OTA contamination in the cured meat supply chain. Copyright © 2015 Elsevier B.V. All rights reserved.
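    Discriminant function analysis with leave-more-out cross-validation can be sketched as follows (synthetic stand-in features; linear discriminant analysis serves as the discriminant model, and the split sizes are assumptions):

        import numpy as np
        from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
        from sklearn.model_selection import ShuffleSplit, cross_val_score

        # hypothetical MOS e-nose responses: 120 samples x 10 sensors,
        # class 0 = OTA non-producing (NP), class 1 = OTA-producing (P)
        rng = np.random.default_rng(42)
        X = np.vstack([rng.normal(0.0, 1.0, (60, 10)),
                       rng.normal(0.8, 1.0, (60, 10))])
        y = np.repeat([0, 1], 60)

        # "leave-more-out": repeatedly hold out a random 25% and score on it
        cv = ShuffleSplit(n_splits=50, test_size=0.25, random_state=0)
        scores = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=cv)
        print(f"mean recognition rate: {100 * scores.mean():.1f}%")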

  4. Validation of new psychosocial factors questionnaires: a Colombian national study.

    PubMed

    Villalobos, Gloria H; Vargas, Angélica M; Rondón, Martin A; Felknor, Sarah A

    2013-01-01

    The study of workers' health problems possibly associated with stressful conditions requires valid and reliable tools for monitoring risk factors. The present study validates two questionnaires to assess psychosocial risk factors for stress-related illnesses within a sample of Colombian workers. The validation process was based on a representative sample survey of 2,360 Colombian employees, aged 18-70 years. The worker response rate was 90%; 46% of the respondents were women. Internal consistency was calculated, construct validity was tested with factor analysis, and concurrent validity was tested with Spearman correlations. The questionnaires demonstrated adequate reliability (0.88-0.95). Factor analysis confirmed the dimensions proposed in the measurement model. Concurrent validity resulted in significant correlations with stress and health symptoms. The "Work and Non-work Psychosocial Factors Questionnaires" were found to be valid and reliable for the assessment of workers' psychosocial factors, and they provide information for research and intervention. Copyright © 2012 Wiley Periodicals, Inc.
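    Internal consistency of the kind reported here (0.88-0.95) is conventionally Cronbach's alpha; a minimal sketch of the computation on invented Likert-type responses:

        import numpy as np

        def cronbach_alpha(items):
            """Internal consistency of a scale; rows = respondents, cols = items."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return (k / (k - 1)) * (1.0 - item_vars / total_var)

        # hypothetical 5-item Likert responses from 8 workers
        responses = np.array([[4, 5, 4, 4, 5],
                              [2, 2, 3, 2, 2],
                              [5, 5, 5, 4, 5],
                              [3, 3, 2, 3, 3],
                              [4, 4, 4, 5, 4],
                              [1, 2, 1, 2, 1],
                              [3, 4, 3, 3, 4],
                              [5, 4, 5, 5, 5]])
        print(f"alpha = {cronbach_alpha(responses):.2f}")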

  5. Constructing a question bank based on script concordance approach as a novel assessment methodology in surgical education.

    PubMed

    Aldekhayel, Salah A; Alselaim, Nahar A; Magzoub, Mohi Eldin; Al-Qattan, Mohammad M; Al-Namlah, Abdullah M; Tamim, Hani; Al-Khayal, Abdullah; Al-Habdan, Sultan I; Zamakhshary, Mohammed F

    2012-10-24

    Script Concordance Test (SCT) is a new assessment tool that reliably assesses clinical reasoning skills. Previous descriptions of developing SCT question banks were merely subjective. This study addresses two gaps in the literature: 1) conducting the first phase of a multistep validation process of SCT in Plastic Surgery, and 2) providing an objective methodology to construct a question bank based on SCT. After developing a test blueprint, 52 test items were written. Five validation questions were developed and a validation survey was established online. Seven reviewers were asked to answer this survey. They were recruited from two countries, Saudi Arabia and Canada, to improve the test's external validity. Their ratings were transformed into percentages. Analysis was performed to compare reviewers' ratings by looking at correlations, ranges, means, medians, and overall scores. Scores of reviewers' ratings were between 76% and 95% (mean 86% ± 5%). We found poor correlations between reviewers (Pearson's r: +0.38 to -0.22). The ranges of ratings for individual validation questions were between 0 and 4 (on a 1-5 scale). Means and medians of these ranges were computed for each test item (mean: 0.8 to 2.4; median: 1 to 3). A subset of 27 test items was generated based on a set of inclusion and exclusion criteria. This study proposes an objective methodology for validation of an SCT question bank. Analysis of the validation survey is performed from all angles, i.e., reviewers, validation questions, and test items. Finally, a subset of test items is generated based on a set of criteria.
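
    The per-item arithmetic described above (ranges of reviewer ratings per validation question, then means and medians of those ranges per test item) can be sketched as follows. The ratings matrix and the inclusion thresholds below are invented placeholders, not the study's data.

```python
import numpy as np

rng = np.random.default_rng(1)
# 52 test items x 5 validation questions x 7 reviewers, ratings on 1-5.
ratings = rng.integers(1, 6, size=(52, 5, 7))

# Range of reviewer ratings per (item, validation question) pair.
q_range = ratings.max(axis=2) - ratings.min(axis=2)
mean_range = q_range.mean(axis=1)          # mean of ranges per test item
median_range = np.median(q_range, axis=1)  # median of ranges per test item

# Hypothetical inclusion criteria: keep items with low reviewer disagreement.
keep = (mean_range <= 2.0) & (median_range <= 2)
print(f"{keep.sum()} of {len(keep)} items retained")
```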

  6. The linguistic validation of Russian version of Dutch four-dimensional symptoms questionnaire (4DSQ) for assessing distress, depression, anxiety and somatization in patients with borderline psychosomatic disorders.

    PubMed

    Arnautov, V S; Reyhart, D V; Smulevich, A B; Yakhno, N N; Terluin, B; Zakharova, E K; Andryushchenko, A V; Parfenov, V A; Zamergrad, M V; Romanov, D V

    2015-12-12

    The four-dimensional symptom questionnaire (4DSQ) is an originally Dutch self-report questionnaire that was developed in primary care to distinguish non-specific general distress from depression, anxiety and somatization. In order to produce an appropriate translated Russian version, a process of linguistic validation was initiated. This process was carried out according to the "Linguistic Validation Manual for Health Outcome Assessments" developed by the MAPI institute. The aim was to produce a Russian version of the 4DSQ that is conceptually and linguistically equivalent to the original questionnaire. The original Dutch version of the 4DSQ was translated into Russian by one translator. The validated English version of the 4DSQ was translated into Russian by another translator without mutual consultation. A consensus version was created based on the two translated versions. A back translation into Dutch was then performed, some changes were implemented to the consensus Russian version, and a second target version was developed based on these results. The second target version was sent to an appropriate group of reviewers and updated based on their comments. Afterwards, this version was tested in patients during cognitive interviews. The study protocol was approved by the Independent Interdisciplinary Ethics Committee on Ethical Review for Clinical Studies, in compliance with the Helsinki Declaration, ICH-GCP guidelines and local regulations. Enrolled patients provided written informed consent. After this process of forward and backward translation, consultant and developer comments, and clinician and cognitive review, the final version of the Russian 4DSQ was developed for the assessment of distress, depression, anxiety and somatization. As a result of the translation procedures and cognitive interviews, the Russian 4DSQ linguistically corresponds to the original Dutch 4DSQ and can proceed to psychometric validation for further use in general practice.

  7. ASME V&V challenge problem: Surrogate-based V&V

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Beghini, Lauren L.; Hough, Patricia D.

    2015-12-18

    The process of verification and validation can be resource intensive. From the computational model perspective, the resource demand typically arises from long simulation run times on multiple cores coupled with the need to characterize and propagate uncertainties. In addition, predictive computations performed for safety and reliability analyses have similar resource requirements. For this reason, there is a tradeoff between the time required to complete the requisite studies and the fidelity or accuracy of the results that can be obtained. At a high level, our approach is cast within a validation hierarchy that provides a framework in which we perform sensitivity analysis, model calibration, model validation, and prediction. The evidence gathered as part of these activities is mapped into the Predictive Capability Maturity Model to assess credibility of the model used for the reliability predictions. With regard to specific technical aspects of our analysis, we employ surrogate-based methods, primarily based on polynomial chaos expansions and Gaussian processes, for model calibration, sensitivity analysis, and uncertainty quantification in order to reduce the number of simulations that must be done. The goal is to tip the tradeoff balance to improving accuracy without increasing the computational demands.
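
    As a rough illustration of the surrogate idea, the sketch below fits a Gaussian-process surrogate (one of the two surrogate families named above) to a handful of runs of a stand-in "expensive" function, then propagates an input uncertainty through the cheap surrogate by Monte Carlo. The toy function and all parameters are assumptions, not the report's models.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def expensive_simulation(x):             # stand-in for a long-running model
    return np.sin(3 * x) + 0.5 * x

X_train = np.linspace(0.0, 2.0, 8).reshape(-1, 1)   # few affordable runs
y_train = expensive_simulation(X_train).ravel()

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.5), normalize_y=True)
gp.fit(X_train, y_train)

# Propagate an uncertain input through the cheap surrogate by Monte Carlo.
inputs = np.random.default_rng(2).normal(1.0, 0.2, size=(10_000, 1))
outputs = gp.predict(inputs)
print(f"surrogate output: mean {outputs.mean():.3f}, std {outputs.std():.3f}")
```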

  8. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2012 CFR

    2012-04-01

    ... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...

  9. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2013 CFR

    2013-04-01

    ... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...

  10. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2014 CFR

    2014-04-01

    ... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...

  11. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2011 CFR

    2011-04-01

    ... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...

  12. 21 CFR 820.75 - Process validation.

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...

  13. Development of Scientific Approach Based on Discovery Learning Module

    NASA Astrophysics Data System (ADS)

    Ellizar, E.; Hardeli, H.; Beltris, S.; Suharni, R.

    2018-04-01

    Scientific Approach is a learning process designed to make students actively construct their own knowledge through the stages of the scientific method. The scientific approach in the learning process can be implemented using learning modules. One such learning model is discovery-based learning, in which students learn through various activities such as observation, experience, and reasoning. In practice, students' activity in constructing their own knowledge was not optimal, because the available learning modules were not aligned with the scientific approach. The purpose of this study was to develop a scientific approach discovery-based learning module on acid-base as well as electrolyte and non-electrolyte solutions. The development of these chemistry modules used the Plomp model with three main stages: preliminary research, the prototyping stage, and the assessment stage. The subjects of this research were 10th and 11th grade senior high school students (SMAN 2 Padang). Validity was assessed by expert chemistry lecturers and teachers, practicality was tested through questionnaires, and effectiveness was tested through an experimental procedure comparing student achievement between experimental and control groups. Based on the findings, it can be concluded that the developed scientific approach discovery-based learning module significantly improves students' learning of acid-base and electrolyte material. The data analysis indicated that the chemistry module was valid in content, construct, and presentation. The module also has a good level of practicality and fits the available time. It was also effective, because it helped students understand the learning material, as shown by students' learning results. It can be concluded that the chemistry modules based on discovery learning and the scientific approach, covering electrolyte and non-electrolyte solutions and acid-base topics for 10th and 11th grade senior high school students, are valid, practical, and effective.

  14. Parameter prediction based on Improved Process neural network and ARMA error compensation in Evaporation Process

    NASA Astrophysics Data System (ADS)

    Qian, Xiaoshan

    2018-01-01

    Traditional models of the evaporation process have larger prediction errors for parameters with continuous and cumulative characteristics. On this basis, an adaptive particle swarm optimization process neural network forecasting method is proposed for these parameters, combined with an autoregressive moving average (ARMA) error-correction procedure that compensates the neural network's predictions to improve accuracy. Production data from an alumina plant evaporation process were analyzed for validation; compared with the traditional model, the prediction accuracy of the new model is greatly improved, and the model can be used to predict the dynamic composition of sodium aluminate solution during evaporation.
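
    A hedged sketch of this error-compensation scheme: a simple persistence forecast stands in below for the particle-swarm-optimized process neural network, and an ARMA model fitted to its residuals supplies the correction term. The series, the ARMA order, and the statsmodels usage are illustrative assumptions, not the paper's setup.

```python
import numpy as np
from statsmodels.tsa.arima.model import ARIMA

rng = np.random.default_rng(3)
y = np.cumsum(rng.normal(0.1, 1.0, 300))       # measured process parameter

primary_pred = np.roll(y, 1)                   # persistence forecast ...
primary_pred[0] = y[0]                         # ... stands in for the network
residuals = y - primary_pred

# Fit ARMA(2,1) to the residual series and forecast the next-step error.
arma = ARIMA(residuals, order=(2, 0, 1)).fit()
next_error = arma.forecast(steps=1)[0]

# Compensated prediction = primary forecast for t+1 plus predicted error.
print(f"next-step compensated prediction: {y[-1] + next_error:.3f}")
```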

  15. Development of self and peer performance assessment on iodometric titration experiment

    NASA Astrophysics Data System (ADS)

    Nahadi; Siswaningsih, W.; Kusumaningtyas, H.

    2018-05-01

    This study aims to describe the process of developing a reliable and valid assessment to measure students' performance on iodometric titration, and the effect of self and peer assessment on students' performance. The self and peer instrument provides valuable feedback for improving student performance. The developed assessment contains a rubric and task for facilitating self and peer assessment. The participants were 24 second-grade students at a vocational high school in Bandung, divided into two groups: the first 12 students were involved in the validity test of the developed assessment, while the remaining 12 students participated in the reliability test. Content validity was evaluated based on expert judgment. The content validity test showed that the developed performance assessment instrument was valid for each task, with reliability classified as very good. Analysis of the impact of implementing self and peer assessment showed that the peer instrument supported the self assessment.

  16. Accuracy assessment/validation methodology and results of 2010–11 land-cover/land-use data for Pools 13, 26, La Grange, and Open River South, Upper Mississippi River System

    USGS Publications Warehouse

    Jakusz, J.W.; Dieck, J.J.; Langrehr, H.A.; Ruhser, J.J.; Lubinski, S.J.

    2016-01-11

    Similar to an AA, validation involves generating random points based on the total area for each map class. However, instead of collecting field data, two or three individuals not involved with the photo-interpretative mapping separately review each of the points onscreen and record a best-fit vegetation type(s) for each site. Once the individual analyses are complete, results are joined together and a comparative analysis is performed. The objective of this initial analysis is to identify areas where the validation results were in agreement (matches) and areas where validation results were in disagreement (mismatches). The two or three individuals then perform an analysis, looking at each mismatched site, and agree upon a final validation class. (If two vegetation types at a specific site appear to be equally prevalent, the validation team is permitted to assign the site two best-fit vegetation types.) Following the validation team’s comparative analysis of vegetation assignments, the data are entered into a database and compared to the mappers’ vegetation assignments. Agreements and disagreements between the map and validation classes are identified, and a contingency table is produced. This document presents the AA processes/results for Pools 13 and La Grange, as well as the validation process/results for Pools 13 and 26 and Open River South.

  17. System Identification of a Heaving Point Absorber: Design of Experiment and Device Modeling

    DOE PAGES

    Bacelli, Giorgio; Coe, Ryan; Patterson, David; ...

    2017-04-01

    Empirically based modeling is an essential aspect of design for a wave energy converter. These models are used in structural, mechanical and control design processes, as well as for performance prediction. The design of experiments and the methods used to produce models from collected data have a strong impact on the quality of the model. This study considers the system identification and model validation process based on data collected from a wave tank test of a model-scale wave energy converter. Experimental design and data processing techniques based on general system identification procedures are discussed and compared with the practices often followed for wave tank testing. The general system identification processes are shown to have a number of advantages. The experimental data is then used to produce multiple models for the dynamics of the device. These models are validated and their performance is compared against one another. Furthermore, while most models of wave energy converters use a formulation with wave elevation as an input, this study shows that a model using a hull pressure sensor to incorporate the wave excitation phenomenon has better accuracy.
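
    The identification step described above can be illustrated generically. The sketch below (not the authors' code) fits a discrete-time ARX model to simulated input/output data by least squares and scores it on a held-out validation record; the signals, model orders, and fit metric are invented.

```python
import numpy as np

rng = np.random.default_rng(7)
u = rng.normal(size=600)                       # excitation input
y = np.zeros_like(u)
for k in range(2, len(u)):                     # simulated 2nd-order device
    y[k] = 1.5 * y[k - 1] - 0.7 * y[k - 2] + 0.5 * u[k - 1] \
           + rng.normal(scale=0.01)

def regressors(y, u, k0, k1):
    """ARX regressor matrix [y(k-1), y(k-2), u(k-1)] and targets y(k)."""
    rows = [[y[k - 1], y[k - 2], u[k - 1]] for k in range(k0, k1)]
    return np.array(rows), y[k0:k1]

A, b = regressors(y, u, 2, 300)                # estimation record
theta, *_ = np.linalg.lstsq(A, b, rcond=None)

Av, bv = regressors(y, u, 300, 600)            # held-out validation record
fit = 1 - np.linalg.norm(Av @ theta - bv) / np.linalg.norm(bv - bv.mean())
print("estimated parameters:", np.round(theta, 3), f"| fit: {fit:.3f}")
```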

  19. IV&V Project Assessment Process Validation

    NASA Technical Reports Server (NTRS)

    Driskell, Stephen

    2012-01-01

    The Space Launch System (SLS) will launch NASA's Multi-Purpose Crew Vehicle (MPCV). This launch vehicle will provide American launch capability for human exploration and travel beyond Earth orbit. SLS is designed to be flexible for crew or cargo missions. The first test flight is scheduled for December 2017. The SLS SRR/SDR provided insight into the project development life cycle. NASA IV&V ran the standard Risk Based Assessment and Portfolio Based Risk Assessment to identify analysis tasking for the SLS program. This presentation examines the SLS System Requirements Review/System Definition Review (SRR/SDR) and correlates IV&V findings with the selected IV&V tasking and capabilities in order to validate the IV&V process. It also provides a reusable IEEE 1012 scorecard for programmatic completeness across the software development life cycle.

  20. Validity and Reliability Study of the Turkish Version of the Ego Identity Process Questionnaire

    ERIC Educational Resources Information Center

    Morsünbül, Ümit; Atak, Hasan

    2013-01-01

    The main developmental task of the adolescence period is identity development. Marcia defined four identity statuses based on the processes of exploration and commitment: achievement, moratorium, foreclosure and diffusion. Certain scales were developed to measure identity development. Another questionnaire that evaluates both four identity statuses and the…

  1. Developing a Survey of Transformative Learning Outcomes and Processes Based on Theoretical Principles

    ERIC Educational Resources Information Center

    Stuckey, Heather L.; Taylor, Edward W.; Cranton, Patricia

    2013-01-01

    The purpose of this research was to develop an inclusive evaluation of "transformative learning theory" that encompassed varied perspectives of transformative learning. We constructed a validated quantitative survey to assess the potential outcomes and processes of how transformative learning may be experienced by college-educated…

  2. Development and Validation of the Homeostasis Concept Inventory

    ERIC Educational Resources Information Center

    McFarland, Jenny L.; Price, Rebecca M.; Wenderoth, Mary Pat; Martinková, Patrícia; Cliff, William; Michael, Joel; Modell, Harold; Wright, Ann

    2017-01-01

    We present the Homeostasis Concept Inventory (HCI), a 20-item multiple-choice instrument that assesses how well undergraduates understand this critical physiological concept. We used an iterative process to develop a set of questions based on elements in the Homeostasis Concept Framework. This process involved faculty experts and undergraduate…

  3. Towards a Trans-Disciplinary Methodology for a Game-Based Intervention Development Process

    ERIC Educational Resources Information Center

    Arnab, Sylvester; Clarke, Samantha

    2017-01-01

    The application of game-based learning adds play into educational and instructional contexts. Even though there is a lack of standard methodologies or formulaic frameworks to better inform game-based intervention development, there exist scientific and empirical studies that can serve as benchmarks for establishing scientific validity in terms of…

  4. The Use of Modeling-Based Text to Improve Students' Modeling Competencies

    ERIC Educational Resources Information Center

    Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan

    2015-01-01

    This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…

  5. GPM Ground Validation: Pre to Post-Launch Era

    NASA Astrophysics Data System (ADS)

    Petersen, Walt; Skofronick-Jackson, Gail; Huffman, George

    2015-04-01

    NASA GPM Ground Validation (GV) activities have transitioned from the pre- to the post-launch era. Prior to launch, direct validation networks and associated partner institutions were identified world-wide, covering a plethora of precipitation regimes. In the U.S., direct GV efforts focused on use of new operational products such as the NOAA Multi-Radar Multi-Sensor suite (MRMS) for TRMM validation and GPM radiometer algorithm database development. In the post-launch era, MRMS products including precipitation rate, accumulation, types and data quality are being routinely generated to facilitate statistical GV of instantaneous (e.g., Level II orbit) and merged (e.g., IMERG) GPM products. Toward assessing precipitation column impacts on product uncertainties, range-gate to pixel-level validation of both Dual-Frequency Precipitation Radar (DPR) and GPM microwave imager data is performed using GPM Validation Network (VN) ground radar and satellite data processing software. VN software ingests quality-controlled volumetric radar datasets and geo-matches those data to coincident DPR and radiometer level-II data. When combined, MRMS and VN datasets enable more comprehensive interpretation of both ground- and satellite-based estimation uncertainties. To support physical validation efforts, eight field campaigns were conducted in the pre-launch era and one in the post-launch era. The campaigns span regimes from northern-latitude cold-season snow to warm tropical rain. Most recently, the Integrated Precipitation and Hydrology Experiment (IPHEx) took place in the mountains of North Carolina and involved combined airborne and ground-based measurements of orographic precipitation and hydrologic processes underneath the GPM Core satellite. One more U.S. GV field campaign (OLYMPEX) is planned for late 2015 and will address cold-season precipitation estimation, processes and hydrology in the orographic and oceanic domains of western Washington State. Finally, continuous direct and physical validation measurements are also being conducted at the NASA Wallops Flight Facility multi-radar, gauge and disdrometer facility located in coastal Virginia. This presentation will summarize the evolution of the NASA GPM GV program from the pre- to the post-launch era and focus on evaluation of year-1 post-launch GPM satellite datasets, including Level II GPROF, DPR and Combined algorithms, and Level III IMERG products.

  6. Real-time motion artifacts compensation of ToF sensors data on GPU

    NASA Astrophysics Data System (ADS)

    Lefloch, Damien; Hoegg, Thomas; Kolb, Andreas

    2013-05-01

    Over the last decade, ToF sensors have attracted many computer vision and graphics researchers. Nevertheless, ToF devices suffer from severe motion artifacts for dynamic scenes as well as low-resolution depth data, which strongly justifies the importance of a valid correction. To counterbalance this effect, a pre-processing approach is introduced to greatly improve range image data on dynamic scenes. We first demonstrate the robustness of our approach using simulated data and then validate our method using real sensor range data. Our GPU-based processing pipeline enhances range data reliability in real-time.

  7. The Development and Evaluation of the Psychometric Properties of the Negative Beliefs about Post-Event Processing Scale.

    PubMed

    Rodriguez, Hayley; Kissell, Kellie; Lucas, Lloyd; Fisak, Brian

    2017-11-01

    Although negative beliefs have been found to be associated with worry symptoms and depressive rumination, negative beliefs have yet to be examined in relation to post-event processing and social anxiety symptoms. The purpose of the current study was to examine the psychometric properties of the Negative Beliefs about Post-Event Processing Questionnaire (NB-PEPQ). A large, non-referred undergraduate sample completed the NB-PEPQ along with validation measures, including a measure of post-event processing and social anxiety symptoms. Based on factor analysis, a single-factor model was obtained, and the NB-PEPQ was found to exhibit good validity, including positive associations with measures of post-event processing and social anxiety symptoms. These findings add to the literature on the metacognitive variables that may lead to the development and maintenance of post-event processing and social anxiety symptoms, and have relevant clinical applications.

  8. [The added value of information summaries supporting clinical decisions at the point-of-care].

    PubMed

    Banzi, Rita; González-Lorenzo, Marien; Kwag, Koren Hyogene; Bonovas, Stefanos; Moja, Lorenzo

    2016-11-01

    Evidence-based healthcare requires the integration of the best research evidence with clinical expertise and patients' values. International publishers are developing evidence-based information services and resources designed to overcome the difficulties in retrieving, assessing and updating medical information, as well as to facilitate rapid access to valid clinical knowledge. Point-of-care information summaries are defined as web-based medical compendia that are specifically designed to deliver pre-digested, rapidly accessible, comprehensive, and periodically updated information to health care providers. Their validity must be assessed against marketing claims that they are evidence-based. We periodically evaluate the content development processes of several international point-of-care information summaries. The number of these products has increased along with their quality. The last analysis, done in 2014, identified 26 products and found that three of them (Best Practice, DynaMed and UpToDate) scored the highest across all evaluated dimensions (volume, quality of the editorial process and evidence-based methodology). Point-of-care information summaries, as stand-alone products or integrated with other systems, are gaining ground to support clinical decisions. The choice of one product over another depends both on the properties of the service and the preference of users. However, even the most innovative information system must rely on transparent and valid contents. Individuals and institutions should regularly assess the value of point-of-care summaries as their quality changes rapidly over time.

  9. Interactive learning media based on flash for basic electronic engineering development for SMK Negeri 1 Driyorejo - Gresik

    NASA Astrophysics Data System (ADS)

    Mandigo Anggana Raras, Gustav

    2018-04-01

    This research aims to produce a flash-based interactive learning media product for a basic electronic engineering subject that is reliable for use, and to determine students' responses to the media. The target of this research was the X-TEI 1 class at SMK Negeri 1 Driyorejo - Gresik. The method used in this study is R&D, limited to seven stages: (1) potentials and problems, (2) data collection, (3) product design, (4) product validation, (5) product revision, (6) field test, and (7) analysis and writing. The result is an interactive learning media product named MELDASH. A validation process was used to produce valid interactive learning media; the media validation results show a rating of 90.83%. Students' responses to this interactive learning media were very good, with a rating of 88.89%.

  10. Design of capability measurement instruments pedagogic content knowledge (PCK) for prospective mathematics teachers

    NASA Astrophysics Data System (ADS)

    Aminah, N.; Wahyuni, I.

    2018-05-01

    The purpose of this study is to describe the design process for a valid and practical instrument to measure the Pedagogical Content Knowledge (PCK) capability of prospective mathematics teachers. The design of this measurement instrument uses a modified Plomp development model, which consists of (1) an initial assessment stage; (2) a design stage, in which the researcher designs the grid of the PCK capability measure; (3) a realization stage, i.e., constructing the PCK capability measurement instrument; and (4) a test, evaluation, and revision phase, in which validation of the measurement instrument is conducted by experts. Based on the results, the design of the PCK capability measurement instrument is valid, as indicated by the expert validators' assessment, and teachers and lecturers, as its users, strongly agree that the designed PCK measurement instrument can be used.

  11. Content validation of the nursing diagnosis acute pain in the Czech Republic and Slovakia.

    PubMed

    Zeleníková, Renáta; Žiaková, Katarína; Čáp, Juraj; Jarošová, Darja

    2014-10-01

    The main purpose of the study was to validate the defining characteristics of the nursing diagnosis acute pain in the Czech Republic and Slovakia. This is a descriptive study. The validation process was based on Fehring's diagnostic content validity model. Four defining characteristics were classified as major by Slovak nurses and eight defining characteristics were classified as major by Czech nurses. Validation of the nursing diagnosis acute pain in the Czech and Slovak sociocultural context has shown that nurses prioritize characteristics that are behavioral in nature as well as patients' verbal reports of pain. Verbal reports of pain and behavioral indicators are important for arriving at the nursing diagnosis acute pain. © 2014 NANDA International, Inc.

  12. Observations on the Use of SCAN To Identify Children at Risk for Central Auditory Processing Disorder.

    ERIC Educational Resources Information Center

    Emerson, Maria F.; And Others

    1997-01-01

    The SCAN: A Screening Test for Auditory Processing Disorders was administered to 14 elementary children with a history of otitis media and 14 typical children, to evaluate the validity of the test in identifying children with central auditory processing disorder. Another experiment found that test results differed based on the testing environment…

  13. On demand processing of climate station sensor data

    NASA Astrophysics Data System (ADS)

    Wöllauer, Stephan; Forteva, Spaska; Nauss, Thomas

    2015-04-01

    Large sets of climate stations with several sensors produce large amounts of fine-grained time series data. To gain value from these data, further processing and aggregation are needed. We present a flexible system to process the raw data on demand. Several aspects need to be considered so that scientists can use the processed data conveniently for their specific research interests. First of all, it is not feasible to pre-process the data in advance because of the great variety of ways it can be processed. Therefore, in this approach only the raw measurement data are archived in a database. When a scientist requires some time series, the system processes the required raw data according to the user-defined request. Based on the type of measurement sensor, some data validation is needed, because the climate station sensors may produce erroneous data. Currently, three validation methods are integrated in the on-demand processing system and are optionally selectable. The most basic validation method checks whether measurement values are within a predefined range of possible values. For example, it may be assumed that an air temperature sensor measures values within a range of -40 °C to +60 °C. Values outside of this range are considered measurement errors by this validation method and consequently rejected. Another validation method checks for outliers in the stream of measurement values by defining a maximum change rate between subsequent measurement values. The third validation method compares measurement data to the average values of neighboring stations and rejects measurement values with a high variance. These quality checks are optional, because extreme climatic values in particular may be valid yet rejected by a quality-check method. Another important task is the preparation of measurement data in terms of time. The observed stations measure values in intervals of minutes to hours. Often scientists need a coarser temporal resolution (days, months, years). Therefore, the interval of time aggregation is selectable for the processing. For some use cases it is desirable that the resulting time series be as continuous as possible. To meet these requirements, the processing system includes techniques to fill gaps of missing values by interpolating measurement values with data from adjacent stations, using available contemporaneous measurements from the respective stations as training datasets. Alongside the processing of sensor values, we created interactive visualization techniques to get a quick overview of a large amount of archived time series data.
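
    The three optional validation methods lend themselves to a compact sketch. The thresholds and data below are invented for illustration; only the structure (range check, change-rate check, neighbor comparison) follows the description above.

```python
import numpy as np

def validate(series, neighbours, t_min=-40.0, t_max=60.0,
             max_step=5.0, max_neighbour_dev=10.0):
    """Boolean mask; True marks values accepted by all three checks."""
    series = np.asarray(series, dtype=float)
    ok_range = (series >= t_min) & (series <= t_max)      # range check

    step = np.abs(np.diff(series, prepend=series[0]))
    ok_step = step <= max_step                            # change-rate check

    neighbour_mean = np.mean(neighbours, axis=0)          # nearby stations
    ok_neighbour = np.abs(series - neighbour_mean) <= max_neighbour_dev
    return ok_range & ok_step & ok_neighbour

station = [12.1, 12.3, 55.0, 12.6, 12.8]                  # one spurious spike
nearby = [[12.0, 12.2, 12.4, 12.5, 12.7],
          [11.9, 12.1, 12.3, 12.6, 12.9]]
print(validate(station, nearby))
# -> [ True  True False False  True]: the spike fails two checks, and the
#    jump back down is also flagged by the change-rate check.
```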

  14. AIRS Retrieval Validation During the EAQUATE

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Smith, William L.; Cuomo, Vincenzo; Taylor, Jonathan P.; Barnet, Christopher D.; DiGirolamo, Paolo; Pappalardo, Gelsomina; Larar, Allen M.; Liu, Xu; Newman, Stuart M.

    2006-01-01

    Atmospheric and surface thermodynamic parameters retrieved with advanced hyperspectral remote sensors of Earth observing satellites are critical for weather prediction and scientific research. The retrieval algorithms and retrieved parameters from satellite sounders must be validated to demonstrate the capability and accuracy of both observation and data processing systems. The European AQUA Thermodynamic Experiment (EAQUATE) was conducted mainly for validation of the Atmospheric InfraRed Sounder (AIRS) on the AQUA satellite, but also for assessment of validation systems of both ground-based and aircraft-based instruments which will be used for other satellite systems such as the Infrared Atmospheric Sounding Interferometer (IASI) on the European MetOp satellite, the Cross-track Infrared Sounder (CrIS) from the NPOESS Preparatory Project and the following NPOESS series of satellites. Detailed inter-comparisons were conducted and presented using different retrieval methodologies: measurements from airborne ultraspectral Fourier transform spectrometers, aircraft in-situ instruments, dedicated dropsondes and radiosondes, and ground based Raman Lidar, as well as from the European Center for Medium range Weather Forecasting (ECMWF) modeled thermal structures. The results of this study not only illustrate the quality of the measurements and retrieval products but also demonstrate the capability of these validation systems which are put in place to validate current and future hyperspectral sounding instruments and their scientific products.

  15. Challenges and opportunities in the manufacture and expansion of cells for therapy.

    PubMed

    Maartens, Joachim H; De-Juan-Pardo, Elena; Wunner, Felix M; Simula, Antonio; Voelcker, Nicolas H; Barry, Simon C; Hutmacher, Dietmar W

    2017-10-01

    Laboratory-based ex vivo cell culture methods are largely manual in their manufacturing processes. This makes it extremely difficult to meet regulatory requirements for process validation, quality control and reproducibility. Cell culture concepts with a translational focus need to embrace a more automated approach where cell yields are able to meet the quantitative production demands, the correct cell lineage and phenotype is readily confirmed and reagent usage has been optimized. Areas covered: This article discusses the obstacles inherent in classical laboratory-based methods, their concomitant impact on cost-of-goods and that a technology step change is required to facilitate translation from bed-to-bedside. Expert opinion: While traditional bioreactors have demonstrated limited success where adherent cells are used in combination with microcarriers, further process optimization will be required to find solutions for commercial-scale therapies. New cell culture technologies based on 3D-printed cell culture lattices with favourable surface to volume ratios have the potential to change the paradigm in industry. An integrated Quality-by-Design /System engineering approach will be essential to facilitate the scaled-up translation from proof-of-principle to clinical validation.

  16. Effective data validation of high-frequency data: time-point-, time-interval-, and trend-based methods.

    PubMed

    Horn, W; Miksch, S; Egghart, G; Popow, C; Paky, F

    1997-09-01

    Real-time systems for monitoring and therapy planning, which receive their data from on-line monitoring equipment and computer-based patient records, require reliable data. Data validation has to utilize and combine a set of fast methods to detect, eliminate, and repair faulty data, which may lead to life-threatening conclusions. The strength of data validation results from the combination of numerical and knowledge-based methods applied to both continuously assessed high-frequency data and discontinuously assessed data. When dealing with high-frequency data, examining single measurements is not sufficient; it is essential to take into account the behavior of parameters over time. We present time-point-, time-interval-, and trend-based methods for validation and repair. These are complemented by time-independent methods for determining an overall reliability of measurements. The data validation benefits from the temporal data-abstraction process, which provides automatically derived qualitative values and patterns. The temporal abstraction follows a context-sensitive and expectation-guided principle. Additional knowledge derived from domain experts forms an essential part of all of these methods. The methods are applied in the field of artificial ventilation of newborn infants. Examples from the real-time monitoring and therapy-planning system VIE-VENT illustrate the usefulness and effectiveness of the methods.
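
    As a toy illustration of combining a time-point check with a trend-based check on high-frequency data, the sketch below flags single measurements outside plausible limits and points reached by an implausibly fast change. All limits and the example series are invented; this is not the VIE-VENT implementation.

```python
import numpy as np

def plausible(values, times, lo, hi, max_rate):
    """Flag measurements violating a time-point or trend-based check."""
    values, times = np.asarray(values, float), np.asarray(times, float)
    point_ok = (values >= lo) & (values <= hi)       # time-point check
    rate = np.abs(np.diff(values) / np.diff(times))  # change per minute
    trend_ok = np.ones_like(point_ok)
    trend_ok[1:] = rate <= max_rate                  # trend-based check
    return point_ok & trend_ok

spo2 = [95, 94, 96, 60, 95]   # one artefactual drop, e.g. a slipped sensor
t = [0, 1, 2, 3, 4]           # minutes
print(plausible(spo2, t, lo=50, hi=100, max_rate=10))
# -> [ True  True  True False False]: the drop and the implausibly fast
#    recovery are both rejected, while in-range values pass.
```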

  17. Exploring the validity and reliability of a questionnaire for evaluating veterinary clinical teachers' supervisory skills during clinical rotations.

    PubMed

    Boerboom, T B B; Dolmans, D H J M; Jaarsma, A D C; Muijtjens, A M M; Van Beukelen, P; Scherpbier, A J J A

    2011-01-01

    Feedback to aid teachers in improving their teaching requires validated evaluation instruments. When implementing an evaluation instrument in a different context, it is important to collect validity evidence from multiple sources. We examined the validity and reliability of the Maastricht Clinical Teaching Questionnaire (MCTQ) as an instrument to evaluate individual clinical teachers during short clinical rotations in veterinary education. We examined four sources of validity evidence: (1) Content was examined based on theory of effective learning. (2) Response process was explored in a pilot study. (3) Internal structure was assessed by confirmatory factor analysis using 1086 student evaluations and reliability was examined utilizing generalizability analysis. (4) Relations with other relevant variables were examined by comparing factor scores with other outcomes. Content validity was supported by theory underlying the cognitive apprenticeship model on which the instrument is based. The pilot study resulted in an additional question about supervision time. A five-factor model showed a good fit with the data. Acceptable reliability was achievable with 10-12 questionnaires per teacher. Correlations between the factors and overall teacher judgement were strong. The MCTQ appears to be a valid and reliable instrument to evaluate clinical teachers' performance during short rotations.

  18. Diagnostic accuracy of eye movements in assessing pedophilia.

    PubMed

    Fromberger, Peter; Jordan, Kirsten; Steinkrauss, Henrike; von Herder, Jakob; Witzel, Joachim; Stolpmann, Georg; Kröner-Herwig, Birgit; Müller, Jürgen Leo

    2012-07-01

    Given that recurrent sexual interest in prepubescent children is one of the strongest single predictors for pedosexual offense recidivism, valid and reliable diagnosis of pedophilia is of particular importance. Nevertheless, current assessment methods still fail to fulfill psychometric quality criteria. The aim of the study was to evaluate the diagnostic accuracy of eye-movement parameters in regard to pedophilic sexual preferences. Eye movements were measured while 22 pedophiles (according to ICD-10 F65.4 diagnosis), 8 non-pedophilic forensic controls, and 52 healthy controls simultaneously viewed the picture of a child and the picture of an adult. Fixation latency was assessed as a parameter for automatic attentional processes and relative fixation time to account for controlled attentional processes. Receiver operating characteristic (ROC) analyses, which are based on calculated age-preference indices, were carried out to determine the classifier performance. Cross-validation using the leave-one-out method was used to test the validity of classifiers. Pedophiles showed significantly shorter fixation latencies and significantly longer relative fixation times for child stimuli than either of the control groups. Classifier performance analysis revealed an area under the curve (AUC) = 0.902 for fixation latency and an AUC = 0.828 for relative fixation time. The eye-tracking method based on fixation latency discriminated between pedophiles and non-pedophiles with a sensitivity of 86.4% and a specificity of 90.0%. Cross-validation demonstrated good validity of eye-movement parameters. Despite some methodological limitations, measuring eye movements seems to be a promising approach to assess deviant pedophilic interests. Eye movements, which represent automatic attentional processes, demonstrated high diagnostic accuracy. © 2012 International Society for Sexual Medicine.
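
    The classifier-performance analysis described above (an index-based classifier evaluated by ROC AUC and checked by leave-one-out cross-validation) can be re-created schematically on simulated scores. Group sizes match the abstract, but the effect sizes and the logistic-regression classifier are assumptions, not the study's data or method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import LeaveOneOut, cross_val_predict

rng = np.random.default_rng(4)
# Simulated age-preference indices derived from fixation latency.
idx_patients = rng.normal(0.8, 0.5, 22)       # n = 22, as in the study
idx_controls = rng.normal(0.0, 0.5, 60)       # 8 forensic + 52 healthy
X = np.concatenate([idx_patients, idx_controls]).reshape(-1, 1)
y = np.concatenate([np.ones(22), np.zeros(60)]).astype(int)

print("AUC of the raw index:", round(roc_auc_score(y, X.ravel()), 3))

# Leave-one-out cross-validated probabilities for an index-based classifier.
proba = cross_val_predict(LogisticRegression(), X, y,
                          cv=LeaveOneOut(), method="predict_proba")[:, 1]
print("LOO cross-validated AUC:", round(roc_auc_score(y, proba), 3))
```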

  19. Developing Worksheet (LKS) Base on Process Skills in Curriculum 2013 at Elementary School Grade IV,V,VI

    NASA Astrophysics Data System (ADS)

    Subhan, M.; Oktolita, N.; Kn, M.

    2018-04-01

    Students' lack of skills in the learning process is due to a lack of exercises in the form of worksheets (LKS). In the 2013 curriculum there is no companion LKS for improving students' skills. To solve this problem, it is necessary to develop an LKS based on process skills as a teaching material to improve students' process skills. The purpose of this study is to develop a process-skills-based LKS for elementary school grades IV, V, and VI, integrated with process skills. The developed LKS can be used to develop the thematic process skills of elementary school students in grades IV, V, and VI under the 2013 curriculum. The expected long-term goal is to produce process-skills-based thematic LKS teaching materials able to develop the process skills of elementary school students in grades IV, V, and VI. This development research refers to the steps developed by Borg & Gall (1983). The development process is carried out through 10 stages: preliminary research and information gathering, planning, draft development, initial test (limited trial), first product revision, final trial (field trial), product operational revision, dissemination, and implementation. The subjects of the limited trial were students of SDN in Dharmasraya, grades IV, V, and VI. The field trial subjects in the experimental class were students of SDN Dharmasraya, grades IV, V, and VI, who had implemented the 2013 curriculum. The data were collected using LKS validation sheets, process-skill observation sheets, and thematic learning tests (pre-test and post-test). The developed LKS scored 81.70 for validity (very valid), 83.94 for practicality (very practical), and 86.67 for effectiveness (very effective). The trial of the LKS used a one-group pretest-posttest design; its purpose was to determine how effectively the developed LKS improves the process skills of students in grades IV, V, and VI of elementary school. Data were collected through pre-test and post-test result sheets for process skills, and observation results were analyzed with SPSS 16.0 software. Analysis of students' process skills yielded a Sig. (2-tailed) value of 0.000 < α (0.005), so H0 is rejected: there is a significant difference in the development of process skills between students who use the LKS and students who do not. It can be concluded that the LKS is accurate, easy to use, and improves learning outcomes in the process-skill aspect for elementary school students in grades IV, V, and VI.

  20. 76 FR 4360 - Guidance for Industry on Process Validation: General Principles and Practices; Availability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-25

    ... elements of process validation for the manufacture of human and animal drug and biological products... process validation for the manufacture of human and animal drug and biological products, including APIs. This guidance describes process validation activities in three stages: In Stage 1, Process Design, the...

  1. Failure mode and effects analysis outputs: are they valid?

    PubMed

    Shebl, Nada Atef; Franklin, Bryony Dean; Barber, Nick

    2012-06-10

    Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: Face validity: by comparing the FMEA participants' mapped processes with observational work. Content validity: by presenting the FMEA findings to other healthcare professionals. Criterion validity: by comparing the FMEA findings with data reported on the trust's incident report database. Construct validity: by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number. Face validity was positive as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses; yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA's methodology for scoring failures, there were discrepancies between the teams' estimates and similar incidents reported on the trust's incident database. Furthermore, the concept of multiplying ordinal scales to prioritise failures is mathematically flawed. Until FMEA's validity is further explored, healthcare organisations should not solely depend on their FMEA results to prioritise patient safety issues.
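
    The construct-validity objection to the risk priority number (RPN) is easy to see numerically. The sketch below computes RPN = severity x probability x detectability for two invented failure modes; because the scores are ordinal labels, the resulting products carry no guaranteed meaning.

```python
# Invented failure modes and 1-5 ordinal scores, for illustration only.
severity      = {"wrong dose": 4, "omitted dose": 3}
probability   = {"wrong dose": 2, "omitted dose": 4}
detectability = {"wrong dose": 5, "omitted dose": 3}

for failure in severity:
    rpn = severity[failure] * probability[failure] * detectability[failure]
    print(f"{failure}: RPN = {rpn}")
# Both failures yield RPN values (40 and 36) that look comparable, yet a
# one-step change on any ordinal scale can reorder them arbitrarily;
# nothing guarantees that a score of "4" is twice as severe as a "2".
```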

  2. Testing Strategies for Model-Based Development

    NASA Technical Reports Server (NTRS)

    Heimdahl, Mats P. E.; Whalen, Mike; Rajan, Ajitha; Miller, Steven P.

    2006-01-01

    This report presents an approach for testing artifacts generated in a model-based development process. This approach divides the traditional testing process into two parts: requirements-based testing (validation testing) which determines whether the model implements the high-level requirements and model-based testing (conformance testing) which determines whether the code generated from a model is behaviorally equivalent to the model. The goals of the two processes differ significantly and this report explores suitable testing metrics and automation strategies for each. To support requirements-based testing, we define novel objective requirements coverage metrics similar to existing specification and code coverage metrics. For model-based testing, we briefly describe automation strategies and examine the fault-finding capability of different structural coverage metrics using tests automatically generated from the model.

  3. Development and validation of the ASPIRE-VA coaching fidelity checklist (ACFC): a tool to help ensure delivery of high-quality weight management interventions.

    PubMed

    Damschroder, Laura J; Goodrich, David E; Kim, Hyungjin Myra; Holleman, Robert; Gillon, Leah; Kirsh, Susan; Richardson, Caroline R; Lutes, Lesley D

    2016-09-01

    Practical and valid instruments are needed to assess fidelity of coaching for weight loss. The purpose of this study was to develop and validate the ASPIRE Coaching Fidelity Checklist (ACFC). Classical test theory guided ACFC development. Principal component analyses were used to determine item groupings. Psychometric properties, internal consistency, and inter-rater reliability were evaluated for each subscale. Criterion validity was tested by predicting weight loss as a function of coaching fidelity. The final 19-item ACFC consists of two domains (session process and session structure) and five subscales (sets goals and monitor progress, assess and personalize self-regulatory content, manages the session, creates a supportive and empathetic climate, and stays on track). Four of five subscales showed high internal consistency (Cronbach alphas > 0.70) for group-based coaching; only two of five subscales had high internal reliability for phone-based coaching. All five sub-scales were positively and significantly associated with weight loss for group- but not for phone-based coaching. The ACFC is a reliable and valid instrument that can be used to assess fidelity and guide skill-building for weight management interventionists.

  4. Surface De-Wetting Based Critical Heat Flux Model Development and Validation

    DTIC Science & Technology

    2013-02-05

    Contact line heat transfer determines the overall heat transfer at the wall and is critically important in triggering the onset of CHF. When the process of dewetting occurs at the contact line and micro region, the temperature of dry spots increases; hence dryout areas increase and CHF occurs. Finally, we proposed a CHF mechanism based on surface dewetting and experimental data.

  5. Nonparametric estimation of the heterogeneity of a random medium using compound Poisson process modeling of wave multiple scattering.

    PubMed

    Le Bihan, Nicolas; Margerin, Ludovic

    2009-07-01

    In this paper, we present a nonparametric method to estimate the heterogeneity of a random medium from the angular distribution of intensity of waves transmitted through a slab of random material. Our approach is based on the modeling of forward multiple scattering using compound Poisson processes on compact Lie groups. The estimation technique is validated through numerical simulations based on radiative transfer theory.
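
    A rough numerical sketch of the forward-scattering model named above, under invented parameters: the number of scattering events is Poisson-distributed, and each event applies a small random rotation to the propagation direction, i.e., a compound Poisson process on the rotation group. This is an illustration of the modeling idea, not the authors' estimator.

```python
import numpy as np
from scipy.spatial.transform import Rotation

rng = np.random.default_rng(6)

def transmitted_angle(mean_events=3.0, kappa=20.0):
    """Polar angle of the direction after a Poisson number of scatterings."""
    d = np.array([0.0, 0.0, 1.0])                  # initial direction
    for _ in range(rng.poisson(mean_events)):
        axis = rng.normal(size=3)
        axis /= np.linalg.norm(axis)
        angle = abs(rng.normal(scale=1.0 / np.sqrt(kappa)))  # small deflection
        d = Rotation.from_rotvec(angle * axis).apply(d)
    return np.degrees(np.arccos(np.clip(d[2], -1.0, 1.0)))

angles = [transmitted_angle() for _ in range(5000)]
print(f"mean transmitted polar angle: {np.mean(angles):.1f} deg")
```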

  6. Initial Retrieval Validation from the Joint Airborne IASI Validation Experiment (JAIVEx)

    NASA Technical Reports Server (NTRS)

    Zhou, Daniel K.; Liu, Xu; Smith, WIlliam L.; Larar, Allen M.; Taylor, Jonathan P.; Revercomb, Henry E.; Mango, Stephen A.; Schluessel, Peter; Calbet, Xavier

    2007-01-01

    The Joint Airborne IASI Validation Experiment (JAIVEx) was conducted during April 2007, mainly for validation of the Infrared Atmospheric Sounding Interferometer (IASI) on the MetOp satellite, but it also included a strong component focusing on validation of the Atmospheric InfraRed Sounder (AIRS) aboard the AQUA satellite. The cross-validation of IASI and AIRS is important for the joint use of their data in the global Numerical Weather Prediction process. Initial inter-comparisons of geophysical products have been conducted from different aspects, such as using different measurements from airborne ultraspectral Fourier transform spectrometers (specifically, the NPOESS Airborne Sounder Testbed Interferometer (NAST-I) and the Scanning High-resolution Interferometer Sounder (S-HIS) aboard the NASA WB-57 aircraft), UK Facility for Airborne Atmospheric Measurements (FAAM) BAe146-301 aircraft in-situ instruments, dedicated dropsondes, radiosondes, and ground-based Raman Lidar. An overview of the JAIVEx retrieval validation plan and some initial results of this field campaign are presented.

  7. Bio-markers: traceability in food safety issues.

    PubMed

    Raspor, Peter

    2005-01-01

    Research and practice are focusing on the development, validation and harmonization of technologies and methodologies to ensure a complete traceability process throughout the food chain. The main goals are: scale-up, implementation and validation of methods in whole food chains; assurance of authenticity; validity of labelling; and application of HACCP (hazard analysis and critical control point) to the entire food chain. The current review sums up the scientific and technological basis for ensuring complete traceability. Tracing and tracking (traceability) of foods are complex processes due to the (bio)markers, technical solutions and different circumstances in the different technologies that produce various foods (processed, semi-processed, or raw). Since food is produced for human or animal consumption, we need suitable markers that are stable and traceable all along the production chain. Specific biomarkers can have a function both in technology and in nutrition. Such an approach would make this development faster and more comprehensive, and would make it possible to monitor food effects in the consumer with the same set of biomarkers. This would help to develop and implement food safety standards based on the real physiological function of particular food components.

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Chen, X; Wang, J; Hu, W

    Purpose: The Varian RapidPlan™ is a commercial knowledge-based optimization process which uses a set of clinically used treatment plans to train a model that can predict individualized dose-volume objectives. The purpose of this study is to evaluate the performance of RapidPlan in generating intensity modulated radiation therapy (IMRT) plans for cervical cancer. Methods: A total of 70 IMRT plans for cervical cancer with varying clinical and physiological indications were enrolled in this study. These patients were all previously treated in our institution. Two prescription levels are commonly used in our institution: 45Gy/25 fractions and 50.4Gy/28 fractions. 50 of these plans were selected to train the RapidPlan model for predicting dose-volume constraints. After model training, the model was validated with 10 plans from the training pool (internal validation) and an additional 20 new plans (external validation). All plans used for the validation were re-optimized with the original beam configuration, and the priorities generated by RapidPlan were manually adjusted to ensure that the re-optimized DVH lay within the range of the model prediction. Quantitative DVH analysis was performed to compare the RapidPlan-generated and the original manually optimized plans. Results: For all the validation cases, RapidPlan-based plans showed similar or superior results compared to the manually optimized ones. RapidPlan improved D98% and homogeneity in both validations. For organs at risk, RapidPlan decreased the mean dose of the bladder by 1.25Gy/1.13Gy (internal/external validation) on average, with p=0.12/p<0.01. The mean doses of the rectum and bowel were also decreased, by an average of 2.64Gy/0.83Gy and 0.66Gy/1.05Gy, with p<0.01/p<0.01 and p=0.04/p<0.01 for the internal/external validation, respectively. Conclusion: The RapidPlan model-based cervical cancer plans show the ability to systematically improve IMRT plan quality, suggesting that RapidPlan has great potential to make the treatment planning process more efficient.

  9. [Assessment of the validity and reliability of the processes of change scale based on the transtheoretical model of vegetable consumption behavior in Japanese male workers].

    PubMed

    Kushida, Osamu; Murayama, Nobuko

    2012-12-01

    A core construct of the Transtheoretical model is that the processes and stages of change are strongly related to observable behavioral changes. We created the Processes of Change Scale of vegetable consumption behavior and examined the validity and reliability of this scale. In September 2009, a self-administered questionnaire was administered to male Japanese employees, aged 20-59 years, working at 20 worksites in Niigata City in Japan. The stages of change (precontemplation, contemplation, preparation, action, and maintenance) were measured using 2 items that assessed participants' current implementation of the target behavior (eating 5 or more servings of vegetables per day) and their readiness to change their habits. The Processes of Change Scale of vegetable consumption behavior comprised 10 items assessing 5 cognitive processes (consciousness raising, emotional arousal, environmental reevaluation, self-reevaluation, and social liberation) and 5 behavioral processes (commitment, rewards, helping relationships, countering, and environment control). Each item was selected from an existing scale. Decisional balance (pros [2 items] and cons [2 items]) and self-efficacy (3 items) were also assessed, because these constructs were considered to be relevant to the processes of change. The internal consistency reliability of the scale was examined using Cronbach's alpha. Its construct validity was examined using a factor analysis of the processes of change, decisional balance, and self-efficacy variables, while its criterion-related validity was determined by assessing the association between the scale scores and the stages of change. The data of 527 (out of 600) participants (mean age, 41.1 years) were analyzed. Results indicated that the Processes of Change Scale had sufficient internal consistency reliability (Cronbach's alpha: cognitive processes=0.722, behavioral processes=0.803). In the factor analysis, the processes of change were divided into 2 factors: "consciousness raising, emotional arousal, environmental reevaluation, self-reevaluation, commitment, rewards, helping relationships, and social liberation" and "countering and environment control". Moreover, each construct (the processes of change, decisional balance, and self-efficacy) could be classified into different factors. The scores for cognitive processes were higher in the contemplation and preparation stages than in the precontemplation stage (P<0.05). Scores for behavioral processes increased from the precontemplation stage to the preparation stage (P<0.05), and were higher in the action and maintenance stages than in the precontemplation stage (P<0.05). For male workers, the Processes of Change Scale has sufficient validity and reliability, as demonstrated by the internal consistency and the construct and criterion-related validity of the scale found in this study.
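
    For readers unfamiliar with the statistic, Cronbach's alpha can be computed directly from an item-score matrix; a minimal sketch in Python with synthetic data (not the study's data):

    ```python
    import numpy as np

    def cronbach_alpha(items: np.ndarray) -> float:
        """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
        k = items.shape[1]                          # number of items
        item_vars = items.var(axis=0, ddof=1)       # variance of each item
        total_var = items.sum(axis=1).var(ddof=1)   # variance of summed scores
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    # Example: 5 respondents x 4 Likert items (synthetic data)
    scores = np.array([[3, 4, 3, 4],
                       [2, 2, 3, 2],
                       [5, 4, 5, 5],
                       [1, 2, 1, 2],
                       [4, 4, 4, 3]])
    print(round(cronbach_alpha(scores), 3))
    ```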

  10. Development and validation of a registry-based definition of eosinophilic esophagitis in Denmark

    PubMed Central

    Dellon, Evan S; Erichsen, Rune; Pedersen, Lars; Shaheen, Nicholas J; Baron, John A; Sørensen, Henrik T; Vyberg, Mogens

    2013-01-01

    AIM: To develop and validate a case definition of eosinophilic esophagitis (EoE) in the linked Danish health registries. METHODS: For case definition development, we queried the Danish medical registries from 2006-2007 to identify candidate cases of EoE in Northern Denmark. All International Classification of Diseases-10 (ICD-10) and prescription codes were obtained, and archived pathology slides were retrieved and re-reviewed to determine case status. We used an iterative process to select inclusion/exclusion codes, refine the case definition, and optimize sensitivity and specificity. We then re-queried the registries from 2008-2009 to yield a validation set. The case definition algorithm was applied, and sensitivity and specificity were calculated. RESULTS: Of the 51 and 49 candidate cases identified in the development and validation sets, respectively, 21 and 24 had EoE. Characteristics of EoE cases in the development set [mean age 35 years; 76% male; 86% dysphagia; 103 eosinophils per high-power field (eos/hpf)] were similar to those in the validation set (mean age 42 years; 83% male; 67% dysphagia; 77 eos/hpf). Re-review of archived slides confirmed that the pathology coding for esophageal eosinophilia was correct in greater than 90% of cases. Two registry-based case algorithms based on pathology, ICD-10, and pharmacy codes were successfully generated in the development set, one that was sensitive (90%) and one that was specific (97%). When these algorithms were applied to the validation set, they remained sensitive (88%) and specific (96%). CONCLUSION: Two registry-based definitions, one highly sensitive and one highly specific, were developed and validated for the linked Danish national health databases, making future population-based studies feasible. PMID:23382628

  11. Using the Internet to Improve HRD Research: The Case of the Web-Based Delphi Research Technique to Achieve Content Validity of an HRD-Oriented Measurement

    ERIC Educational Resources Information Center

    Hatcher, Tim; Colton, Sharon

    2007-01-01

    Purpose: The purpose of this article is to highlight the results of the online Delphi research project, in particular the procedures used to establish an online and innovative process of content validation and to obtain "rich" and descriptive information using the internet and current e-learning technologies. The online Delphi was proven to be an…

  12. A clinical reasoning model focused on clients' behaviour change with reference to physiotherapists: its multiphase development and validation.

    PubMed

    Elvén, Maria; Hochwälder, Jacek; Dean, Elizabeth; Söderlund, Anne

    2015-05-01

    A biopsychosocial approach and behaviour change strategies have long been proposed to serve as a basis for addressing current multifaceted health problems. This emphasis has implications for clinical reasoning of health professionals. This study's aim was to develop and validate a conceptual model to guide physiotherapists' clinical reasoning focused on clients' behaviour change. Phase 1 consisted of the exploration of existing research and the research team's experiences and knowledge. Phases 2a and 2b consisted of validation and refinement of the model based on input from physiotherapy students in two focus groups (n = 5 per group) and from experts in behavioural medicine (n = 9). Phase 1 generated theoretical and evidence bases for the first version of a model. Phases 2a and 2b established the validity and value of the model. The final model described clinical reasoning focused on clients' behaviour change as a cognitive, reflective, collaborative and iterative process with multiple interrelated levels that included input from the client and physiotherapist, a functional behavioural analysis of the activity-related target behaviour and the selection of strategies for behaviour change. This unique model, theory- and evidence-informed, has been developed to help physiotherapists to apply clinical reasoning systematically in the process of behaviour change with their clients.

  13. [Development and Validation of the Academic Resilience Inventory for Nursing Students in Taiwan].

    PubMed

    Li, Cheng-Chieh; Wei, Chi-Fang; Tung, Yuk-Ying

    2017-10-01

    Failure to cope with learning pressures has been shown to influence the learning achievement and professional performance of nursing students. In order to enable nursing students to adapt successfully to their academic stress, it is essential to explore their academic resilience in the process of learning. To develop the Academic Resilience Inventory for Nursing Students (ARINS) and to test its reliability and validity. A total of 611 nursing students in central and southern Taiwan were recruited as participants. We randomly divided the sample into two subsamples using R software. The first sample was used to conduct item analysis and exploratory factor analysis. The other sample was used to conduct confirmatory factor analysis, cross validation, and criterion-related validity testing. There are 15 items in the ARINS, with cognitive maturity, emotional regulation, and help-seeking behavior used as the measurement indicators of academic resilience in nursing students. The assessed goodness-of-fit indices indicate that the model fits the data well based upon the CFA and has good convergent validity and discriminant validity. Criterion-related validity was supported by the correlations among the ARINS, learning performance and attitude, hope and optimism, and depression. The ARINS has good reliability and validity and is a suitable measure of academic resilience in nursing students. It can help nursing students examine their academic stress and coping efficacy in the learning process.

  14. Integrating Model-Based Transmission Reduction into a multi-tier architecture

    NASA Astrophysics Data System (ADS)

    Straub, J.

    A multi-tier architecture consists of numerous craft distributed across the system's orbital, aerial, and surface tiers. Each tier is able to collect progressively greater levels of information. Generally, craft from lower-level tiers are deployed to a target of interest based on its identification by a higher-level craft. While the architecture promotes significant amounts of science being performed in parallel, this may overwhelm the computational and transmission capabilities of higher-tier craft and links (particularly the deep space link back to Earth). Because of this, a new paradigm in in-situ data processing is required. Model-based transmission reduction (MBTR) is such a paradigm. Under MBTR, each node (whether a single spacecraft in orbit of the Earth or another planet, or a member of a multi-tier network) is given an a priori model of the phenomenon that it is assigned to study. It performs activities to validate this model. If the model is found to be erroneous, corrective changes are identified, assessed to ensure their significance for being passed on, and prioritized for transmission. A limited amount of verification data is sent with each MBTR assertion message to allow those that might rely on the data to validate the correct operation of the spacecraft and the MBTR engine onboard. Integrating MBTR with a multi-tier framework creates an MBTR hierarchy. Higher levels of the MBTR hierarchy task lower levels with the data collection and assessment tasks required to validate or correct elements of its model. A model of the expected conditions is sent to the lower-level craft, which then engages its own MBTR engine to validate or correct the model. This may include tasking a yet lower level of craft to perform activities. When the MBTR engine at a given level receives all of its component data (whether directly collected or from delegation), it randomly chooses some to validate (by reprocessing the validation data), performs analysis, and sends its own results (validation and/or changes of model elements and supporting validation data) to its upstream node. This constrains data transmission to only significant information (either because it includes a change or because it is validation data critical for assessing overall performance) and reduces the processing requirements at higher-level nodes (by not having to process insignificant data). This paper presents a framework for multi-tier MBTR and two demonstration mission concepts: an Earth sensornet and a mission to Mars. These multi-tier MBTR concepts are compared to a traditional mission approach.
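
    As a rough illustration of the MBTR idea only (thresholds, field names, and model values below are hypothetical, not from the paper), a node might transmit just the observations that contradict its a priori model, plus a small random sample for downstream verification:

    ```python
    import random
    from dataclasses import dataclass

    @dataclass
    class Observation:
        key: str        # model element the observation tests
        value: float    # measured value

    def mbtr_filter(model: dict, obs: list, tol: float = 0.05, verify_frac: float = 0.1):
        """Return (corrections, verification_sample): only model-violating data
        is transmitted, plus a random sample used to audit the onboard engine."""
        corrections = [o for o in obs if abs(o.value - model.get(o.key, 0.0)) > tol]
        verification = random.sample(obs, max(1, int(len(obs) * verify_frac)))
        return corrections, verification

    model = {"surface_albedo": 0.32, "dust_opacity": 0.15}
    obs = [Observation("surface_albedo", 0.33), Observation("dust_opacity", 0.40)]
    send, audit = mbtr_filter(model, obs)
    print([o.key for o in send])  # only the significant deviation is transmitted
    ```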

  15. Benchmark tests for a Formula SAE Student car prototyping

    NASA Astrophysics Data System (ADS)

    Mariasiu, Florin

    2011-12-01

    Aerodynamic characteristics of a vehicle are important elements in its design and construction. A low drag coefficient brings significant fuel savings and increased engine power efficiency. In designing and developing vehicles through computer simulation, dedicated CFD (Computational Fluid Dynamics) software packages are used to determine the vehicle's aerodynamic characteristics. However, the results obtained by this faster and cheaper method are validated by experiments in wind tunnel tests, which are expensive and rely on complex testing equipment at relatively high cost. Therefore, the emergence and development of new low-cost testing methods to validate CFD simulation results would bring great economic benefits to the vehicle prototyping process. This paper presents the initial development process of a Formula SAE Student race-car prototype using CFD simulation and also presents a measurement system based on low-cost sensors through which the CFD simulation results were experimentally validated. The CFD software package used for simulation was SolidWorks with the FloXpress add-on, and the experimental measurement system was built using four piezoresistive FlexiForce force sensors.

  16. On vital aid: the why, what and how of validation

    PubMed Central

    Kleywegt, Gerard J.

    2009-01-01

    Limitations to the data and subjectivity in the structure-determination process may cause errors in macromolecular crystal structures. Appropriate validation techniques may be used to reveal problems in structures, ideally before they are analysed, published or deposited. Additionally, such techniques may be used a posteriori to assess the (relative) merits of a model by potential users. Weak validation methods and statistics assess how well a model reproduces the information that was used in its construction (i.e. experimental data and prior knowledge). Strong methods and statistics, on the other hand, test how well a model predicts data or information that were not used in the structure-determination process. These may be data that were excluded from the process on purpose, general knowledge about macromolecular structure, information about the biological role and biochemical activity of the molecule under study or its mutants or complexes and predictions that are based on the model and that can be tested experimentally. PMID:19171968

  17. Issues and approach to develop validated analysis tools for hypersonic flows: One perspective

    NASA Technical Reports Server (NTRS)

    Deiwert, George S.

    1993-01-01

    Critical issues concerning the modeling of low density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools, and the activity in the NASA Ames Research Center's Aerothermodynamics Branch is described. Inherent in the process is a strong synergism between ground test and real gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flowfield simulation codes are discussed. These models were partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions is sparse and reliance must be made on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high enthalpy flow facilities, such as shock tubes and ballistic ranges.

  18. Issues and approach to develop validated analysis tools for hypersonic flows: One perspective

    NASA Technical Reports Server (NTRS)

    Deiwert, George S.

    1992-01-01

    Critical issues concerning the modeling of low-density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools. A description of the activity in the Ames Research Center's Aerothermodynamics Branch is also given. Inherent in the process is a strong synergism between ground test and real-gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flow-field simulation codes are discussed. These models have been partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse; reliance must be made on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground-based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high-enthalpy flow facilities, such as shock tubes and ballistic ranges.

  19. Target Detection and Classification Using Seismic and PIR Sensors

    DTIC Science & Technology

    2012-06-01

    …time series analysis via wavelet-based partitioning," Signal Process… …regard, this paper presents a wavelet-based method for target detection and classification. The proposed method has been validated on data sets of… The work reported in this paper makes use of a wavelet-based feature extraction method, called Symbolic Dynamic Filtering (SDF) [12]–[14]. The…

  20. Dynamic Assessment of Reading Difficulties: Predictive and Incremental Validity on Attitude toward Reading and the Use of Dialogue/Participation Strategies in Classroom Activities.

    PubMed

    Navarro, Juan-José; Lara, Laura

    2017-01-01

    Dynamic Assessment (DA) has been shown to have more predictive value than conventional tests for academic performance. However, in relation to reading difficulties, further research is needed to determine the predictive validity of DA for specific aspects of the different processes involved in reading and the differential validity of DA for different subgroups of students with an academic disadvantage. This paper analyzes the implementation of a DA device that evaluates processes involved in reading (EDPL) among 60 students with reading comprehension difficulties between 9 and 16 years of age, of whom 20 have intellectual disabilities, 24 have reading-related learning disabilities, and 16 have socio-cultural disadvantages. We specifically analyze the predictive validity of the EDPL device over attitude toward reading, and the use of dialogue/participation strategies in reading activities in the classroom during the implementation stage. We also analyze whether the EDPL device provides additional information to that obtained with a conventionally applied personal-social adjustment scale (APSL). Results showed that dynamic scores, obtained from the implementation of the EDPL device, significantly predict the studied variables. Moreover, dynamic scores showed a significant incremental validity in relation to predictions based on an APSL scale. In relation to differential validity, the results indicated greater predictive validity of DA for students with intellectual disabilities and reading disabilities than for students with socio-cultural disadvantages. Furthermore, the role of metacognition and its relation to the processes of personal-social adjustment in explaining the results is discussed.

  1. Dynamic Assessment of Reading Difficulties: Predictive and Incremental Validity on Attitude toward Reading and the Use of Dialogue/Participation Strategies in Classroom Activities

    PubMed Central

    Navarro, Juan-José; Lara, Laura

    2017-01-01

    Dynamic Assessment (DA) has been shown to have more predictive value than conventional tests for academic performance. However, in relation to reading difficulties, further research is needed to determine the predictive validity of DA for specific aspects of the different processes involved in reading and the differential validity of DA for different subgroups of students with an academic disadvantage. This paper analyzes the implementation of a DA device that evaluates processes involved in reading (EDPL) among 60 students with reading comprehension difficulties between 9 and 16 years of age, of whom 20 have intellectual disabilities, 24 have reading-related learning disabilities, and 16 have socio-cultural disadvantages. We specifically analyze the predictive validity of the EDPL device over attitude toward reading, and the use of dialogue/participation strategies in reading activities in the classroom during the implementation stage. We also analyze whether the EDPL device provides additional information to that obtained with a conventionally applied personal-social adjustment scale (APSL). Results showed that dynamic scores, obtained from the implementation of the EDPL device, significantly predict the studied variables. Moreover, dynamic scores showed a significant incremental validity in relation to predictions based on an APSL scale. In relation to differential validity, the results indicated greater predictive validity of DA for students with intellectual disabilities and reading disabilities than for students with socio-cultural disadvantages. Furthermore, the role of metacognition and its relation to the processes of personal-social adjustment in explaining the results is discussed. PMID:28243215

  2. Attractor Dynamics and Semantic Neighborhood Density: Processing Is Slowed by Near Neighbors and Speeded by Distant Neighbors

    ERIC Educational Resources Information Center

    Mirman, Daniel; Magnuson, James S.

    2008-01-01

    The authors investigated semantic neighborhood density effects on visual word processing to examine the dynamics of activation and competition among semantic representations. Experiment 1 validated feature-based semantic representations as a basis for computing semantic neighborhood density and suggested that near and distant neighbors have…

  3. Validation of learning style measures: implications for medical education practice.

    PubMed

    Chapman, Dane M; Calhoun, Judith G

    2006-06-01

    It is unclear which learners would most benefit from the more individualised, student-structured, interactive approaches characteristic of problem-based and computer-assisted learning. The validity of learning style measures is uncertain, and there is no unifying learning style construct identified to predict such learners. This study was conducted to validate learning style constructs and to identify the learners most likely to benefit from problem-based and computer-assisted curricula. Using a cross-sectional design, 3 established learning style inventories were administered to 97 post-Year 2 medical students. Cognitive personality was measured by the Group Embedded Figures Test, information processing by the Learning Styles Inventory, and instructional preference by the Learning Preference Inventory. The 11 subscales from the 3 inventories were factor-analysed to identify common learning constructs and to verify construct validity. Concurrent validity was determined by intercorrelations of the 11 subscales. A total of 94 pre-clinical medical students completed all 3 inventories. Five meaningful learning style constructs were derived from the 11 subscales: student- versus teacher-structured learning; concrete versus abstract learning; passive versus active learning; individual versus group learning, and field-dependence versus field-independence. The concurrent validity of 10 of 11 subscales was supported by correlation analysis. Medical students most likely to thrive in a problem-based or computer-assisted learning environment would be expected to score highly on abstract, active and individual learning constructs and would be more field-independent. Learning style measures were validated in a medical student population and learning constructs were established for identifying learners who would most likely benefit from a problem-based or computer-assisted curriculum.

  4. Development and Validation of a Shear Punch Test Fixture

    DTIC Science & Technology

    2013-08-01

    …composites (MMC) manufactured by friction stir processing (FSP) that are being developed as part of a Technology Investment Fund (TIF) project, as the… …leading a team of government departments and academics to develop a friction stir processing (FSP) based procedure to create metal matrix composite… …friction stir process to fabricate surface metal matrix composites in aluminum alloys for potential application in light armoured vehicles. The…

  5. Developing 3D microscopy with CLARITY on human brain tissue: Towards a tool for informing and validating MRI-based histology.

    PubMed

    Morawski, Markus; Kirilina, Evgeniya; Scherf, Nico; Jäger, Carsten; Reimann, Katja; Trampel, Robert; Gavriilidis, Filippos; Geyer, Stefan; Biedermann, Bernd; Arendt, Thomas; Weiskopf, Nikolaus

    2017-11-28

    Recent breakthroughs in magnetic resonance imaging (MRI) enabled quantitative relaxometry and diffusion-weighted imaging with sub-millimeter resolution. Combined with biophysical models of MR contrast the emerging methods promise in vivo mapping of cyto- and myelo-architectonics, i.e., in vivo histology using MRI (hMRI) in humans. The hMRI methods require histological reference data for model building and validation. This is currently provided by MRI on post mortem human brain tissue in combination with classical histology on sections. However, this well established approach is limited to qualitative 2D information, while a systematic validation of hMRI requires quantitative 3D information on macroscopic voxels. We present a promising histological method based on optical 3D imaging combined with a tissue clearing method, Clear Lipid-exchanged Acrylamide-hybridized Rigid Imaging compatible Tissue hYdrogel (CLARITY), adapted for hMRI validation. Adapting CLARITY to the needs of hMRI is challenging due to poor antibody penetration into large sample volumes and high opacity of aged post mortem human brain tissue. In a pilot experiment we achieved transparency of up to 8 mm-thick and immunohistochemical staining of up to 5 mm-thick post mortem brain tissue by a combination of active and passive clearing, prolonged clearing and staining times. We combined 3D optical imaging of the cleared samples with tailored image processing methods. We demonstrated the feasibility for quantification of neuron density, fiber orientation distribution and cell type classification within a volume with size similar to a typical MRI voxel. The presented combination of MRI, 3D optical microscopy and image processing is a promising tool for validation of MRI-based microstructure estimates. Copyright © 2017 The Authors. Published by Elsevier Inc. All rights reserved.

  6. An approach for representing sensor data to validate alerts in Ambient Assisted Living.

    PubMed

    Muñoz, Andrés; Serrano, Emilio; Villa, Ana; Valdés, Mercedes; Botía, Juan A

    2012-01-01

    The mainstream of research in Ambient Assisted Living (AAL) is devoted to developing intelligent systems for processing the data collected through artificial sensing. Besides this, there are other elements that must be considered to foster the adoption of AAL solutions in real environments. In this paper we focus on the problem of designing interfaces between caregivers and AAL systems. We present an alert management tool that supports carers in their task of validating alarms raised by the system. It generates text-based explanations, obtained through an argumentation process, of the causes leading to alarm activation, along with graphical sensor information and 3D models, thus offering complementary types of information. Moreover, a guideline for using the tool when validating alerts is also provided. Finally, the functionality of the proposed tool is demonstrated through two real alert cases.

  7. Capturing molecular multimode relaxation processes in excitable gases based on decomposition of acoustic relaxation spectra

    NASA Astrophysics Data System (ADS)

    Zhu, Ming; Liu, Tingting; Wang, Shu; Zhang, Kesheng

    2017-08-01

    Existing two-frequency reconstructive methods can only capture primary (single) molecular relaxation processes in excitable gases. In this paper, we present a reconstructive method based on the novel decomposition of frequency-dependent acoustic relaxation spectra to capture the entire molecular multimode relaxation process. This decomposition of acoustic relaxation spectra is developed from the frequency-dependent effective specific heat, indicating that a multi-relaxation process is the sum of the interior single-relaxation processes. Based on this decomposition, we can reconstruct the entire multi-relaxation process by capturing the relaxation times and relaxation strengths of N interior single-relaxation processes, using the measurements of acoustic absorption and sound speed at 2N frequencies. Experimental data for the gas mixtures CO2-N2 and CO2-O2 validate our decomposition and reconstruction approach.
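
    A minimal sketch of the general idea, assuming a Debye-type sum of single-relaxation absorption terms and an ordinary least-squares fit (the paper's exact reconstruction differs); all frequencies and process parameters below are synthetic:

    ```python
    import numpy as np
    from scipy.optimize import least_squares

    def multi_relaxation_absorption(f, taus, strengths):
        """Absorption-per-wavelength of N superposed single-relaxation
        processes, each a Debye-type term peaking at f = 1/(2*pi*tau)."""
        f = np.asarray(f)[:, None]
        x = 2 * np.pi * f * np.asarray(taus)[None, :]
        return (np.asarray(strengths)[None, :] * x / (1 + x**2)).sum(axis=1)

    # Synthetic two-process gas: recover (tau, strength) pairs from
    # measurements at 2N = 4 frequencies, matching the paper's counting.
    true_taus, true_eps = [1e-6, 5e-5], [0.8, 0.3]
    freqs = np.array([2e3, 2e4, 2e5, 2e6])
    measured = multi_relaxation_absorption(freqs, true_taus, true_eps)

    def residual(p):
        return multi_relaxation_absorption(freqs, p[:2], p[2:]) - measured

    fit = least_squares(residual, x0=[1e-5, 1e-4, 0.5, 0.5],
                        bounds=([1e-8, 1e-8, 0, 0], [1e-2, 1e-2, 2, 2]))
    print(fit.x)  # fitted (tau1, tau2, eps1, eps2)
    ```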

  8. American Board of Orthodontics: Time for change.

    PubMed

    Chung, Chun-Hsi; Tadlock, Larry P; Barone, Nicholas; Pangrazio-Kulbersh, Valmy; Sabott, David G; Foley, Patrick F; Trulove, Timothy S; Park, Jae Hyun; Dugoni, Steven A

    2018-03-01

    The American Board of Orthodontics (ABO) works to certify orthodontists in a fair, reliable, and valid manner. The process must examine an orthodontist's knowledge, abilities, and critical thinking skills to ensure that each certified orthodontist has the expertise to provide the highest level of patient care. Many medical specialty boards and 4 American Dental Association specialty boards use scenario-based testing for board certification. Changing to a scenario-based clinical examination will allow the ABO to test more orthodontists. The new process will not result in an easier examination; standards will not be lowered. It will offer an improved testing method that will be fair, valid, and reliable for the specialty of orthodontics while increasing accessibility and complementing residency curricula. The ABO's written examination will remain as it is. Copyright © 2018 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.

  9. EEG feature selection method based on decision tree.

    PubMed

    Duan, Lijuan; Ge, Hui; Ma, Wei; Miao, Jun

    2015-01-01

    This paper aims to solve the automated feature selection problem in brain computer interfaces (BCI). In order to automate the feature selection process, we proposed a novel EEG feature selection method based on a decision tree (DT). During electroencephalogram (EEG) signal processing, a feature extraction method based on principal component analysis (PCA) was used, and the selection process based on the decision tree was performed by searching the feature space and automatically selecting optimal features. Considering that EEG signals are a series of non-linear signals, a generalized linear classifier named the support vector machine (SVM) was chosen. In order to test the validity of the proposed method, we applied the EEG feature selection method based on the decision tree to BCI Competition II dataset Ia, and the experiment showed encouraging results.
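
    A minimal sketch of the pipeline described (PCA extraction, decision-tree-driven selection, SVM classification), using scikit-learn on synthetic stand-in data rather than the BCI Competition II dataset:

    ```python
    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.standard_normal((200, 64))   # stand-in for EEG feature vectors
    y = rng.integers(0, 2, 200)          # binary task labels

    # Step 1: PCA-based feature extraction
    Z = PCA(n_components=20).fit_transform(X)

    # Step 2: a decision tree ranks the components; keep the most informative
    tree = DecisionTreeClassifier(random_state=0).fit(Z, y)
    top = np.argsort(tree.feature_importances_)[::-1][:8]

    # Step 3: SVM on the automatically selected features
    clf = SVC(kernel="linear").fit(Z[:, top], y)
    print(clf.score(Z[:, top], y))  # training score on the toy data
    ```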

  10. DOE-DARPA High-Performance Corrosion-Resistant Materials (HPCRM), Annual HPCRM Team Meeting & Technical Review

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Farmer, J; Brown, B; Bayles, B

    The overall goal is to develop high-performance corrosion-resistant iron-based amorphous-metal coatings for prolonged trouble-free use in very aggressive environments: seawater & hot geothermal brines. The specific technical objectives are: (1) Synthesize Fe-based amorphous-metal coating with corrosion resistance comparable/superior to Ni-based Alloy C-22; (2) Establish processing parameter windows for applying and controlling coating attributes (porosity, density, bonding); (3) Assess possible cost savings through substitution of Fe-based material for more expensive Ni-based Alloy C-22; (4) Demonstrate practical fabrication processes; (5) Produce quality materials and data with complete traceability for nuclear applications; and (6) Develop, validate and calibrate computational models to enable life prediction and process design.

  11. The Application of V&V within Reuse-Based Software Engineering

    NASA Technical Reports Server (NTRS)

    Addy, Edward

    1996-01-01

    Verification and Validation (V&V) is performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors as early as possible during the development process. Early discovery is important in order to minimize the cost and other impacts of correcting these errors. In reuse-based software engineering, decisions on the requirements, design and even implementation of domain assets can be made prior to beginning development of a specific system. In order to bring the effectiveness of V&V to bear within reuse-based software engineering, V&V must be incorporated within the domain engineering process.

  12. Risk perception and information processing: the development and validation of a questionnaire to assess self-reported information processing.

    PubMed

    Smerecnik, Chris M R; Mesters, Ilse; Candel, Math J J M; De Vries, Hein; De Vries, Nanne K

    2012-01-01

    The role of information processing in understanding people's responses to risk information has recently received substantial attention. One limitation of this research concerns the unavailability of a validated questionnaire of information processing. This article presents two studies in which we describe the development and validation of the Information-Processing Questionnaire to meet that need. Study 1 describes the development and initial validation of the questionnaire. Participants were randomized to either a systematic processing or a heuristic processing condition after which they completed a manipulation check and the initial 15-item questionnaire and again two weeks later. The questionnaire was subjected to factor reliability and validity analyses on both measurement times for purposes of cross-validation of the results. A two-factor solution was observed representing a systematic processing and a heuristic processing subscale. The resulting scale showed good reliability and validity, with the systematic condition scoring significantly higher on the systematic subscale and the heuristic processing condition significantly higher on the heuristic subscale. Study 2 sought to further validate the questionnaire in a field study. Results of the second study corresponded with those of Study 1 and provided further evidence of the validity of the Information-Processing Questionnaire. The availability of this information-processing scale will be a valuable asset for future research and may provide researchers with new research opportunities. © 2011 Society for Risk Analysis.

  13. Validating a new methodology for strain estimation from cardiac cine MRI

    NASA Astrophysics Data System (ADS)

    Elnakib, Ahmed; Beache, Garth M.; Gimel'farb, Georgy; Inanc, Tamer; El-Baz, Ayman

    2013-10-01

    This paper focuses on validating a novel framework for estimating the functional strain from cine cardiac magnetic resonance imaging (CMRI). The framework consists of three processing steps. First, the left ventricle (LV) wall borders are segmented using a level-set based deformable model. Second, the points on the wall borders are tracked during the cardiac cycle based on solving the Laplace equation between the LV edges. Finally, the circumferential and radial strains are estimated at the inner, mid-wall, and outer borders of the LV wall. The proposed framework is validated using synthetic phantoms of the material strains that account for the physiological features and the LV response during the cardiac cycle. Experimental results on simulated phantom images confirm the accuracy and robustness of our method.
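
    As an illustration of the final step only, circumferential strain can be computed from tracked border points as the relative change in contour length; a minimal sketch with synthetic circular borders (the segmentation and Laplace-based tracking steps are assumed already done):

    ```python
    import numpy as np

    def perimeter(points: np.ndarray) -> float:
        """Closed-contour length from ordered (x, y) border points."""
        d = np.diff(np.vstack([points, points[:1]]), axis=0)
        return float(np.hypot(d[:, 0], d[:, 1]).sum())

    def circumferential_strain(ref: np.ndarray, deformed: np.ndarray) -> float:
        """Lagrangian strain of a tracked LV border relative to the reference phase."""
        return (perimeter(deformed) - perimeter(ref)) / perimeter(ref)

    theta = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    ed = np.c_[30 * np.cos(theta), 30 * np.sin(theta)]  # end-diastolic border
    es = np.c_[24 * np.cos(theta), 24 * np.sin(theta)]  # end-systolic border
    print(circumferential_strain(ed, es))  # -0.2, i.e., 20% circumferential shortening
    ```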

  14. A physics based method for combining multiple anatomy models with application to medical simulation.

    PubMed

    Zhu, Yanong; Magee, Derek; Ratnalingam, Rishya; Kessel, David

    2009-01-01

    We present a physics based approach to the construction of anatomy models by combining components from different sources; different image modalities, protocols, and patients. Given an initial anatomy, a mass-spring model is generated which mimics the physical properties of the solid anatomy components. This helps maintain valid spatial relationships between the components, as well as the validity of their shapes. Combination can be either replacing/modifying an existing component, or inserting a new component. The external forces that deform the model components to fit the new shape are estimated from Gradient Vector Flow and Distance Transform maps. We demonstrate the applicability and validity of the described approach in the area of medical simulation, by showing the processes of non-rigid surface alignment, component replacement, and component insertion.

  15. Monte Carlo-based validation of neutronic methodology for EBR-II analyses

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Liaw, J.R.; Finck, P.J.

    1993-01-01

    The continuous-energy Monte Carlo code VIM (Ref. 1) has been validated extensively over the years against fast critical experiments and other neutronic analysis codes. A high degree of confidence in VIM for predicting reactor physics parameters has been firmly established. This paper presents a numerical validation of two conventional multigroup neutronic analysis codes, DIF3D (Ref. 4) and VARIANT (Ref. 5), against VIM for two Experimental Breeder Reactor II (EBR-II) core loadings in detailed three-dimensional hexagonal-z geometry. The DIF3D code is based on nodal diffusion theory, and it is used in calculations for day-to-day reactor operations, whereas the VARIANT code is based on nodal transport theory and is used with increasing frequency for specific applications. Both DIF3D and VARIANT rely on multigroup cross sections generated from ENDF/B-V by the ETOE-2/MC²-II/SDX (Ref. 6) code package. Hence, this study also validates the multigroup cross-section processing methodology against the continuous-energy approach used in VIM.

  16. Bicarbonate of soda paint stripping process validation and material characterization

    NASA Technical Reports Server (NTRS)

    Haas, Michael N.

    1995-01-01

    The Aircraft Production Division at San Antonio Air Logistics Center has conducted extensive investigation into the replacement of hazardous chemicals in aircraft component cleaning, degreasing, and depainting. One of the most viable solutions is process substitution utilizing abrasive techniques. SA-ALC has incorporated the use of Bicarbonate of Soda Blasting as one such substitution. Previous utilization of methylene chloride based chemical strippers and carbon removal agents has been replaced by a walk-in blast booth in which we remove carbon from engine nozzles and various gas turbine engine parts, depaint cowlings, and perform various other functions on a variety of parts. Prior to implementation of this new process, validation of the process was performed, and materials and waste stream characterization studies were conducted. These characterization studies examined the effects of the blasting process on the integrity of the thin-skinned aluminum substrates, the effects of the process on both air emissions and effluent disposal, and the effects on the personnel exposed to the process.

  17. Using Android-Based Educational Game for Learning Colloid Material

    NASA Astrophysics Data System (ADS)

    Sari, S.; Anjani, R.; Farida, I.; Ramdhani, M. A.

    2017-09-01

    This research is based on the importance of developing students' chemical literacy on Colloid material using Android-based educational game media. Educational game products are developed through a research and development design. In the analysis phase, material analysis is performed to generate concept maps, determine chemical literacy indicators and game strategies, and set game paths. In the design phase, product packaging is carried out, and then validation and feasibility tests are performed. The research produced an Android-based educational game with the following characteristics: Colloid material is presented in 12 game levels in the form of questions and challenges, and the game presents visualizations of discourse, images and animation contextually to develop thinking processes and attitudes. Based on the analysis of the validation and trial results, the product is considered feasible to use.

  18. Automatic and controlled components of judgment and decision making.

    PubMed

    Ferreira, Mario B; Garcia-Marques, Leonel; Sherman, Steven J; Sherman, Jeffrey W

    2006-11-01

    The categorization of inductive reasoning into largely automatic processes (heuristic reasoning) and controlled analytical processes (rule-based reasoning) put forward by dual-process approaches to judgment under uncertainty (e.g., K. E. Stanovich & R. F. West, 2000) has been primarily a matter of assumption, with a scarcity of direct empirical findings supporting it. The present authors use the process dissociation procedure (L. L. Jacoby, 1991) to provide convergent evidence validating a dual-process perspective on judgment under uncertainty based on the independent contributions of heuristic and rule-based reasoning. Process dissociations based on experimental manipulation of variables were derived from the most relevant theoretical properties typically used to contrast the two forms of reasoning. These include processing goals (Experiment 1), cognitive resources (Experiment 2), priming (Experiment 3), and formal training (Experiment 4); the results consistently support the authors' perspective. They conclude that judgment under uncertainty is neither an automatic nor a controlled process but that it reflects both processes, with each making independent contributions.
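
    For context, the process dissociation procedure in its canonical form estimates the controlled (C) and automatic (A) contributions from performance under inclusion and exclusion instructions. A sketch of the standard equations (not necessarily the authors' exact notation):

    ```latex
    \begin{aligned}
    P(\text{inclusion}) &= C + (1 - C)\,A \\
    P(\text{exclusion}) &= (1 - C)\,A \\
    \Rightarrow\quad C &= P(\text{inclusion}) - P(\text{exclusion}), \qquad
    A = \frac{P(\text{exclusion})}{1 - C}.
    \end{aligned}
    ```

    For example, inclusion and exclusion rates of 0.70 and 0.30 give C = 0.70 - 0.30 = 0.40 and A = 0.30 / 0.60 = 0.50.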

  19. Achieving accurate compound concentration in cell-based screening: validation of acoustic droplet ejection technology.

    PubMed

    Grant, Richard John; Roberts, Karen; Pointon, Carly; Hodgson, Clare; Womersley, Lynsey; Jones, Darren Craig; Tang, Eric

    2009-06-01

    Compound handling is a fundamental and critical step in compound screening throughout the drug discovery process. Although most compound-handling processes within compound management facilities use 100% DMSO solvent, conventional methods of manual or robotic liquid-handling systems in screening workflows often perform dilutions in aqueous solutions to maintain solvent tolerance of the biological assay. However, the use of aqueous media in these applications can lead to suboptimal data quality due to compound carryover or precipitation during the dilution steps. In cell-based assays, this effect is worsened by the unpredictable physical characteristics of compounds and the low DMSO tolerance within the assay. In some cases, the conventional approaches using manual or automated liquid handling resulted in variable IC(50) dose responses. This study examines the cause of this variability and evaluates the accuracy of screening data in these case studies. A number of liquid-handling options have been explored to address the issues and establish a generic compound-handling workflow to support cell-based screening across our screening functions. The authors discuss the validation of the Labcyte Echo reformatter as an effective noncontact solution for generic compound-handling applications against diverse compound classes using triple-quad liquid chromatography/mass spectrometry. The successful validation and implementation challenges of this technology for direct dosing onto cells in cell-based screening is discussed.

  20. Knowledge Acquisition, Validation, and Maintenance in a Planning System for Automated Image Processing

    NASA Technical Reports Server (NTRS)

    Chien, Steve A.

    1996-01-01

    A key obstacle hampering the fielding of AI planning applications is the considerable expense of developing, verifying, updating, and maintaining the planning knowledge base (KB). Planning systems must be able to compare favorably in terms of software lifecycle costs to other means of automation such as scripts or rule-based expert systems. This paper describes a planning application for automated image processing and our overall approach to knowledge acquisition for this application.

  1. Feasibility, reliability, and validity of a smartphone based application for the assessment of cognitive function in the elderly.

    PubMed

    Brouillette, Robert M; Foil, Heather; Fontenot, Stephanie; Correro, Anthony; Allen, Ray; Martin, Corby K; Bruce-Keller, Annadora J; Keller, Jeffrey N

    2013-01-01

    While considerable knowledge has been gained through the use of established cognitive and motor assessment tools, there is a considerable interest and need for the development of a battery of reliable and validated assessment tools that provide real-time and remote analysis of cognitive and motor function in the elderly. Smartphones appear to be an obvious choice for the development of these "next-generation" assessment tools for geriatric research, although to date no studies have reported on the use of smartphone-based applications for the study of cognition in the elderly. The primary focus of the current study was to assess the feasibility, reliability, and validity of a smartphone-based application for the assessment of cognitive function in the elderly. A total of 57 non-demented elderly individuals were administered a newly developed smartphone application-based Color-Shape Test (CST) in order to determine its utility in measuring cognitive processing speed in the elderly. Validity of this novel cognitive task was assessed by correlating performance on the CST with scores on widely accepted assessments of cognitive function. Scores on the CST were significantly correlated with global cognition (Mini-Mental State Exam: r = 0.515, p<0.0001) and multiple measures of processing speed and attention (Digit Span: r = 0.427, p<0.0001; Trail Making Test: r = -0.651, p<0.00001; Digit Symbol Test: r = 0.508, p<0.0001). The CST was not correlated with naming and verbal fluency tasks (Boston Naming Test, Vegetable/Animal Naming) or memory tasks (Logical Memory Test). Test-retest reliability was observed to be significant (r = 0.726; p = 0.02). Together, these data are the first to demonstrate the feasibility, reliability, and validity of using a smartphone-based application for the purpose of assessing cognitive function in the elderly. The importance of these findings for the establishment of smartphone-based assessment batteries of cognitive and motor function in the elderly is discussed.

  2. The process of processing: exploring the validity of Neisser's perceptual cycle model with accounts from critical decision-making in the cockpit.

    PubMed

    Plant, Katherine L; Stanton, Neville A

    2015-01-01

    The perceptual cycle model (PCM) has been widely applied in ergonomics research in domains including road, rail and aviation. The PCM assumes that information processing occurs in a cyclical manner drawing on top-down and bottom-up influences to produce perceptual exploration and actions. However, the validity of the model has not been addressed. This paper explores the construct validity of the PCM in the context of aeronautical decision-making. The critical decision method was used to interview 20 helicopter pilots about critical decision-making. The data were qualitatively analysed using an established coding scheme, and composite PCMs for incident phases were constructed. It was found that the PCM provided a mutually exclusive and exhaustive classification of the information-processing cycles for dealing with critical incidents. However, a counter-cycle was also discovered which has been attributed to skill-based behaviour, characteristic of experts. The practical applications and future research questions are discussed. Practitioner Summary: This paper explores whether information processing, when dealing with critical incidents, occurs in the manner anticipated by the perceptual cycle model. In addition to the traditional processing cycle, a reciprocal counter-cycle was found. This research can be utilised by those who use the model as an accident analysis framework.

  3. Validation of multisource electronic health record data: an application to blood transfusion data.

    PubMed

    Hoeven, Loan R van; Bruijne, Martine C de; Kemper, Peter F; Koopman, Maria M W; Rondeel, Jan M M; Leyte, Anja; Koffijberg, Hendrik; Janssen, Mart P; Roes, Kit C B

    2017-07-14

    Although data from electronic health records (EHR) are often used for research purposes, systematic validation of these data prior to their use is not standard practice. Existing validation frameworks discuss validity concepts without translating these into practical implementation steps or addressing the potential influence of linking multiple sources. Therefore, we developed a practical approach for validating routinely collected data from multiple sources and applied it to a blood transfusion data warehouse to evaluate its usability in practice. The approach consists of identifying existing validation frameworks for EHR data or linked data, selecting validity concepts from these frameworks and establishing quantifiable validity outcomes for each concept. The approach distinguishes external validation concepts (e.g. concordance with external reports, previous literature and expert feedback) and internal consistency concepts which use expected associations within the dataset itself (e.g. completeness, uniformity and plausibility). In an example case, the selected concepts were applied to a transfusion dataset and specified in more detail. Application of the approach to a transfusion dataset resulted in a structured overview of data validity aspects. This allowed improvement of these aspects through further processing of the data and in some cases adjustment of the data extraction. For example, the proportion of transfused products that could not be linked to the corresponding issued products initially was 2.2% but could be improved by adjusting data extraction criteria to 0.17%. This stepwise approach for validating linked multisource data provides a basis for evaluating data quality and enhancing interpretation. When the process of data validation is adopted more broadly, this contributes to increased transparency and greater reliability of research based on routinely collected electronic health records.
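
    A minimal sketch of one such internal consistency check, the linkage completeness between issued and transfused products (all identifiers below are hypothetical):

    ```python
    import pandas as pd

    # Hypothetical extracts: products issued by the blood bank vs. products
    # recorded as transfused in the clinical system.
    issued = pd.DataFrame({"unit_id": ["U1", "U2", "U3", "U4", "U5"]})
    transfused = pd.DataFrame({"unit_id": ["U1", "U2", "U6"]})

    # Internal-consistency check: what fraction of transfused products
    # cannot be linked back to an issued product?
    linked = transfused["unit_id"].isin(issued["unit_id"])
    unlinked_rate = 1 - linked.mean()
    print(f"{unlinked_rate:.1%} of transfused units unlinked")  # flags extraction issues
    ```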

  4. Model-based Systems Engineering: Creation and Implementation of Model Validation Rules for MOS 2.0

    NASA Technical Reports Server (NTRS)

    Schmidt, Conrad K.

    2013-01-01

    Model-based Systems Engineering (MBSE) is an emerging modeling application that is used to enhance the system development process. MBSE allows for the centralization of project and system information that would otherwise be stored in extraneous locations, yielding better communication, expedited document generation and increased knowledge capture. Based on MBSE concepts and the employment of the Systems Modeling Language (SysML), extremely large and complex systems can be modeled from conceptual design through all system lifecycles. The Operations Revitalization Initiative (OpsRev) seeks to leverage MBSE to modernize the aging Advanced Multi-Mission Operations Systems (AMMOS) into the Mission Operations System 2.0 (MOS 2.0). The MOS 2.0 will be delivered in a series of conceptual and design models and documents built using the modeling tool MagicDraw. To ensure model completeness and cohesiveness, it is imperative that the MOS 2.0 models adhere to the specifications, patterns and profiles of the Mission Service Architecture Framework, thus leading to the use of validation rules. This paper outlines the process by which validation rules are identified, designed, implemented and tested. Ultimately, these rules provide the ability to maintain model correctness and synchronization in a simple, quick and effective manner, thus allowing the continuation of project and system progress.

  5. Gene network biological validity based on gene-gene interaction relevance.

    PubMed

    Gómez-Vela, Francisco; Díaz-Díaz, Norberto

    2014-01-01

    In recent years, gene networks have become one of the most useful tools for modeling biological processes. Many gene network inference algorithms have been developed as techniques for extracting knowledge from gene expression data. Ensuring the reliability of the inferred gene relationships is a crucial task in any study in order to prove that the algorithms used are precise. Usually, this validation process can be carried out using prior biological knowledge. The metabolic pathways stored in KEGG are one of the most widely used knowledge sources for analyzing relationships between genes. This paper introduces a new methodology, GeneNetVal, to assess the biological validity of gene networks based on the relevance of the gene-gene interactions stored in KEGG metabolic pathways. Hence, a complete KEGG pathway conversion into a gene association network and a new matching distance based on gene-gene interaction relevance are proposed. The performance of GeneNetVal was established with three different experiments. Firstly, our proposal is tested in a comparative ROC analysis. Secondly, a randomness study is presented to show the behavior of GeneNetVal when noise is increased in the input network. Finally, the ability of GeneNetVal to detect the biological functionality of the network is shown.
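
    As a toy illustration of relevance-weighted matching against a KEGG-derived association network (the weights below are hypothetical, and GeneNetVal's actual distance is more elaborate):

    ```python
    # Score an inferred network by the relevance-weighted fraction of
    # KEGG-derived edges it recovers.
    def match_score(inferred: set, kegg_weights: dict) -> float:
        total = sum(kegg_weights.values())
        hit = sum(w for edge, w in kegg_weights.items() if edge in inferred)
        return hit / total if total else 0.0

    kegg = {frozenset({"geneA", "geneB"}): 1.0,   # direct interaction
            frozenset({"geneB", "geneC"}): 0.5}   # indirect, lower relevance
    inferred = {frozenset({"geneA", "geneB"}), frozenset({"geneA", "geneC"})}
    print(match_score(inferred, kegg))  # 1.0 / 1.5 ≈ 0.67
    ```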

  6. Grid workflow validation using ontology-based tacit knowledge: A case study for quantitative remote sensing applications

    NASA Astrophysics Data System (ADS)

    Liu, Jia; Liu, Longli; Xue, Yong; Dong, Jing; Hu, Yingcui; Hill, Richard; Guang, Jie; Li, Chi

    2017-01-01

    Workflow for remote sensing quantitative retrieval is the "bridge" between Grid services and Grid-enabled application of remote sensing quantitative retrieval. Workflow averts low-level implementation details of the Grid and hence enables users to focus on higher levels of application. The workflow for remote sensing quantitative retrieval plays an important role in remote sensing Grid and Cloud computing services, which can support the modelling, construction and implementation of large-scale complicated applications of remote sensing science. The validation of workflow is important in order to support the large-scale sophisticated scientific computation processes with enhanced performance and to minimize potential waste of time and resources. To research the semantic correctness of user-defined workflows, in this paper, we propose a workflow validation method based on tacit knowledge research in the remote sensing domain. We first discuss the remote sensing model and metadata. Through detailed analysis, we then discuss the method of extracting the domain tacit knowledge and expressing the knowledge with ontology. Additionally, we construct the domain ontology with Protégé. Through our experimental study, we verify the validity of this method in two ways, namely data source consistency error validation and parameters matching error validation.
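
    A toy sketch of the two validations described, data-source consistency and parameter matching between chained services, with the ontology lookups faked as plain dictionaries (service names and types below are hypothetical):

    ```python
    # Hypothetical service specifications that an ontology would normally provide.
    SERVICE_SPEC = {
        "aerosol_retrieval": {"inputs": {"radiance": "MODIS_L1B"},
                              "outputs": {"aod": "AOD_grid"}},
        "aod_mosaic":        {"inputs": {"aod": "AOD_grid"},
                              "outputs": {"mosaic": "AOD_mosaic"}},
    }

    def validate_chain(chain):
        """Flag consecutive steps whose declared input type does not match
        any output type of the upstream step."""
        errors = []
        for up, down in zip(chain, chain[1:]):
            out_types = set(SERVICE_SPEC[up]["outputs"].values())
            in_types = set(SERVICE_SPEC[down]["inputs"].values())
            if not out_types & in_types:
                errors.append((up, down))
        return errors

    print(validate_chain(["aerosol_retrieval", "aod_mosaic"]))  # [] -> chain is valid
    ```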

  7. The Designing of CALM (Computer Anxiety and Learning Measure): Validation of a Multidimensional Measure of Anxiety and Cognitions Relating to Adult Learning of Computing Skills Using Structural Equation Modeling.

    ERIC Educational Resources Information Center

    McInerney, Valentina; Marsh, Herbert W.; McInerney, Dennis M.

    This paper discusses the process through which a powerful multidimensional measure of affect and cognition in relation to adult learning of computing skills was derived from its early theoretical stages to its validation using structural equation modeling. The discussion emphasizes the importance of ensuring a strong substantive base from which to…

  8. A 3D Human-Machine Integrated Design and Analysis Framework for Squat Exercises with a Smith Machine.

    PubMed

    Lee, Haerin; Jung, Moonki; Lee, Ki-Kwang; Lee, Sang Hun

    2017-02-06

    In this paper, we propose a three-dimensional design and evaluation framework and process, based on a probabilistic motion synthesis algorithm and a biomechanical analysis system, for the design of Smith machines and squat training programs. Moreover, we implemented a prototype system to validate the proposed framework. The framework consists of an integrated human-machine-environment model as well as a squat motion synthesis system and a biomechanical analysis system. In the design and evaluation process, we created an integrated model in which interactions between the human body and the machine or the ground are modeled as joints with constraints at contact points. Next, we generated Smith squat motion using the motion synthesis program, based on a Gaussian process regression algorithm, with a set of given values for the independent variables. Then, using the biomechanical analysis system, we simulated joint moments and muscle activities from the input of the integrated model and squat motion. We validated the model and algorithm through physical experiments measuring electromyography (EMG) signals, ground forces, and squat motions, as well as through a biomechanical simulation of muscle forces. The proposed approach enables the incorporation of biomechanics in the design process and reduces the need for physical experiments and prototypes in the development of training programs and new Smith machines.
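
    As a minimal sketch of the Gaussian-process-regression step described above, the fragment below fits a GP to synthetic squat data and predicts a joint angle, with uncertainty, for a new value of an independent variable. The variables, data, and kernel are assumptions for illustration; the paper's model synthesizes full motion trajectories.

    ```python
    # Minimal sketch of Gaussian process regression as a motion synthesizer:
    # given training squat depths and a sampled knee angle, predict the angle
    # (with uncertainty) for a new depth. All data are synthetic.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    depth = rng.uniform(0.2, 0.6, size=(30, 1))               # squat depth [m] (synthetic)
    knee = 60.0 + 150.0 * depth[:, 0] + rng.normal(0, 2, 30)  # knee flexion [deg] (synthetic)

    gp = GaussianProcessRegressor(kernel=RBF() + WhiteKernel(), normalize_y=True)
    gp.fit(depth, knee)

    mean, std = gp.predict(np.array([[0.45]]), return_std=True)
    print(f"predicted knee flexion at 0.45 m depth: {mean[0]:.1f} +/- {std[0]:.1f} deg")
    ```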

  9. A 3D Human-Machine Integrated Design and Analysis Framework for Squat Exercises with a Smith Machine

    PubMed Central

    Lee, Haerin; Jung, Moonki; Lee, Ki-Kwang; Lee, Sang Hun

    2017-01-01

    In this paper, we propose a three-dimensional design and evaluation framework and process, based on a probabilistic motion synthesis algorithm and a biomechanical analysis system, for the design of Smith machines and squat training programs. Moreover, we implemented a prototype system to validate the proposed framework. The framework consists of an integrated human–machine–environment model as well as a squat motion synthesis system and a biomechanical analysis system. In the design and evaluation process, we created an integrated model in which interactions between the human body and the machine or the ground are modeled as joints with constraints at contact points. Next, we generated Smith squat motion using the motion synthesis program, based on a Gaussian process regression algorithm, with a set of given values for the independent variables. Then, using the biomechanical analysis system, we simulated joint moments and muscle activities from the input of the integrated model and squat motion. We validated the model and algorithm through physical experiments measuring electromyography (EMG) signals, ground forces, and squat motions, as well as through a biomechanical simulation of muscle forces. The proposed approach enables the incorporation of biomechanics in the design process and reduces the need for physical experiments and prototypes in the development of training programs and new Smith machines. PMID:28178184

  10. Migration and validation of non-formal and informal learning in Europe: Inclusion, exclusion or polarisation in the recognition of skills?

    NASA Astrophysics Data System (ADS)

    Souto-Otero, Manuel; Villalba-Garcia, Ernesto

    2015-10-01

    This article explores (1) the degree to which immigrants can be considered dominant groups in the area of validation of non-formal and informal learning and are subject to specific validation measures in 33 European countries; (2) whether country clusters can be identified within Europe with regard to the dominance of immigrants in the area of validation; and (3) whether validation systems are likely to lead to the inclusion of immigrants or foster a process of "devaluation" of their skills and competences in their host countries. Based on the European Inventory on validation of non-formal and informal learning project (chiefly its 2014 update) as well as a review of 124 EU-funded (Lifelong Learning Programme and European Social Fund) validation projects, the authors present the following findings: (1) in the majority of European countries, immigrants are not a dominant group in the area of validation. (2) In terms of country clusters, Central European and Nordic countries tend to consider immigrants a dominant target group for validation to a greater extent than Southern and Eastern European countries. (3) Finally, few initiatives aim to ensure that immigrants' skills and competences are not devalued in their host country, and those initiatives which are in place benefit particularly those defined as "highly skilled" individuals, on the basis of their productive potential. There is, thus, a "low road" and a "high road" to validation, leading to a process of polarisation in the recognition of the skills and competences of immigrants.

  11. Recommendations for standardizing validation procedures assessing physical activity of older persons by monitoring body postures and movements.

    PubMed

    Lindemann, Ulrich; Zijlstra, Wiebren; Aminian, Kamiar; Chastin, Sebastien F M; de Bruin, Eling D; Helbostad, Jorunn L; Bussmann, Johannes B J

    2014-01-10

    Physical activity is an important determinant of health and well-being in older persons and contributes to their social participation and quality of life. Hence, assessment tools are needed to study this physical activity in free-living conditions. Wearable motion sensing technology is used to assess physical activity. However, there is a lack of harmonisation of validation protocols and applied statistics, which makes it hard to compare available and future studies. Therefore, the aim of this paper is to formulate recommendations for assessing the validity of sensor-based activity monitoring in older persons, with a focus on the measurement of body postures and movements. Validation studies of body-worn devices providing parameters on body postures and movements were identified and summarized, and an extensive interactive process between the authors resulted in recommendations about the information on the assessed persons, the technical system, and the analysis of relevant parameters of physical activity, based on a standardized and semi-structured protocol. The recommended protocols can be regarded as a first attempt to standardize validity studies in the area of monitoring physical activity.

  12. Three validation metrics for automated probabilistic image segmentation of brain tumours

    PubMed Central

    Zou, Kelly H.; Wells, William M.; Kikinis, Ron; Warfield, Simon K.

    2005-01-01

    The validity of brain tumour segmentation is an important issue in image processing because it has a direct impact on surgical planning. We examined the segmentation accuracy based on three two-sample validation metrics against the estimated composite latent gold standard, which was derived from several experts’ manual segmentations by an EM algorithm. The distribution functions of the tumour and control pixel data were parametrically assumed to be a mixture of two beta distributions with different shape parameters. We estimated the corresponding receiver operating characteristic curve, Dice similarity coefficient, and mutual information, over all possible decision thresholds. Based on each validation metric, an optimal threshold was then computed via maximization. We illustrated these methods on MR imaging data from nine brain tumour cases of three different tumour types, each consisting of a large number of pixels. The automated segmentation yielded satisfactory accuracy with varied optimal thresholds. The performances of these validation metrics were also investigated via Monte Carlo simulation. Extensions of incorporating spatial correlation structures using a Markov random field model were considered. PMID:15083482
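
    Of the three metrics, the Dice similarity coefficient is the simplest to reproduce. The sketch below thresholds a synthetic probabilistic segmentation against a synthetic gold-standard mask and locates the threshold that maximizes Dice, mirroring the optimization described above; the data are invented.

    ```python
    # Minimal sketch of one of the three validation metrics: the Dice similarity
    # coefficient between a thresholded probabilistic segmentation and a
    # (synthetic) gold-standard mask, swept over decision thresholds.
    import numpy as np

    rng = np.random.default_rng(1)
    gold = rng.random((64, 64)) < 0.3                 # synthetic gold-standard mask
    prob = np.clip(gold + rng.normal(0.0, 0.35, gold.shape), 0.0, 1.0)  # noisy probabilities

    def dice(seg, ref):
        """Dice similarity coefficient between two binary masks."""
        return 2.0 * np.logical_and(seg, ref).sum() / (seg.sum() + ref.sum())

    thresholds = np.linspace(0.05, 0.95, 19)
    scores = [dice(prob >= t, gold) for t in thresholds]
    best = thresholds[int(np.argmax(scores))]
    print(f"optimal threshold {best:.2f}, Dice {max(scores):.3f}")
    ```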

  13. Validation in the Absence of Observed Events

    DOE PAGES

    Lathrop, John; Ezell, Barry

    2015-07-22

    Here our paper addresses the problem of validating models in the absence of observed events, in the area of Weapons of Mass Destruction terrorism risk assessment. We address that problem with a broadened definition of "Validation," based on "backing up" to the reason why modelers and decision makers seek validation, and from that basis we re-define validation as testing how well the model can advise decision makers in terrorism risk management decisions. We develop that into two conditions: validation must be based on cues available in the observable world, and it must focus on what can be done to affect that observable world, i.e., risk management. That in turn leads to two foci: (1) the risk-generating process and (2) the best use of available data. Based on our experience with nine WMD terrorism risk assessment models, we then describe three pitfalls in the best use of available data: SME confidence bias, lack of SME cross-referencing, and problematic initiation rates. Those two foci and three pitfalls provide a basis from which we define validation in this context in terms of four tests. Does the model: (1) capture initiation? (2) capture the sequence of events by which attack scenarios unfold? (3) consider unanticipated scenarios? (4) consider alternative causal chains? Finally, we corroborate our approach against three key validation tests from the DOD literature: Is the model a correct representation of the simuland? To what degree are the model results comparable to the real world? Over what range of inputs are the model results useful?

  14. Improving satellite-based PM2.5 estimates in China using Gaussian processes modeling in a Bayesian hierarchical setting.

    PubMed

    Yu, Wenxi; Liu, Yang; Ma, Zongwei; Bi, Jun

    2017-08-01

    Using satellite-based aerosol optical depth (AOD) measurements and statistical models to estimate ground-level PM2.5 is a promising way to fill the areas that are not covered by ground PM2.5 monitors. The statistical models used in previous studies are primarily Linear Mixed Effects (LME) and Geographically Weighted Regression (GWR) models. In this study, we developed a new regression model between PM2.5 and AOD using Gaussian processes in a Bayesian hierarchical setting. Gaussian processes model the stochastic nature of the spatial random effects, where the mean surface and the covariance function are specified. The spatial stochastic process is incorporated under the Bayesian hierarchical framework to explain the variation of PM2.5 concentrations together with other factors, such as AOD and spatial and non-spatial random effects. We evaluate the results of our model and compare them with those of other, conventional statistical models (GWR and LME) by within-sample model fitting and out-of-sample validation (cross validation, CV). The results show that our model achieves a CV R² of 0.81, reflecting higher accuracy than GWR (0.74) and LME (0.48). Our results indicate that Gaussian process models have the potential to improve the accuracy of satellite-based PM2.5 estimates.
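
    The out-of-sample comparison reported above can be pictured with a small sketch: cross-validated R² for a Gaussian process regressor versus an ordinary linear model on synthetic AOD/PM2.5 data. The features, data, and kernels are invented, and a plain GP regressor stands in for the study's full Bayesian hierarchical model.

    ```python
    # Minimal sketch of the out-of-sample (CV) comparison described above,
    # on synthetic data. A plain GP regressor stands in for the hierarchical
    # Bayesian model; the linear model stands in for LME.
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(2)
    X = rng.uniform(0.0, 1.0, size=(200, 3))   # columns: AOD, longitude, latitude (synthetic)
    pm25 = 40.0 * X[:, 0] + 10.0 * np.sin(6.0 * X[:, 1]) * np.cos(6.0 * X[:, 2]) \
           + rng.normal(0.0, 2.0, 200)         # spatially varying synthetic PM2.5

    models = {
        "Gaussian process": GaussianProcessRegressor(kernel=RBF([1.0] * 3) + WhiteKernel(),
                                                     normalize_y=True),
        "linear (LME-like stand-in)": LinearRegression(),
    }
    for name, model in models.items():
        r2 = cross_val_score(model, X, pm25, cv=10, scoring="r2").mean()
        print(f"{name}: 10-fold CV R^2 = {r2:.2f}")
    ```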

  15. A unified dislocation density-dependent physical-based constitutive model for cold metal forming

    NASA Astrophysics Data System (ADS)

    Schacht, K.; Motaman, A. H.; Prahl, U.; Bleck, W.

    2017-10-01

    Dislocation-density-dependent physical-based constitutive models of metal plasticity, while computationally efficient and history-dependent, can accurately account for varying process parameters such as strain, strain rate and temperature; different loading modes such as continuous deformation, creep and relaxation; microscopic metallurgical processes; and varying chemical composition within an alloy family. Since these models are founded on the essential phenomena dominating the deformation, they have a larger range of usability and validity. They are also suitable for manufacturing-chain simulations, since they can efficiently compute the cumulative effect of the various manufacturing processes by following the material state through the entire manufacturing chain, including interpass periods, and give a realistic prediction of the material behavior and final product properties. In the physical-based constitutive model of cold metal plasticity introduced in this study, the physical processes influencing cold and warm plastic deformation in polycrystalline metals are described using physical/metallurgical internal variables such as dislocation density and effective grain size. The evolution of these internal variables is calculated using equations that describe the physical processes dominating the material behavior during cold plastic deformation. For validation, the model is numerically implemented in a general implicit isotropic elasto-viscoplasticity algorithm as a user-defined material subroutine (UMAT) in ABAQUS/Standard and used for finite element simulation of upsetting tests and a complete cold forging cycle of a case-hardenable MnCr steel.
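
    The abstract does not give the model's evolution equations, but a generic Kocks-Mecking law with a Taylor flow stress is a standard example of the dislocation-density-based approach it describes. The sketch below integrates such a law over strain; all parameter values are order-of-magnitude assumptions, not the paper's.

    ```python
    # Illustrative stand-in for a dislocation-density-based constitutive model:
    # a Kocks-Mecking evolution law integrated over plastic strain, with the
    # flow stress from the Taylor equation. Parameter values are generic
    # order-of-magnitude choices for steel, not taken from the paper.
    import numpy as np

    k1, k2 = 1.0e8, 10.0      # storage / dynamic-recovery coefficients (assumed)
    M, alpha = 3.0, 0.3       # Taylor factor, dislocation interaction constant
    G, b = 80.0e9, 2.5e-10    # shear modulus [Pa], Burgers vector [m]

    rho, d_eps = 1.0e12, 1.0e-4   # initial dislocation density [1/m^2], strain step
    for _ in range(int(0.2 / d_eps)):                  # integrate to 20% plastic strain
        rho += (k1 * np.sqrt(rho) - k2 * rho) * d_eps  # Kocks-Mecking evolution

    flow_stress = M * alpha * G * b * np.sqrt(rho)     # Taylor equation
    print(f"rho = {rho:.2e} 1/m^2, flow stress = {flow_stress / 1e6:.0f} MPa")
    ```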

  16. The validation of the Supervision of Thesis Questionnaire (STQ).

    PubMed

    Henricson, Maria; Fridlund, Bengt; Mårtensson, Jan; Hedberg, Berith

    2018-06-01

    The supervision process is characterized by differences between the supervisors' and the students' expectations, both before the start of writing a bachelor thesis and after its completion. A review of the literature did not reveal any scientifically tested questionnaire for evaluating nursing students' expectations of the supervision process when writing a bachelor thesis. The aim of the study was to determine the construct validity and internal consistency reliability of a questionnaire for measuring nursing students' expectations of the bachelor thesis supervision process. The study had a developmental and methodological design carried out in four steps: construction of the items, assessment of face validity, data collection, and data analysis, including statistical procedures for construct validity and internal consistency reliability. The study was conducted at a university in southern Sweden, where students on the "Nursing student thesis, 15 ECTS" course were consecutively selected for participation. Of the 512 questionnaires distributed, 327 were returned, a response rate of 64%. Five factors explaining 74% of the total variance, with good communalities (≥0.64), were extracted from the 10-item STQ. The internal consistency of the 10 items was 0.68. The five factors were labelled: the nature of the supervision process, the supervisor's role as a coach, the students' progression to self-support, the interaction between students and supervisor, and supervisor competence. A didactic, useful and secure questionnaire measuring nursing students' expectations of the bachelor thesis supervision process, based on three main forms of supervision, was created. Copyright © 2018 Elsevier Ltd. All rights reserved.
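
    The internal-consistency figure reported above (0.68 for 10 items) is Cronbach's alpha, which is straightforward to compute. The sketch below applies the standard formula to a simulated 327-respondent, 10-item response matrix; the data are invented, so the resulting alpha will differ from the STQ's.

    ```python
    # Minimal sketch of the internal-consistency statistic reported above
    # (Cronbach's alpha) for a k-item questionnaire. The response matrix is
    # simulated; the STQ itself has 10 items and alpha = 0.68.
    import numpy as np

    def cronbach_alpha(scores):
        """scores: respondents x items matrix."""
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1).sum()
        total_var = scores.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1.0 - item_vars / total_var)

    rng = np.random.default_rng(3)
    trait = rng.normal(0.0, 1.0, (327, 1))            # shared latent trait
    items = trait + rng.normal(0.0, 1.2, (327, 10))   # 10 noisy items, 327 respondents
    print(f"alpha = {cronbach_alpha(items):.2f}")
    ```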

  17. Forest, Trees, Dynamics: Results from a Novel Wisconsin Card Sorting Test Variant Protocol for Studying Global-Local Attention and Complex Cognitive Processes

    PubMed Central

    Cowley, Benjamin; Lukander, Kristian

    2016-01-01

    Background: Recognition of objects and their context relies heavily on the integrated functioning of global and local visual processing. In a realistic setting such as work, this processing becomes a sustained activity, implying a consequent interaction with executive functions. Motivation: There have been many studies of either global-local attention or executive functions; however, it is relatively novel to combine these processes to study a more ecological form of attention. We aim to explore the phenomenon of global-local processing during a task requiring sustained attention and working memory. Methods: We develop and test a novel protocol for global-local dissociation, with a task structure including phases of divided (“rule search”) and selective (“rule found”) attention, based on the Wisconsin Card Sorting Task (WCST). We test it in a laboratory study with 25 participants and report on behavioral measures (physiological data were also gathered, but are not reported here). We develop novel stimuli with more naturalistic levels of information and noise, based primarily on face photographs, with consequently more ecological validity. Results: We report behavioral results indicating that sustained difficulty when participants test their hypotheses impacts matching-task performance and diminishes the global precedence effect. Results also show a dissociation between subjectively experienced difficulty and objective dimensions of performance, and establish the internal validity of the protocol. Contribution: We contribute an advance in the state of the art for testing global-local attention processes in concert with complex cognition. With three results we establish a connection between global-local dissociation and aspects of complex cognition. Our protocol also improves ecological validity and opens options for testing additional interactions in future work. PMID:26941689

  18. Social identity and worldview validation: the effects of ingroup identity primes and mortality salience on value endorsement.

    PubMed

    Halloran, Michael J; Kashima, Emiko S

    2004-07-01

    In this article, the authors report an investigation of the relationship between terror management and social identity processes by testing for the effects of social identity salience on worldview validation. Two studies, with distinct populations, were conducted to test the hypothesis that mortality salience would lead to worldview validation of values related to a salient social identity. In Study 1, reasonable support for this hypothesis was found with bicultural Aboriginal Australian participants (N = 97). It was found that thoughts of death led participants to validate ingroup and reject outgroup values depending on the social identity that had been made salient. In Study 2, when their student and Australian identities were primed, respectively, Anglo-Australian students (N = 119) validated values related to those identities, exclusively. The implications of the findings for identity-based worldview validation are discussed.

  19. Validation of Structures in the Protein Data Bank.

    PubMed

    Gore, Swanand; Sanz García, Eduardo; Hendrickx, Pieter M S; Gutmanas, Aleksandras; Westbrook, John D; Yang, Huanwang; Feng, Zukang; Baskaran, Kumaran; Berrisford, John M; Hudson, Brian P; Ikegawa, Yasuyo; Kobayashi, Naohiro; Lawson, Catherine L; Mading, Steve; Mak, Lora; Mukhopadhyay, Abhik; Oldfield, Thomas J; Patwardhan, Ardan; Peisach, Ezra; Sahni, Gaurav; Sekharan, Monica R; Sen, Sanchayita; Shao, Chenghua; Smart, Oliver S; Ulrich, Eldon L; Yamashita, Reiko; Quesada, Martha; Young, Jasmine Y; Nakamura, Haruki; Markley, John L; Berman, Helen M; Burley, Stephen K; Velankar, Sameer; Kleywegt, Gerard J

    2017-12-05

    The Worldwide PDB recently launched a deposition, biocuration, and validation tool: OneDep. At various stages of OneDep data processing, validation reports for three-dimensional structures of biological macromolecules are produced. These reports are based on recommendations of expert task forces representing crystallography, nuclear magnetic resonance, and cryoelectron microscopy communities. The reports provide useful metrics with which depositors can evaluate the quality of the experimental data, the structural model, and the fit between them. The validation module is also available as a stand-alone web server and as a programmatically accessible web service. A growing number of journals require the official wwPDB validation reports (produced at biocuration) to accompany manuscripts describing macromolecular structures. Upon public release of the structure, the validation report becomes part of the public PDB archive. Geometric quality scores for proteins in the PDB archive have improved over the past decade. Copyright © 2017 The Authors. Published by Elsevier Ltd. All rights reserved.

  20. Development and Validation of a Web-Based Module to Teach Metacognitive Learning Strategies to Students in Higher Education

    ERIC Educational Resources Information Center

    Singh, Oma B.

    2009-01-01

    This study used a design-based research (DBR) methodology to examine how an Instructional Systems Design (ISD) process such as ADDIE (Analysis, Design, Development, Implementation, Evaluation) can be employed to develop a web-based module to teach metacognitive learning strategies to students in higher education. The goal of the study was…

  1. Intrinsic Motivation and Engagement as "Active Ingredients" in Garden-Based Education: Examining Models and Measures Derived from Self-Determination Theory

    ERIC Educational Resources Information Center

    Skinner, Ellen A.; Chi, Una

    2012-01-01

    Building on self-determination theory, this study presents a model of intrinsic motivation and engagement as "active ingredients" in garden-based education. The model was used to create reliable and valid measures of key constructs, and to guide the empirical exploration of motivational processes in garden-based learning. Teacher- and…

  2. Assembly processes comparison for a miniaturized laser used for the Exomars European Space Agency mission

    NASA Astrophysics Data System (ADS)

    Ribes-Pleguezuelo, Pol; Inza, Andoni Moral; Basset, Marta Gilaberte; Rodríguez, Pablo; Rodríguez, Gemma; Laudisio, Marco; Galan, Miguel; Hornaff, Marcel; Beckert, Erik; Eberhardt, Ramona; Tünnermann, Andreas

    2016-11-01

    A miniaturized diode-pumped solid-state laser (DPSSL), designed as part of the Raman laser spectrometer (RLS) instrument for the European Space Agency (ESA) Exomars mission 2020, is assembled and tested against the mission purpose and requirements. Two different assembly processes were tried: one based on adhesives, following traditional laser manufacturing practice; the other based on a low-stress, organic-free soldering technique called solderjet bumping. The manufactured devices were tested for process validation by passing mechanical, thermal-cycle, radiation, and optical functional tests. The comparative analysis showed an improvement in the reliability of the optical performance of the soldered device over the one assembled by adhesive-based means.

  3. Rule-based reasoning is fast and belief-based reasoning can be slow: Challenging current explanations of belief-bias and base-rate neglect.

    PubMed

    Newman, Ian R; Gibb, Maia; Thompson, Valerie A

    2017-07-01

    It is commonly assumed that belief-based reasoning is fast and automatic, whereas rule-based reasoning is slower and more effortful. Dual-Process theories of reasoning rely on this speed-asymmetry explanation to account for a number of reasoning phenomena, such as base-rate neglect and belief-bias. The goal of the current study was to test this hypothesis about the relative speed of belief-based and rule-based processes. Participants solved base-rate problems (Experiment 1) and conditional inferences (Experiment 2) under a challenging deadline; they then gave a second response in free time. We found that fast responses were informed by rules of probability and logical validity, and that slow responses incorporated belief-based information. Implications for Dual-Process theories and future research options for dissociating Type I and Type II processes are discussed. (PsycINFO Database Record (c) 2017 APA, all rights reserved).

  4. Examining the Level of Convergence among Self-Regulated Learning Microanalytic Processes, Achievement, and a Self-Report Questionnaire

    ERIC Educational Resources Information Center

    Cleary, Timothy J.; Callan, Gregory L.; Malatesta, Jaime; Adams, Tanya

    2015-01-01

    This study examined the convergent and predictive validity of self-regulated learning (SRL) microanalytic measures. Specifically, theoretically based relations among a set of self-reflection processes, self-efficacy, and achievement were examined as was the level of convergence between a microanalytic strategy measure and a SRL self-report…

  5. Position Estimation Using Image Derivative

    NASA Technical Reports Server (NTRS)

    Mortari, Daniele; deDilectis, Francesco; Zanetti, Renato

    2015-01-01

    This paper describes an image processing algorithm for processing Moon and/or Earth images. The theory presented is based on the fact that the Moon's hard edge points are characterized by the highest values of the image derivative. Outliers are eliminated by two sequential filters. The Moon's center and radius are then estimated by nonlinear least-squares using circular sigmoid functions. The proposed image processing algorithm has been applied and validated using real and synthetic Moon images.
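
    A simplified version of the final estimation step can be sketched as a geometric nonlinear least-squares circle fit to extracted limb-edge points. Note this is an illustrative reduction: the paper fits circular sigmoid functions to the image itself, whereas the fragment below, with synthetic edge points, fits only the center and radius.

    ```python
    # Simplified sketch of the estimation step: nonlinear least-squares fit of a
    # circle (center, radius) to already-extracted limb-edge pixels. Synthetic
    # edge points stand in for the derivative-based edge detection.
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(4)
    theta = rng.uniform(0.0, np.pi, 200)              # synthetic Moon limb arc
    x = 512.0 + 150.0 * np.cos(theta) + rng.normal(0.0, 0.5, 200)
    y = 384.0 + 150.0 * np.sin(theta) + rng.normal(0.0, 0.5, 200)

    def residuals(p):
        """Radial distance of each edge point from the candidate circle."""
        xc, yc, radius = p
        return np.hypot(x - xc, y - yc) - radius

    fit = least_squares(residuals, x0=[500.0, 400.0, 140.0])
    print("center = ({:.1f}, {:.1f}), radius = {:.1f} px".format(*fit.x))
    ```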

  6. Identification and location tasks rely on different mental processes: a diffusion model account of validity effects in spatial cueing paradigms with emotional stimuli.

    PubMed

    Imhoff, Roland; Lange, Jens; Germar, Markus

    2018-02-22

    Spatial cueing paradigms are popular tools for assessing human attention to emotional stimuli, but different variants of these paradigms differ in participants' primary task. In one variant, participants indicate the location of the target (location task), whereas in the other they indicate the shape of the target (identification task). In the present paper we test the idea that although these two variants produce seemingly comparable cue validity effects on response times, they rest on different underlying processes. Across four studies (total N = 397; two in the supplement) using both variants and manipulating the motivational relevance of cue content, diffusion model analyses revealed that cue validity effects in location tasks are primarily driven by response biases, whereas the same effect rests on a delay due to attention to the cue in identification tasks. Based on this, we predicted, and empirically supported, that a symmetrical distribution of valid and invalid cues would reduce cue validity effects in location tasks to a greater extent than in identification tasks. Across all variants of the task, we failed to replicate the effect of greater cue validity for arousing (vs. neutral) stimuli. We discuss the implications of these findings for best practice in spatial cueing research.
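
    The logic of the diffusion-model dissociation can be sketched with a toy simulation: a cue can speed valid-trial responses either by shifting the accumulator's starting point (a response bias) or by delaying processing on invalid trials (attention to the cue). The parameters below are illustrative assumptions, not fitted values from the studies.

    ```python
    # Toy drift-diffusion simulation showing how two different mechanisms
    # (starting-point bias vs. added non-decision delay) both yield faster
    # valid-trial than invalid-trial responses. Parameters are illustrative.
    import numpy as np

    def simulate_ddm(drift, start, ndt, a=1.0, sigma=1.0, dt=1e-3, n=500, seed=0):
        """Mean RT (s) of upper-boundary responses in a simple diffusion model."""
        rng = np.random.default_rng(seed)
        rts = []
        for _ in range(n):
            x, t = start * a, 0.0
            while 0.0 < x < a:
                x += drift * dt + sigma * np.sqrt(dt) * rng.normal()
                t += dt
            if x >= a:                       # upper-boundary (correct) response
                rts.append(t + ndt)          # add non-decision time
        return float(np.mean(rts))

    # Response-bias account (location task): the cue shifts the starting point.
    print("bias account  valid/invalid RT:",
          simulate_ddm(1.5, 0.60, 0.30), simulate_ddm(1.5, 0.40, 0.30))
    # Attention-delay account (identification task): invalid cues prolong the
    # non-decision component while the starting point stays neutral.
    print("delay account valid/invalid RT:",
          simulate_ddm(1.5, 0.50, 0.30), simulate_ddm(1.5, 0.50, 0.40))
    ```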

  7. Reflexivity, critical qualitative research and emancipation: a Foucauldian perspective.

    PubMed

    McCabe, Janet L; Holmes, Dave

    2009-07-01

    In this paper, we consider reflexivity not only as a concept of qualitative validity, but also as a tool used during the research process to achieve the goals of emancipation that are intrinsic to qualitative research conducted within a critical paradigm. Research conducted from a critical perspective poses two challenges to researchers: the validity of the research must be ensured, and the emancipatory aims of the research need to be realized and communicated. The traditional view of reflexivity as a means of ensuring validity in qualitative research limits its potential to inform the research process. The Medline and CINAHL databases were searched (1998-2008 inclusive) using keywords such as reflexivity, validity, subjectivity, bias, emancipation, empowerment and disability. In addition, the work of Michel Foucault was examined. Using the work of the late French philosopher, we explore how Foucault's 'technologies of the self' can be employed during critical qualitative research to achieve emancipatory changes. Using research conducted with marginalized populations as an example (specifically, individuals with disabilities), we demonstrate the potential for using reflexivity, in a Foucauldian sense, during the research process. Shifting the traditional view of reflexivity allows researchers to focus on the subtle changes that comprise emancipation (in a Foucauldian sense). As a result, researchers are better able to see, understand and analyse this process in both the participants and themselves.

  8. Visual communications and image processing '92; Proceedings of the Meeting, Boston, MA, Nov. 18-20, 1992

    NASA Astrophysics Data System (ADS)

    Maragos, Petros

    The topics discussed at the conference include hierarchical image coding, motion analysis, feature extraction and image restoration, video coding, and morphological and related nonlinear filtering. Attention is also given to vector quantization, morphological image processing, fractals and wavelets, architectures for image and video processing, image segmentation, biomedical image processing, and model-based analysis. Papers are presented on affine models for motion and shape recovery, filters for directly detecting surface orientation in an image, tracking of unresolved targets in infrared imagery using a projection-based method, adaptive-neighborhood image processing, and regularized multichannel restoration of color images using cross-validation. (For individual items see A93-20945 to A93-20951)

  9. A Shipping Container-Based Sterile Processing Unit for Low Resources Settings

    PubMed Central

    2016-01-01

    Deficiencies in the sterile processing of medical instruments contribute to poor outcomes for patients, such as surgical site infections, longer hospital stays, and deaths. In low resources settings, such as some rural and semi-rural areas and secondary and tertiary cities of developing countries, deficiencies in sterile processing are accentuated due to the lack of access to sterilization equipment, improperly maintained and malfunctioning equipment, lack of power to operate equipment, poor protocols, and inadequate quality control over inventory. Inspired by our sterile processing fieldwork at a district hospital in Sierra Leone in 2013, we built an autonomous, shipping-container-based sterile processing unit to address these deficiencies. The sterile processing unit, dubbed “the sterile box,” is a full suite capable of handling instruments from the moment they leave the operating room to the point they are sterile and ready to be reused for the next surgery. The sterile processing unit is self-sufficient in power and water and features an intake for contaminated instruments, decontamination, sterilization via non-electric steam sterilizers, and secure inventory storage. To validate efficacy, we ran tests of decontamination and sterilization performance. Results of 61 trials validate convincingly that our sterile processing unit achieves satisfactory outcomes for decontamination and sterilization and as such holds promise to support healthcare facilities in low resources settings. PMID:27007568

  10. Model performance evaluation (validation and calibration) in model-based studies of therapeutic interventions for cardiovascular diseases : a review and suggested reporting framework.

    PubMed

    Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan

    2013-04-01

    Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following presentation of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross-model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55% of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6% of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed framework should usefully inform guidelines for preparing submissions to reimbursement bodies.

  11. Assessing Learning, Quality and Engagement in Learning Objects: The Learning Object Evaluation Scale for Students (LOES-S)

    ERIC Educational Resources Information Center

    Kay, Robin H.; Knaack, Liesel

    2009-01-01

    Learning objects are interactive web-based tools that support the learning of specific concepts by enhancing, amplifying, and/or guiding the cognitive processes of learners. Research on the impact, effectiveness, and usefulness of learning objects is limited, partially because comprehensive, theoretically based, reliable, and valid evaluation…

  12. DACUM Research Chart on the Work-Based Learning Teacher Coordinator. Post Research Report

    ERIC Educational Resources Information Center

    Wichowski, Chester P.

    2011-01-01

    A research and development effort was undertaken to provide definition and validate the emerging role of the Work-Based Learning Teacher Coordinator through the use of a DACUM process in cooperation with the Pennsylvania Cooperative Education Association with funding support provided by the Pennsylvania Department of Education, Bureau of Career…

  13. Validation of electronic systems to collect patient-reported outcome (PRO) data-recommendations for clinical trial teams: report of the ISPOR ePRO systems validation good research practices task force.

    PubMed

    Zbrozek, Arthur; Hebert, Joy; Gogates, Gregory; Thorell, Rod; Dell, Christopher; Molsen, Elizabeth; Craig, Gretchen; Grice, Kenneth; Kern, Scottie; Hines, Sheldon

    2013-06-01

    The outcomes research literature has many examples of high-quality, reliable patient-reported outcome (PRO) data entered directly by electronic means (ePRO), compared to data entered from original results on paper. Clinical trial managers are increasingly using ePRO data collection for PRO-based end points. Regulatory review dictates the rules to follow for ePRO data collection for medical label claims. A critical component of regulatory compliance is evidence of the validation of these electronic data collection systems. Validation of electronic systems is a process, not a focused activity that finishes at a single point in time. Eight steps need to be described and undertaken to qualify the validation of the data collection software in its target environment: requirements definition, design, coding, testing, tracing, user acceptance testing, installation and configuration, and decommissioning. These elements are consistent with recent regulatory guidance for systems validation. This report was written to explain how the validation process works for sponsors, trial teams, and other users of electronic data collection devices responsible for verifying the quality of the data entered into relational databases from such devices. It is a guide to the requirements and documentation needed from a data collection systems provider to demonstrate systems validation. It is a practical source of information for study teams to ensure that ePRO providers are using system validation and implementation processes that will ensure that the systems and services operate reliably when in practical use, produce accurate and complete data and data files, support management control, and comply with any existing regulations. Furthermore, this short report will increase user understanding of the requirements for a technology review, leading to more informed and balanced recommendations or decisions on electronic data collection methods. Copyright © 2013 International Society for Pharmacoeconomics and Outcomes Research (ISPOR). Published by Elsevier Inc. All rights reserved.

  14. Towards a full integration of optimization and validation phases: An analytical-quality-by-design approach.

    PubMed

    Hubert, C; Houari, S; Rozet, E; Lebrun, P; Hubert, Ph

    2015-05-22

    When using an analytical method, defining an analytical target profile (ATP) focused on quantitative performance represents a key input, and this will drive the method development process. In this context, two case studies were selected in order to demonstrate the potential of a quality-by-design (QbD) strategy when applied to two specific phases of the method lifecycle: the pre-validation study and the validation step. The first case study focused on the improvement of a liquid chromatography (LC) coupled to mass spectrometry (MS) stability-indicating method by means of the QbD concept. The design of experiments (DoE) conducted during the optimization step (i.e. determination of the qualitative design space (DS)) was performed a posteriori. Additional experiments were performed in order to simultaneously conduct the pre-validation study to assist in defining the DoE to be conducted during the formal validation step. This predicted protocol was compared to the one used during the formal validation. A second case study, based on the LC/MS-MS determination of glucosamine and galactosamine in human plasma, was considered in order to illustrate an innovative strategy allowing the QbD methodology to be incorporated during the validation phase. An operational space, defined by the qualitative DS, was considered during the validation process rather than a specific set of working conditions as conventionally done. Results for all the validation parameters conventionally studied were compared to those obtained with this innovative approach for glucosamine and galactosamine. Using this strategy, both qualitative and quantitative information were obtained. Consequently, an analyst using this approach would be able to select with great confidence several working conditions within the operational space, rather than a single given condition, for the routine use of the method. This innovative strategy combines both a learning process and a thorough assessment of the risk involved. Copyright © 2015 Elsevier B.V. All rights reserved.

  15. Development and validation of a Clinical Assessment Tool for Nursing Education (CAT-NE).

    PubMed

    Skúladóttir, Hafdís; Svavarsdóttir, Margrét Hrönn

    2016-09-01

    The aim of this study was to develop a valid assessment tool to guide clinical education and evaluate students' performance in clinical nursing education. The development of the Clinical Assessment Tool for Nursing Education (CAT-NE) was based on the theory of nursing as professional caring and the Bologna learning outcomes. Benson and Clark's four steps of instrument development and validation guided the development and assessment of the tool. A mixed-methods approach with individual structured cognitive interviews and quantitative assessments was used to validate the tool. Supervisory teachers, a pedagogical consultant, clinical expert teachers, clinical teachers, and nursing students at the University of Akureyri in Iceland participated in the process. The assessment tool is valid for assessing the clinical performance of nursing students; it consists of rubrics that list the criteria for the students' expected performance. According to the students and their clinical teachers, the assessment tool clarified learning objectives, enhanced the focus of the assessment process, and made evaluation more objective. Training clinical teachers in how to assess students' performance in clinical studies and use the tool enhanced the quality of clinical assessment in nursing education. Copyright © 2016 Elsevier Ltd. All rights reserved.

  16. Integration of design and inspection

    NASA Astrophysics Data System (ADS)

    Simmonds, William H.

    1990-08-01

    Developments in advanced computer integrated manufacturing technology, coupled with the emphasis on Total Quality Management, are exposing needs for new techniques to integrate all functions from design through to support of the delivered product. One critical functional area that must be integrated into design is that embracing the measurement, inspection and test activities necessary for validation of the delivered product. This area is being tackled by a collaborative project supported by the UK Government Department of Trade and Industry. The project is aimed at developing techniques for analysing validation needs and for planning validation methods. Within the project an experimental Computer Aided Validation Expert system (CAVE) is being constructed. This operates with a generalised model of the validation process and helps with all design stages: specification of product requirements; analysis of the assurance provided by a proposed design and method of manufacture; development of the inspection and test strategy; and analysis of feedback data. The kernel of the system is a knowledge base containing knowledge of the manufacturing process capabilities and of the available inspection and test facilities. The CAVE system is being integrated into a real life advanced computer integrated manufacturing facility for demonstration and evaluation.

  17. Validation of an educative manual for patients with head and neck cancer submitted to radiation therapy

    PubMed Central

    da Cruz, Flávia Oliveira de Almeida Marques; Ferreira, Elaine Barros; Vasques, Christiane Inocêncio; da Mata, Luciana Regina Ferreira; dos Reis, Paula Elaine Diniz

    2016-01-01

    Objective: to develop the content and face validation of an educative manual for patients with head and neck cancer submitted to radiation therapy. Method: descriptive methodological research. The validation process was based on the Theory of Psychometrics and was carried out by 15 experts in the theme area of the educative manual and by two language and publicity professionals. A minimum agreement level of 80% was required to guarantee the validity of the material. Results: the items addressed in the assessment tool of the educative manual were divided into three blocks: objectives, structure and format, and relevance. Only one item, related to the sociocultural level of the target public, obtained an agreement rate below 80% and was reformulated based on the participants' suggestions. All other items were considered appropriate or completely appropriate in the three blocks proposed: objectives, 92.38%; structure and format, 89.74%; and relevance, 94.44%. Conclusion: the face and content validity of the proposed educative manual was established. The manual can contribute to patients' understanding of the therapeutic process they undergo during radiation therapy, besides supporting clinical practice through the nursing consultation. PMID:27305178

  18. [Monitoring radiofrequency ablation by ultrasound temperature imaging and elastography under different power intensities].

    PubMed

    Geng, Xiaonan; Li, Qiang; Tsui, Pohsiang; Wang, Chiaoyin; Liu, Haoli

    2013-09-01

    To evaluate the reliability of diagnostic ultrasound-based temperature and elasticity imaging during radiofrequency ablation (RFA) through ex vivo experiments. Porcine liver samples (n=7) were employed for RFA experiments with exposures at different power intensities (10 and 50 W). The RFA process was monitored by a diagnostic ultrasound imager, and the data were captured for postoperative temperature and elasticity image analysis. Infrared thermometry was concurrently applied to calibrate the temperature changes during the RFA process. The results demonstrated that temperature imaging was valid under 10 W RF exposure (r=0.95), but the ablation zone was no longer consistent with the reference infrared temperature distribution under high RF exposures. The elasticity change could reflect the ablation zone well under a 50 W exposure, whereas under low exposures the thermal lesion could not be well detected, owing to the limited range of temperature elevation and incomplete tissue necrosis. Diagnostic ultrasound-based temperature imaging and elastography are valid for monitoring the RFA process. Temperature estimation can reflect mild-power RF ablation dynamics well, whereas elasticity-change estimation can predict tissue necrosis well. This study provides advances toward using diagnostic ultrasound to monitor RFA or other thermal-based interventions.

  19. Automated ensemble assembly and validation of microbial genomes.

    PubMed

    Koren, Sergey; Treangen, Todd J; Hill, Christopher M; Pop, Mihai; Phillippy, Adam M

    2014-05-03

    The continued democratization of DNA sequencing has sparked a new wave of development of genome assembly and assembly validation methods. As individual research labs, rather than centralized centers, begin to sequence the majority of new genomes, it is important to establish best practices for genome assembly. However, recent evaluations such as GAGE and the Assemblathon have concluded that there is no single best approach to genome assembly. Instead, it is preferable to generate multiple assemblies and validate them to determine which is most useful for the desired analysis, a labor-intensive process that is often infeasible. To encourage best practices supported by the community, we present iMetAMOS, an automated ensemble assembly pipeline; iMetAMOS encapsulates the process of running, validating, and selecting a single assembly from multiple assemblies. iMetAMOS packages several leading open-source tools into a single binary that automates parameter selection and execution of multiple assemblers, scores the resulting assemblies based on multiple validation metrics, and annotates the assemblies for genes and contaminants. We demonstrate the utility of the ensemble process on 225 previously unassembled Mycobacterium tuberculosis genomes as well as a Rhodobacter sphaeroides benchmark dataset. On these real data, iMetAMOS reliably produces validated assemblies and identifies potential contamination without user intervention. In addition, intelligent parameter selection produces assemblies of R. sphaeroides comparable to or exceeding the quality of those from the GAGE-B evaluation, affecting the relative ranking of some assemblers. Ensemble assembly with iMetAMOS provides users with multiple, validated assemblies for each genome. Although computationally limited to small or mid-sized genomes, this approach is the most effective and reproducible means for generating high-quality assemblies and enables users to select an assembly best tailored to their specific needs.
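
    The ensemble step iMetAMOS automates, scoring candidate assemblies on several validation metrics and selecting the best, can be pictured generically. The sketch below is not iMetAMOS code; the assembler names, metrics, and weights are invented.

    ```python
    # Generic sketch of the score-and-select logic an ensemble pipeline
    # automates. Metric names, weights, and values are hypothetical; iMetAMOS
    # uses its own validation metrics and scoring.

    candidates = {
        "assembler_a": {"n50": 180_000, "misassemblies": 4, "genes": 4120},
        "assembler_b": {"n50":  95_000, "misassemblies": 2, "genes": 4060},
        "assembler_c": {"n50": 120_000, "misassemblies": 9, "genes": 4010},
    }

    def score(metrics):
        # Reward contiguity and gene recovery, penalize misassemblies.
        return metrics["n50"] / 1e5 + metrics["genes"] / 1e3 - metrics["misassemblies"]

    best = max(candidates, key=lambda name: score(candidates[name]))
    print(best, {name: round(score(m), 2) for name, m in candidates.items()})
    ```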

  20. Scrutinizing a Survey-Based Measure of Science and Mathematics Teacher Knowledge: Relationship to Observations of Teaching Practice

    NASA Astrophysics Data System (ADS)

    Talbot, Robert M.

    2017-12-01

    There is a clear need for valid and reliable instrumentation that measures teacher knowledge. However, the process of investigating and making a case for instrument validity is not a simple undertaking; rather, it is a complex endeavor. This paper presents the empirical case of one aspect of such an instrument validation effort. The particular instrument under scrutiny was developed in order to determine the effect of a teacher education program on novice science and mathematics teachers' strategic knowledge (SK). The relationship between novice science and mathematics teachers' SK as measured by a survey and their SK as inferred from observations of practice using a widely used observation protocol is the subject of this paper. Moderate correlations between parts of the observation-based construct and the SK construct were observed. However, the main finding of this work is that the context in which the measurement is made (in situ observations vs. ex situ survey) is an essential factor in establishing the validity of the measurement itself.

  1. Calibration and validation of coarse-grained models of atomic systems: application to semiconductor manufacturing

    NASA Astrophysics Data System (ADS)

    Farrell, Kathryn; Oden, J. Tinsley

    2014-07-01

    Coarse-grained models of atomic systems, created by aggregating groups of atoms into molecules to reduce the number of degrees of freedom, have been used for decades in important scientific and technological applications. In recent years, interest in developing a more rigorous theory for coarse graining and in assessing the predictivity of coarse-grained models has arisen. In this work, Bayesian methods for the calibration and validation of coarse-grained models of atomistic systems in thermodynamic equilibrium are developed. For specificity, only configurational models of systems in canonical ensembles are considered. Among the major challenges in validating coarse-grained models are (1) the development of validation processes that lead to information essential in establishing confidence in the model's ability to predict key quantities of interest and (2), above all, the determination of the coarse-grained model itself, that is, the characterization of the molecular architecture and the choice of interaction potentials, and thus parameters, which best fit available data. The all-atom model is treated as the "ground truth," and it provides the basis with respect to which properties of the coarse-grained model are compared. This base all-atom model is characterized by an appropriate statistical mechanics framework, in this work canonical ensembles involving only configurational energies. The all-atom model thus supplies data for the Bayesian calibration and validation methods for the molecular model. To address the first challenge, we develop priors based on the maximum entropy principle and likelihood functions based on Gaussian approximations of the uncertainties in the parameter-to-observation error. To address challenge (2), we introduce the notion of model plausibilities as a means for model selection. This methodology provides a powerful approach toward constructing coarse-grained models which are most plausible for given all-atom data. We demonstrate the theory and methods through applications to representative atomic structures and discuss extensions of the validation process to molecular models of polymer structures encountered in certain semiconductor nanomanufacturing processes. The powerful method of model plausibility as a means for selecting interaction potentials for coarse-grained models is discussed in connection with a coarse-grained hexane molecule. A discussion of how all-atom information is used to construct priors is contained in an appendix.
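
    The notion of model plausibility can be illustrated with a toy Bayesian model comparison: two candidate interaction potentials, a Gaussian likelihood against noisy "all-atom" data, uniform priors, and the evidence computed by quadrature over a one-parameter grid. Everything below (potentials, data, priors) is a stand-in chosen for illustration, not the paper's setup.

    ```python
    # Toy illustration of model plausibility: compare the Bayesian evidence of
    # two one-parameter candidate potentials against synthetic "all-atom" pair
    # energies. The potentials, data, and priors are invented stand-ins.
    import numpy as np

    rng = np.random.default_rng(5)
    r = np.linspace(0.9, 2.0, 40)                  # pair distances (reduced units)
    data = 4.0 * (r**-12 - r**-6) + rng.normal(0.0, 0.05, r.size)  # noisy LJ energies
    sigma = 0.05                                   # assumed observation noise

    def log_like(model, eps):
        """Gaussian log-likelihood of the data under a candidate potential."""
        resid = data - model(r, eps)
        return -0.5 * np.sum((resid / sigma) ** 2) \
               - r.size * np.log(sigma * np.sqrt(2.0 * np.pi))

    models = {
        "lennard_jones": lambda r, eps: 4.0 * eps * (r**-12 - r**-6),
        "gaussian_well": lambda r, eps: -eps * np.exp(-8.0 * (r - 1.12) ** 2),
    }

    eps_grid = np.linspace(0.1, 3.0, 300)          # uniform prior over the parameter
    d_eps = eps_grid[1] - eps_grid[0]
    for name, model in models.items():
        like = np.array([np.exp(log_like(model, e)) for e in eps_grid])
        evidence = like.sum() * d_eps / (eps_grid[-1] - eps_grid[0])
        print(f"{name}: evidence = {evidence:.3e}")  # larger -> more plausible
    ```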

  2. Conceptual dissonance: evaluating the efficacy of natural language processing techniques for validating translational knowledge constructs.

    PubMed

    Payne, Philip R O; Kwok, Alan; Dhaval, Rakesh; Borlawsky, Tara B

    2009-03-01

    The conduct of large-scale translational studies presents significant challenges related to the storage, management and analysis of integrative data sets. Ideally, the application of methodologies such as conceptual knowledge discovery in databases (CKDD) provides a means for moving beyond intuitive hypothesis discovery and testing in such data sets, and towards the high-throughput generation and evaluation of knowledge-anchored relationships between complex bio-molecular and phenotypic variables. However, the induction of such high-throughput hypotheses is non-trivial, and requires correspondingly high-throughput validation methodologies. In this manuscript, we describe an evaluation of the efficacy of a natural language processing-based approach to validating such hypotheses. As part of this evaluation, we will examine a phenomenon that we have labeled as "Conceptual Dissonance" in which conceptual knowledge derived from two or more sources of comparable scope and granularity cannot be readily integrated or compared using conventional methods and automated tools.

  3. Expert system verification and validation study. Delivery 3A and 3B: Trip summaries

    NASA Technical Reports Server (NTRS)

    French, Scott

    1991-01-01

    Key results from attending the 4th workshop on verification, validation, and testing are documented. The most interesting part of the workshop was when representatives from the U.S., Japan, and Europe presented surveys of VV&T within their respective regions. Another interesting part focused on current efforts to define industry standards for artificial intelligence and how they might affect approaches to VV&T of expert systems. The next part of the workshop focused on VV&T methods, covering the application of mathematical techniques to the verification of rule bases and techniques for capturing information relating to the process of developing software. The final part focused on software tools. A summary is also presented of the EPRI conference on 'Methodologies, Tools, and Standards for Cost Effective Reliable Software Verification and Validation.' The conference was divided into discussion sessions on the following issues: development process, automated tools, software reliability, methods, standards, and cost/benefit considerations.

  4. Chronic Pain: Content Validation of Nursing Diagnosis in Slovakia and the Czech Republic.

    PubMed

    Zeleníková, Renáta; Maniaková, Lenka

    2015-10-01

    The main purpose of the study was to validate the defining characteristics and related factors of the nursing diagnosis "chronic pain" in Slovakia and the Czech Republic. This is a descriptive study. The validation process was based on Fehring's Diagnostic Content Validity Model. Three defining characteristics (reports pain, altered ability to continue previous activities, and depression) were classified as major by Slovak nurses, and one defining characteristic (reports pain) was classified as major by Czech nurses. The results of the study provide guidance in devising strategies for pain assessment and can aid in the formulation of accurate nursing diagnoses. The defining characteristic "reports pain" is important for arriving at the nursing diagnosis "chronic pain." © 2014 NANDA International, Inc.
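
    Fehring's Diagnostic Content Validity scoring is simple enough to sketch: expert ratings on a 1-5 scale are rescaled to 0-1 weights and averaged per defining characteristic. The ratings below are invented, and cutoffs vary across studies (0.75 or 0.80 is commonly used for a "major" characteristic).

    ```python
    # Minimal sketch of Fehring's Diagnostic Content Validity (DCV) scoring.
    # Expert ratings and the 0.75/0.5 cutoffs are illustrative assumptions;
    # published applications of the model vary in the exact thresholds used.
    import numpy as np

    ratings = {                       # hypothetical expert ratings, 1-5 scale
        "reports pain":                         [5, 5, 4, 5, 4, 5],
        "altered ability to continue activity": [4, 5, 4, 4, 5, 4],
        "irritability":                         [3, 2, 4, 3, 3, 2],
    }

    for characteristic, r in ratings.items():
        dcv = np.mean((np.array(r) - 1) / 4)           # weight: 1 -> 0 ... 5 -> 1
        label = "major" if dcv >= 0.75 else ("minor" if dcv > 0.5 else "discard")
        print(f"{characteristic:40s} DCV={dcv:.2f} ({label})")
    ```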

  5. Measuring and assessing maintainability at the end of high level design

    NASA Technical Reports Server (NTRS)

    Briand, Lionel C.; Morasca, Sandro; Basili, Victor R.

    1993-01-01

    Software architecture appears to be one of the main factors affecting software maintainability. Therefore, in order to predict and assess maintainability early in the development process, we need to be able to measure the high-level design characteristics that affect the change process. To this end, we propose a measurement approach based on precise assumptions derived from the change process; it builds on Object-Oriented Design principles and is partially language-independent. We define metrics for cohesion, coupling, and visibility in order to capture the difficulty of isolating, understanding, designing and validating changes.

  6. Development of a physically-based planar inductors VHDL-AMS model for integrated power converter design

    NASA Astrophysics Data System (ADS)

    Ammouri, Aymen; Ben Salah, Walid; Khachroumi, Sofiane; Ben Salah, Tarek; Kourda, Ferid; Morel, Hervé

    2014-05-01

    The design of integrated power converters needs prototype-less approaches. Specific simulations are required for the investigation and validation process. Simulation relies on active and passive device models. Models of planar devices, for instance, are still not available in power simulator tools; there is thus a specific limitation in the simulation of integrated power systems. The paper focuses on the development of a physically-based planar inductor model and its validation inside a power converter during transient switching. The planar inductor remains a complex device to model, particularly when the skin effect, the proximity effect and the parasitic capacitances are taken into account. A heterogeneous simulation scheme, including circuit and device models, is successfully implemented in the VHDL-AMS language and simulated on the Simplorer platform. The mixed simulation results have been favorably compared with practical measurements. The multi-domain simulation results and the measurement data are found to be in close agreement.

  7. Simulation of hypersonic rarefied flows with the immersed-boundary method

    NASA Astrophysics Data System (ADS)

    Bruno, D.; De Palma, P.; de Tullio, M. D.

    2011-05-01

    This paper provides a validation of an immersed-boundary method for computing hypersonic rarefied gas flows. The method is based on the solution of the Navier-Stokes equations and is validated against numerical results obtained by the DSMC approach. The Navier-Stokes solver employs a flexible local grid refinement technique and is implemented on parallel machines using a domain-decomposition approach. Thanks to the efficient grid generation process, based on the ray-tracing technique, and the use of the METIS software, the partitioned grids assigned to each processor can be obtained with minimal effort by the user. This allows one to bypass the expensive (in terms of time and human resources) classical process of generating a body-fitted grid. First-order slip-velocity boundary conditions are employed and tested to take rarefied gas effects into account.
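
    The abstract names first-order slip-velocity boundary conditions but does not give their form. For orientation, a commonly used first-order Maxwell-type slip condition is sketched below as an assumption, not necessarily the paper's exact formulation.

    ```latex
    % A common first-order (Maxwell-type) slip-velocity condition;
    % the paper's exact formulation is not stated in the abstract.
    % u_s: slip velocity at the wall, u_w: wall velocity,
    % \lambda: mean free path, \sigma_v: tangential momentum
    % accommodation coefficient, n: wall-normal direction.
    u_s - u_w = \frac{2 - \sigma_v}{\sigma_v}\,\lambda
                \left.\frac{\partial u}{\partial n}\right|_{w}
    ```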

  8. TARGET's role in knowledge acquisition, engineering, validation, and documentation

    NASA Technical Reports Server (NTRS)

    Levi, Keith R.

    1994-01-01

    We investigate the use of the TARGET task analysis tool for use in the development of rule-based expert systems. We found TARGET to be very helpful in the knowledge acquisition process. It enabled us to perform knowledge acquisition with one knowledge engineer rather than two. In addition, it improved communication between the domain expert and knowledge engineer. We also found it to be useful for both the rule development and refinement phases of the knowledge engineering process. Using the network in these phases required us to develop guidelines that enabled us to easily translate the network into production rules. A significant requirement for TARGET remaining useful throughout the knowledge engineering process was the need to carefully maintain consistency between the network and the rule representations. Maintaining consistency not only benefited the knowledge engineering process, but also has significant payoffs in the areas of validation of the expert system and documentation of the knowledge in the system.

  9. A comprehensive validation toolbox for regional ocean models - Outline, implementation and application to the Baltic Sea

    NASA Astrophysics Data System (ADS)

    Jandt, Simon; Laagemaa, Priidik; Janssen, Frank

    2014-05-01

    The systematic and objective comparison between output from a numerical ocean model and a set of observations, called validation in the context of this presentation, is a beneficial activity at several stages, starting from early steps in model development and ending at the quality control of model-based products delivered to customers. Even though the importance of this kind of validation work is widely acknowledged, it is often not among the most popular tasks in ocean modelling. In order to ease the validation work, a comprehensive toolbox has been developed in the framework of the MyOcean-2 project. The objective of this toolbox is to carry out validation integrating different data sources, e.g. time series at stations, vertical profiles, surface fields, or along-track satellite data, with one single program call. The validation toolbox, implemented in MATLAB, features all parts of the validation process, ranging from read-in procedures for the datasets to the graphical and numerical output of statistical metrics of the comparison. The basic idea is to have only one well-defined validation schedule for all applications, in which all parts of the validation process are executed. Each part, e.g. the read-in procedures, forms a module in which all available functions of that particular part are collected. The interface between the functions, the module, and the validation schedule is highly standardized. Functions of a module are set up for certain validation tasks, and new functions can be implemented into the appropriate module without affecting the functionality of the toolbox. The functions are assigned to each validation task in user-specific settings, which are stored externally in so-called namelists and gather all information about the datasets used as well as paths and metadata. In the framework of the MyOcean-2 project the toolbox is frequently used to validate the forecast products of the Baltic Sea Marine Forecasting Centre, whereby the performance of any new product version is compared with that of the previous version. Although the toolbox has so far been tested mainly for the Baltic Sea, it can easily be adapted to different datasets and parameters, regardless of the geographic region. In this presentation the usability of the toolbox is demonstrated, along with several results of the validation process.
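
    The toolbox itself is written in MATLAB; purely as an illustration of the architecture described (one validation schedule, standardized module interfaces, namelist-driven settings), here is a minimal Python sketch. Every name in it is hypothetical, not the MyOcean-2 toolbox API.

    ```python
    # Minimal sketch of the toolbox architecture described above: one
    # validation "schedule" that runs standardized modules (read-in,
    # comparison, output) configured from a namelist-like settings dict.
    # All names here are hypothetical, not the MyOcean-2 toolbox API.

    namelist = {
        "reader": "station_timeseries",   # which read-in function to use
        "model_path": "model_output.nc",
        "obs_path": "station_obs.nc",
        "metrics": ["bias", "rmse"],
    }

    def read_station_timeseries(path):
        # placeholder read-in module; a real one would parse the file at `path`
        return [1.0, 2.0, 3.0]

    READERS = {"station_timeseries": read_station_timeseries}

    METRICS = {
        "bias": lambda m, o: sum(mi - oi for mi, oi in zip(m, o)) / len(m),
        "rmse": lambda m, o: (sum((mi - oi) ** 2 for mi, oi in zip(m, o)) / len(m)) ** 0.5,
    }

    def run_validation(cfg):
        """The single, well-defined validation schedule: read, compare, report."""
        reader = READERS[cfg["reader"]]
        model = reader(cfg["model_path"])
        obs = reader(cfg["obs_path"])
        return {name: METRICS[name](model, obs) for name in cfg["metrics"]}

    print(run_validation(namelist))
    ```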

  10. Using a Modular Open Systems Approach in Defense Acquisitions: Implications for the Contracting Process

    DTIC Science & Technology

    2006-01-30

    He has taught contract management courses for the UCLA Government Contracts Certificate program and is also a senior faculty member for the Keller...standards for its key interfaces, and has been subjected to successful validation and verification tests to ensure the openness of its key interfaces...widely supported and consensus based standards for its key interfaces, and is subject to validation and verification tests to ensure the openness of its

  11. Validity and reliability of instruments aimed at measuring Evidence-Based Practice in Physical Therapy: a systematic review of the literature.

    PubMed

    Fernández-Domínguez, Juan Carlos; Sesé-Abad, Albert; Morales-Asencio, Jose Miguel; Oliva-Pascual-Vaca, Angel; Salinas-Bueno, Iosune; de Pedro-Gómez, Joan Ernest

    2014-12-01

    Our goal is to compile and analyse the characteristics - especially validity and reliability - of all the existing international tools that have been used to measure evidence-based clinical practice in physiotherapy. This is a systematic review conducted with data from exclusively quantitative studies, synthesized in narrative format. An in-depth search of the literature was conducted in two phases: an initial, structured, electronic search of databases and journals with summarized evidence, followed by a directed search of the bibliographical references of the main articles found in the primary search. The studies included were assigned to members of the research team, who acted as peer reviewers. Relevant information was extracted from each of the selected articles using a template that included the general characteristics of the instrument as well as an analysis of the quality of the validation processes carried out, following the criteria of Terwee. Twenty-four instruments were found to comply with the review screening criteria; however, in all cases they were limited as regards the constructs included, and all lacked comprehensiveness in the validation of the psychometric tests used. A rigorously developed assessment instrument for EBP in physical therapy thus remains a challenge. © 2014 John Wiley & Sons, Ltd.

  12. Failure mode and effects analysis outputs: are they valid?

    PubMed Central

    2012-01-01

    Background Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Methods Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: face validity, by comparing the FMEA participants' mapped processes with observational work; content validity, by presenting the FMEA findings to other healthcare professionals; criterion validity, by comparing the FMEA findings with data reported on the trust's incident report database; and construct validity, by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number (RPN). Results Face validity was positive, as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses, yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. Conclusion There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA's methodology for scoring failures, there were discrepancies between the teams' estimates and similar incidents reported on the trust's incident database. Furthermore, the concept of multiplying ordinal scales to prioritise failures is mathematically flawed. Until FMEA's validity is further explored, healthcare organisations should not depend solely on their FMEA results to prioritise patient safety issues. PMID:22682433
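
    The RPN critique above is concrete arithmetic: RPN = severity * probability * detectability, computed on ordinal scales. The minimal sketch below (hypothetical scores, not the study's data) shows the multiplication and why it is fragile: a monotone relabeling of one ordinal scale, which preserves all the ordinal information, can reverse the resulting priority ranking.

    ```python
    # Sketch of the FMEA risk priority number (RPN) and of the ordinal-scale
    # problem the study raises: RPN = severity * probability * detectability,
    # but a monotone relabeling of an ordinal scale can reverse the ranking.

    def rpn(severity, probability, detectability):
        return severity * probability * detectability

    # Two hypothetical failure modes scored on 1-10 ordinal scales.
    a = (9, 2, 3)   # severe, rare, fairly detectable     -> RPN 54
    b = (4, 5, 3)   # moderate, common, fairly detectable -> RPN 60
    print(rpn(*a), rpn(*b))      # 54 60  -> b ranks higher

    # Relabel severity monotonically (order preserved: 4 -> 4, 9 -> 20).
    relabel = {4: 4, 9: 20}
    a2 = (relabel[a[0]], a[1], a[2])
    b2 = (relabel[b[0]], b[1], b[2])
    print(rpn(*a2), rpn(*b2))    # 120 60 -> a now ranks higher
    ```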

  13. Validation workflow for a clinical Bayesian network model in multidisciplinary decision making in head and neck oncology treatment.

    PubMed

    Cypko, Mario A; Stoehr, Matthaeus; Kozniewski, Marcin; Druzdzel, Marek J; Dietz, Andreas; Berliner, Leonard; Lemke, Heinz U

    2017-11-01

    Oncological treatment is becoming increasingly complex, and decision making in multidisciplinary teams is therefore becoming the key activity in clinical pathways. The increased complexity is related to the number and variability of possible treatment decisions that may be relevant to a patient. In this paper, we describe the validation of a multidisciplinary cancer treatment decision model in the clinical domain of head and neck oncology. Probabilistic graphical models and corresponding inference algorithms, in the form of Bayesian networks (BNs), can support complex decision-making processes by providing mathematically reproducible and transparent advice. The quality of BN-based advice depends on the quality of the model; therefore, it is vital to validate the model before it is applied in practice. For an example BN subnetwork of laryngeal cancer with 303 variables, we evaluated 66 patient records. To validate the model on this dataset, a validation workflow was applied in combination with quantitative and qualitative analyses. In the subsequent analyses, we observed four sources of imprecise predictions: incorrect data, incomplete patient data, outvoting of relevant observations, and an incorrect model. The four problems were then solved by modifying the data and the model. The presented validation effort is related to the model complexity: for simpler models, the validation workflow is the same, although it may require fewer validation methods. The validation success is related to the model's well-founded knowledge base. Validation of the remaining parts of the laryngeal cancer model may disclose additional sources of imprecise predictions.
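
    As a toy illustration of how a Bayesian network yields mathematically reproducible, transparent advice (emphatically not the 303-variable model evaluated in the paper), the sketch below computes a posterior over a single hypothetical treatment-relevant variable from one observation via Bayes' rule.

    ```python
    # Tiny illustration (not the paper's 303-variable model) of how a
    # Bayesian network yields reproducible, transparent advice: the
    # posterior of a treatment-relevant state given one observation.

    p_state = {"T3": 0.3, "T1": 0.7}    # hypothetical prior over tumor stage
    p_obs_given_state = {               # P(imaging finding | stage), hypothetical
        ("positive", "T3"): 0.8,
        ("positive", "T1"): 0.2,
    }

    def posterior(obs):
        """Bayes' rule: normalize prior * likelihood over all states."""
        unnorm = {s: p_state[s] * p_obs_given_state[(obs, s)] for s in p_state}
        z = sum(unnorm.values())
        return {s: v / z for s, v in unnorm.items()}

    print(posterior("positive"))   # {'T3': ~0.632, 'T1': ~0.368}
    ```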

  14. Use of Synthetic Single-Stranded Oligonucleotides as Artificial Test Soiling for Validation of Surgical Instrument Cleaning Processes

    PubMed Central

    Wilhelm, Nadja; Perle, Nadja; Simmoteit, Robert; Schlensak, Christian; Wendel, Hans P.; Avci-Adali, Meltem

    2014-01-01

    Surgical instruments are often strongly contaminated with patients' blood and tissues, possibly containing pathogens. The reuse of contaminated instruments without adequate cleaning and sterilization can cause postoperative inflammation and the transmission of infectious diseases from one patient to another. Thus, based on the stringent sterility requirements, the development of highly efficient, validated cleaning processes is necessary. Here, we use for the first time synthetic single-stranded DNA (ssDNA_ODN), which does not appear in nature, as a test soiling to evaluate the cleaning efficiency of routine washing processes. Stainless steel test objects were coated with a certain amount of ssDNA_ODN. After cleaning, the amount of residual ssDNA_ODN on the test objects was determined using quantitative real-time PCR. The established method is highly specific and sensitive, with a detection limit of 20 fg, and enables the determination of the cleaning efficiency of medical cleaning processes under different conditions to obtain optimal settings for the effective cleaning and sterilization of instruments. The use of this highly sensitive method for the validation of cleaning processes can prevent, to a significant extent, the insufficient cleaning of surgical instruments and thus the transmission of pathogens to patients. PMID:24672793
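
    A minimal sketch of the quantification arithmetic behind such a validation follows, assuming a standard qPCR calibration curve Ct = m·log10(quantity) + b; the slope, intercept, and amounts below are hypothetical placeholders, not values from the study.

    ```python
    import math

    # Sketch of qPCR-based quantification of residual ssDNA test soiling.
    # Assumes a standard curve Ct = m*log10(quantity) + b; the slope and
    # intercept below are hypothetical, not values from the study.

    m, b = -3.32, 38.0   # slope (~100% PCR efficiency) and intercept

    def quantity_from_ct(ct):
        """Invert the standard curve: quantity in fg from a measured Ct."""
        return 10 ** ((ct - b) / m)

    applied = 1e6                       # fg of ssDNA applied to the test object
    residual = quantity_from_ct(30.0)   # fg recovered after cleaning
    log_reduction = math.log10(applied / residual)
    print(f"residual ~ {residual:.0f} fg, log reduction ~ {log_reduction:.1f}")
    ```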

  15. A rigorous approach to facilitate and guarantee the correctness of the genetic testing management in human genome information systems.

    PubMed

    Araújo, Luciano V; Malkowski, Simon; Braghetto, Kelly R; Passos-Bueno, Maria R; Zatz, Mayana; Pu, Calton; Ferreira, João E

    2011-12-22

    Recent medical and biological technology advances have stimulated the development of new testing systems that provide huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent, validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we join two important aspects: 1) process scalability, achieved through the relational database implementation, and 2) process correctness, ensured using process algebra. Furthermore, the software allows end users to define genetic testing without requiring any knowledge of business process notation or process algebra. This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have proven the feasibility and shown the usability benefits of a rigorous approach that is able to specify, validate, and perform genetic testing using easy end-user interfaces.

  16. A rigorous approach to facilitate and guarantee the correctness of the genetic testing management in human genome information systems

    PubMed Central

    2011-01-01

    Background Recent medical and biological technology advances have stimulated the development of new testing systems that provide huge, varied amounts of molecular and clinical data. Growing data volumes pose significant challenges for information processing systems in research centers. Additionally, the routines of a genomics laboratory are typically characterized by high parallelism in testing and constant procedure changes. Results This paper describes a formal approach to address this challenge through the implementation of a genetic testing management system applied to a human genome laboratory. We introduce the Human Genome Research Center Information System (CEGH) in Brazil, a system that is able to support constant changes in human genome testing and can provide patients with updated results based on the most recent, validated genetic knowledge. Our approach uses a common repository for process planning to ensure reusability, specification, instantiation, monitoring, and execution of processes, which are defined using a relational database and rigorous control flow specifications based on process algebra (ACP). The main difference between our approach and related work is that we join two important aspects: 1) process scalability, achieved through the relational database implementation, and 2) process correctness, ensured using process algebra. Furthermore, the software allows end users to define genetic testing without requiring any knowledge of business process notation or process algebra. Conclusions This paper presents the CEGH information system, a Laboratory Information Management System (LIMS) based on a formal framework to support genetic testing management for Mendelian disorder studies. We have proven the feasibility and shown the usability benefits of a rigorous approach that is able to specify, validate, and perform genetic testing using easy end-user interfaces. PMID:22369688

  17. Validation of Land Surface Temperature from Sentinel-3

    NASA Astrophysics Data System (ADS)

    Ghent, D.

    2017-12-01

    One of the main objectives of the Sentinel-3 mission is to measure sea- and land-surface temperature with high-end accuracy and reliability in support of environmental and climate monitoring in an operational context. Calibration and validation are thus key criteria for operationalization within the framework of the Sentinel-3 Mission Performance Centre (S3MPC). Land surface temperature (LST) has a long heritage of satellite observations, which have facilitated our understanding of land surface and climate change processes such as desertification, urbanization, deforestation, and land/atmosphere coupling. These observations have been acquired from a variety of satellite instruments on platforms in both low-earth orbit and geostationary orbit. Retrieval accuracy can be a challenge, though; surface emissivities can be highly variable owing to the heterogeneity of the land, and atmospheric effects caused by the presence of aerosols and by water vapour absorption can bias the retrieved LST. As such, rigorous validation is critical in order to assess the quality of the data and the associated uncertainties. Validation of the level-2 SL_2_LST product, which became freely available on an operational basis on 5 July 2017, builds on an established validation protocol for satellite-based LST. This set of guidelines provides a standardized framework for structuring LST validation activities. The protocol introduces a four-pronged approach which can be summarised thus: i) in situ validation where ground-based observations are available; ii) radiance-based validation over sites that are homogeneous in emissivity; iii) intercomparison with retrievals from other satellite sensors; iv) time-series analysis to identify artefacts on an interannual time-scale. This multi-dimensional approach is a necessary requirement for assessing the performance of the LST algorithm for the Sea and Land Surface Temperature Radiometer (SLSTR), which is designed around biome-based coefficients, thus emphasizing the importance of non-traditional forms of validation such as radiance-based techniques. Here we present examples of the ongoing routine application of the protocol to operational Sentinel-3 LST data.

  18. Off-target model based OPC

    NASA Astrophysics Data System (ADS)

    Lu, Mark; Liang, Curtis; King, Dion; Melvin, Lawrence S., III

    2005-11-01

    Model-based optical proximity correction (OPC) has become an indispensable tool for achieving wafer-pattern-to-design fidelity at current manufacturing process nodes. Most model-based OPC is performed considering the nominal process condition, with limited consideration of through-process manufacturing robustness. This study examines the use of off-target process models - models that represent non-nominal process states such as would occur with a dose or focus variation - to understand and manipulate the final pattern correction into a more process-robust configuration. The study first examines and validates the process of generating an off-target model, then examines the quality of the off-target model. Once the off-target model is proven, it is used to demonstrate methods of generating process-robust corrections. The concepts are demonstrated using a 0.13 μm logic gate process. Preliminary indications show success in both off-target model production and process-robust corrections. With these off-target models as tools, mask production cycle times can be reduced.

  19. Application of a mechanistic model as a tool for on-line monitoring of pilot scale filamentous fungal fermentation processes-The importance of evaporation effects.

    PubMed

    Mears, Lisa; Stocks, Stuart M; Albaek, Mads O; Sin, Gürkan; Gernaey, Krist V

    2017-03-01

    A mechanistic model-based soft sensor is developed and validated for 550 L filamentous fungus fermentations operated at Novozymes A/S. The soft sensor comprises a parameter estimation block based on a stoichiometric balance, coupled to a dynamic process model. The on-line parameter estimation block models the changing rates of formation of product, biomass, and water, and the rate of consumption of feed, using standard, available on-line measurements. This parameter estimation block is coupled to a mechanistic process model, which solves for the current states of biomass, product, substrate, dissolved oxygen, and mass, as well as other process parameters including kLa, viscosity, and the partial pressure of CO2. State estimation at this scale requires a robust mass model including evaporation, a factor not often considered at smaller scales of operation. The model is developed using a historical dataset of 11 batches from the fermentation pilot plant (550 L) at Novozymes A/S. The model is then implemented on-line in 550 L fermentation processes operated at Novozymes A/S in order to validate the state estimator on 14 new batches utilizing a new strain. The product concentration in the validation batches was predicted with an average root mean sum of squared error (RMSSE) of 16.6%. In addition, calculation of the Janus coefficient for the validation batches shows a suitably calibrated model. The robustness of the model prediction is assessed with respect to the accuracy of the input data, and parameter estimation uncertainty is also quantified. The application of this on-line state estimator allows for on-line monitoring of pilot-scale batches, including real-time estimates of multiple parameters that cannot be monitored on-line directly. Successful application of a soft sensor at this scale allows for improved process monitoring and opens up further possibilities for on-line control algorithms utilizing these on-line model outputs. Biotechnol. Bioeng. 2017;114: 589-599. © 2016 Wiley Periodicals, Inc.
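
    A minimal sketch of the two validation statistics mentioned, under stated assumptions: the abstract does not define the RMSSE normalisation, so a relative root-mean-squared error is used here, and the Janus coefficient is taken in its usual sense as the ratio of validation to calibration mean squared error (values near 1 indicating the calibrated model still fits new batches). All numbers are hypothetical.

    ```python
    # Sketch of the two validation statistics named above, with hedged
    # definitions: a relative RMS error (the exact RMSSE normalisation is
    # not given in the abstract) and the Janus coefficient as the ratio of
    # validation to calibration mean squared error.

    def mse(pred, meas):
        return sum((p - m) ** 2 for p, m in zip(pred, meas)) / len(pred)

    def rmsse_percent(pred, meas):
        """RMS error as a percentage of the mean measured value."""
        scale = sum(meas) / len(meas)
        return 100 * mse(pred, meas) ** 0.5 / scale

    def janus(pred_val, meas_val, pred_cal, meas_cal):
        """~1 means the model fits new batches as well as calibration data."""
        return mse(pred_val, meas_val) / mse(pred_cal, meas_cal)

    # hypothetical product-concentration trajectories (g/L)
    cal_meas, cal_pred = [10, 20, 30], [11, 19, 31]
    val_meas, val_pred = [12, 22, 32], [10, 24, 30]
    print(rmsse_percent(val_pred, val_meas))
    print(janus(val_pred, val_meas, cal_pred, cal_meas))
    ```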

  20. Research on the technique of large-aperture off-axis parabolic surface processing using tri-station machine and its applicability.

    PubMed

    Zhang, Xin; Luo, Xiao; Hu, Haixiang; Zhang, Xuejun

    2015-09-01

    In order to process large-aperture aspherical mirrors, we designed and constructed a tri-station machine processing center, a three-station device providing vectored feed motion on up to 10 axes. Based on this processing center, an aspherical mirror-processing model is proposed in which each station implements traversal processing of large-aperture aspherical mirrors using only two axes, while the stations are switchable, thus lowering cost and enhancing processing efficiency. The applicability of the tri-station machine is also analyzed. In addition, a simple and efficient zero-calibration method for processing is proposed. To validate the processing model, we used the processing center to process an off-axis parabolic SiC mirror with an aperture diameter of 1450 mm. The experimental results indicate that, with a one-step iterative process, the peak-to-valley (PV) and root-mean-square (RMS) errors of the mirror converged from 3.441 and 0.5203 μm to 2.637 and 0.2962 μm, respectively, the RMS being reduced by 43%. The validity and high accuracy of the model are thereby demonstrated.

  1. Process evaluation to explore internal and external validity of the "Act in Case of Depression" care program in nursing homes.

    PubMed

    Leontjevas, Ruslan; Gerritsen, Debby L; Koopmans, Raymond T C M; Smalbrugge, Martin; Vernooij-Dassen, Myrra J F J

    2012-06-01

    A multidisciplinary, evidence-based care program to improve the management of depression in nursing home residents, "Act in case of Depression" (AiD), was implemented and tested using a stepped-wedge design in 23 nursing homes (NHs). Before the effect analyses, AiD process data were evaluated on sampling quality (recruitment and randomization, reach) and intervention quality (relevance and feasibility, extent to which AiD was performed), which can be used for understanding internal and external validity. In this article, a model is presented that divides process evaluation data into first- and second-order process data. Qualitative and quantitative data based on residents' personal files, interviews with nursing home professionals, and a research database were analyzed according to the process evaluation components of sampling quality and intervention quality. The pattern of residents' informed consent rates differed between dementia special care units and somatic units during the study. The nursing home staff was satisfied with the AiD program and reported that the program was feasible and relevant. With the exception of the first screening step (nursing staff members using a short observer-based depression scale), AiD components were not performed fully by NH staff as prescribed in the AiD protocol. Although NH staff found the program relevant and feasible and was satisfied with its content, individual AiD components may differ in feasibility. The results on sampling quality implied that statistical analyses of AiD effectiveness should account for the type of unit, whereas the findings on intervention quality implied that, next to the type of unit, analyses should account for the extent to which individual AiD program components were performed. In general, this first-order process data evaluation confirmed the internal and external validity of the AiD trial and enabled further statistical fine-tuning. The importance of evaluating first-order process data before executing statistical effect analyses is thus underlined. Copyright © 2012 American Medical Directors Association, Inc. Published by Elsevier Inc. All rights reserved.

  2. Systematic reviews, systematic error and the acquisition of clinical knowledge

    PubMed Central

    2010-01-01

    Background Since its inception, evidence-based medicine and its application through systematic reviews, has been widely accepted. However, it has also been strongly criticised and resisted by some academic groups and clinicians. One of the main criticisms of evidence-based medicine is that it appears to claim to have unique access to absolute scientific truth and thus devalues and replaces other types of knowledge sources. Discussion The various types of clinical knowledge sources are categorised on the basis of Kant's categories of knowledge acquisition, as being either 'analytic' or 'synthetic'. It is shown that these categories do not act in opposition but rather, depend upon each other. The unity of analysis and synthesis in knowledge acquisition is demonstrated during the process of systematic reviewing of clinical trials. Systematic reviews constitute comprehensive synthesis of clinical knowledge but depend upon plausible, analytical hypothesis development for the trials reviewed. The dangers of systematic error regarding the internal validity of acquired knowledge are highlighted on the basis of empirical evidence. It has been shown that the systematic review process reduces systematic error, thus ensuring high internal validity. It is argued that this process does not exclude other types of knowledge sources. Instead, amongst these other types it functions as an integrated element during the acquisition of clinical knowledge. Conclusions The acquisition of clinical knowledge is based on interaction between analysis and synthesis. Systematic reviews provide the highest form of synthetic knowledge acquisition in terms of achieving internal validity of results. In that capacity it informs the analytic knowledge of the clinician but does not replace it. PMID:20537172

  3. Beginning science teachers' performances: Assessment in times of reform

    NASA Astrophysics Data System (ADS)

    Budzinsky, Fie K.

    2000-10-01

    The current reform in science education and the research on effective teaching and student learning have reinforced the importance of teacher competency. To better measure performance in the teaching of science, performance assessment has been added to Connecticut's licensure process for beginning science teachers. Teaching portfolios are used to document teaching and learning over time. Portfolios, however, are not without problems. One of the major concerns with the portfolio assessment process is its subjectivity. Assessors may not have opportunities to ask clarifying or follow-up questions to enhance the interpretation of a teacher's performance. In addition, portfolios often contain components based on self-documentation, which are subjective. Furthermore, the use of portfolios raises test equity issues. These concerns present challenges for those in charge of establishing the validity of a portfolio-based licensure process. In high-stakes decision processes, such as teaching licensure, the validity of the assessment instruments must be studied. The primary purpose of this study was to explore the criterion-related validity of the Connecticut State Department of Education's Beginning Science Teaching Portfolio by comparing the interpretations of performances from science teaching portfolios to those derived from another assessment method, the Expert Science Teaching Educational and Evaluation Model (ESTEEM). The analysis of correlations between the Beginning Science Teaching Portfolio and ESTEEM instrument scores was the primary method for establishing support for validity. The results indicated moderate correlations between all Beginning Science Teaching Portfolio and ESTEEM category and total variables. Multiple regression was used to examine whether differences existed in beginning science teachers' performances based on gender, poverty group, school level, and science discipline taught. None of these variables significantly contributed to the explanation of variance in the ESTEEM scores (p > .05), but poverty group and gender were significant predictors of portfolio performance, accounting for 21% of the total variance. Finally, data from interviews, written surveys, and beginning teacher attendance records at state-supported seminars were analyzed qualitatively and quantitatively. This information provided insight into the quality and quantity of support beginning science teachers received in their efforts to document, via the science teaching portfolio, their ability to implement the Connecticut Professional Science Teaching Standards.

  4. V&V Plan for FPGA-based ESF-CCS Using System Engineering Approach.

    NASA Astrophysics Data System (ADS)

    Maerani, Restu; Mayaka, Joyce; El Akrat, Mohamed; Cheon, Jung Jae

    2018-02-01

    Instrumentation and Control (I&C) systems play an important role in maintaining the safety of Nuclear Power Plant (NPP) operation. However, most current I&C safety systems are based on Programmable Logic Controller (PLC) hardware, which is difficult to verify and validate and is susceptible to software common cause failure. Therefore, a plan is needed for the replacement of PLC-based safety systems, such as the Engineered Safety Feature - Component Control System (ESF-CCS), with Field Programmable Gate Arrays (FPGAs). By using a systems engineering approach, which ensures traceability in every phase of the life cycle, from system requirements and design implementation to verification and validation, the system development is guaranteed to be in line with the regulatory requirements. The verification process will ensure that the customers' and stakeholders' needs are satisfied in a high-quality, trustworthy, cost-efficient and schedule-compliant manner throughout the system's entire life cycle. The benefit of the V&V plan is to ensure that the FPGA-based ESF-CCS is correctly built and that the measured performance indicators confirm that "we are doing the right thing" during the re-engineering of the FPGA-based ESF-CCS.

  5. Space processes for extended low-G testing

    NASA Technical Reports Server (NTRS)

    Steurer, W. H.; Kaye, S.; Gorham, D. J.

    1973-01-01

    Results are presented of an investigation into verifying the capabilities of space processes in ground-based experiments with limited low-g periods. Limited-time experiments were conducted with the processes. A valid representation of the complete process cycle was achieved at low-g periods ranging from 40 to 390 seconds. A minimum equipment inventory is defined. A modular equipment design, adopted to assure low cost and high program flexibility, is presented, as well as procedures and data established for the synthesis and definition of dedicated and mixed rocket payloads.

  6. Using task analysis to improve the requirements elicitation in health information system.

    PubMed

    Teixeira, Leonor; Ferreira, Carlos; Santos, Beatriz Sousa

    2007-01-01

    This paper describes the application of task analysis within the design process of a Web-based information system for managing clinical information in hemophilia care, in order to improve the requirements elicitation and, consequently, to validate the domain model obtained in a previous phase of the design process (system analysis). The use of task analysis in this case proved to be a practical and efficient way to improve the requirements engineering process by involving users in the design process.

  7. Causation and Validation of Nursing Diagnoses: A Middle Range Theory.

    PubMed

    de Oliveira Lopes, Marcos Venícios; da Silva, Viviane Martins; Herdman, T Heather

    2017-01-01

    To describe a predictive middle range theory (MRT) that provides a process for validation and incorporation of nursing diagnoses in clinical practice. Literature review. The MRT includes definitions, a pictorial scheme, propositions, causal relationships, and translation to nursing practice. The MRT can be a useful alternative for education, research, and translation of this knowledge into practice. This MRT can assist clinicians in understanding clinical reasoning, based on temporal logic and spectral interaction among elements of nursing classifications. In turn, this understanding will improve the use and accuracy of nursing diagnosis, which is a critical component of the nursing process that forms a basis for nursing practice standards worldwide. © 2015 NANDA International, Inc.

  8. A Recipe for Soft Fluidic Elastomer Robots

    PubMed Central

    Marchese, Andrew D.; Katzschmann, Robert K.

    2015-01-01

    This work provides approaches to designing and fabricating soft fluidic elastomer robots. That is, three viable actuator morphologies composed entirely from soft silicone rubber are explored, and these morphologies are differentiated by their internal channel structure, namely, ribbed, cylindrical, and pleated. Additionally, three distinct casting-based fabrication processes are explored: lamination-based casting, retractable-pin-based casting, and lost-wax-based casting. Furthermore, two ways of fabricating a multiple DOF robot are explored: casting the complete robot as a whole and casting single degree of freedom (DOF) segments with subsequent concatenation. We experimentally validate each soft actuator morphology and fabrication process by creating multiple physical soft robot prototypes. PMID:27625913

  9. A Recipe for Soft Fluidic Elastomer Robots.

    PubMed

    Marchese, Andrew D; Katzschmann, Robert K; Rus, Daniela

    2015-03-01

    This work provides approaches to designing and fabricating soft fluidic elastomer robots. That is, three viable actuator morphologies composed entirely from soft silicone rubber are explored, and these morphologies are differentiated by their internal channel structure, namely, ribbed, cylindrical, and pleated. Additionally, three distinct casting-based fabrication processes are explored: lamination-based casting, retractable-pin-based casting, and lost-wax-based casting. Furthermore, two ways of fabricating a multiple DOF robot are explored: casting the complete robot as a whole and casting single degree of freedom (DOF) segments with subsequent concatenation. We experimentally validate each soft actuator morphology and fabrication process by creating multiple physical soft robot prototypes.

  10. Flight-Test Validation and Flying Qualities Evaluation of a Rotorcraft UAV Flight Control System

    NASA Technical Reports Server (NTRS)

    Mettler, Bernard; Tuschler, Mark B.; Kanade, Takeo

    2000-01-01

    This paper presents a process of design, flight-test validation, and flying qualities evaluation of a flight control system for a rotorcraft-based unmanned aerial vehicle (RUAV). The keystone of this process is an accurate flight-dynamic model of the aircraft, derived by using system identification modeling. The model captures the most relevant dynamic features of our unmanned rotorcraft and explicitly accounts for the presence of a stabilizer bar. Using the identified model, we were able to determine the performance margins of our original control system and identify limiting factors. The performance limitations were addressed, and the attitude control system was optimized for three different performance levels: slow, medium, and fast. The optimized control laws will be implemented in our RUAV. We will first determine the validity of our control design approach by flight-test validating our optimized controllers. Subsequently, we will fly a series of maneuvers with the three optimized controllers to determine the level of flying qualities that can be attained. The outcome will enable us to draw important conclusions on the flying qualities requirements for small-scale RUAVs.

  11. Validation and psychometric evaluation of physical activity belief scale among patients with type 2 diabetes mellitus: an application of health action process approach.

    PubMed

    Rohani, Hosein; Eslami, Ahmad Ali; Ghaderi, Arsalan; Jafari-Koshki, Tohid; Sadeghi, Erfan; Bidkhori, Mohammad; Raei, Mehdi

    2016-01-01

    A moderate increase in physical activity (PA) may be helpful in preventing or postponing the complications of type 2 diabetes mellitus (T2DM). The aim of this study was to assess the psychometric properties of a health action process approach (HAPA)-based PA inventory among T2DM patients. In 2015, this cross-sectional study was carried out on 203 participants recruited by convenience sampling in Isfahan, Iran. Content and face validity were confirmed by a panel of experts, and the comments of 9 outpatients on the inventory were also investigated. The items were then administered to the 203 T2DM patients. Construct validity was assessed using exploratory factor analysis and structural equation modeling confirmatory factor analysis. Reliability was assessed with Cronbach's alpha and the intraclass correlation coefficient (ICC). Content validity was acceptable (CVR = 0.62, CVI = 0.89). Exploratory factor analysis extracted seven factors (risk perception, action self-efficacy, outcome expectancies, maintenance self-efficacy, action and coping planning, behavioral intention, and recovery self-efficacy) explaining 82.23% of the variance. The HAPA model had an acceptable fit to the observations (χ2 = 3.21, df = 3, P = 0.38; RMSEA = 0.06; AGFI = 0.90; PGFI = 0.12). Cronbach's alpha for the scales ranged from about 0.63 to 0.97, and the ICC from 0.862 to 0.988. The findings of the present study provide initial support for the reliability and validity of the HAPA-based PA inventory among patients with T2DM.
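
    For reference, Cronbach's alpha, one of the two reliability statistics reported above, is straightforward to compute: alpha = k/(k-1) * (1 - sum of item variances / variance of the total score). A minimal sketch with hypothetical item responses follows.

    ```python
    # Minimal sketch of Cronbach's alpha for one scale of an inventory:
    # alpha = k/(k-1) * (1 - sum(item variances) / variance(total score)).
    # The ratings below are hypothetical responses, not study data.

    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    def cronbach_alpha(items):
        """items: one list of responses per item, aligned by respondent."""
        k = len(items)
        totals = [sum(resp) for resp in zip(*items)]   # per-respondent totals
        item_var = sum(variance(it) for it in items)
        return k / (k - 1) * (1 - item_var / variance(totals))

    items = [
        [4, 5, 3, 4, 5],   # item 1 responses from five respondents
        [4, 4, 3, 5, 5],   # item 2
        [3, 5, 2, 4, 4],   # item 3
    ]
    print(round(cronbach_alpha(items), 3))   # 0.877
    ```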

  12. Rapid Spoligotyping of Mycobacterium tuberculosis Complex Bacteria by Use of a Microarray System with Automatic Data Processing and Assignment

    PubMed Central

    Ruettger, Anke; Nieter, Johanna; Skrypnyk, Artem; Engelmann, Ines; Ziegler, Albrecht; Moser, Irmgard; Monecke, Stefan; Ehricht, Ralf

    2012-01-01

    Membrane-based spoligotyping has been converted to DNA microarray format to qualify it for high-throughput testing. We have shown the assay's validity and suitability for direct typing from tissue and detecting new spoligotypes. Advantages of the microarray methodology include rapidity, ease of operation, automatic data processing, and affordability. PMID:22553239

  13. Rapid spoligotyping of Mycobacterium tuberculosis complex bacteria by use of a microarray system with automatic data processing and assignment.

    PubMed

    Ruettger, Anke; Nieter, Johanna; Skrypnyk, Artem; Engelmann, Ines; Ziegler, Albrecht; Moser, Irmgard; Monecke, Stefan; Ehricht, Ralf; Sachse, Konrad

    2012-07-01

    Membrane-based spoligotyping has been converted to DNA microarray format to qualify it for high-throughput testing. We have shown the assay's validity and suitability for direct typing from tissue and detecting new spoligotypes. Advantages of the microarray methodology include rapidity, ease of operation, automatic data processing, and affordability.

  14. Multiscale GPS tomography during COPS: validation and applications

    NASA Astrophysics Data System (ADS)

    Champollion, Cédric; Flamant, Cyrille; Masson, Frédéric; Gégout, Pascal; Boniface, Karen; Richard, Evelyne

    2010-05-01

    An accurate 3D description of the water vapour field is of interest for process studies such as convection initiation. None of the current techniques (LIDAR, satellite, radio soundings, GPS) can provide an all-weather, continuous 3D field of moisture. The combination of GPS tomography with radio soundings (and/or LIDAR) has been used for such process studies, combining the advantages of vertically resolved soundings with the high temporal density of GPS measurements. GPS tomography has been used at short scale (10 km horizontal resolution in a 50 km² area) for process studies such as the ESCOMPTE experiment (Bastin et al., 2005) and at larger scale (50 km horizontal resolution) during IHOP_2002, but no extensive statistical validation has been done so far. The overarching goal of the COPS field experiment is to advance the quality of forecasts of orographically induced convective precipitation by four-dimensional observations and modeling of its life cycle, identifying the physical and chemical processes responsible for deficiencies in QPF over low-mountain regions. During the COPS field experiment, a network of about 100 GPS stations operated continuously for three months in an area of 500 km² in the east of France (Vosges Mountains) and the west of Germany (Black Forest). While the mean spacing between the GPS stations is about 50 km, an east-west GPS profile with a spacing of about 10 km is dedicated to high-resolution tomography. One major goal of the GPS COPS experiment is to validate GPS tomography at different spatial resolutions. Validation is based on additional radio soundings and airborne/ground-based LIDAR measurements. The number and high quality of vertically resolved water vapor observations provide a unique dataset for GPS tomography validation. Numerous tests have been performed on real data to show the types of water vapor structures that can be imaged by GPS tomography, depending on the assimilation of additional data (radio soundings), the resolution of the tomography grid, and the density of the GPS network. Finally, applications to different case studies will be briefly presented.
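
    The inversion underlying GPS water-vapour tomography is not detailed in this abstract. As a rough sketch under stated assumptions, slant observations are modelled as path integrals d = Gx over voxels, and a damped least-squares solution handles the ill-posedness; the 3-voxel geometry and all values below are hypothetical.

    ```python
    import numpy as np

    # Toy sketch of the inversion behind GPS water-vapour tomography:
    # slant observations d are path integrals of voxel water vapour x
    # through a geometry matrix G (d = G x); with few rays the system is
    # ill-posed, so a damped least-squares solution is used.

    G = np.array([[1.0, 1.0, 0.0],    # ray 1 crosses voxels 1 and 2
                  [0.0, 1.0, 1.0],    # ray 2 crosses voxels 2 and 3
                  [1.0, 0.0, 1.0]])   # ray 3 crosses voxels 1 and 3
    x_true = np.array([5.0, 8.0, 3.0])          # "true" voxel water vapour
    rng = np.random.default_rng(0)
    d = G @ x_true + rng.normal(0, 0.1, 3)      # noisy slant observations

    lam = 0.1   # damping (regularisation) weight
    x_hat = np.linalg.solve(G.T @ G + lam * np.eye(3), G.T @ d)
    print(x_hat)   # close to [5, 8, 3]
    ```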

  15. A process-based inventory model for landfill CH4 emissions inclusive of seasonal soil microclimate and CH4 oxidation

    USDA-ARS?s Scientific Manuscript database

    We have developed and field-validated an annual inventory model for California landfill CH4 emissions that incorporates both site-specific soil properties and soil microclimate modeling coupled to 0.5° scale global climatic models. Based on 1-D diffusion, CALMIM (California Landfill Methane Inventor...

  16. What Does It Take to Scale Up and Sustain Evidence-Based Practices?

    ERIC Educational Resources Information Center

    Klingner, Janette K.; Boardman, Alison G.; Mcmaster, Kristen L.

    2013-01-01

    This article discusses the strategic scaling up of evidence-based practices. The authors draw from the scholarly work of fellow special education researchers and from the field of learning sciences. The article defines scaling up as the process by which researchers or educators initially implement interventions on a small scale, validate them, and…

  17. An AHP-Based Weighted Analysis of Network Knowledge Management Platforms for Elementary School Students

    ERIC Educational Resources Information Center

    Lee, Chung-Ping; Lou, Shi-Jer; Shih, Ru-Chu; Tseng, Kuo-Hung

    2011-01-01

    This study uses the analytical hierarchy process (AHP) to quantify important knowledge management behaviors and to analyze the weight scores of elementary school students' behaviors in knowledge transfer, sharing, and creation. Based on the analysis of Expert Choice and tests for validity and reliability, this study identified the weight scores of…

  18. Mapping fuels at multiple scales: landscape application of the fuel characteristic classification system.

    Treesearch

    D. McKenzie; C.L. Raymond; L.-K.B. Kellogg; R.A. Norheim; A.G. Andreu; A.C. Bayard; K.E. Kopper; E. Elman

    2007-01-01

    Fuel mapping is a complex and often multidisciplinary process, involving remote sensing, ground-based validation, statistical modeling, and knowledge-based systems. The scale and resolution of fuel mapping depend both on objectives and availability of spatial data layers. We demonstrate use of the Fuel Characteristic Classification System (FCCS) for fuel mapping at two...

  19. MESUR: USAGE-BASED METRICS OF SCHOLARLY IMPACT

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    BOLLEN, JOHAN; RODRIGUEZ, MARKO A.; VAN DE SOMPEL, HERBERT

    2007-01-30

    The evaluation of scholarly communication items is now largely a matter of expert opinion or of metrics derived from citation data. Both approaches can fail to take into account the myriad of factors that shape scholarly impact. Usage data has emerged as a promising complement to existing methods of assessment, but the formal groundwork to reliably and validly apply usage-based metrics of scholarly impact is lacking. The Andrew W. Mellon Foundation funded MESUR project constitutes a systematic effort to define, validate and cross-validate a range of usage-based metrics of scholarly impact by creating a semantic model of the scholarly communication process. The constructed model will serve as the basis for creating a large-scale semantic network that seamlessly relates citation, bibliographic and usage data from a variety of sources. A subsequent program that uses the established semantic network as a reference data set will determine the characteristics and semantics of a variety of usage-based metrics of scholarly impact. This paper outlines the architecture and methodology adopted by the MESUR project and its future direction.

  20. Do two machine-learning based prognostic signatures for breast cancer capture the same biological processes?

    PubMed

    Drier, Yotam; Domany, Eytan

    2011-03-14

    The fact that there is very little if any overlap between the genes of different prognostic signatures for early breast cancer is well documented. The reasons for this apparent discrepancy have been attributed to the limits of simple machine-learning identification and ranking techniques, and the biological relevance and meaning of the prognostic gene lists were questioned. Subsequently, proponents of the prognostic gene lists claimed that different lists do capture similar underlying biological processes and pathways. The present study places the validity of this claim under scrutiny for two important gene lists that are at the focus of current large-scale validation efforts. We performed careful enrichment analysis, controlling the effects of multiple testing in a manner that takes into account the nested dependent structure of gene ontologies. In contradiction to several previous publications, we find that the only biological process or pathway for which statistically significant concordance can be claimed is cell proliferation, a process whose relevance and prognostic value was well known long before gene expression profiling. The claims reported by others of wider concordance between the biological processes captured by the two prognostic signatures were found either to lack statistical rigor or to be based on addressing a different question.
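
    At its core, the enrichment analysis referred to is a tail-probability test per gene-ontology term, followed by the multiple-testing control described above. A minimal sketch of the standard hypergeometric formulation, with hypothetical counts, is shown below.

    ```python
    from scipy.stats import hypergeom

    # Sketch of a per-term enrichment test: given M genes in total, n of
    # them annotated to a biological process, and a signature of N genes
    # of which k are annotated, the enrichment p-value is the
    # hypergeometric tail P(X >= k). The counts below are hypothetical;
    # in practice the test is repeated per GO term and then corrected for
    # multiple testing (here, respecting the nested ontology structure).

    M, n, N, k = 20000, 400, 70, 12
    p_value = hypergeom.sf(k - 1, M, n, N)   # P(X >= k)
    print(p_value)
    ```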

  1. Eye-Tracking as a Tool in Process-Oriented Reading Test Validation

    ERIC Educational Resources Information Center

    Solheim, Oddny Judith; Uppstad, Per Henning

    2011-01-01

    The present paper addresses the continuous need for methodological reflection on how to validate inferences made on the basis of test scores. Validation is a process that requires many lines of evidence. In this article we discuss the potential of eye tracking methodology in process-oriented reading test validation. Methodological considerations…

  2. Using natural language processing for identification of herpes zoster ophthalmicus cases to support population-based study.

    PubMed

    Zheng, Chengyi; Luo, Yi; Mercado, Cheryl; Sy, Lina; Jacobsen, Steven J; Ackerson, Brad; Lewin, Bruno; Tseng, Hung Fu

    2018-06-19

    Diagnosis codes are inadequate for accurately identifying herpes zoster ophthalmicus (HZO), and there is a significant lack of population-based studies on HZO due to the high expense of manually reviewing medical records. The objectives were to assess whether HZO can be identified from clinical notes using natural language processing (NLP) and to investigate the epidemiology of HZO among the HZ population based on the developed approach. This was a retrospective cohort analysis of 49,914 southern California residents aged over 18 years who had a new diagnosis of HZ. An NLP-based algorithm was developed, validated against a manually curated validation dataset (n=461), and then applied to over 1 million clinical notes associated with the study population. HZO and non-HZO cases were compared by age, sex, race, and comorbidities, and the accuracy of the NLP algorithm was measured. The NLP algorithm achieved 95.6% sensitivity and 99.3% specificity. Compared to the diagnosis codes, NLP identified significantly more HZO cases among the HZ population (13.9% versus 1.7%). Compared to the non-HZO group, the HZO group was older, included more males and more Whites, and had more outpatient visits. We developed and validated an automatic method to identify HZO cases with high accuracy. As one of the largest studies on HZO, our finding emphasizes the importance of preventing HZ in the elderly population. This method can be a valuable tool to support population-based studies and clinical care of HZO in the era of big data. This article is protected by copyright. All rights reserved.
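
    The reported accuracy figures reduce to standard confusion-matrix arithmetic. The sketch below uses hypothetical counts chosen only to land near the reported rates; they are not the study's actual validation tallies.

    ```python
    # Sketch of the validation arithmetic for an NLP algorithm against a
    # manually curated reference set: sensitivity = TP/(TP+FN),
    # specificity = TN/(TN+FP). Counts are hypothetical, chosen only to
    # reproduce rates close to those reported (95.6% / 99.3%).

    tp, fn = 130, 6    # HZO cases found / missed by the NLP algorithm
    tn, fp = 433, 3    # non-HZO notes correctly / incorrectly flagged

    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"sensitivity {sensitivity:.1%}, specificity {specificity:.1%}")
    ```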

  3. Predictive Big Data Analytics: A Study of Parkinson's Disease Using Large, Complex, Heterogeneous, Incongruent, Multi-Source and Incomplete Observations.

    PubMed

    Dinov, Ivo D; Heavner, Ben; Tang, Ming; Glusman, Gustavo; Chard, Kyle; Darcy, Mike; Madduri, Ravi; Pa, Judy; Spino, Cathie; Kesselman, Carl; Foster, Ian; Deutsch, Eric W; Price, Nathan D; Van Horn, John D; Ames, Joseph; Clark, Kristi; Hood, Leroy; Hampstead, Benjamin M; Dauer, William; Toga, Arthur W

    2016-01-01

    A unique archive of Big Data on Parkinson's Disease is collected, managed and disseminated by the Parkinson's Progression Markers Initiative (PPMI). The integration of such complex and heterogeneous Big Data from multiple sources offers unparalleled opportunities to study the early stages of prevalent neurodegenerative processes, track their progression and quickly identify the efficacies of alternative treatments. Many previous human and animal studies have examined the relationship of Parkinson's disease (PD) risk to trauma, genetics, environment, co-morbidities, or life style. The defining characteristics of Big Data (large size, incongruency, incompleteness, complexity, multiplicity of scales, and heterogeneity of information-generating sources) all pose challenges to the classical techniques for data management, processing, visualization and interpretation. We propose, implement, test and validate complementary model-based and model-free approaches for PD classification and prediction. To explore PD risk using Big Data methodology, we jointly processed complex PPMI imaging, genetics, clinical and demographic data. Collective representation of the multi-source data facilitates the aggregation and harmonization of complex data elements. This enables joint modeling of the complete data, leading to the development of Big Data analytics, predictive synthesis, and statistical validation. Using heterogeneous PPMI data, we developed a comprehensive protocol for end-to-end data characterization, manipulation, processing, cleaning, analysis and validation. Specifically, we (i) introduce methods for rebalancing imbalanced cohorts, (ii) utilize a wide spectrum of classification methods to generate consistent and powerful phenotypic predictions, and (iii) generate reproducible machine-learning based classification that enables the reporting of model parameters and diagnostic forecasting based on new data. We evaluated several complementary model-based predictive approaches, which failed to generate accurate and reliable diagnostic predictions. However, the results of several machine-learning based classification methods indicated significant power to predict Parkinson's disease in the PPMI subjects (consistent accuracy, sensitivity, and specificity exceeding 96%, confirmed using statistical n-fold cross-validation). Clinical (e.g., Unified Parkinson's Disease Rating Scale (UPDRS) scores), demographic (e.g., age), genetics (e.g., rs34637584, chr12), and derived neuroimaging biomarker (e.g., cerebellum shape index) data all contributed to the predictive analytics and diagnostic forecasting. Model-free Big Data machine learning-based classification methods (e.g., adaptive boosting, support vector machines) can outperform model-based techniques in terms of predictive precision and reliability (e.g., forecasting patient diagnosis). We observed that statistical rebalancing of cohort sizes yields better discrimination of group differences, specifically for predictive analytics based on heterogeneous and incomplete PPMI data. UPDRS scores play a critical role in predicting diagnosis, which is expected based on the clinical definition of Parkinson's disease. Even without longitudinal UPDRS data, however, the accuracy of model-free machine learning based classification is over 80%. The methods, software and protocols developed here are openly shared and can be employed to study other neurodegenerative disorders (e.g., Alzheimer's, Huntington's, amyotrophic lateral sclerosis), as well as for other predictive Big Data analytics applications.
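
    As an illustration of the confirmation step named above (statistical n-fold cross-validation of a model-free classifier such as adaptive boosting), here is a minimal sketch on synthetic data standing in for the PPMI variables; it is not the study's pipeline.

    ```python
    # Minimal sketch of n-fold cross-validation of a model-free classifier
    # (adaptive boosting, one of the methods named above). Synthetic
    # features stand in for the PPMI clinical/genetic/imaging variables.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import AdaBoostClassifier
    from sklearn.model_selection import cross_val_score

    X, y = make_classification(n_samples=400, n_features=20, random_state=0)
    clf = AdaBoostClassifier(n_estimators=100, random_state=0)
    scores = cross_val_score(clf, X, y, cv=5)   # 5-fold cross-validated accuracy
    print(scores.mean(), scores.std())
    ```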

  4. Health Sciences-Evidence Based Practice questionnaire (HS-EBP) for measuring transprofessional evidence-based practice: Creation, development and psychometric validation

    PubMed Central

    Fernández-Domínguez, Juan Carlos; de Pedro-Gómez, Joan Ernest; Morales-Asencio, José Miguel; Sastre-Fullana, Pedro; Sesé-Abad, Albert

    2017-01-01

    Introduction: Most of the EBP measuring instruments available to date present limitations both in the operationalisation of the construct and also in the rigour of their psychometric development, as revealed in the literature review performed. The aim of this paper is to provide rigorous and adequate reliability and validity evidence of the scores of a new transdisciplinary psychometric tool, the Health Sciences Evidence-Based Practice (HS-EBP), for measuring the construct EBP in Health Sciences professionals. Methods: A pilot study and a subsequent two-stage validation test sample were conducted to progressively refine the instrument until a reduced 60-item version with a five-factor latent structure was reached. Reliability was analysed through both Cronbach's alpha coefficient and intraclass correlations (ICC). Latent structure was contrasted using confirmatory factor analysis (CFA) following a model comparison approach. Evidence of criterion validity of the scores obtained was achieved by considering attitudinal resistance to change, burnout, and quality of professional life as criterion variables; while convergent validity was assessed using the Spanish version of the Evidence-Based Practice Questionnaire (EBPQ-19). Results: Adequate evidence of both reliability and ICC was obtained for the five dimensions of the questionnaire. According to the CFA model comparison, the best fit corresponded to the five-factor model (RMSEA = 0.049; 90% CI RMSEA = [0.047; 0.050]; CFI = 0.99). Adequate criterion and convergent validity evidence was also provided. Finally, the HS-EBP showed the capability to find differences between EBP training levels as important evidence of decision validity. Conclusions: Reliability and validity evidence obtained regarding the HS-EBP confirm the adequate operationalisation of the EBP construct as a process put into practice to respond to every clinical situation arising in the daily practice of professionals in health sciences (transprofessional). The tool could be useful for EBP individual assessment and for evaluating the impact of specific interventions to improve EBP. PMID:28486533
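
    For readers unfamiliar with the internal-consistency statistic reported above, Cronbach's alpha for a k-item scale is alpha = k/(k-1) * (1 - sum of item variances / variance of total scores). Below is a minimal sketch of that standard formula, not the authors' analysis scripts; the simulated ratings are placeholders.

      import numpy as np

      def cronbach_alpha(items):
          # items: (n_respondents, k_items) array of item scores.
          items = np.asarray(items, dtype=float)
          k = items.shape[1]
          item_var = items.var(axis=0, ddof=1).sum()    # sum of item variances
          total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
          return k / (k - 1) * (1 - item_var / total_var)

      # Placeholder ratings: 100 respondents answering 12 Likert items (1-5).
      rng = np.random.default_rng(0)
      scores = rng.integers(1, 6, size=(100, 12))
      print(round(cronbach_alpha(scores), 2))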

  5. Health Sciences-Evidence Based Practice questionnaire (HS-EBP) for measuring transprofessional evidence-based practice: Creation, development and psychometric validation.

    PubMed

    Fernández-Domínguez, Juan Carlos; de Pedro-Gómez, Joan Ernest; Morales-Asencio, José Miguel; Bennasar-Veny, Miquel; Sastre-Fullana, Pedro; Sesé-Abad, Albert

    2017-01-01

    Most of the EBP measuring instruments available to date present limitations both in the operationalisation of the construct and also in the rigour of their psychometric development, as revealed in the literature review performed. The aim of this paper is to provide rigorous and adequate reliability and validity evidence of the scores of a new transdisciplinary psychometric tool, the Health Sciences Evidence-Based Practice (HS-EBP), for measuring the construct EBP in Health Sciences professionals. A pilot study and a subsequent two-stage validation test sample were conducted to progressively refine the instrument until a reduced 60-item version with a five-factor latent structure was reached. Reliability was analysed through both Cronbach's alpha coefficient and intraclass correlations (ICC). Latent structure was contrasted using confirmatory factor analysis (CFA) following a model comparison approach. Evidence of criterion validity of the scores obtained was achieved by considering attitudinal resistance to change, burnout, and quality of professional life as criterion variables; while convergent validity was assessed using the Spanish version of the Evidence-Based Practice Questionnaire (EBPQ-19). Adequate evidence of both reliability and ICC was obtained for the five dimensions of the questionnaire. According to the CFA model comparison, the best fit corresponded to the five-factor model (RMSEA = 0.049; 90% CI RMSEA = [0.047; 0.050]; CFI = 0.99). Adequate criterion and convergent validity evidence was also provided. Finally, the HS-EBP showed the capability to find differences between EBP training levels as important evidence of decision validity. Reliability and validity evidence obtained regarding the HS-EBP confirm the adequate operationalisation of the EBP construct as a process put into practice to respond to every clinical situation arising in the daily practice of professionals in health sciences (transprofessional). The tool could be useful for EBP individual assessment and for evaluating the impact of specific interventions to improve EBP.

  6. Implementing Lumberjacks and Black Swans Into Model-Based Tools to Support Human-Automation Interaction.

    PubMed

    Sebok, Angelia; Wickens, Christopher D

    2017-03-01

    The objectives were to (a) implement theoretical perspectives regarding human-automation interaction (HAI) into model-based tools to assist designers in developing systems that support effective performance and (b) conduct validations to assess the ability of the models to predict operator performance. Two key concepts in HAI, the lumberjack analogy and black swan events, have been studied extensively. The lumberjack analogy describes the effects of imperfect automation on operator performance: in routine operations, an increased degree of automation supports performance, but in failure conditions, increased automation results in more significantly impaired performance. Black swans are the rare and unexpected failures of imperfect automation. The lumberjack analogy and black swan concepts have been implemented into three model-based tools that predict operator performance in different systems: a flight management system, a remotely controlled robotic arm, and an environmental process control system. Each modeling effort included a corresponding validation. In one validation, the software tool was used to compare three flight management system designs, which were ranked in the same order as predicted by subject matter experts. The second validation compared model-predicted operator complacency with empirical performance in the same conditions. The third validation compared model-predicted and empirically determined time to detect and repair faults in four automation conditions. The three model-based tools offer useful ways to predict operator performance in complex systems and the effects of different automation designs on that performance.

  7. Linking simulation-based educational assessments and patient-related outcomes: a systematic review and meta-analysis.

    PubMed

    Brydges, Ryan; Hatala, Rose; Zendejas, Benjamin; Erwin, Patricia J; Cook, David A

    2015-02-01

    To examine the evidence supporting the use of simulation-based assessments as surrogates for patient-related outcomes assessed in the workplace. The authors systematically searched MEDLINE, EMBASE, Scopus, and key journals through February 26, 2013. They included original studies that assessed health professionals and trainees using simulation and then linked those scores with patient-related outcomes assessed in the workplace. Two reviewers independently extracted information on participants, tasks, validity evidence, study quality, patient-related and simulation-based outcomes, and magnitude of correlation. All correlations were pooled using random-effects meta-analysis. Of 11,628 potentially relevant articles, the 33 included studies enrolled 1,203 participants, including postgraduate physicians (n = 24 studies), practicing physicians (n = 8), medical students (n = 6), dentists (n = 2), and nurses (n = 1). The pooled correlation for provider behaviors was 0.51 (95% confidence interval [CI], 0.38 to 0.62; n = 27 studies); for time behaviors, 0.44 (95% CI, 0.15 to 0.66; n = 7); and for patient outcomes, 0.24 (95% CI, -0.02 to 0.47; n = 5). Most reported validity evidence was favorable, though studies often included only correlational evidence. Validity evidence of internal structure (n = 13 studies), content (n = 12), response process (n = 2), and consequences (n = 1) was reported less often. Three tools showed large pooled correlations and favorable (albeit incomplete) validity evidence. Simulation-based assessments often correlate positively with patient-related outcomes. Although these surrogates are imperfect, tools with established validity evidence may replace workplace-based assessments for evaluating select procedural skills.
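
    The pooling step described above (random-effects meta-analysis of correlations) is conventionally done on Fisher-z-transformed correlations with a DerSimonian-Laird between-study variance estimate. The sketch below illustrates that standard approach with invented study values; it is not the authors' code, and their exact estimator may differ.

      import numpy as np

      def pooled_correlation(r, n):
          # DerSimonian-Laird random-effects pooling of correlations via Fisher z.
          r, n = np.asarray(r, float), np.asarray(n, float)
          z = np.arctanh(r)                    # Fisher z transform
          v = 1.0 / (n - 3.0)                  # within-study variance of z
          w = 1.0 / v                          # fixed-effect weights
          z_fixed = (w * z).sum() / w.sum()
          Q = (w * (z - z_fixed) ** 2).sum()   # heterogeneity statistic
          k = len(r)
          tau2 = max(0.0, (Q - (k - 1)) / (w.sum() - (w ** 2).sum() / w.sum()))
          w_re = 1.0 / (v + tau2)              # random-effects weights
          z_re = (w_re * z).sum() / w_re.sum()
          se = 1.0 / np.sqrt(w_re.sum())
          ci = (np.tanh(z_re - 1.96 * se), np.tanh(z_re + 1.96 * se))
          return np.tanh(z_re), ci

      # Invented example: three studies correlating simulation scores with outcomes.
      est, (lo, hi) = pooled_correlation(r=[0.55, 0.40, 0.62], n=[40, 55, 30])
      print(f"pooled r = {est:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")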

  8. Validity evidence for an OSCE to assess competency in systems-based practice and practice-based learning and improvement: a preliminary investigation.

    PubMed

    Varkey, Prathibha; Natt, Neena; Lesnick, Timothy; Downing, Steven; Yudkowsky, Rachel

    2008-08-01

    To determine the psychometric properties and validity of an OSCE to assess the competencies of Practice-Based Learning and Improvement (PBLI) and Systems-Based Practice (SBP) in graduate medical education. An eight-station OSCE was piloted at the end of a three-week Quality Improvement elective for nine preventive medicine and endocrinology fellows at Mayo Clinic. The stations assessed performance in quality measurement, root cause analysis, evidence-based medicine, insurance systems, team collaboration, prescription errors, Nolan's model, and negotiation. Fellows' performance in each of the stations was assessed by three faculty experts using checklists and a five-point global competency scale. A modified Angoff procedure was used to set standards. Evidence for the OSCE's validity, feasibility, and acceptability was gathered. Evidence for content and response process validity was judged as excellent by institutional content experts. Interrater reliability of scores ranged from 0.85 to 1 for most stations. Interstation correlation coefficients ranged from -0.62 to 0.99, reflecting case specificity. Implementation cost was approximately $255 per fellow. All faculty members agreed that the OSCE was realistic and capable of providing accurate assessments. The OSCE provides an opportunity to systematically sample the different subdomains of Quality Improvement. Furthermore, the OSCE provides an opportunity for the demonstration of skills rather than the testing of knowledge alone, thus making it a potentially powerful assessment tool for SBP and PBLI. The study OSCE was well suited to assess SBP and PBLI. The evidence gathered through this study lays the foundation for future validation work.

  9. Validation of King's transaction process for healthcare provider-patient communication in pharmaceutical context: One cross-sectional study.

    PubMed

    Wang, Dan; Liu, Chenxi; Zhang, Zinan; Ye, Liping; Zhang, Xinping

    2018-03-27

    Given the advocated advantages of patient-pharmacist communication and the poor state of pharmacist-patient communication in different settings, it is of great significance and urgency to explore the mechanism of the pharmacist-patient communicative relationship. King's theory of goal attainment is proposed as one of the most promising models to be applied, because it takes into consideration both improving the patient-pharmacist relationship and attaining patients' health outcomes. This study aimed to validate the King's transaction process and build the linkage between the transaction process and patient satisfaction in a pharmaceutical context. A cross-sectional study was conducted in four tertiary hospitals in two provincial cities (Wuhan and Shanghai) in central and east China in July 2017. Patients over 18 were surveyed in the pharmacies of the hospitals. The instrument for the transaction process was revised and tested. Path analysis was conducted for the King's transaction process and its relationship with patient satisfaction. Five hundred eighty-nine participants were included in the main study. Prior to the addition of covariates, the hypothesised model of the King's transaction process was validated, in which all paths of the transaction process were statistically significant (p < 0.001). The transaction process had direct effects on patient satisfaction (p < 0.001). After controlling for the effects of covariates, the Multiple Indicators, Multiple Causes (MIMIC) model showed good fit to the data (Tucker-Lewis index [TLI] = 0.99, comparative fit index [CFI] = 0.99, root mean square error of approximation [RMSEA] = 0.05, weighted root mean square residual [WRMR] = 1.00). The MIMIC model showed that chronic disease and site were predictors for both identifying problems and patient satisfaction (p < 0.05). Based on the well-fitting path analytic model, the transaction process was established as a valid theoretical framework of healthcare provider-patient communication in a pharmaceutical context.

  10. Survey of Verification and Validation Techniques for Small Satellite Software Development

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    2015-01-01

    The purpose of this paper is to provide an overview of the current trends and practices in small-satellite software verification and validation. This document is not intended to promote a specific software assurance method. Rather, it seeks to present an unbiased survey of software assurance methods used to verify and validate small satellite software and to make mention of the benefits and value of each approach. These methods include simulation and testing, verification and validation with model-based design, formal methods, and fault-tolerant software design with run-time monitoring. Although the literature reveals that simulation and testing has by far the longest legacy, model-based design methods are proving to be useful for software verification and validation. Some work in formal methods, though not widely used for any satellites, may offer new ways to improve small satellite software verification and validation. These methods need to be further advanced to deal with the state-explosion problem and to become usable enough for small-satellite software engineers to apply them regularly to software verification. Last, it is explained how run-time monitoring, combined with fault-tolerant software design methods, provides an important means to detect and correct software errors that escape the verification process or that are produced after launch through the effects of ionizing radiation.
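
    As a concrete illustration of the run-time-monitoring idea discussed above (checking invariants during execution and recovering through a fault-tolerant fallback path), here is a minimal, generic sketch; the invariant, fallback, and function names are hypothetical and not tied to any specific satellite framework.

      def monitored(invariant, fallback):
          # Run-time monitor: check an invariant on every result; on violation,
          # log the event and recover through a fault-tolerant fallback path.
          def wrap(fn):
              def inner(*args, **kwargs):
                  result = fn(*args, **kwargs)
                  if not invariant(result):
                      print(f"MONITOR: invariant violated in {fn.__name__}; using fallback")
                      return fallback(*args, **kwargs)
                  return result
              return inner
          return wrap

      @monitored(invariant=lambda q: 0.0 <= q <= 1.0,                # value must stay normalised
                 fallback=lambda x: min(max(x / 100.0, 0.0), 1.0))   # clamped recovery path
      def normalize_sensor(x):
          return x / 100.0   # nominal code path; bad input can leave the valid range

      print(normalize_sensor(250))   # triggers the monitor and returns the clamped 1.0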

  11. The GPM Ground Validation Program: Pre to Post-Launch

    NASA Astrophysics Data System (ADS)

    Petersen, W. A.

    2014-12-01

    NASA GPM Ground Validation (GV) activities have transitioned from the pre- to post-launch era. Prior to launch, direct validation networks and associated partner institutions were identified world-wide, covering a plethora of precipitation regimes. In the U.S., direct GV efforts focused on the use of new operational products such as the NOAA Multi-Radar Multi-Sensor suite (MRMS) for TRMM validation and GPM radiometer algorithm database development. In the post-launch era, MRMS products including precipitation rate, types and data quality are being routinely generated to facilitate statistical GV of instantaneous and merged GPM products. To assess precipitation column impacts on product uncertainties, range-gate to pixel-level validation of both Dual-Frequency Precipitation Radar (DPR) and GPM microwave imager data are performed using GPM Validation Network (VN) ground radar and satellite data processing software. VN software ingests quality-controlled volumetric radar datasets and geo-matches those data to coincident DPR and radiometer level-II data. When combined, MRMS and VN datasets enable more comprehensive interpretation of ground-satellite estimation uncertainties. To support physical validation efforts, eight field campaigns were conducted in the pre-launch era and one in the post-launch era. The campaigns span regimes from northern latitude cold-season snow to warm tropical rain. Most recently the Integrated Precipitation and Hydrology Experiment (IPHEx) took place in the mountains of North Carolina and involved combined airborne and ground-based measurements of orographic precipitation and hydrologic processes underneath the GPM Core satellite. One more U.S. GV field campaign (OLYMPEX) is planned for late 2015 and will address cold-season precipitation estimation, process and hydrology in the orographic and oceanic domains of western Washington State. Finally, continuous direct and physical validation measurements are also being conducted at the NASA Wallops Flight Facility multi-radar, gauge and disdrometer facility located in coastal Virginia. This presentation will summarize the evolution of the NASA GPM GV program from the pre- to post-launch eras and highlight early evaluations of GPM satellite datasets.

  12. Validation of GOES-9 Satellite-Derived Cloud Properties over the Tropical Western Pacific Region

    NASA Technical Reports Server (NTRS)

    Khaiyer, Mandana M.; Nordeen, Michele L.; Doelling, David R.; Chakrapani, Venkatasan; Minnis, Patrick; Smith, William L., Jr.

    2004-01-01

    Real-time processing of hourly GOES-9 images in the ARM TWP region began operationally in October 2003 and is continuing. The ARM sites provide an excellent source for validating this new satellite-derived cloud and radiation property dataset. Derived cloud amounts, heights, and broadband shortwave fluxes are compared with similar quantities derived from ground-based instrumentation. The results will provide guidance for estimating uncertainties in the GOES-9 products and for developing improvements in the retrieval methodologies and inputs.

  13. Structures Validation Profiles in Transmission of Imaging and Data (TRIAD) for Automated NCTN Clinical Trial Digital Data Quality Assurance

    PubMed Central

    Giaddui, Tawfik; Yu, Jialu; Manfredi, Denise; Linnemann, Nancy; Hunter, Joanne; O’Meara, Elizabeth; Galvin, James; Bialecki, Brian; Xiao, Ying

    2016-01-01

    Transmission of Imaging and Data (TRIAD) is a standards-based system built by the American College of Radiology (ACR) to provide seamless exchange of images and data for accreditation of clinical trials and registries. Scripts of structure-name validation profiles created in TRIAD are used in the automated submission process. It is essential for users to understand the logistics of these scripts so that radiotherapy cases can be submitted successfully with fewer iterations. PMID:27053498
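
    The record does not reproduce TRIAD's actual validation scripts, but the underlying check (comparing a submitted case's structure names against a protocol's validation profile) can be sketched generically; the protocol name list and function below are hypothetical illustrations only.

      # Hypothetical structure-name profile; the actual TRIAD profiles are not
      # reproduced in this record, so these names are illustrative only.
      REQUIRED = {"PTV", "CTV", "GTV", "SpinalCord", "Lung_L", "Lung_R"}

      def validate_structures(case_structures):
          names = set(case_structures)
          missing = sorted(REQUIRED - names)       # required names not submitted
          unexpected = sorted(names - REQUIRED)    # names the profile does not know
          return {"pass": not missing, "missing": missing, "unexpected": unexpected}

      result = validate_structures(["PTV", "CTV", "GTV", "SpinalCord", "Lung_L"])
      print(result)   # fails: 'Lung_R' missing, so the case needs fixing before resubmission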

  14. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Smith, Kandler A; Santhanagopalan, Shriram; Yang, Chuanbo

    Computer models are helping to accelerate the design and validation of next generation batteries and provide valuable insights not possible through experimental testing alone. Validated 3-D physics-based models exist for predicting electrochemical performance, thermal and mechanical response of cells and packs under normal and abuse scenarios. The talk describes present efforts to make the models better suited for engineering design, including improving their computation speed, developing faster processes for model parameter identification including under aging, and predicting the performance of a proposed electrode material recipe a priori using microstructure models.

  15. Selecting Students for Pre-Algebra: Examination of the Relative Utility of the Anchorage Pre-Algebra Screening Tests and the State of Alaska Standards Based Benchmark 2 Mathematics Study. An Examination of Consequential Validity and Recommendation.

    ERIC Educational Resources Information Center

    Fenton, Ray

    This study examined the relative efficacy of the Anchorage (Alaska) Pre-Algebra Test and the State of Alaska Benchmark 2 Math examination as tools used in the process of recommending grade 6 students for grade 7 Pre-Algebra placement. The consequential validity of the tests is explored in the context of class placements and grades earned. The…

  16. Risk-Based Tailoring of the Verification, Validation, and Accreditation/Acceptance Processes (Adaptation fondee sur le risque, des processus de verification, de validation, et d’accreditation/d’acceptation)

    DTIC Science & Technology

    2012-04-01

    (Abstract not recovered; the extracted fragment contains only the report's acronym glossary (SET: Sensors and Electronics Technology; SISO: Simulation Interoperability Standards Organization; SIW: Simulation Interoperability Workshop) and a chronology of SISO/IEEE standards activities from Fall 2006 through December 2008.)

  17. The Role of Structural Models in the Solar Sail Flight Validation Process

    NASA Technical Reports Server (NTRS)

    Johnston, John D.

    2004-01-01

    NASA is currently soliciting proposals via the New Millennium Program ST-9 opportunity for a potential Solar Sail Flight Validation (SSFV) experiment to develop and operate in space a deployable solar sail that can be steered and provides measurable acceleration. The approach planned for this experiment is to test and validate models and processes for solar sail design, fabrication, deployment, and flight. These models and processes would then be used to design, fabricate, and operate scaleable solar sails for future space science missions. There are six validation objectives planned for the ST-9 SSFV experiment: 1) Validate solar sail design tools and fabrication methods; 2) Validate controlled deployment; 3) Validate in-space structural characteristics (the focus of this poster); 4) Validate solar sail attitude control; 5) Validate solar sail thrust performance; 6) Characterize the sail's electromagnetic interaction with the space environment. This poster presents a top-level assessment of the role of structural models in the validation process for in-space structural characteristics.

  18. Earth Science Enterprise Scientific Data Purchase Project: Verification and Validation

    NASA Technical Reports Server (NTRS)

    Jenner, Jeff; Policelli, Fritz; Fletcher, Rosea; Holecamp, Kara; Owen, Carolyn; Nicholson, Lamar; Dartez, Deanna

    2000-01-01

    This paper presents viewgraphs on the Earth Science Enterprise Scientific Data Purchase Project's verification and validation process. The topics include: 1) What is Verification and Validation? 2) Why Verification and Validation? 3) Background; 4) ESE Data Purchase Validation Process; 5) Data Validation System and Ingest Queue; 6) Shipment Verification; 7) Tracking and Metrics; 8) Validation of Contract Specifications; 9) Earth Watch Data Validation; 10) Validation of Vertical Accuracy; and 11) Results of Vertical Accuracy Assessment.

  19. FDA perspective on specifications for biotechnology products--from IND to PLA.

    PubMed

    Murano, G

    1997-01-01

    Quality standards are obligatory throughout development, approval and post-marketing phases of biotechnology-derived products, thus assuring product identity, purity, and potency/strength. The process of developing and setting specifications should be based on sound science and should represent a logical progression of actions based on the use of experiential data spanning manufacturing process validation, consistency in production, and characterization of relevant product properties/attributes by multiple analytical means. This interactive process occurs in phases, varying in rigour. It is best described as encompassing a framework which starts with the implementation of realistic/practical operational quality limits, progressing to the establishment/adoption of more stringent specifications. The historical database is generated from preclinical, toxicology and early clinical lots. This supports the clinical development programme which, as it progresses, allows for further assay method validation/refinement, adoption/addition due to relevant or newly recognized product attributes, or rejection due to irrelevance. In the next phase (licensing/approval), specifications are set through extended experience and validation of both the preparative and analytical processes, to include availability of suitable reference standards and extensive product characterization throughout its proposed dating period. Subsequent to product approval, the incremental database of test results serves as a natural continuum for further evolving/refining specifications. While there is considerable latitude in the kinds of testing modalities finally adopted to establish product quality on a routine basis, for both drugs and drug products, it is important that the selection takes into consideration relevant (significant) product characteristics that appropriately reflect on identity, purity and potency.

  20. Take-the-best and the influence of decision-inconsistent attributes on decision confidence and choices in memory-based decisions.

    PubMed

    Dummel, Sebastian; Rummel, Jan

    2016-11-01

    Take-the-best (TTB) is a decision strategy according to which attributes about choice options are sequentially processed in descending order of validity, and attribute processing is stopped once an attribute discriminates between options. Consequently, TTB-decisions rely on only one, the best discriminating, attribute, and lower-valid attributes need not be processed because they are TTB-irrelevant. Recent research suggests, however, that when attribute information is visually present during decision-making, TTB-irrelevant attributes are processed and integrated into decisions nonetheless. To examine whether TTB-irrelevant attributes are retrieved and integrated when decisions are made memory-based, we tested whether the consistency of a TTB-irrelevant attribute affects TTB-users' decision behaviour in a memory-based decision task. Participants first learned attribute configurations of several options. Afterwards, they made several decisions between two of the options, and we manipulated conflict between the second-best attribute and the TTB-decision. We assessed participants' decision confidence and the proportion of TTB-inconsistent choices. According to TTB, TTB-irrelevant attributes should not affect confidence and choices, because these attributes should not be retrieved. Results showed, however, that TTB-users were less confident and made more TTB-inconsistent choices when TTB-irrelevant information was in conflict with the TTB-decision than when it was not, suggesting that TTB-users retrieved and integrated TTB-irrelevant information.
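
    The TTB strategy described above is simple enough to state directly in code. Below is a minimal sketch; the cue names and validities are invented for illustration, and the example mirrors the study's manipulation in that a lower-validity cue conflicts with the TTB choice yet is never inspected.

      def take_the_best(option_a, option_b, cues):
          # cues: (name, validity) pairs; options: dicts mapping cue name -> 0/1.
          # Process cues in descending validity; stop at the first discriminating cue.
          for name, validity in sorted(cues, key=lambda c: -c[1]):
              a, b = option_a[name], option_b[name]
              if a != b:                       # cue discriminates: decide and stop
                  return ("A" if a > b else "B"), name
          return None, None                    # no cue discriminates: guess

      cues = [("recognized", 0.9), ("has_airport", 0.8), ("is_capital", 0.7)]
      a = {"recognized": 1, "has_airport": 0, "is_capital": 1}
      b = {"recognized": 1, "has_airport": 1, "is_capital": 0}
      print(take_the_best(a, b, cues))   # ('B', 'has_airport'); 'is_capital' conflicts but is never read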

  1. Multisite evaluation of APEX for water quality: II. Regional parameterization

    USDA-ARS?s Scientific Manuscript database

    Phosphorus (P) index assessment requires independent estimates of long-term average annual P loss from multiple locations, management practices, soils, and landscape positions. Because currently available measured data are insufficient, calibrated and validated process-based models have been propos...

  2. PERCLOS: A Valid Psychophysiological Measure of Alertness As Assessed by Psychomotor Vigilance

    DOT National Transportation Integrated Search

    2002-04-01

    The Logical Architecture is based on a Computer Aided Systems Engineering (CASE) model of the requirements for the flow of data and control through the various functions included in Intelligent Transportation Systems (ITS). Process Specifications pro...

  3. Smartphone based automatic organ validation in ultrasound video.

    PubMed

    Vaish, Pallavi; Bharath, R; Rajalakshmi, P

    2017-07-01

    Telesonography involves transmission of ultrasound video from remote areas to doctors for diagnosis. Due to the lack of trained sonographers in remote areas, ultrasound videos scanned by untrained persons often do not contain the information required by a physician. As compared to standard methods for video transmission, mHealth-driven systems need to be developed for transmitting valid medical videos. To overcome this problem, we propose an organ validation algorithm to evaluate ultrasound video based on the content present. This will guide the semi-skilled person to acquire representative data from the patient. Advances in smartphone technology allow demanding medical image processing to be performed on the smartphone itself. In this paper we have developed an application (app) for a smartphone which can automatically detect the valid frames (in which an organ is clearly visible) in an ultrasound video, ignore the invalid frames (in which no organ is visible), and produce a compressed video of reduced size. This is done by extracting GIST features from the Region of Interest (ROI) of each frame and then classifying the frame using an SVM classifier with a quadratic kernel. The developed application achieved an accuracy of 94.93% in classifying valid and invalid images.
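
    The classification step can be sketched with scikit-learn, where a polynomial kernel of degree 2 stands in for the paper's "quadratic kernel"; the GIST descriptors are replaced by random placeholders (a 512-dimensional vector is assumed) since the paper's feature-extraction code is not reproduced here.

      import numpy as np
      from sklearn.svm import SVC
      from sklearn.model_selection import train_test_split
      from sklearn.metrics import accuracy_score

      # Placeholder GIST descriptors of each frame's ROI;
      # label 1 = valid frame (organ visible), 0 = invalid frame.
      rng = np.random.default_rng(0)
      X = rng.normal(size=(300, 512))
      y = rng.integers(0, 2, 300)

      X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3,
                                                stratify=y, random_state=0)
      clf = SVC(kernel="poly", degree=2)   # degree-2 polynomial = "quadratic" kernel
      clf.fit(X_tr, y_tr)
      print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
      # Frames predicted invalid would be dropped before the video is compressed and sent.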

  4. Surface smoothness: cartilage biomarkers for knee OA beyond the radiologist

    NASA Astrophysics Data System (ADS)

    Tummala, Sudhakar; Dam, Erik B.

    2010-03-01

    Fully automatic imaging biomarkers may allow quantification of patho-physiological processes that a radiologist would not be able to assess reliably. This can introduce new insight but is problematic to validate due to a lack of meaningful ground-truth expert measurements. Rather than quantification accuracy, such novel markers must therefore be validated against clinically meaningful end-goals such as the ability to allow correct diagnosis. We present a method for automatic cartilage surface smoothness quantification in the knee joint. The quantification is based on a curvature flow method applied to the tibial and femoral cartilage compartments resulting from an automatic segmentation scheme. These smoothness estimates are validated for their ability to diagnose osteoarthritis and compared to smoothness estimates based on manual expert segmentations and to conventional cartilage volume quantification. We demonstrate that the fully automatic markers eliminate the time required for radiologist annotations and, in addition, provide a diagnostic marker superior to the evaluated semi-manual markers.

  5. The Development and Validation of an Instrument to Measure Preservice Teachers' Self-Efficacy in Regard to The Teaching of Science as Inquiry

    NASA Astrophysics Data System (ADS)

    Dira Smolleck, Lori; Zembal-Saul, Carla; Yoder, Edgar P.

    2006-06-01

    The purpose of this study was to develop, validate, and establish the reliability of an instrument that measures preservice teachers' self-efficacy in regard to the teaching of science as inquiry. The instrument, Teaching Science as Inquiry (TSI), is based upon the work of Bandura (1977, 1981, 1982, 1986, 1989, 1995, 1997), Riggs (1988), and Enochs and Riggs (1990). Self-efficacy in regard to the teaching of science as inquiry was measured through the use of a 69-item Likert-type scale instrument designed by the author of the study. Based on the standardized development processes used and the associated evidence, the TSI appears to be a content and construct valid instrument with high internal reliability for use with preservice elementary teachers to assess self-efficacy beliefs in regard to the teaching of science as inquiry.

  6. The Development and Validation of an Instrument to Measure Preservice Teachers' Self-Efficacy in Regard to The Teaching of Science as Inquiry

    NASA Astrophysics Data System (ADS)

    Smolleck, Lori Dira; Zembal-Saul, Carla; Yoder, Edgar P.

    2006-06-01

    The purpose of this study was to develop, validate, and establish the reliability of an instrument that measures preservice teachers' self-efficacy in regard to the teaching of science as inquiry. The instrument, Teaching Science as Inquiry (TSI), is based upon the work of Bandura (1977, 1981, 1982, 1986, 1989, 1995, 1997), Riggs (1988), and Enochs and Riggs (1990). Self-efficacy in regard to the teaching of science as inquiry was measured through the use of a 69-item Likert-type scale instrument designed by the author of the study. Based on the standardized development processes used and the associated evidence, the TSI appears to be a content and construct valid instrument with high internal reliability for use with preservice elementary teachers to assess self-efficacy beliefs in regard to the teaching of science as inquiry.

  7. Using Model Replication to Improve the Reliability of Agent-Based Models

    NASA Astrophysics Data System (ADS)

    Zhong, Wei; Kim, Yushim

    The basic presupposition of model replication activities for a computational model such as an agent-based model (ABM) is that, as a robust and reliable tool, it must be replicable in other computing settings. This assumption has recently gained attention in the artificial society and simulation community due to the challenges of model verification and validation. Illustrating the replication, by a different author, of an ABM representing fraudulent behavior in a public service delivery system, originally developed in the Java-based MASON toolkit and reimplemented in NetLogo, this paper exemplifies how model replication exercises provide unique opportunities for the model verification and validation process. At the same time, it helps accumulate best practices and patterns of model replication and contributes to the agenda of developing a standard methodological protocol for agent-based social simulation.

  8. Genetic programming-based mathematical modeling of influence of weather parameters in BOD5 removal by Lemna minor.

    PubMed

    Chandrasekaran, Sivapragasam; Sankararajan, Vanitha; Neelakandhan, Nampoothiri; Ram Kumar, Mahalakshmi

    2017-11-04

    This study, through extensive experiments and mathematical modeling, reveals that, in addition to retention time and wastewater temperature (Tw), atmospheric parameters also play an important role in the effective functioning of an aquatic macrophyte-based treatment system. The duckweed species Lemna minor is considered in this study. It is observed that the combined effect of atmospheric temperature (Tatm), wind speed (Uw), and relative humidity (RH) can be reflected through one parameter, namely the "apparent temperature" (Ta). A total of eight different models are considered based on combinations of the input parameters, and the best mathematical model is arrived at, which is then validated through a new experimental set-up outside the modeling period. The validation results are highly encouraging. Genetic programming (GP)-based models are found to reveal deeper understanding of the wetland process.
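
    The abstract does not spell out the study's formula for Ta; one widely used formulation of apparent temperature (Steadman's, as adopted by the Australian Bureau of Meteorology) combines the same three inputs and is sketched below for illustration. The study's own definition may differ.

      import math

      def apparent_temperature(t_atm_c, rh_pct, wind_ms):
          # Steadman-style apparent temperature (Australian Bureau of Meteorology
          # formulation); e is the water vapour pressure in hPa.
          e = (rh_pct / 100.0) * 6.105 * math.exp(17.27 * t_atm_c / (237.7 + t_atm_c))
          return t_atm_c + 0.33 * e - 0.70 * wind_ms - 4.00

      # 30 degC, 60% relative humidity, 2 m/s wind -> feels like about 33 degC.
      print(round(apparent_temperature(30.0, 60.0, 2.0), 1))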

  9. Validation of a 30-year-old process for the manufacture of L-asparaginase from Erwinia chrysanthemi.

    PubMed

    Gervais, David; Allison, Nigel; Jennings, Alan; Jones, Shane; Marks, Trevor

    2013-04-01

    A 30-year-old manufacturing process for the biologic product L-asparaginase from the plant pathogen Erwinia chrysanthemi was rigorously qualified and validated, with a high level of agreement between validation data and the 6-year process database. L-Asparaginase exists in its native state as a tetrameric protein and is used as a chemotherapeutic agent in the treatment regimen for Acute Lymphoblastic Leukaemia (ALL). The manufacturing process involves fermentation of the production organism, extraction and purification of the L-asparaginase to make drug substance (DS), and finally formulation and lyophilisation to generate drug product (DP). The extensive manufacturing experience with the product was used to establish ranges for all process parameters and product quality attributes. The product and in-process intermediates were rigorously characterised, and new assays, such as size-exclusion and reversed-phase UPLC, were developed, validated, and used to analyse several pre-validation batches. Finally, three prospective process validation batches were manufactured and product quality data generated using both the existing and the new analytical methods. These data demonstrated the process to be robust, highly reproducible and consistent, and the validation was successful, contributing to the granting of an FDA product license in November 2011.

  10. ANIMAL BEHAVIOR AND WELL-BEING SYMPOSIUM: The Common Swine Industry Audit: Future steps to assure positive on-farm animal welfare utilizing validated, repeatable and feasible animal-based measures.

    PubMed

    Pairis-Garcia, M; Moeller, S J

    2017-03-01

    The Common Swine Industry Audit (CSIA) was developed and scientifically evaluated through the combined efforts of a task force consisting of university scientists, veterinarians, pork producers, packers, processors, and retail and food service personnel to provide stakeholders throughout the pork chain with a consistent, reliable, and verifiable system to ensure on-farm swine welfare and food safety. The CSIA tool was built from the framework of the Pork Quality Assurance Plus (PQA Plus) site assessment program with the purpose of developing a single, common audit platform for the U.S. swine industry. Twenty-seven key aspects of swine care are captured and evaluated in CSIA and cover the specific focal areas of animal records, animal observations, facilities, and caretakers. Animal-based measures represent approximately 50% of CSIA evaluation criteria and encompass critical failure criteria, including observation of willful acts of abuse and determination of timely euthanasia. Objective, science-based measures of animal well-being parameters (e.g., BCS, lameness, lesions, hernias) are assessed within CSIA using statistically validated sample sizes providing a detection ability of 1% with 95% confidence. The common CSIA platform is used to identify care issues and facilitate continuous improvement in animal care through a validated, repeatable, and feasible animal-based audit process. Task force members provide continual updates to the CSIA tool with a specific focus toward 1) identification and interpretation of appropriate animal-based measures that provide inherent value to pig welfare, 2) establishment of acceptability thresholds for animal-based measures, and 3) interpretation of CSIA data for use and improvement of welfare within the U.S. swine industry.
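
    The quoted sampling property (detecting a condition present at 1% prevalence with 95% confidence) corresponds, under the common infinite-population approximation, to the smallest n with 1 - (1 - 0.01)^n >= 0.95. A worked sketch follows; the CSIA tables themselves may apply a finite-population correction.

      import math

      def audit_sample_size(prevalence=0.01, confidence=0.95):
          # Smallest n such that at least one affected animal is observed with the
          # stated confidence when a condition affects `prevalence` of the herd
          # (infinite-population approximation; no finite-herd correction).
          return math.ceil(math.log(1.0 - confidence) / math.log(1.0 - prevalence))

      print(audit_sample_size())   # 299 animals for 1% detection at 95% confidence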

  11. Good pharmacovigilance practices: technology enabled.

    PubMed

    Nelson, Robert C; Palsulich, Bruce; Gogolak, Victor

    2002-01-01

    The assessment of spontaneous reports is most effective when it is conducted within a defined and rigorous process. The framework for good pharmacovigilance process (GPVP) is proposed as a subset of good postmarketing surveillance process (GPMSP), a functional structure for both a public health and corporate risk management strategy. GPVP has good practices that implement each step within a defined process. These practices are designed to efficiently and effectively detect and alert the drug safety professional to new and potentially important information on drug-associated adverse reactions. These practices are enabled by applied technology designed specifically for the review and assessment of spontaneous reports. Specific practices include rules-based triage, active query prompts for severe organ insults, contextual single case evaluation, statistical proportionality and correlational checks, case-series analyses, and templates for signal work-up and interpretation. These practices and the overall GPVP are supported by state-of-the-art web-based systems with powerful analytical engines, workflow and audit trails to allow validated systems support for valid drug safety signalling efforts. It is also important to understand that a process has a defined set of steps and no one step can stand independently. Specifically, advanced use of technical alerting methods in isolation can mislead and allow one to misunderstand priorities and relative value. In the end, pharmacovigilance is a clinical art and a component process to the science of pharmacoepidemiology and risk management.
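
    One standard example of the "statistical proportionality checks" mentioned above is the proportional reporting ratio (PRR) computed from a 2x2 table of spontaneous reports. Below is a minimal sketch with invented counts; signalling thresholds and exact methods vary between organisations and are not specified in this abstract.

      def proportional_reporting_ratio(a, b, c, d):
          # 2x2 table of spontaneous reports:
          #   a = target drug & target reaction,  b = target drug & other reactions,
          #   c = other drugs & target reaction,  d = other drugs & other reactions.
          return (a / (a + b)) / (c / (c + d))

      # Invented counts; one common (but not universal) convention flags a signal
      # when PRR >= 2 with at least 3 cases.
      print(round(proportional_reporting_ratio(12, 488, 40, 9460), 2))   # 5.70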

  12. Accelerated Aging in Electrolytic Capacitors for Prognostics

    NASA Technical Reports Server (NTRS)

    Celaya, Jose R.; Kulkarni, Chetan; Saha, Sankalita; Biswas, Gautam; Goebel, Kai Frank

    2012-01-01

    The focus of this work is the analysis of different degradation phenomena based on thermal overstress and electrical overstress accelerated aging systems and the use of accelerated aging techniques for prognostics algorithm development. Results on thermal overstress and electrical overstress experiments are presented. In addition, preliminary results toward the development of physics-based degradation models are presented, focusing on the electrolyte evaporation failure mechanism. An empirical degradation model based on percentage capacitance loss under electrical overstress is presented and used in: (i) a Bayesian-based implementation of model-based prognostics using a discrete Kalman filter for health state estimation, and (ii) a dynamic system representation of the degradation model for forecasting and remaining useful life (RUL) estimation. A leave-one-out validation methodology is used to assess the validity of the methodology under the small-sample-size constraint. The results observed on the RUL estimation are consistent through the validation tests comparing relative accuracy and prediction error. It has been observed that the inaccuracy of the model in representing the change in degradation behavior observed at the end of the test data is consistent throughout the validation tests, indicating the need for a more detailed degradation model or for an algorithm that could estimate model parameters on-line. Based on the observed degradation process under different stress intensities with rest periods, the need for more sophisticated degradation models is further supported. The current degradation model does not represent the capacitance recovery over rest periods following an accelerated aging stress period.
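
    A scalar discrete Kalman filter of the kind referenced above, tracking percentage capacitance loss and projecting remaining useful life against a failure threshold, can be sketched as follows; the drift rate, noise variances, and 20% end-of-life threshold are illustrative assumptions, not the paper's fitted values.

      import numpy as np

      # Scalar discrete Kalman filter tracking percentage capacitance loss.
      drift = 0.05              # assumed loss in % per aging cycle (illustrative)
      Q, R = 1e-4, 0.25         # process / measurement noise variances (assumed)
      x, P = 0.0, 1.0           # initial loss estimate and its variance
      threshold = 20.0          # capacitance loss defining end of life (assumed)

      rng = np.random.default_rng(0)
      measurements = drift * np.arange(100) + rng.normal(0.0, 0.5, 100)
      for z in measurements:
          x, P = x + drift, P + Q              # predict (identity state transition plus drift)
          K = P / (P + R)                      # Kalman gain
          x, P = x + K * (z - x), (1 - K) * P  # update with the new measurement

      rul = max(0.0, (threshold - x) / drift)  # cycles left until the threshold
      print(f"estimated loss {x:.2f}%, RUL ~ {rul:.0f} cycles")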

  13. A CSP-Based Agent Modeling Framework for the Cougaar Agent-Based Architecture

    NASA Technical Reports Server (NTRS)

    Gracanin, Denis; Singh, H. Lally; Eltoweissy, Mohamed; Hinchey, Michael G.; Bohner, Shawn A.

    2005-01-01

    Cognitive Agent Architecture (Cougaar) is a Java-based architecture for large-scale distributed agent-based applications. A Cougaar agent is an autonomous software entity with behaviors that represent a real-world entity (e.g., a business process). A Cougaar-based Model Driven Architecture approach, currently under development, uses a description of a system's functionality (requirements) to automatically implement the system in Cougaar. The Communicating Sequential Processes (CSP) formalism is used for the formal validation of the generated system. Two main agent components, a blackboard and a plugin, are modeled as CSP processes. A set of channels represents communications between the blackboard and individual plugins. The blackboard is represented as a CSP process that communicates with every agent in the collection. The developed CSP-based Cougaar modeling framework provides a starting point for a more complete formal verification of the automatically generated Cougaar code. Currently, it is used to verify the behavior of an individual agent in terms of CSP properties and to analyze the corresponding Cougaar society.

  14. Developing evaluation instrument based on CIPP models on the implementation of portfolio assessment

    NASA Astrophysics Data System (ADS)

    Kurnia, Feni; Rosana, Dadan; Supahar

    2017-08-01

    This study aimed to develop an evaluation instrument constructed on the CIPP model for the implementation of portfolio assessment in science learning. The study used the research and development (R&D) method, adapting the 4-D model to the development of a non-test instrument: an evaluation instrument constructed on the CIPP model. CIPP is the abbreviation of Context, Input, Process, and Product. The techniques of data collection were interviews, questionnaires, and observations. The data collection instruments were: 1) interview guidelines for the analysis of problems and needs, 2) a questionnaire to gauge the level of accomplishment of the portfolio assessment instrument, and 3) observation sheets for teachers and students to elicit responses to the portfolio assessment instrument. The data were quantitative ratings obtained from several validators: two lecturers as evaluation experts, two practitioners (science teachers), and three colleagues. This paper presents the content validity results obtained from the validators and the analysis of those data using Aiken's V formula. The results of this study show that the evaluation instrument based on the CIPP model is suitable for evaluating the implementation of portfolio assessment instruments. Based on the judgments of the experts, practitioners, and colleagues, the Aiken's V coefficient was between 0.86 and 1.00, which means that the instrument is valid and can be used in the limited trial and operational field trial.
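
    Aiken's V for a single item rated by n judges on a scale from lo to hi is V = sum(r_i - lo) / (n * (hi - lo)). A worked sketch consistent with the validator panel described above; the ratings themselves are invented.

      def aikens_v(ratings, lo=1, hi=5):
          # Aiken's V for one item rated by n judges on a lo..hi scale:
          # V = sum(r_i - lo) / (n * (hi - lo)); 1.0 = unanimous top rating.
          return sum(r - lo for r in ratings) / (len(ratings) * (hi - lo))

      # Seven validators (two evaluation experts, two teachers, three colleagues)
      # rating one item's relevance on a 1-5 scale (invented ratings):
      print(round(aikens_v([5, 4, 4, 5, 4, 4, 5]), 2))   # 0.86, inside the reported range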

  15. MO-C-17A-03: A GPU-Based Method for Validating Deformable Image Registration in Head and Neck Radiotherapy Using Biomechanical Modeling

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Neylon, J; Min, Y; Qi, S

    2014-06-15

    Purpose: Deformable image registration (DIR) plays a pivotal role in head and neck adaptive radiotherapy, but systematic validation of DIR algorithms has been limited by a lack of quantitative high-resolution ground truth. We address this limitation by developing a GPU-based framework that provides systematic DIR validation by generating (a) model-guided synthetic CTs representing posture and physiological changes, and (b) model-guided landmark-based validation. Method: The GPU-based framework was developed to generate massive mass-spring biomechanical models from patient simulation CTs and contoured structures. The biomechanical model represented soft-tissue deformations for known rigid skeletal motion. Posture changes were simulated by articulating the skeletal anatomy, which subsequently applied elastic corrective forces upon the soft tissue. Physiological changes such as tumor regression and weight loss were simulated in a biomechanically precise manner. Synthetic CT data were then generated from the deformed anatomy. The initial and final positions of one hundred randomly chosen mass elements inside each of the internal contoured structures were recorded as ground truth data. The process was automated to create 45 synthetic CT datasets for a given patient CT. For instance, head rotation was varied between +/- 4 degrees along each axis, and tumor volumes were systematically reduced by up to 30%. Finally, the original CT and deformed synthetic CT were registered using an optical-flow-based DIR. Results: Each synthetic dataset took approximately 28 seconds of computation time to create. The number of landmarks per dataset varied between two and three thousand. The validation method is able to perform sub-voxel analysis of the DIR and to report the results by structure, giving a much more in-depth investigation of the error. Conclusions: We presented a GPU-based high-resolution biomechanical head and neck model to validate DIR algorithms by generating CT-equivalent 3D volumes with simulated posture changes and physiological regression.

  16. A unified approach to validation, reliability, and education study design for surgical technical skills training.

    PubMed

    Sweet, Robert M; Hananel, David; Lawrenz, Frances

    2010-02-01

    To present modern educational psychology theory and apply these concepts to validity and reliability of surgical skills training and assessment. In a series of cross-disciplinary meetings, we applied a unified approach of behavioral science principles and theory to medical technical skills education given the recent advances in the theories in the field of behavioral psychology and statistics. While validation of the individual simulation tools is important, it is only one piece of a multimodal curriculum that in and of itself deserves examination and study. We propose concurrent validation throughout the design of simulation-based curriculum rather than once it is complete. We embrace the concept that validity and curriculum development are interdependent, ongoing processes that are never truly complete. Individual predictive, construct, content, and face validity aspects should not be considered separately but as interdependent and complementary toward an end application. Such an approach could help guide our acceptance and appropriate application of these exciting new training and assessment tools for technical skills training in medicine.

  17. Elaboration and Validation of the Medication Prescription Safety Checklist

    PubMed Central

    Pires, Aline de Oliveira Meireles; Ferreira, Maria Beatriz Guimarães; do Nascimento, Kleiton Gonçalves; Felix, Márcia Marques dos Santos; Pires, Patrícia da Silva; Barbosa, Maria Helena

    2017-01-01

    Objective: To elaborate and validate a checklist to identify compliance with the recommendations for the structure of medication prescriptions, based on the Protocol of the Ministry of Health and the Brazilian Health Surveillance Agency. Method: Methodological research, conducted through the validation and reliability analysis process, using a sample of 27 electronic prescriptions. Results: The analyses confirmed the content validity and reliability of the tool. The content validity, obtained by expert assessment, was considered satisfactory as it covered items that represent the compliance with the recommendations regarding the structure of the medication prescriptions. The reliability, assessed through interrater agreement, was excellent (ICC=1.00) and showed perfect agreement (K=1.00). Conclusion: The Medication Prescription Safety Checklist proved to be a valid and reliable tool for the group studied. We hope that this study can contribute to the prevention of adverse events, as well as to the improvement of care quality and safety in medication use. PMID:28793128

  18. Marshall Space Flight Center's Virtual Reality Applications Program 1993

    NASA Technical Reports Server (NTRS)

    Hale, Joseph P., II

    1993-01-01

    A Virtual Reality (VR) applications program has been under development at the Marshall Space Flight Center (MSFC) since 1989. Other NASA Centers, most notably Ames Research Center (ARC), have contributed to the development of the VR enabling technologies and VR systems. This VR technology development has now reached a level of maturity where specific applications of VR as a tool can be considered. The objectives of the MSFC VR Applications Program are to develop, validate, and utilize VR as a Human Factors design and operations analysis tool and to assess and evaluate VR as a tool in other applications (e.g., training, operations development, mission support, teleoperations planning, etc.). The long-term goals of this technology program are to enable specialized Human Factors analyses earlier in the hardware and operations development process and to develop more effective training and mission support systems. The capability to perform specialized Human Factors analyses earlier in the hardware and operations development process is required to better refine and validate requirements during the requirements definition phase. This leads to a more efficient design process where perturbations caused by late-occurring requirements changes are minimized. A validated set of VR analytical tools must be developed to enable a more efficient process for the design and development of space systems and operations. Similarly, training and mission support systems must exploit state-of-the-art computer-based technologies to maximize training effectiveness and enhance mission support. The approach of the VR Applications Program is to develop and validate appropriate virtual environments and associated object kinematic and behavior attributes for specific classes of applications. These application-specific environments and associated simulations will be validated, where possible, through empirical comparisons with existing, accepted tools and methodologies. These validated VR analytical tools will then be available for use in the design and development of space systems and operations and in training and mission support systems.

  19. Bimodal fuzzy analytic hierarchy process (BFAHP) for coronary heart disease risk assessment.

    PubMed

    Sabahi, Farnaz

    2018-04-04

    Rooted deeply in medical multiple criteria decision-making (MCDM), risk assessment is very important, especially when applied to the risk of being affected by deadly diseases such as coronary heart disease (CHD). CHD risk assessment is a stochastic, uncertain, and highly dynamic process influenced by various known and unknown variables. In recent years, there has been great interest in the fuzzy analytic hierarchy process (FAHP), a popular methodology for dealing with uncertainty in MCDM. This paper proposes a new FAHP, the bimodal fuzzy analytic hierarchy process (BFAHP), which augments two aspects of knowledge, probability and validity, to fuzzy numbers to better deal with uncertainty. In BFAHP, fuzzy validity is computed by aggregating the validities of relevant risk factors based on expert knowledge and collective intelligence. By considering both soft and statistical data, we compute the fuzzy probability of risk factors using the Bayesian formulation. In the BFAHP approach, these fuzzy validities and fuzzy probabilities are used to construct a reciprocal comparison matrix. We then aggregate fuzzy probabilities and fuzzy validities in a pairwise manner for each risk factor and each alternative. BFAHP decides between affected and not affected by ranking high and low risks. For evaluation, the proposed approach is applied to the risk of being affected by CHD using a real dataset of 152 patients from Iranian hospitals. Simulation results confirm that adding validity in a fuzzy manner can yield more confident, clinically useful results, especially in the face of incomplete information, when compared with actual outcomes. Applied to CHD risk assessment on this dataset, the proposed BFAHP yields an accuracy above 85% for correct prediction. In addition, this paper finds that the risk factors of diastolic blood pressure in men and high-density lipoprotein in women are more important in CHD than other risk factors.
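
    The pairwise-comparison step that any AHP variant (fuzzy or not) rests on can be illustrated with the classical eigenvector method; the sketch below is that textbook step with an invented 3x3 matrix, not the authors' BFAHP aggregation of fuzzy validities and probabilities.

      import numpy as np

      # Classical AHP: derive priority weights from a reciprocal pairwise-comparison
      # matrix via its principal eigenvector. The matrix is invented (say, blood
      # pressure vs. HDL vs. age), not the paper's CHD comparison data.
      A = np.array([[1.0, 3.0, 5.0],
                    [1/3, 1.0, 2.0],
                    [1/5, 1/2, 1.0]])

      eigvals, eigvecs = np.linalg.eig(A)
      k = np.argmax(eigvals.real)
      w = np.abs(eigvecs[:, k].real)
      w /= w.sum()                                      # normalised priority weights

      ci = (eigvals[k].real - len(A)) / (len(A) - 1)    # consistency index
      print("weights:", np.round(w, 3), " CR:", round(ci / 0.58, 3))   # RI(3x3) = 0.58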

  20. A Cross-Validation Study of Sequential-Simultaneous Processing at Ages 2 1/2-12 1/2 Using the Kaufman Assessment Battery for Children (K-ABC).

    ERIC Educational Resources Information Center

    Kamphaus, Randy W.; And Others

    The development of two types of mental processing (sequential and simultaneous) in preschool and elementary children was examined in this study. Specifically, the aims of the study were to develop a revised set of tasks based upon previous findings (Naglieri, Kaufman, Kaufman, & Kamphaus, 1981; Kaufman, Kaufman, Kamphaus, & Naglieri, in…

  1. Factor analysis methods and validity evidence: A systematic review of instrument development across the continuum of medical education

    NASA Astrophysics Data System (ADS)

    Wetzel, Angela Payne

    Previous systematic reviews indicate a lack of reporting of reliability and validity evidence in subsets of the medical education literature. Psychology and general education reviews of factor analysis also indicate gaps between current and best practices; yet, a comprehensive review of exploratory factor analysis in instrument development across the continuum of medical education had not been previously identified. Therefore, the purpose of this study was a critical review of instrument development articles employing exploratory factor or principal component analysis published in medical education (2006-2010) to describe and assess the reporting of methods and validity evidence based on the Standards for Educational and Psychological Testing and factor analysis best practices. Data extracted from the 64 included articles, which measure a variety of constructs and were published throughout the peer-reviewed medical education literature, indicate significant errors in the translation of exploratory factor analysis best practices to current practice. Further, techniques for establishing validity evidence tend to derive from a limited scope of methods including reliability statistics to support internal structure and support for test content. Instruments reviewed for this study lacked supporting evidence based on relationships with other variables and response process, and evidence based on consequences of testing was not evident. Findings suggest a need for further professional development within the medical education researcher community related to (1) appropriate factor analysis methodology and reporting and (2) the importance of pursuing multiple sources of reliability and validity evidence to construct a well-supported argument for the inferences made from the instrument. Medical education researchers and educators should be cautious in adopting instruments from the literature and carefully review available evidence. Finally, editors and reviewers are encouraged to recognize this gap in best practices and subsequently to promote instrument development research that is more consistent through the peer-review process.

  2. Educational Milestone Development in the First 7 Specialties to Enter the Next Accreditation System

    PubMed Central

    Swing, Susan R.; Beeson, Michael S.; Carraccio, Carol; Coburn, Michael; Iobst, William; Selden, Nathan R.; Stern, Peter J.; Vydareny, Kay

    2013-01-01

    Background: The Accreditation Council for Graduate Medical Education (ACGME) Outcome Project introduced 6 general competencies relevant to medical practice but fell short of its goal to create a robust assessment system that would allow program accreditation based on outcomes. In response, the ACGME, the specialty boards, and other stakeholders collaborated to develop educational milestones, observable steps in residents' professional development that describe progress from entry to graduation and beyond. Objectives: We summarize the development of the milestones, focusing on the 7 specialties moving to the next accreditation system in July 2013, and offer evidence of their validity. Methods: Specialty workgroups with broad representation used a 5-level developmental framework and incorporated information from literature reviews, specialty curricula, dialogue with constituents, and pilot testing. Results: The workgroups produced richly diverse sets of milestones that reflect the community's consideration of attributes of competence relevant to practice in the given specialty. Both their development process and the milestones themselves establish a validity argument when contemporary views of validity for complex performance assessment are used. Conclusions: Initial evidence for validity emerges from the development processes and the resulting milestones. Further advancing a validity argument will require research on the use of milestone data in resident assessment and program accreditation. PMID:24404235

  3. A protocol for validating Land Surface Temperature from Sentinel-3

    NASA Astrophysics Data System (ADS)

    Ghent, D.

    2015-12-01

    One of the main objectives of the Sentinel-3 mission is to measure sea- and land-surface temperature with high-end accuracy and reliability in support of environmental and climate monitoring in an operational context. Calibration and validation are thus key criteria for operationalization within the framework of the Sentinel-3 Mission Performance Centre (S3MPC). Land surface temperature (LST) has a long heritage of satellite observations which have facilitated our understanding of land surface and climate change processes, such as desertification, urbanization, deforestation and land/atmosphere coupling. These observations have been acquired from a variety of satellite instruments on platforms in both low-earth orbit and in geostationary orbit. Retrieval accuracy can be a challenge though; surface emissivities can be highly variable owing to the heterogeneity of the land, and atmospheric effects caused by the presence of aerosols and by water vapour absorption can give a bias to the underlying LST. As such, a rigorous validation is critical in order to assess the quality of the data and the associated uncertainties. The Sentinel-3 Cal-Val Plan for evaluating the level-2 SL_2_LST product builds on an established validation protocol for satellite-based LST. This set of guidelines provides a standardized framework for structuring LST validation activities, and is rapidly gaining international recognition. The protocol introduces a four-pronged approach which can be summarised thus: i) in situ validation where ground-based observations are available; ii) radiance-based validation over sites that are homogeneous in emissivity; iii) intercomparison with retrievals from other satellite sensors; iv) time-series analysis to identify artefacts on an interannual time-scale. This multi-dimensional approach is a necessary requirement for assessing the performance of the LST algorithm for SLSTR which is designed around biome-based coefficients, thus emphasizing the importance of non-traditional forms of validation such as radiance-based techniques. Here we present examples of the application of the protocol to data produced within the ESA DUE GlobTemperature Project. The lessons learnt here are helping to fine-tune the methodology in preparation for Sentinel-3 commissioning.
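
    As an illustration of the in situ strand of such a protocol, the sketch below computes typical match-up statistics (bias, standard deviation, RMSE, and a robust median/RSD pair) between satellite and ground LST; the values and variable names are illustrative, not part of the S3MPC plan.

        import numpy as np

        def lst_matchup_stats(lst_sat, lst_insitu):
            """Match-up statistics for satellite vs. in situ LST (kelvin)."""
            d = np.asarray(lst_sat) - np.asarray(lst_insitu)
            return {
                "n": d.size,
                "bias": d.mean(),
                "stdev": d.std(ddof=1),
                "rmse": np.sqrt(np.mean(d ** 2)),
                "median": np.median(d),
                # Robust spread from the median absolute deviation.
                "rsd": 1.4826 * np.median(np.abs(d - np.median(d))),
            }

        sat = np.array([301.2, 295.8, 310.4, 288.9, 299.7])
        ground = np.array([300.6, 296.5, 309.1, 289.3, 298.8])
        print(lst_matchup_stats(sat, ground))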

  4. Validation of Immunohistochemical Assays for Integral Biomarkers in the NCI-MATCH EAY131 Clinical Trial.

    PubMed

    Khoury, Joseph D; Wang, Wei-Lien; Prieto, Victor G; Medeiros, L Jeffrey; Kalhor, Neda; Hameed, Meera; Broaddus, Russell; Hamilton, Stanley R

    2018-02-01

    Biomarkers that guide therapy selection are gaining unprecedented importance as targeted therapy options increase in scope and complexity. In conjunction with high-throughput molecular techniques, therapy-guiding biomarker assays based upon immunohistochemistry (IHC) have a critical role in cancer care in that they inform about the expression status of a protein target. Here, we describe the validation procedures for four clinical IHC biomarker assays (PTEN, RB, MLH1, and MSH2) for use as integral biomarkers in the nationwide NCI-Molecular Analysis for Therapy Choice (NCI-MATCH) EAY131 clinical trial. Validation procedures were developed through an iterative process based on collective experience and adaptation of broad guidelines from the FDA. The steps included primary antibody selection; assay optimization; development of assay interpretation criteria incorporating biological considerations and expected staining patterns, including indeterminate results; orthogonal validation; and tissue validation. Following assay lockdown, patient samples and cell lines were used for analytic and clinical validation. The assays were then approved as laboratory-developed tests and used for clinical trial decisions for treatment selection. Calculations of sensitivity and specificity were undertaken using various definitions of gold-standard references, and external validation was required for the PTEN IHC assay. In conclusion, validation of IHC biomarker assays critical for guiding therapy in clinical trials is feasible using comprehensive preanalytic, analytic, and postanalytic steps. Implementation of standardized guidelines provides a useful framework for validating IHC biomarker assays that allow for reproducibility across institutions for routine clinical use. Clin Cancer Res; 24(3); 521-31. ©2017 AACR.
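
    For the analytic-validation step, sensitivity and specificity against a gold-standard reference reduce to confusion-matrix arithmetic; a minimal sketch with illustrative counts (not data from the trial):

        def sensitivity_specificity(tp, fp, fn, tn):
            """Sensitivity and specificity from confusion-matrix counts."""
            sensitivity = tp / (tp + fn)   # true-positive rate
            specificity = tn / (tn + fp)   # true-negative rate
            return sensitivity, specificity

        # Illustrative counts: IHC assay result vs. a gold-standard reference.
        sens, spec = sensitivity_specificity(tp=45, fp=3, fn=5, tn=47)
        print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")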

  5. The Perceived Leadership Communication Questionnaire (PLCQ): Development and Validation.

    PubMed

    Schneider, Frank M; Maier, Michaela; Lovrekovic, Sara; Retzbach, Andrea

    2015-01-01

    The Perceived Leadership Communication Questionnaire (PLCQ) is a short, reliable, and valid instrument for measuring leadership communication from the perspectives of both the leader and the follower. Drawing on a communication-based approach to leadership and following a theoretical framework of interpersonal communication processes in organizations, this article describes the development and validation of a one-dimensional 6-item scale in four studies (total N = 604). Results from Studies 1 and 2 provide evidence for the internal consistency and factorial validity of the PLCQ's self-rating version (PLCQ-SR), a version for measuring how leaders perceive their own communication with their followers. Results from Studies 3 and 4 show internal consistency, construct validity, and criterion validity of the PLCQ's other-rating version (PLCQ-OR), a version for measuring how followers perceive the communication of their leaders. Cronbach's α averaged 0.80 over the four studies. All confirmatory factor analyses yielded good to excellent model fit indices. Convergent validity was established by average positive correlations of 0.69 with subdimensions of transformational leadership and leader-member exchange scales. Furthermore, nonsignificant correlations with socially desirable responding indicated discriminant validity. Last, criterion validity was supported by a moderately positive correlation with job satisfaction (r = 0.31).
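
    Internal-consistency figures like those reported above come from Cronbach's alpha; a minimal sketch of the computation on synthetic data (not the PLCQ items):

        import numpy as np

        def cronbach_alpha(items):
            """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
            items = np.asarray(items, dtype=float)
            k = items.shape[1]
            item_vars = items.var(axis=0, ddof=1).sum()
            total_var = items.sum(axis=1).var(ddof=1)
            return k / (k - 1) * (1 - item_vars / total_var)

        rng = np.random.default_rng(0)
        latent = rng.standard_normal((100, 1))
        scores = latent + 0.8 * rng.standard_normal((100, 6))  # six correlated items
        print(round(cronbach_alpha(scores), 2))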

  6. The development of learning materials based on core model to improve students’ learning outcomes in topic of Chemical Bonding

    NASA Astrophysics Data System (ADS)

    Avianti, R.; Suyatno; Sugiarto, B.

    2018-04-01

    This study aims to create appropriate learning materials based on the CORE (Connecting, Organizing, Reflecting, Extending) model to improve students' learning achievement in the topic of chemical bonding. The study used the 4-D model as its research design and a one-group pretest-posttest design for the materials tryout. The teaching materials based on the CORE model were tried out on 30 students in a grade-10 science class. Data were collected through validation, observation, tests, and questionnaires. The findings were that: (1) all the contents were valid; (2) the practicality and the effectiveness of all the contents were good. The conclusion of this research is that the CORE model is appropriate for improving students' learning outcomes in studying chemical bonding.

  7. Comparing Cognitive Models of Domain Mastery and Task Performance in Algebra: Validity Evidence for a State Assessment

    ERIC Educational Resources Information Center

    Warner, Zachary B.

    2013-01-01

    This study compared an expert-based cognitive model of domain mastery with student-based cognitive models of task performance for Integrated Algebra. Interpretations of student test results are limited by experts' hypotheses of how students interact with the items. In reality, the cognitive processes that students use to solve each item may be…

  8. Development and Validation of a Measurement Scale to Analyze the Environment for Evidence-Based Medicine Learning and Practice by Medical Residents

    ERIC Educational Resources Information Center

    Mi, Fangqiong

    2010-01-01

    A growing number of residency programs are instituting curricula to include the component of evidence-based medicine (EBM) principles and process. However, these curricula may not be able to achieve the optimal learning outcomes, perhaps because various contextual factors are often overlooked when EBM training is being designed, developed, and…

  9. The Naïve Overfitting Index Selection (NOIS): A new method to optimize model complexity for hyperspectral data

    NASA Astrophysics Data System (ADS)

    Rocha, Alby D.; Groen, Thomas A.; Skidmore, Andrew K.; Darvishzadeh, Roshanak; Willemen, Louise

    2017-11-01

    The growing number of narrow spectral bands in hyperspectral remote sensing improves the capacity to describe and predict biological processes in ecosystems. But it also poses a challenge for fitting empirical models based on such high-dimensional data, which often contain correlated and noisy predictors. As sample sizes for training and validating empirical models do not seem to be increasing at the same rate, overfitting has become a serious concern. Overly complex models lead to overfitting by capturing more than the underlying relationship, and also by fitting random noise in the data. Many regression techniques claim to overcome these problems by using different strategies to constrain complexity, such as limiting the number of terms in the model, creating latent variables, or shrinking parameter coefficients. This paper proposes a new method, named Naïve Overfitting Index Selection (NOIS), which makes use of artificially generated spectra to quantify the relative model overfitting and to select an optimal model complexity supported by the data. The robustness of this new method is assessed by comparing it to a traditional model selection based on cross-validation. The optimal model complexity is determined for seven different regression techniques, such as partial least squares regression, support vector machine, artificial neural network, and tree-based regressions, using five hyperspectral datasets. The NOIS method selects less complex models, which present accuracies similar to the cross-validation method. The NOIS method reduces the chance of overfitting, thereby avoiding models that present accurate predictions that are only valid for the data used and are too complex to permit inferences about the underlying process.
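
    The published NOIS procedure is not reproduced here, but its core idea can be sketched under stated assumptions: fit models of increasing complexity to information-free data and treat whatever apparent skill they achieve as a measure of overfitting. The sketch below applies this reading to choosing the number of components in a PLS regression; the permutation step and threshold logic are simplifications, not the authors' algorithm.

        import numpy as np
        from sklearn.cross_decomposition import PLSRegression

        rng = np.random.default_rng(0)
        n, p = 60, 200                       # few samples, many "bands"
        X = rng.standard_normal((n, p))      # real spectra would go here
        y = X[:, :5].sum(axis=1) + 0.5 * rng.standard_normal(n)

        def apparent_r2(X, y, k):
            return PLSRegression(n_components=k).fit(X, y).score(X, y)

        for k in (1, 2, 5, 10, 20):
            # Overfitting index: apparent skill on a permuted, information-free response.
            noise_r2 = apparent_r2(X, rng.permutation(y), k)
            real_r2 = apparent_r2(X, y, k)
            print(f"k={k:2d}  R2(real)={real_r2:.2f}  R2(noise-only)={noise_r2:.2f}")
        # Prefer the largest k whose noise-only R2 stays near zero.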

  10. When is good, good enough? Methodological pragmatism for sustainable guideline development.

    PubMed

    Browman, George P; Somerfield, Mark R; Lyman, Gary H; Brouwers, Melissa C

    2015-03-06

    Continuous escalation in methodological and procedural rigor for evidence-based processes in guideline development is associated with increasing costs and production delays that threaten sustainability. While health research methodologists are appropriately responsible for promoting increasing rigor in guideline development, guideline sponsors are responsible for funding such processes. This paper acknowledges that other stakeholders in addition to methodologists should be more involved in negotiating trade-offs between methodological procedures and efficiency in guideline production to produce guidelines that are 'good enough' to be trustworthy and affordable under specific circumstances. The argument for reasonable methodological compromise to meet practical circumstances is consistent with current implicit methodological practice. This paper proposes a conceptual tool as a framework to be used by different stakeholders in negotiating, and explicitly reporting, reasonable compromises for trustworthy as well as cost-worthy guidelines. The framework helps fill a transparency gap in how methodological choices in guideline development are made. The principle, 'when good is good enough' can serve as a basis for this approach. The conceptual tool 'Efficiency-Validity Methodological Continuum' acknowledges trade-offs between validity and efficiency in evidence-based guideline development and allows for negotiation, guided by methodologists, of reasonable methodological compromises among stakeholders. Collaboration among guideline stakeholders in the development process is necessary if evidence-based guideline development is to be sustainable.

  11. Understanding metallic bonding: Structure, process and interaction by Rasch analysis

    NASA Astrophysics Data System (ADS)

    Cheng, Maurice M. W.; Oon, Pey-Tee

    2016-08-01

    This paper reports the results of a survey of 3006 Year 10-12 students on their understandings of metallic bonding. The instrument was developed based on Chi's ontological categories of scientific concepts and students' understanding of metallic bonding as reported in the literature. The instrument has two parts. Part one probed students' understanding of metallic bonding as (a) a submicro structure of metals, (b) a process in which individual metal atoms lose their outermost shell electrons to form a 'sea of electrons' and octet metal cations or (c) an all-directional electrostatic force between delocalized electrons and metal cations, that is, an interaction. Part two assessed students' explanation of the malleability of metals, for example (a) as a submicro structural rearrangement of metal atoms/cations or (b) based on all-directional electrostatic force. The instrument was validated by the Rasch model. Psychometric assessment showed that the instrument possessed reasonably good measurement properties. Results revealed that it was reliable and valid for measuring students' understanding of metallic bonding. Analysis revealed that the structure, process and interaction understandings were unidimensional and in increasing order of difficulty. Implications for the teaching of metallic bonding, particularly through the use of diagrams, critiques and model-based learning, are discussed.
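
    The Rasch model used for the validation expresses the probability of endorsing an item as a logistic function of the gap between person ability and item difficulty; a minimal sketch with illustrative difficulties for the structure/process/interaction levels (not the fitted values):

        import numpy as np

        def rasch_prob(theta, b):
            """Rasch model: P(endorse) given ability theta and item difficulty b."""
            return 1.0 / (1.0 + np.exp(-(theta - b)))

        # Illustrative difficulties in increasing order: structure < process < interaction.
        difficulties = {"structure": -1.0, "process": 0.0, "interaction": 1.2}
        for theta in (-2.0, 0.0, 2.0):
            probs = {k: round(rasch_prob(theta, b), 2) for k, b in difficulties.items()}
            print(f"ability={theta:+.1f} -> {probs}")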

  12. The consultation and relational empathy (CARE) measure: development and preliminary validation and reliability of an empathy-based consultation process measure.

    PubMed

    Mercer, Stewart W; Maxwell, Margaret; Heaney, David; Watt, Graham Cm

    2004-12-01

    Empathy is a key aspect of the clinical encounter but there is a lack of patient-assessed measures suitable for general clinical settings. Our aim was to develop a consultation process measure based on a broad definition of empathy, which is meaningful to patients irrespective of their socio-economic background. Qualitative and quantitative approaches were used to develop and validate the new measure, which we have called the consultation and relational empathy (CARE) measure. Concurrent validity was assessed by correlational analysis against other validated measures in a series of three pilot studies in general practice (in areas of high or low socio-economic deprivation). Face and content validity was investigated by 43 interviews with patients from both types of areas, and by feedback from GPs and expert researchers in the field. The initial version of the new measure (pilot 1; high deprivation practice) correlated strongly (r = 0.85) with the Reynolds empathy measure (RES) and the Barrett-Lennard empathy subscale (BLESS) (r = 0.63), but had a highly skewed distribution (skew -1.879, kurtosis 3.563). Statistical analysis, and feedback from the 20 patients interviewed, the GPs and the expert researchers, led to a number of modifications. The revised, second version of the CARE measure, tested in an area of low deprivation (pilot 2) also correlated strongly with the established empathy measures (r = 0.84 versus RES and r = 0.77 versus BLESS) but had a less skewed distribution (skew -0.634, kurtosis -0.067). Internal reliability of the revised version was high (Cronbach's alpha 0.92). Patient feedback at interview (n = 13) led to only minor modification. The final version of the CARE measure, tested in pilot 3 (high deprivation practice) confirmed the validation with the other empathy measures (r = 0.85 versus RES and r = 0.84 versus BLESS) and the face validity (feedback from 10 patients). These preliminary results support the validity and reliability of the CARE measure as a tool for measuring patients' perceptions of relational empathy in the consultation.

  13. Verification and Validation of Adaptive and Intelligent Systems with Flight Test Results

    NASA Technical Reports Server (NTRS)

    Burken, John J.; Larson, Richard R.

    2009-01-01

    F-15 IFCS project goals are: (a) demonstrate control approaches that can efficiently optimize aircraft performance in both normal and failure conditions ([A] and [B] failures); (b) advance neural network-based flight control technology for new aerospace system designs with a pilot in the loop. Gen II objectives include: (a) implement and fly a direct adaptive neural network-based flight controller; (b) demonstrate the ability of the system to adapt to simulated system failures: (1) suppress transients associated with the failure, (2) re-establish sufficient control and handling of the vehicle for safe recovery; and (c) provide flight experience for the development of verification and validation processes for flight-critical neural network software.

  14. Carbon Dynamics and Export from Flooded Wetlands: A Modeling Approach

    EPA Science Inventory

    Described in this article are the development and validation of a process-based model for carbon cycling in flooded wetlands, called WetQual-C. The model considers various biogeochemical interactions affecting C cycling, greenhouse gas emissions, and organic carbon export and retention. ...

  15. Report: EPA Lacks Processes to Validate Whether Contractors Receive Specialized Role-Based Training for Network and Data Protection

    EPA Pesticide Factsheets

    Report #17-P-0344, July 31, 2017. The EPA is unaware whether information security contractors possess the skills and training needed to protect the agency’s information, data and network from security breaches.

  16. Error Estimation and Uncertainty Propagation in Computational Fluid Mechanics

    NASA Technical Reports Server (NTRS)

    Zhu, J. Z.; He, Guowei; Bushnell, Dennis M. (Technical Monitor)

    2002-01-01

    Numerical simulation has now become an integral part of the engineering design process. Critical design decisions are routinely made based on simulation results and conclusions. Verification and validation of the reliability of numerical simulation is therefore vitally important in the engineering design process. We propose to develop theories and methodologies that can automatically provide quantitative information about this reliability by estimating the numerical approximation error, the errors induced by the computational model, and the uncertainties contained in the mathematical models, so that the reliability of the numerical simulation can be verified and validated. We also propose to develop and implement methodologies and techniques that can control the error and uncertainty during the numerical simulation, so that its reliability can be improved.
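
    A standard way to quantify the numerical approximation error discussed here is Richardson extrapolation over a sequence of refined grids; a minimal sketch with illustrative values, assuming a smooth solution and a constant refinement ratio:

        import numpy as np

        def richardson(f_fine, f_med, f_coarse, r):
            """Estimate observed order and discretization error from three
            grid solutions with constant refinement ratio r."""
            p = np.log((f_coarse - f_med) / (f_med - f_fine)) / np.log(r)
            f_exact = f_fine + (f_fine - f_med) / (r ** p - 1)  # extrapolated value
            return p, f_exact, f_exact - f_fine

        # Illustrative drag-coefficient values on fine/medium/coarse grids (r = 2).
        p, f_ex, err = richardson(f_fine=0.02834, f_med=0.02866, f_coarse=0.02995, r=2)
        print(f"observed order={p:.2f}, extrapolated={f_ex:.5f}, error est={err:+.5f}")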

  17. The development and validation of an instrument to measure preservice teachers' self-efficacy in regard to the teaching of science as inquiry

    NASA Astrophysics Data System (ADS)

    Dira-Smolleck, Lori

    The purpose of this study was to develop, validate and establish the reliability of an instrument that measures preservice teachers' self-efficacy in regard to the teaching of science as inquiry. The instrument (TSI) is based upon the work of Bandura, Riggs, and Enochs & Riggs (1990). The study used Bandura's theoretical framework in that the instrument uses the self-efficacy construct to explore the beliefs of prospective elementary science teachers with regard to the teaching of science through inquiry, specifically the two dimensions of self-efficacy beliefs defined by Bandura: personal self-efficacy and outcome expectancy. Self-efficacy in regard to the teaching of science as inquiry was measured through the use of a 69-item Likert scale instrument designed by the author of the study. A 13-step plan was designed and followed in the process of developing the instrument. Using the results from Cronbach's alpha and analysis of variance, the 69-item instrument was found to achieve the greatest balance of construct validity, reliability and item balance with the Essential Elements of Classroom Inquiry content matrix. Based on the standardized development processes used and the associated evidence, the TSI appears to be a content- and construct-valid instrument with high internal reliability for use with prospective elementary teachers to assess self-efficacy beliefs in regard to the teaching of science as inquiry. Implications for research, policy and practice are also discussed.

  18. Mastering algebra retrains the visual system to perceive hierarchical structure in equations.

    PubMed

    Marghetis, Tyler; Landy, David; Goldstone, Robert L

    2016-01-01

    Formal mathematics is a paragon of abstractness. It thus seems natural to assume that the mathematical expert should rely more on symbolic or conceptual processes, and less on perception and action. We argue instead that mathematical proficiency relies on perceptual systems that have been retrained to implement mathematical skills. Specifically, we investigated whether the visual system (in particular, object-based attention) is retrained so that parsing algebraic expressions and evaluating algebraic validity are accomplished by visual processing. Object-based attention occurs when the visual system organizes the world into discrete objects, which then guide the deployment of attention. One classic signature of object-based attention is better perceptual discrimination within, rather than between, visual objects. The current study reports that object-based attention occurs not only for simple shapes but also for symbolic mathematical elements within algebraic expressions, but only among individuals who have mastered the hierarchical syntax of algebra. Moreover, among these individuals, increased object-based attention within algebraic expressions is associated with a better ability to evaluate algebraic validity. These results suggest that, in mastering the rules of algebra, people retrain their visual system to represent and evaluate abstract mathematical structure. We thus argue that algebraic expertise involves the regimentation and reuse of evolutionarily ancient perceptual processes. Our findings implicate the visual system as central to learning and reasoning in mathematics, leading us to favor educational approaches to mathematics and related STEM fields that encourage students to adapt, not abandon, their use of perception.

  19. Administering Cognitive Tests Through Touch Screen Tablet Devices: Potential Issues.

    PubMed

    Jenkins, Amy; Lindsay, Stephen; Eslambolchilar, Parisa; Thornton, Ian M; Tales, Andrea

    2016-10-04

    Mobile technologies, such as tablet devices, open up new possibilities for health-related diagnosis, monitoring, and intervention for older adults and healthcare practitioners. Current evaluations of cognitive integrity typically occur within clinical settings, such as memory clinics, using pen-and-paper or computer-based tests. In the present study, we investigate the challenges associated with transferring such tests to touch-based mobile technology platforms from an older adult perspective. Problems may include individual variability in technical familiarity and acceptance, as well as various factors influencing usability, acceptability, response characteristics, and thus the validity of a given test. For the results of mobile technology-based tests of reaction time to be valid and related to disease status rather than extraneous variables, it is imperative that the whole test process be investigated in order to determine potential effects before the test is fully developed. Researchers have emphasized the importance of including the 'user' in the evaluation of such devices; we therefore performed a focus group-based qualitative assessment of the processes involved in the administration and performance of a tablet-based version of a typical test of attention and information processing speed (a multi-item localization task) with younger and older adults. We report that although the test was regarded positively, indicating that using a tablet for the delivery of such tests is feasible, it is important for developers to consider factors surrounding user expectations, performance feedback, and physical response requirements, and to use this information to inform further research into such applications.

  20. A methodology for collecting valid software engineering data

    NASA Technical Reports Server (NTRS)

    Basili, Victor R.; Weiss, David M.

    1983-01-01

    An effective data collection method for evaluating software development methodologies and for studying the software development process is described. The method uses goal-directed data collection to evaluate methodologies with respect to the claims made for them. Such claims are used as a basis for defining the goals of the data collection, establishing a list of questions of interest to be answered by data analysis, defining a set of data categorization schemes, and designing a data collection form. The data to be collected are based on the changes made to the software during development, and are obtained when the changes are made. To ensure accuracy of the data, validation is performed concurrently with software development and data collection. Validation is based on interviews with those people supplying the data. Results from using the methodology show that data validation is a necessary part of change data collection. Without it, as much as 50% of the data may be erroneous. Feasibility of the data collection methodology was demonstrated by applying it to five different projects in two different environments. The application showed that the methodology was both feasible and useful.

  1. Process modeling of an advanced NH₃ abatement and recycling technology in the ammonia-based CO₂ capture process.

    PubMed

    Li, Kangkang; Yu, Hai; Tade, Moses; Feron, Paul; Yu, Jingwen; Wang, Shujuan

    2014-06-17

    An advanced NH3 abatement and recycling process that makes great use of the waste heat in flue gas was proposed to solve the problems of ammonia slip, NH3 makeup, and flue gas cooling in the ammonia-based CO2 capture process. The rigorous rate-based model, RateFrac in Aspen Plus, was thermodynamically and kinetically validated by experimental data from the open literature and CSIRO pilot trials at Munmorah Power Station, Australia, respectively. After a thorough sensitivity analysis and process improvement, the NH3 recycling efficiency reached as high as 99.87%, and the NH3 exhaust concentration was only 15.4 ppmv. Most importantly, the energy consumption of the NH3 abatement and recycling system was only 59.34 kJ/kg CO2 of electricity. The evaluation of the mass balance and temperature stability shows that this NH3 recovery process is technically effective and feasible. This process is therefore a promising prospect for industrial application.

  2. Developing a Workflow Composite Score to Measure Clinical Information Logistics. A Top-down Approach.

    PubMed

    Liebe, J D; Hübner, U; Straede, M C; Thye, J

    2015-01-01

    Availability and usage of individual IT applications have been studied intensively in the past years. Recently, IT support of clinical processes has been attracting increasing attention. The underlying construct that describes the IT support of clinical workflows is clinical information logistics. This construct needs to be better understood, operationalised and measured. It is therefore the aim of this study to propose and develop a workflow composite score (WCS) for measuring clinical information logistics and to examine its quality based on reliability and validity analyses. We largely followed the procedural model of MacKenzie and colleagues (2011) for defining and conceptualising the construct domain, developing the measurement instrument, assessing content validity, pretesting the instrument, specifying the model, capturing the data, computing the WCS, and testing reliability and validity. Clinical information logistics was decomposed into the descriptors data and information, function, integration and distribution, which embraced the framework validated by an analysis of the international literature. This framework was refined by selecting representative clinical processes. We chose ward rounds, pre- and post-surgery processes and discharge as sample processes that served as concrete instances for the measurements. They are sufficiently complex, represent core clinical processes and involve different professions, departments and settings. The score was computed on the basis of data from 183 hospitals of different size, ownership, location and teaching status. Testing the reliability and validity yielded encouraging results: the reliability was high, with r(split-half) = 0.89; the WCS discriminated between groups; the WCS correlated significantly and moderately with two EHR models; and the WCS received good evaluations from a sample of chief information officers (n = 67). These findings suggest the further utilisation of the WCS. As the WCS does not assume ideal workflows as a gold standard but measures IT support of clinical workflows according to validated descriptors, a high portability of the WCS to hospitals in other countries is very likely. The WCS will contribute to a better understanding of the construct clinical information logistics.

  3. Collection of process data after cardiac surgery: initial implementation with a Java-based intranet applet.

    PubMed

    Ratcliffe, M B; Khan, J H; Magee, K M; McElhinney, D B; Hubner, C

    2000-06-01

    Using a Java-based intranet program (applet), we collected postoperative process data after coronary artery bypass grafting. A Java-based applet was developed and deployed on a hospital intranet. Briefly, the nurse entered patient process data using a point and click interface. The applet generated a nursing note, and process data were saved in a Microsoft Access database. In 10 patients, this method was validated by comparison with a retrospective chart review. In 45 consecutive patients, weekly control charts were generated from the data. When aberrations from the pathway occurred, feedback was initiated to restore the goals of the critical pathway. The intranet process data collection method was verified by a manual chart review with 98% sensitivity. The control charts for time to extubation, intensive care unit stay, and hospital stay showed a deviation from critical pathway goals after the first 20 patients. Feedback modulation was associated with a return to critical pathway goals. Java-based applets are inexpensive and can collect accurate postoperative process data, identify critical pathway deviations, and allow timely feedback of process data.

  4. Spanish translation, cross-cultural adaptation, and validation of the Questionnaire for Diabetes-Related Foot Disease (Q-DFD)

    PubMed Central

    Castillo-Tandazo, Wilson; Flores-Fortty, Adolfo; Feraud, Lourdes; Tettamanti, Daniel

    2013-01-01

    Purpose: To translate, cross-culturally adapt, and validate the Questionnaire for Diabetes-Related Foot Disease (Q-DFD), originally created and validated in Australia, for its use in Spanish-speaking patients with diabetes mellitus. Patients and methods: The translation and cross-cultural adaptation were based on international guidelines. The Spanish version of the survey was applied to a community-based (sample A) and a hospital clinic-based sample (samples B and C). Samples A and B were used to determine criterion and construct validity comparing the survey findings with clinical evaluation and medical records, respectively; while sample C was used to determine intra- and inter-rater reliability. Results: After completing the rigorous translation process, only four items were considered problematic and required a new translation. In total, 127 patients were included in the validation study: 76 to determine criterion and construct validity and 41 to establish intra- and inter-rater reliability. For an overall diagnosis of diabetes-related foot disease, a substantial level of agreement was obtained when we compared the Q-DFD with the clinical assessment (kappa 0.77, sensitivity 80.4%, specificity 91.5%, positive likelihood ratio [LR+] 9.46, negative likelihood ratio [LR−] 0.21); while an almost perfect level of agreement was obtained when it was compared with medical records (kappa 0.88, sensitivity 87%, specificity 97%, LR+ 29.0, LR− 0.13). Survey reliability showed substantial levels of agreement, with kappa scores of 0.63 and 0.73 for intra- and inter-rater reliability, respectively. Conclusion: The translated and cross-culturally adapted Q-DFD showed good psychometric properties (validity, reproducibility, and reliability) that allow its use in Spanish-speaking diabetic populations. PMID:24039434
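
    The agreement and screening statistics reported above follow directly from a 2x2 table; a minimal sketch computing Cohen's kappa and the likelihood ratios from illustrative counts (not the study data):

        def agreement_stats(tp, fp, fn, tn):
            """Cohen's kappa plus likelihood ratios for a 2x2 table of
            questionnaire result vs. reference diagnosis."""
            n = tp + fp + fn + tn
            po = (tp + tn) / n                                  # observed agreement
            pe = ((tp + fp) * (tp + fn) + (fn + tn) * (fp + tn)) / n ** 2
            kappa = (po - pe) / (1 - pe)                        # chance-corrected
            sens, spec = tp / (tp + fn), tn / (tn + fp)
            return kappa, sens / (1 - spec), (1 - sens) / spec  # kappa, LR+, LR-

        kappa, lr_pos, lr_neg = agreement_stats(tp=37, fp=3, fn=9, tn=27)
        print(f"kappa={kappa:.2f}, LR+={lr_pos:.1f}, LR-={lr_neg:.2f}")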

  5. Performance of Landslide-HySEA tsunami model for NTHMP benchmarking validation process

    NASA Astrophysics Data System (ADS)

    Macias, Jorge

    2017-04-01

    In its FY2009 Strategic Plan, the NTHMP required that all numerical tsunami inundation models be verified as accurate and consistent through a model benchmarking process. This was completed in 2011, but only for seismic tsunami sources and in a limited manner for idealized solid underwater landslides. Recent work by various NTHMP states, however, has shown that landslide tsunami hazard may be dominant along significant parts of the US coastline, as compared with hazards from other tsunamigenic sources. To perform the above-mentioned validation process, a set of candidate benchmarks was proposed. These benchmarks are based on a subset of available laboratory data sets for solid slide experiments and deformable slide experiments, and include both submarine and subaerial slides. A benchmark based on a historic field event (Valdez, AK, 1964) closes the list of proposed benchmarks. The Landslide-HySEA model participated in the workshop organized at Texas A&M University - Galveston, on January 9-11, 2017. The aim of this presentation is to show some of the numerical results obtained with Landslide-HySEA in the framework of this benchmarking validation/verification effort. Acknowledgements: This research has been partially supported by the Junta de Andalucía research project TESELA (P11-RNM7069), the Spanish Government research project SIMURISK (MTM2015-70490-C02-01-R) and Universidad de Málaga, Campus de Excelencia Internacional Andalucía Tech. The GPU computations were performed at the Unit of Numerical Methods (University of Malaga).

  6. Joint Estimation of Time-Frequency Signature and DOA Based on STFD for Multicomponent Chirp Signals

    PubMed Central

    Zhao, Ziyue; Liu, Congfeng

    2014-01-01

    In the study of the joint estimation of the time-frequency signature and direction of arrival (DOA) for multicomponent chirp signals, an estimation method based on spatial time-frequency distributions (STFDs) is proposed in this paper. First, an array signal model for multicomponent chirp signals is presented, and array processing is then applied in the time-frequency analysis to mitigate cross-terms. Based on the results of the array processing, a Hough transform is performed and the estimate of the time-frequency signature is obtained. Subsequently, a subspace method for DOA estimation based on the STFD matrix is applied. Simulation results demonstrate the validity of the proposed method. PMID:27382610
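
    A minimal sketch of the time-frequency side of this idea, using a single sensor and a least-squares fit to the spectrogram ridge as a simplified stand-in for the full STFD-plus-Hough pipeline (all parameters illustrative):

        import numpy as np
        from scipy.signal import spectrogram

        fs = 1000.0
        t = np.arange(0, 1.0, 1 / fs)
        f0, rate = 50.0, 200.0                       # start frequency, chirp rate (Hz/s)
        x = np.cos(2 * np.pi * (f0 * t + 0.5 * rate * t ** 2))
        x += 0.3 * np.random.default_rng(0).standard_normal(t.size)

        f, ts, S = spectrogram(x, fs=fs, nperseg=128, noverlap=96)
        ridge = f[np.argmax(S, axis=0)]              # dominant frequency per frame
        slope, intercept = np.polyfit(ts, ridge, 1)  # line through the TF ridge
        print(f"estimated rate={slope:.1f} Hz/s, start frequency={intercept:.1f} Hz")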

  8. Optimization of Nd: YAG Laser Marking of Alumina Ceramic Using RSM And ANN

    NASA Astrophysics Data System (ADS)

    Peter, Josephine; Doloi, B.; Bhattacharyya, B.

    2011-01-01

    The present research paper deals with artificial neural network (ANN) and response surface methodology (RSM) based mathematical modelling, and an optimization analysis of marking characteristics on alumina ceramic. The experiments were planned and carried out based on design of experiments (DOE). The paper also analyses the influence of the major laser marking process parameters, and the optimal combination of laser marking process parameter settings has been obtained. The RSM-optimal settings are validated through experimentation and the ANN predictive model. Good agreement is observed between the results based on the ANN predictive model and the actual experimental observations.
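
    A minimal sketch of the RSM step: fit a second-order response surface to designed-experiment data and solve for the stationary point. The two coded factors and synthetic responses below stand in for the laser marking parameters of the study:

        import numpy as np

        # Synthetic two-factor experiment at coded levels; response to maximize.
        rng = np.random.default_rng(0)
        x1, x2 = np.meshgrid([-1, 0, 1], [-1, 0, 1])
        x1, x2 = x1.ravel().astype(float), x2.ravel().astype(float)
        y = 5 + 1.2 * x1 - 0.8 * x2 - 1.5 * x1**2 - 1.0 * x2**2 + 0.3 * x1 * x2
        y = y + 0.05 * rng.standard_normal(y.size)

        # Second-order model: y ~ b0 + b1*x1 + b2*x2 + b11*x1^2 + b22*x2^2 + b12*x1*x2
        A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
        b = np.linalg.lstsq(A, y, rcond=None)[0]

        # Stationary point: set the gradient of the fitted quadratic to zero.
        H = np.array([[2 * b[3], b[5]], [b[5], 2 * b[4]]])
        print("optimal coded settings:", np.round(np.linalg.solve(H, -b[1:3]), 2))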

  9. Independent Validation and Verification of automated information systems in the Department of Energy

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Hunteman, W.J.; Caldwell, R.

    1994-07-01

    The Department of Energy (DOE) has established an Independent Validation and Verification (IV&V) program for all classified automated information systems (AIS) operating in compartmented or multi-level modes. The IV&V program was established in DOE Order 5639.6A and is described in the manual associated with the Order. This paper describes the DOE IV&V program, the IV&V process and activities, the expected benefits from an IV&V, and the criteria and methodologies used during an IV&V. The first IV&V under this program was conducted on the Integrated Computing Network (ICN) at Los Alamos National Laboratory, and several lessons learned are presented. The DOE IV&V program is based on the following definitions. An IV&V is defined as the use of expertise from outside an AIS organization to conduct validation and verification studies on a classified AIS. Validation is defined as the process of applying the specialized security test and evaluation procedures, tools, and equipment needed to establish acceptance for joint usage of an AIS by one or more departments or agencies and their contractors. Verification is the process of comparing two levels of an AIS specification for proper correspondence (e.g., security policy model with top-level specifications, top-level specifications with source code, or source code with object code).

  10. Collecting the data but missing the point: validity of hand hygiene audit data.

    PubMed

    Jeanes, A; Coen, P G; Wilson, A P; Drey, N S; Gould, D J

    2015-06-01

    Monitoring of hand hygiene compliance (HHC) by observation has been used in healthcare for more than a decade to provide assurance of infection control practice. The validity of this information is rarely tested. To examine the process and validity of collecting and reporting HHC data based on direct observation of compliance. Five years of HHC data routinely collected in one large National Health Service hospital trust were examined. The data collection process was reviewed by survey and interview of the auditors. HHC data collected for other research purposes undertaken during this period were compared with the organizational data set. After an initial increase, the reported HHC remained unchanged close to its intended target throughout this period. Examination of the data collection process revealed changes, including local interpretations of the data collection system, which invalidated the results. A minority of auditors had received formal training in observation and feedback of results. Whereas observation of HHC is the current gold standard, unless data collection definitions and methods are unambiguous, published, carefully supervised, and regularly monitored, variations may occur which affect the validity of the data. If the purpose of HHC monitoring is to improve practice and minimize transmission of infection, then a focus on progressively improving performance rather than on achieving a target may offer greater opportunities to achieve this. Copyright © 2015 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.

  11. Priming nanoparticle-guided diagnostics and therapeutics towards human organs-on-chips microphysiological system

    NASA Astrophysics Data System (ADS)

    Choi, Jin-Ha; Lee, Jaewon; Shin, Woojung; Choi, Jeong-Woo; Kim, Hyun Jung

    2016-10-01

    Nanotechnology and bioengineering have converged over the past decades, through which the application of multi-functional nanoparticles (NPs) has emerged in clinical and biomedical fields. NPs primed to detect disease-specific biomarkers or to deliver biopharmaceutical compounds have been validated in conventional in vitro culture models, including two-dimensional (2D) cell cultures or 3D organoid models. However, a lack of experimental models that have strong human physiological relevance has hampered accurate validation of the safety and functionality of NPs. Alternatively, biomimetic human "Organs-on-Chips" microphysiological systems have recapitulated the mechanically dynamic 3D tissue interface of the human organ microenvironment, in which the transport, cytotoxicity, biocompatibility, and therapeutic efficacy of NPs and their conjugates may be more accurately validated. Finally, integration of NP-guided diagnostic detection and targeted nanotherapeutics in conjunction with human organs-on-chips can provide a novel avenue to accelerate the NP-based drug development process as well as the rapid detection of cellular secretomes associated with pathophysiological processes.

  12. Infinite hidden conditional random fields for human behavior analysis.

    PubMed

    Bousmalis, Konstantinos; Zafeiriou, Stefanos; Morency, Louis-Philippe; Pantic, Maja

    2013-01-01

    Hidden conditional random fields (HCRFs) are discriminative latent variable models that have been shown to successfully learn the hidden structure of a given classification problem (provided an appropriate validation of the number of hidden states). In this brief, we present the infinite HCRF (iHCRF), which is a nonparametric model based on hierarchical Dirichlet processes and is capable of automatically learning the optimal number of hidden states for a classification task. We show how we learn the model hyperparameters with an effective Markov-chain Monte Carlo sampling technique, and we explain the process that underlies our iHCRF model with the Restaurant Franchise Rating Agencies analogy. We show that the iHCRF is able to converge to a correct number of represented hidden states, and outperforms the best finite HCRFs, chosen via cross-validation, for the difficult tasks of recognizing instances of agreement, disagreement, and pain. Moreover, the iHCRF manages to achieve this performance in significantly less total training, validation, and testing time.
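
    The nonparametric ingredient of the iHCRF is the (hierarchical) Dirichlet process; the sketch below shows plain stick-breaking, in which a concentration parameter controls how many hidden states carry appreciable weight. It illustrates the prior, not the iHCRF inference itself:

        import numpy as np

        def stick_breaking(alpha, n_sticks, rng):
            """Sample mixture weights from a (truncated) Dirichlet process."""
            betas = rng.beta(1.0, alpha, size=n_sticks)
            remaining = np.concatenate([[1.0], np.cumprod(1.0 - betas[:-1])])
            return betas * remaining

        rng = np.random.default_rng(0)
        for alpha in (0.5, 2.0, 10.0):
            w = stick_breaking(alpha, 50, rng)
            print(f"alpha={alpha:4.1f} -> states with weight > 1%: {(w > 0.01).sum()}")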

  13. Colour-cueing in visual search.

    PubMed

    Laarni, J

    2001-02-01

    Several studies have shown that people can selectively attend to stimulus colour, e.g., in visual search, and that preknowledge of a target colour can improve response speed/accuracy. The purpose was to use a form-identification task to determine whether valid colour precues can produce benefits and invalid cues costs. The subject had to identify the orientation of a "T"-shaped element in a ring of randomly-oriented "L"s when either two or four of the elements were differently coloured. Contrary to Moore and Egeth's (1998) recent findings, colour-based attention did affect performance under data-limited conditions: Colour cues produced benefits when processing load was high; when the load was reduced, they incurred only costs. Surprisingly, a valid colour cue succeeded in improving performance in the high-load condition even when its validity was reduced to the chance level. Overall, the results suggest that knowledge of a target colour does not facilitate the processing of the target, but makes it possible to prioritize it.

  14. Delta, theta, beta, and gamma brain oscillations index levels of auditory sentence processing.

    PubMed

    Mai, Guangting; Minett, James W; Wang, William S-Y

    2016-06-01

    A growing number of studies indicate that multiple ranges of brain oscillations, especially the delta (δ, <4Hz), theta (θ, 4-8Hz), beta (β, 13-30Hz), and gamma (γ, 30-50Hz) bands, are engaged in speech and language processing. It is not clear, however, how these oscillations relate to functional processing at different linguistic hierarchical levels. Using scalp electroencephalography (EEG), the current study tested the hypothesis that phonological and the higher-level linguistic (semantic/syntactic) organizations during auditory sentence processing are indexed by distinct EEG signatures derived from the δ, θ, β, and γ oscillations. We analyzed specific EEG signatures while subjects listened to Mandarin speech stimuli in three different conditions in order to dissociate phonological and semantic/syntactic processing: (1) sentences comprising valid disyllabic words assembled in a valid syntactic structure (real-word condition); (2) utterances with morphologically valid syllables, but not constituting valid disyllabic words (pseudo-word condition); and (3) backward versions of the real-word and pseudo-word conditions. We tested four signatures: band power, EEG-acoustic entrainment (EAE), cross-frequency coupling (CFC), and inter-electrode renormalized partial directed coherence (rPDC). The results show significant effects of band power and EAE of δ and θ oscillations for phonological, rather than semantic/syntactic processing, indicating the importance of tracking δ- and θ-rate phonetic patterns during phonological analysis. We also found significant β-related effects, suggesting tracking of EEG to the acoustic stimulus (high-β EAE), memory processing (θ-low-β CFC), and auditory-motor interactions (20-Hz rPDC) during phonological analysis. For semantic/syntactic processing, we obtained a significant effect of γ power, suggesting lexical memory retrieval or processing grammatical word categories. Based on these findings, we confirm that scalp EEG signatures relevant to δ, θ, β, and γ oscillations can index phonological and semantic/syntactic organizations separately in auditory sentence processing, compatible with the view that phonological and higher-level linguistic processing engage distinct neural networks. Copyright © 2016 Elsevier Inc. All rights reserved.
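
    Band power of the kind analyzed here is commonly computed by integrating a Welch power spectral density over each frequency range; a minimal sketch on a synthetic theta-dominated signal (band edges follow the definitions above):

        import numpy as np
        from scipy.signal import welch

        fs = 250.0                                   # sampling rate (Hz)
        t = np.arange(0, 10, 1 / fs)
        rng = np.random.default_rng(0)
        eeg = np.sin(2 * np.pi * 6 * t) + 0.5 * rng.standard_normal(t.size)

        f, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
        bands = {"delta": (0.5, 4), "theta": (4, 8), "beta": (13, 30), "gamma": (30, 50)}
        df = f[1] - f[0]
        for name, (lo, hi) in bands.items():
            mask = (f >= lo) & (f < hi)
            print(f"{name:5s} power: {psd[mask].sum() * df:.3f}")  # integrate PSD over band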

  15. Methodological challenges when doing research that includes ethnic minorities: a scoping review.

    PubMed

    Morville, Anne-Le; Erlandsson, Lena-Karin

    2016-11-01

    There are challenging methodological issues in obtaining valid and reliable results on which to base occupational therapy interventions for ethnic minorities. The aim of this scoping review is to describe the methodological problems within occupational therapy research, when ethnic minorities are included. A thorough literature search yielded 21 articles obtained from the scientific databases PubMed, Cinahl, Web of Science and PsychInfo. Analysis followed Arksey and O'Malley's framework for scoping reviews, applying content analysis. The results showed methodological issues concerning the entire research process from defining and recruiting samples, the conceptual understanding, lack of appropriate instruments, data collection using interpreters to analyzing data. In order to avoid excluding the ethnic minorities from adequate occupational therapy research and interventions, development of methods for the entire research process is needed. It is a costly and time-consuming process, but the results will be valid and reliable, and therefore more applicable in clinical practice.

  16. Advanced control of dissolved oxygen concentration in fed batch cultures during recombinant protein production.

    PubMed

    Kuprijanov, A; Gnoth, S; Simutis, R; Lübbert, A

    2009-02-01

    Design and experimental validation of advanced pO(2) controllers for fermentation processes operated in fed-batch mode are described. In most situations, the presented controllers are able to keep the pO(2) in fermentations for recombinant protein production exactly at the desired value. The controllers are based on the gain-scheduling approach to parameter-adaptive proportional-integral (PI) controllers. In order to cope with the most frequently occurring disturbances, the basic gain-scheduled feedback controller was complemented with a feedforward control component. This feedforward/feedback controller significantly improved pO(2) control. By means of numerical simulations, the controller behaviour was tested and its parameters were determined. Validation runs were performed with three Escherichia coli strains producing different recombinant proteins. It is finally shown that the new controller leads to significant improvements in the signal-to-noise ratio of other key process variables and, thus, to a higher process quality.
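
    A minimal sketch of the control structure described, a gain-scheduled PI update combined with a feedforward term; the scheduling law, gains, and units are illustrative stand-ins, not the published design:

        def pi_gain_scheduled(error, integral, feed_rate, dt):
            """One update of a gain-scheduled PI controller with feedforward.

            Gains rise with the feed rate (an illustrative scheduling variable);
            the feedforward term anticipates the oxygen demand of the feed
            before it appears as a control error.
            """
            kp = 2.0 * (1.0 + feed_rate)      # proportional gain schedule
            ti = 120.0                        # integral time (s)
            integral += error * dt
            u_feedback = kp * (error + integral / ti)
            u_feedforward = 40.0 * feed_rate
            return u_feedback + u_feedforward, integral

        # One control step: pO2 measured 28%, setpoint 30%, feed rate 0.8 L/h.
        u, i = pi_gain_scheduled(error=2.0, integral=0.0, feed_rate=0.8, dt=1.0)
        print(f"stirrer speed increment: {u:.1f}")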

  17. Scenario-based design: A method for connecting information system design with public health operations and emergency management

    PubMed Central

    Reeder, Blaine; Turner, Anne M

    2011-01-01

    Responding to public health emergencies requires rapid and accurate assessment of workforce availability under adverse and changing circumstances. However, public health information systems to support resource management during both routine and emergency operations are currently lacking. We applied scenario-based design as an approach to engage public health practitioners in the creation and validation of an information design to support routine and emergency public health activities. Methods: Using semi-structured interviews we identified the information needs and activities of senior public health managers of a large municipal health department during routine and emergency operations. Results: Interview analysis identified twenty-five information needs for public health operations management. The identified information needs were used in conjunction with scenario-based design to create twenty-five scenarios of use and a public health manager persona. Scenarios of use and persona were validated and modified based on follow-up surveys with study participants. Scenarios were used to test and gain feedback on a pilot information system. Conclusion: The method of scenario-based design was applied to represent the resource management needs of senior-level public health managers under routine and disaster settings. Scenario-based design can be a useful tool for engaging public health practitioners in the design process and to validate an information system design. PMID:21807120

  18. Development of a Tablet-based symbol digit modalities test for reliably assessing information processing speed in patients with stroke.

    PubMed

    Tung, Li-Chen; Yu, Wan-Hui; Lin, Gong-Hong; Yu, Tzu-Ying; Wu, Chien-Te; Tsai, Chia-Yin; Chou, Willy; Chen, Mei-Hsiang; Hsieh, Ching-Lin

    2016-09-01

    To develop a Tablet-based Symbol Digit Modalities Test (T-SDMT) and to examine the test-retest reliability and concurrent validity of the T-SDMT in patients with stroke. The study had two phases. In the first phase, six experts, nine college students and five outpatients participated in the development and testing of the T-SDMT. In the second phase, 52 outpatients were evaluated twice (2 weeks apart) with the T-SDMT and SDMT to examine the test-retest reliability and concurrent validity of the T-SDMT. The T-SDMT was developed via expert input and college student/patient feedback. Regarding test-retest reliability, the practise effects of the T-SDMT and SDMT were both trivial (d = 0.12) but significant (p ≤ 0.015). The improvement in the T-SDMT (4.7%) was smaller than that in the SDMT (5.6%). The minimal detectable changes (MDC%) of the T-SDMT and SDMT were 6.7 (22.8%) and 10.3 (32.8%), respectively. The T-SDMT and SDMT were highly correlated with each other at the two time points (Pearson's r = 0.90-0.91). The T-SDMT demonstrated good concurrent validity with the SDMT. Because the T-SDMT had a smaller practise effect and less random measurement error (superior test-retest reliability), it is recommended over the SDMT for assessing information processing speed in patients with stroke. Implications for rehabilitation: The Symbol Digit Modalities Test (SDMT), a common measure of information processing speed, showed a substantial practise effect and considerable random measurement error in patients with stroke. The Tablet-based SDMT (T-SDMT) has been developed to reduce the practise effect and random measurement error of the SDMT in patients with stroke. The T-SDMT had a smaller practise effect and less random measurement error than the SDMT, and thus can provide more reliable assessments of information processing speed.
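
    Minimal detectable change values like those quoted above are conventionally derived from the standard error of measurement; a minimal sketch with illustrative inputs (not the study's raw data):

        import numpy as np

        def minimal_detectable_change(sd, reliability, confidence_z=1.96):
            """MDC at 95% confidence from score SD and test-retest reliability."""
            sem = sd * np.sqrt(1.0 - reliability)      # standard error of measurement
            return confidence_z * np.sqrt(2.0) * sem   # two measurements are compared

        # Illustrative values: score SD of 12 points, test-retest r = 0.92.
        mdc = minimal_detectable_change(sd=12.0, reliability=0.92)
        print(f"MDC95 = {mdc:.1f} points")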

  19. [Numerical simulation and operation optimization of biological filter].

    PubMed

    Zou, Zong-Sen; Shi, Han-Chang; Chen, Xiang-Qiang; Xie, Xiao-Qing

    2014-12-01

    BioWin software and two sensitivity analysis methods were used to simulate the denitrification biological filter (DNBF) + biological aerated filter (BAF) process at the Yuandang Wastewater Treatment Plant. Based on the BioWin model of the DNBF + BAF process, operating data from September 2013 were used for sensitivity analysis and model calibration, and operating data from October 2013 were used for model validation. The results indicated that the calibrated model could accurately simulate the practical DNBF + BAF process, and that the most sensitive parameters were those related to biofilm, OHOs and aeration. After calibration and validation, the model was used for process optimization by simulating operation under different conditions. The results showed that the best operating condition for discharge standard B was: reflux ratio = 50%, no methanol addition, influent C/N = 4.43; while the best operating condition for discharge standard A was: reflux ratio = 50%, influent COD = 155 mg x L(-1) after methanol addition, influent C/N = 5.10.

  20. P-8A Poseidon strategy for modeling & simulation verification validation & accreditation (VV&A)

    NASA Astrophysics Data System (ADS)

    Kropp, Derek L.

    2009-05-01

    One of the first challenges in addressing the need for Modeling & Simulation (M&S) Verification, Validation, & Accreditation (VV&A) is to develop an approach for applying structured and formalized VV&A processes. The P-8A Poseidon Multi-Mission Maritime Aircraft (MMA) Program Modeling and Simulation Accreditation Strategy documents the P-8A program's approach to VV&A. The P-8A strategy tailors a risk-based approach and leverages existing bodies of knowledge, such as the Defense Modeling and Simulation Office Recommended Practice Guide (DMSO RPG), to make the process practical and efficient. As the program progresses, the M&S team must continue to look for ways to streamline the process, add supplemental steps to enhance the process, and identify and overcome procedural, organizational, and cultural challenges. This paper includes some of the basics of the overall strategy, examples of specific approaches that have worked well, and examples of challenges that the M&S team has faced.

  1. Process Simulation of Aluminium Sheet Metal Deep Drawing at Elevated Temperatures

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Winklhofer, Johannes; Trattnig, Gernot; Lind, Christoph

    Lightweight design is essential for an economic and environmentally friendly vehicle. Aluminium sheet metal is well known for its ability to improve the strength to weight ratio of lightweight structures. One disadvantage of aluminium is that it is less formable than steel. Therefore complex part geometries can only be realized by expensive multi-step production processes. One method for overcoming this disadvantage is deep drawing at elevated temperatures. In this way the formability of aluminium sheet metal can be improved significantly, and the number of necessary production steps can thereby be reduced. This paper introduces deep drawing of aluminium sheet metal at elevated temperatures, a corresponding simulation method, a characteristic process and its optimization. The temperature and strain rate dependent material properties of a 5xxx series alloy and their modelling are discussed. A three dimensional thermomechanically coupled finite element deep drawing simulation model and its validation are presented. Based on the validated simulation model, an optimised process strategy regarding formability, time and cost is introduced.
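
    As a rough sketch of the kind of temperature- and strain-rate-dependent material description discussed above, the snippet below evaluates a generic power-law flow stress with Arrhenius-type thermal softening. The functional form and every constant are illustrative assumptions, not the paper's 5xxx-series material model.

      import numpy as np

      def flow_stress(strain, strain_rate, temp_K, K=350.0, n=0.25, m=0.10,
                      Q=150e3, R=8.314, T_ref=293.0, rate_ref=1.0):
          # power-law hardening, power-law rate sensitivity,
          # Arrhenius-type thermal softening (all illustrative)
          hardening = K * strain ** n
          rate = (strain_rate / rate_ref) ** m
          softening = np.exp(m * (Q / R) * (1.0 / temp_K - 1.0 / T_ref))
          return hardening * rate * softening

      # higher temperature -> lower flow stress -> better formability
      for T in (293.0, 473.0, 573.0):
          print(f"T = {T:.0f} K: sigma = {flow_stress(0.2, 0.1, T):.1f} MPa")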

  2. Empiric validation of a process for behavior change.

    PubMed

    Elliot, Diane L; Goldberg, Linn; MacKinnon, David P; Ranby, Krista W; Kuehl, Kerry S; Moe, Esther L

    2016-09-01

    Most behavior change trials focus on outcomes rather than deconstructing how those outcomes relate to programmatic theoretical underpinnings and intervention components. In this report, the process of change is compared for three evidence-based programs that shared theories, intervention elements and potential mediating variables. Each investigation was a randomized trial that assessed pre- and post-intervention variables using survey constructs with established reliability. Each also used mediation analyses to define relationships. The findings were combined using a pattern matching approach. Surprisingly, knowledge was a significant mediator in each program (a and b path effects [p<0.01]). Norms, perceived control abilities, and self-monitoring were confirmed in at least two studies (p<0.01 for each). Replication of findings across studies with a common design but varied populations provides a robust validation of the theory and processes of an effective intervention. The combined findings also demonstrate a means to substantiate process aspects and theoretical models to advance understanding of behavior change.
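
    For readers unfamiliar with the a and b path terminology, the sketch below runs a minimal single-mediator analysis on synthetic data with statsmodels (a path: treatment to mediator; b path: mediator to outcome controlling for treatment; indirect effect a*b). The variable names and effect sizes are hypothetical, not the trials' data.

      import numpy as np
      import statsmodels.api as sm

      rng = np.random.default_rng(0)
      n = 500
      treatment = rng.integers(0, 2, n).astype(float)    # program vs control
      knowledge = 0.5 * treatment + rng.normal(size=n)   # candidate mediator
      outcome = 0.4 * knowledge + 0.2 * treatment + rng.normal(size=n)

      # a path: treatment -> mediator
      a_fit = sm.OLS(knowledge, sm.add_constant(treatment)).fit()
      # b path: mediator -> outcome, controlling for treatment
      b_fit = sm.OLS(outcome, sm.add_constant(np.column_stack([knowledge, treatment]))).fit()

      a, b = a_fit.params[1], b_fit.params[1]
      print("a =", round(a, 3), "b =", round(b, 3), "indirect effect a*b =", round(a * b, 3))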

  3. THz spectroscopy: An emerging technology for pharmaceutical development and pharmaceutical Process Analytical Technology (PAT) applications

    NASA Astrophysics Data System (ADS)

    Wu, Huiquan; Khan, Mansoor

    2012-08-01

    As an emerging technology, THz spectroscopy has gained increasing attention in the pharmaceutical area during the last decade. This attention is due to the fact that (1) it provides a promising alternative approach for in-depth understanding of both intermolecular interaction among pharmaceutical molecules and pharmaceutical product quality attributes; (2) it provides a promising alternative approach for enhanced process understanding of certain pharmaceutical manufacturing processes; and (3) it aligns with the FDA pharmaceutical quality initiatives, most noticeably the Process Analytical Technology (PAT) initiative. In this work, the current status and progress made so far on using THz spectroscopy for pharmaceutical development and pharmaceutical PAT applications are reviewed. In the spirit of demonstrating the utility of a first-principles modeling approach for addressing the model validation challenge and reducing unnecessary model validation "burden" to facilitate THz pharmaceutical PAT applications, two scientific case studies based on published THz spectroscopy measurement results are created and discussed. Furthermore, other technical challenges and opportunities associated with adopting THz spectroscopy as a pharmaceutical PAT tool are highlighted.

  4. Often Asked but Rarely Answered: Can Asians Meet DSM-5/ICD-10 Autism Spectrum Disorder Criteria?

    PubMed Central

    Kim, So Hyun; Koh, Yun-Joo; Lim, Eun-Chung; Kim, Soo-Jeong; Leventhal, Bennett L.

    2016-01-01

    Objectives: To evaluate whether Asian (Korean children) populations can be validly diagnosed with autism spectrum disorder (ASD) using Western-based diagnostic instruments and criteria based on the Diagnostic and Statistical Manual of Mental Disorders, 5th edition (DSM-5). Methods: Participants included an epidemiologically ascertained 7–14-year-old (N = 292) South Korean cohort from a larger prevalence study (N = 55,266). Main outcomes were based on Western-based diagnostic methods for Korean children using the gold-standard instruments, the Autism Diagnostic Interview-Revised and the Autism Diagnostic Observation Schedule. Factor analysis and ANOVAs were performed to examine the factor structure of autism symptoms and identify phenotypic differences between Korean children with ASD and non-ASD diagnoses. Results: Using Western-based diagnostic methods, Korean children with ASD were successfully identified with moderate-to-high diagnostic validity (sensitivities/specificities ranging 64%–93%), strong internal consistency, and convergent/concurrent validity. The patterns of autism phenotypes in a Korean population were similar to those observed in a Western population, with two symptom domains (social communication and restricted and repetitive behavior factors). Statistically significant differences in the use of socially acceptable communicative behaviors (e.g., direct gaze, range of facial expressions) emerged between ASD and non-ASD cases (mostly p < 0.001), indicating that these can be a similarly valid part of the ASD phenotype in both Asian and Western populations. Conclusions: Despite myths, biases, and stereotypes about Asian social behavior, Asians (at least Korean children) typically use elements of reciprocal social interactions similar to those in the West. Therefore, standardized diagnostic methods widely used for ASD in Western culture can be validly used as part of the assessment process and research with Koreans and, possibly, other Asians. PMID:27315155
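
    The diagnostic validity figures above are the usual sensitivity/specificity indices; as a minimal illustration, the sketch below computes them from instrument classifications against best-estimate clinical diagnoses. The toy arrays are hypothetical, not the study cohort.

      import numpy as np

      def sens_spec(instrument_positive, clinical_positive):
          inst = np.asarray(instrument_positive, dtype=bool)
          clin = np.asarray(clinical_positive, dtype=bool)
          tp = np.sum(inst & clin)    # true positives
          fn = np.sum(~inst & clin)   # missed cases
          tn = np.sum(~inst & ~clin)  # true negatives
          fp = np.sum(inst & ~clin)   # false alarms
          return tp / (tp + fn), tn / (tn + fp)

      instrument = [1, 1, 0, 1, 0, 0, 1, 0]   # toy instrument classifications
      clinical   = [1, 1, 1, 1, 0, 0, 0, 0]   # toy best-estimate diagnoses
      sensitivity, specificity = sens_spec(instrument, clinical)
      print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")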

  5. OWL-based reasoning methods for validating archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: the reference model and the archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role in the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been an increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This permits combining the two levels of the dual model-based architecture in one modeling framework, which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, which are the two largest publicly available ones, have been analyzed with our validation method. For this purpose, we have implemented a software tool called Archeck. Our results show that around one fifth of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in the two repositories. This result reinforces the need for serious efforts to improve archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.
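
    A minimal sketch of this style of check, assuming the owlready2 library and a hypothetical archetype.owl file standing in for an archetype expressed as an OWL ontology: running a description-logic reasoner surfaces unsatisfiable classes, which is the general kind of modeling error described above (this is not the Archeck tool itself).

      from owlready2 import (get_ontology, sync_reasoner, default_world,
                             OwlReadyInconsistentOntologyError)

      # "archetype.owl" is a hypothetical file: an archetype plus its
      # reference model expressed as an OWL ontology
      onto = get_ontology("file://archetype.owl").load()

      try:
          with onto:
              sync_reasoner()      # run the bundled HermiT DL reasoner
      except OwlReadyInconsistentOntologyError:
          print("ontology is globally inconsistent")
      else:
          # classes inferred equivalent to owl:Nothing indicate modeling
          # errors, e.g. restrictions that contradict the reference model
          for cls in default_world.inconsistent_classes():
              print("unsatisfiable:", cls)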

  6. Continental-scale Validation of MODIS-based and LEDAPS Landsat ETM+ Atmospheric Correction Methods

    NASA Technical Reports Server (NTRS)

    Ju, Junchang; Roy, David P.; Vermote, Eric; Masek, Jeffrey; Kovalskyy, Valeriy

    2012-01-01

    The potential of Landsat data processing to provide systematic continental-scale products has been demonstrated by several projects, including the NASA Web-enabled Landsat Data (WELD) project. The recent free availability of Landsat data increases the need for robust and efficient atmospheric correction algorithms applicable to large-volume Landsat data sets. This paper compares the accuracy of two Landsat atmospheric correction methods: a MODIS-based method and the Landsat Ecosystem Disturbance Adaptive Processing System (LEDAPS) method. Both methods are based on the 6SV radiative transfer code but have different atmospheric characterization approaches. The MODIS-based method uses the MODIS Terra derived dynamic aerosol type, aerosol optical thickness, and water vapor to atmospherically correct ETM+ acquisitions in each coincident orbit. The LEDAPS method uses aerosol characterizations derived independently from each Landsat acquisition, assumes a fixed continental aerosol type, and uses ancillary water vapor. Validation results are presented comparing ETM+ atmospherically corrected data generated using these two methods with AERONET-corrected ETM+ data for 95 subsets of 10 km × 10 km at 30 m resolution, a total of nearly 8 million 30 m pixels, located across the conterminous United States. The results indicate that the MODIS-based method has better accuracy than the LEDAPS method for the ETM+ red and longer-wavelength bands.
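
    As a minimal illustration of how such a comparison can be scored, the sketch below computes bias and RMSE of each correction method against AERONET-corrected reflectance taken as the reference; the reflectance arrays are illustrative placeholders, not WELD data.

      import numpy as np

      def accuracy(corrected, reference):
          d = corrected - reference
          return {"bias": float(d.mean()), "rmse": float(np.sqrt(np.mean(d ** 2)))}

      reference   = np.array([0.051, 0.230, 0.118, 0.304])  # AERONET-corrected reflectance
      modis_based = np.array([0.054, 0.226, 0.121, 0.300])  # illustrative values
      ledaps      = np.array([0.060, 0.218, 0.129, 0.291])  # illustrative values

      print("MODIS-based:", accuracy(modis_based, reference))
      print("LEDAPS:     ", accuracy(ledaps, reference))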

  7. Analytical Modeling and Performance Prediction of Remanufactured Gearbox Components

    NASA Astrophysics Data System (ADS)

    Pulikollu, Raja V.; Bolander, Nathan; Vijayakar, Sandeep; Spies, Matthew D.

    Gearbox components operate in extreme environments, often leading to premature removal or overhaul. Though worn or damaged, these components can still function, provided the appropriate remanufacturing processes are deployed. Doing so saves a significant amount of resources (time, materials, energy, manpower) otherwise required to produce a replacement part. Unfortunately, current design and analysis approaches require extensive testing and evaluation to validate the effectiveness and safety of a component that has been used in the field and then processed outside of original OEM specification. Testing every combination of component, level of potential damage, and repair processing option would be expensive and time consuming, thus prohibiting a broad deployment of remanufacturing processes across industry. However, such evaluation and validation can occur through Integrated Computational Materials Engineering (ICME) modeling and simulation. Sentient developed a microstructure-based component life prediction (CLP) tool to quantify and assist the remanufacturing of gearbox components. This was achieved by modeling the design-manufacturing-microstructure-property relationship. The CLP tool assists in the remanufacturing of high-value, high-demand rotorcraft, automotive and wind turbine gears and bearings. This paper summarizes the development of the CLP models and the validation efforts comparing simulation results with rotorcraft spiral bevel gear physical test data. CLP analyzes gear components and systems for safety, longevity, reliability and cost by predicting (1) new gearbox component performance and the optimal time to remanufacture, (2) the qualification of used gearbox components for the remanufacturing process, and (3) the performance of remanufactured components.

  8. Green Pea and Garlic Puree Model Food Development for Thermal Pasteurization Process Quality Evaluation.

    PubMed

    Bornhorst, Ellen R; Tang, Juming; Sablani, Shyam S; Barbosa-Cánovas, Gustavo V; Liu, Fang

    2017-07-01

    Development and selection of model foods is a critical part of microwave thermal process development, simulation validation, and optimization. Previously developed model foods for pasteurization process evaluation utilized Maillard reaction products as the time-temperature integrators, which resulted in similar temperature sensitivity among the models. The aim of this research was to develop additional model foods based on different time-temperature integrators, determine their dielectric properties and color change kinetics, and validate the optimal model food in hot water and microwave-assisted pasteurization processes. Color, quantified using the a* value, was selected as the time-temperature indicator for the green pea and garlic puree model foods. Results showed 915 MHz microwaves had a greater penetration depth into the green pea model food than into the garlic. a* value reaction rates for the green pea model were approximately 4 times slower than in the garlic model food; slower reaction rates were preferred for the application of the model food in this study, that is, quality evaluation for a target process of 90 °C for 10 min at the cold spot. Pasteurization validation used the green pea model food, and the results showed quantifiable color differences among the unheated control, hot water pasteurization, and the microwave-assisted thermal pasteurization system. Both model foods developed in this research could be utilized for quality assessment and optimization of various thermal pasteurization processes. © 2017 Institute of Food Technologists®.
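
    A hedged sketch of fitting color-change kinetics for a time-temperature integrator such as the a* value, assuming simple first-order behavior; the time-a* data, initial guesses, and the first-order form itself are illustrative assumptions, not the study's measurements.

      import numpy as np
      from scipy.optimize import curve_fit

      def first_order(t, a_inf, a0, k):
          # a*(t) approaches a_inf exponentially with rate constant k
          return a_inf + (a0 - a_inf) * np.exp(-k * t)

      t_min  = np.array([0.0, 2.0, 5.0, 10.0, 15.0, 20.0])        # heating time, min
      a_star = np.array([-12.0, -10.1, -7.8, -4.9, -3.2, -2.1])   # illustrative a* values

      popt, _ = curve_fit(first_order, t_min, a_star, p0=[-1.0, -12.0, 0.1])
      print("a_inf = %.2f, a0 = %.2f, k = %.3f per min" % tuple(popt))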

  9. Multiple reaction monitoring (MRM) of plasma proteins in cardiovascular proteomics.

    PubMed

    Dardé, Verónica M; Barderas, Maria G; Vivanco, Fernando

    2013-01-01

    Different methodologies have been used over the years to discover new potential biomarkers related to cardiovascular risk. The conventional proteomic strategy involves a discovery phase that requires the use of mass spectrometry (MS) and a validation phase, usually on an alternative platform such as immunoassays, that can be further implemented in clinical practice. This approach is suitable for a single biomarker, but when large panels of biomarkers must be validated, the process becomes inefficient and costly. Therefore, it is essential to find an alternative methodology to perform biomarker discovery, validation, and quantification. The capabilities of quantitative MS make it an extremely attractive alternative to antibody-based technologies. Although it has traditionally been used for quantification of small molecules in clinical chemistry, MRM is now emerging as an alternative to traditional immunoassays for candidate protein biomarker validation.

  10. Using process elicitation and validation to understand and improve chemotherapy ordering and delivery.

    PubMed

    Mertens, Wilson C; Christov, Stefan C; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Cassells, Lucinda J; Marquard, Jenna L

    2012-11-01

    Chemotherapy ordering and administration, in which errors have potentially severe consequences, was quantitatively and qualitatively evaluated by employing process formalism (or formal process definition), a technique derived from software engineering, to elicit and rigorously describe the process, after which validation techniques were applied to confirm the accuracy of the described process. The chemotherapy ordering and administration process, including exceptional situations and individuals' recognition of and responses to those situations, was elicited through informal, unstructured interviews with members of an interdisciplinary team. The process description (or process definition), written in a notation developed for software quality assessment purposes, guided process validation (which consisted of direct observations and semistructured interviews to confirm the elicited details for the treatment plan portion of the process). The overall process definition yielded 467 steps; 207 steps (44%) were dedicated to handling 59 exceptional situations. Validation yielded 82 unique process events (35 new expected but not yet described steps, 16 new exceptional situations, and 31 new steps in response to exceptional situations). Process participants actively altered the process as ambiguities and conflicts were discovered by the elicitation and validation components of the study. Chemotherapy error rates declined significantly during and after the project, which was conducted from October 2007 through August 2008. Each elicitation method and the subsequent validation discussions contributed uniquely to understanding the chemotherapy treatment plan review process, supporting rapid adoption of changes, improved communication regarding the process, and ensuing error reduction.

  11. Quantifying uncertainty of remotely sensed topographic surveys for ephemeral gully channel monitoring

    USDA-ARS?s Scientific Manuscript database

    Spatio-temporal measurements of landform evolution provide the basis for process-based theory formulation and validation. Over time, field measurement of landforms has increased significantly worldwide, driven primarily by the availability of new surveying technologies. However, there is not a standa...

  12. Validation of the NASA Dryden X-31 simulation and evaluation of mechanization techniques

    NASA Technical Reports Server (NTRS)

    Dickes, Edward; Kay, Jacob; Ralston, John

    1994-01-01

    This paper discusses the evaluation of the original Dryden X-31 aerodynamic math model, the processes involved in the justification and creation of the modified database, and time-history comparisons of the model response with flight test results.

  13. Computer-Based Linguistic Analysis.

    ERIC Educational Resources Information Center

    Wright, James R.

    Noam Chomsky's transformational-generative grammar model may effectively be translated into an equivalent computer model. Phrase-structure rules and transformations are tested for validity and ordering by the computer via the process of random lexical substitution. Errors appearing in the grammar are detected and rectified, and formal…
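
    A minimal sketch of random lexical substitution over phrase-structure rules, in the spirit described above: nonterminals are expanded by rule and terminals are filled by random choice from a lexicon, so generated strings can be inspected for rule errors. The tiny grammar and lexicon are hypothetical.

      import random

      rules = {                       # hypothetical phrase-structure rules
          "S":  [["NP", "VP"]],
          "NP": [["Det", "N"]],
          "VP": [["V", "NP"], ["V"]],
      }
      lexicon = {                     # hypothetical lexicon
          "Det": ["the", "a"],
          "N":   ["linguist", "grammar", "model"],
          "V":   ["tests", "generates"],
      }

      def expand(symbol):
          if symbol in lexicon:                        # terminal: random lexical substitution
              return [random.choice(lexicon[symbol])]
          out = []
          for sym in random.choice(rules[symbol]):     # nonterminal: apply a rule
              out.extend(expand(sym))
          return out

      for _ in range(3):
          print(" ".join(expand("S")))   # inspect output to spot faulty rules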

  14. Development of a validation process for parameters utilized in optimizing construction quality management of pavements.

    DOT National Transportation Integrated Search

    2006-01-01

    The implementation of an effective performance-based construction quality management requires a tool for determining impacts of construction quality on the life-cycle performance of pavements. This report presents an update on the efforts in the deve...

  15. Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do.

    PubMed

    Zhao, Linlin; Wang, Wenyi; Sedykh, Alexander; Zhu, Hao

    2017-06-30

    Numerous chemical data sets have become available for quantitative structure-activity relationship (QSAR) modeling studies. However, the quality of different data sources may differ based on the nature of the experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create more than 1800 QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorated as the ratio of experimental errors increased. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in the cross-validation process are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors, but removing those compounds via the cross-validation procedure is not a reasonable means of improving model predictivity, owing to overfitting.
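
    A minimal sketch of the error-simulation setup described above, assuming scikit-learn: the activities of a chosen fraction of a synthetic modeling set are randomized, and model quality is scored by fivefold cross-validation so that performance can be tracked against the error ratio. The data, model choice, and error ratios are illustrative, not the study's curated sets.

      import numpy as np
      from sklearn.ensemble import RandomForestRegressor
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(42)
      X = rng.normal(size=(300, 20))                       # synthetic descriptors
      y = 2.0 * X[:, 0] - X[:, 1] + rng.normal(scale=0.3, size=300)

      for error_ratio in (0.0, 0.1, 0.3):
          y_noisy = y.copy()
          n_err = int(error_ratio * len(y))
          idx = rng.choice(len(y), size=n_err, replace=False)
          y_noisy[idx] = rng.permutation(y)[:n_err]        # simulated experimental errors
          model = RandomForestRegressor(n_estimators=100, random_state=0)
          q2 = cross_val_score(model, X, y_noisy, cv=5, scoring="r2").mean()
          print(f"error ratio {error_ratio:.0%}: mean 5-fold R2 = {q2:.3f}")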

  16. Experimental Errors in QSAR Modeling Sets: What We Can Do and What We Cannot Do

    PubMed Central

    2017-01-01

    Numerous chemical data sets have become available for quantitative structure–activity relationship (QSAR) modeling studies. However, the quality of different data sources may differ based on the nature of the experimental protocols. Therefore, potential experimental errors in the modeling sets may lead to the development of poor QSAR models and further affect the predictions of new compounds. In this study, we explored the relationship between the ratio of questionable data in the modeling sets, which was obtained by simulating experimental errors, and the QSAR modeling performance. To this end, we used eight data sets (four continuous endpoints and four categorical endpoints) that have been extensively curated both in-house and by our collaborators to create more than 1800 QSAR models. Each data set was duplicated to create several new modeling sets with different ratios of simulated experimental errors (i.e., randomizing the activities of part of the compounds) in the modeling process. A fivefold cross-validation process was used to evaluate the modeling performance, which deteriorated as the ratio of experimental errors increased. All of the resulting models were also used to predict external sets of new compounds, which were excluded at the beginning of the modeling process. The modeling results showed that the compounds with relatively large prediction errors in the cross-validation process are likely to be those with simulated experimental errors. However, after removing a certain number of compounds with large prediction errors in the cross-validation process, the external predictions of new compounds did not show improvement. Our conclusion is that QSAR predictions, especially consensus predictions, can identify compounds with potential experimental errors, but removing those compounds via the cross-validation procedure is not a reasonable means of improving model predictivity, owing to overfitting. PMID:28691113

  17. Assessment and certification of neonatal incubator sensors through an inferential neural network.

    PubMed

    de Araújo, José Medeiros; de Menezes, José Maria Pires; Moura de Albuquerque, Alberto Alexandre; da Mota Almeida, Otacílio; Ugulino de Araújo, Fábio Meneghetti

    2013-11-15

    Measurement and diagnostic systems based on electronic sensors have been increasingly essential in the standardization of hospital equipment. The technical standard IEC (International Electrotechnical Commission) 60601-2-19 establishes requirements for neonatal incubators and specifies the calibration procedure and validation tests for such devices using sensor systems. This paper proposes a new procedure based on an inferential neural network to evaluate and calibrate a neonatal incubator. The proposal presents significant advantages over the standard calibration process: the number of sensors is drastically reduced, and the procedure runs while the incubator is in operation. Since the sensors used in the new calibration process are already installed in the commercial incubator, no additional hardware is necessary, and the need for calibration can be diagnosed in real time without the presence of technical professionals in the neonatal intensive care unit (NICU). Experimental tests involving the aforementioned calibration system are carried out in a commercial incubator in order to validate the proposal.
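
    A hedged sketch of the inferential (soft-sensor) idea, assuming scikit-learn: a small neural network predicts one temperature reading from the incubator's other built-in sensors, and a persistently large residual flags the need for calibration. The synthetic data, network size, and drift threshold are illustrative, not the paper's network.

      import numpy as np
      from sklearn.neural_network import MLPRegressor

      rng = np.random.default_rng(1)
      n = 2000
      # readings from three other built-in sensors (two temperatures, humidity)
      others = rng.normal(loc=[36.0, 36.5, 55.0], scale=0.3, size=(n, 3))
      # sensor under assessment, correlated with the others plus noise
      target = 0.5 * others[:, 0] + 0.5 * others[:, 1] + rng.normal(scale=0.05, size=n)

      net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0)
      net.fit(others[:1500], target[:1500])                # train on historical data

      residual = np.abs(net.predict(others[1500:]) - target[1500:])
      print("mean residual: %.3f" % residual.mean())
      print("needs calibration:", bool(residual.mean() > 0.2))  # illustrative threshold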

  18. Assessment and Certification of Neonatal Incubator Sensors through an Inferential Neural Network

    PubMed Central

    de Araújo Júnior, José Medeiros; de Menezes Júnior, José Maria Pires; de Albuquerque, Alberto Alexandre Moura; Almeida, Otacílio da Mota; de Araújo, Fábio Meneghetti Ugulino

    2013-01-01

    Measurement and diagnostic systems based on electronic sensors have been increasingly essential in the standardization of hospital equipment. The technical standard IEC (International Electrotechnical Commission) 60601-2-19 establishes requirements for neonatal incubators and specifies the calibration procedure and validation tests for such devices using sensor systems. This paper proposes a new procedure based on an inferential neural network to evaluate and calibrate a neonatal incubator. The proposal presents significant advantages over the standard calibration process: the number of sensors is drastically reduced, and the procedure runs while the incubator is in operation. Since the sensors used in the new calibration process are already installed in the commercial incubator, no additional hardware is necessary, and the need for calibration can be diagnosed in real time without the presence of technical professionals in the neonatal intensive care unit (NICU). Experimental tests involving the aforementioned calibration system are carried out in a commercial incubator in order to validate the proposal. PMID:24248278

  19. A new comprehension and communication tool: a valuable resource for internationally educated occupational therapists.

    PubMed

    Nguyen, Tram; Baptiste, Sue; Jung, Bonny; Wilkins, Seanne

    2014-06-01

    The need was identified for a way to assess internationally educated occupational therapists’ skills in understanding and communicating professional terminology used in occupational therapy practice. The project aim was to develop and validate such a resource. A scenario-based assessment was developed using a three-phase tool development process. The development process involved a literature scan of professional terminology used in occupational therapy practice; selection of terms and concepts commonly used in occupational therapy practice; and creation of practice-based scenarios illustrating key concepts, complete with rating rubrics. An advisory group provided oversight, and a sample of internationally educated occupational therapists completed pilot and validity testing. The initial findings showed the assessment to be easy to complete and sensitive in testing understanding of the defined terms. The final outcome is an assessment tool that has broad application for occupational therapists wishing to enter professional practice in a new country. © 2013 Occupational Therapy Australia.

  20. Survey Instrument Validity Part I: Principles of Survey Instrument Development and Validation in Athletic Training Education Research

    ERIC Educational Resources Information Center

    Burton, Laura J.; Mazerolle, Stephanie M.

    2011-01-01

    Context: Instrument validation is an important facet of survey research methods, and athletic trainers must be aware of the important underlying principles. Objective: To discuss the process of survey development and validation, specifically the process of construct validation. Background: Athletic training researchers frequently employ the use of…
