Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-25
... elements of process validation for the manufacture of human and animal drug and biological products... process validation for the manufacture of human and animal drug and biological products, including APIs. This guidance describes process validation activities in three stages: In Stage 1, Process Design, the...
21 CFR 1271.230 - Process validation.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...
21 CFR 1271.230 - Process validation.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...
21 CFR 1271.230 - Process validation.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...
21 CFR 1271.230 - Process validation.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...
21 CFR 1271.230 - Process validation.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Process validation. 1271.230 Section 1271.230 Food..., AND CELLULAR AND TISSUE-BASED PRODUCTS Current Good Tissue Practice § 1271.230 Process validation. (a... validation activities and results must be documented, including the date and signature of the individual(s...
Earth Science Enterprise Scientific Data Purchase Project: Verification and Validation
NASA Technical Reports Server (NTRS)
Jenner, Jeff; Policelli, Fritz; Fletcher, Rosea; Holecamp, Kara; Owen, Carolyn; Nicholson, Lamar; Dartez, Deanna
2000-01-01
This paper presents viewgraphs on the Earth Science Enterprise Scientific Data Purchase Project's verification and validation process. The topics include: 1) What is Verification and Validation? 2) Why Verification and Validation? 3) Background; 4) ESE Data Purchase Validation Process; 5) Data Validation System and Ingest Queue; 6) Shipment Verification; 7) Tracking and Metrics; 8) Validation of Contract Specifications; 9) Earth Watch Data Validation; 10) Validation of Vertical Accuracy; and 11) Results of Vertical Accuracy Assessment.
NASA Astrophysics Data System (ADS)
See, Linda; Perger, Christoph; Dresel, Christopher; Hofer, Martin; Weichselbaum, Juergen; Mondel, Thomas; Steffen, Fritz
2016-04-01
The validation of land cover products is an important step in the workflow of generating a land cover map from remotely-sensed imagery. Many students of remote sensing will be given exercises on classifying a land cover map, followed by the validation process. Many algorithms exist for classification, embedded within proprietary image processing software or, increasingly, as open source tools. However, there is little standardization in land cover validation, nor is there a set of open tools for implementing this process. The LACO-Wiki tool was developed as a way of filling this gap, bringing together standardized land cover validation methods and workflows into a single portal. This includes the storage and management of land cover maps and validation data; step-by-step instructions to guide users through the validation process; sound sampling designs; an easy-to-use environment for validation sample interpretation; and the generation of accuracy reports based on the validation process. The tool was developed for a range of users including producers of land cover maps, researchers, teachers and students. The use of such a tool could be embedded within the curriculum of remote sensing courses at a university level but is simple enough for use by students aged 13-18. A beta version of the tool is available for testing at: http://www.laco-wiki.net.
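As a concrete illustration of the validation workflow described above, the following minimal Python sketch computes the kind of accuracy report such a tool generates from a validation sample: a confusion matrix with overall, user's, and producer's accuracy. The labels and counts are invented; this is a sketch of the standard computation, not LACO-Wiki's actual implementation.

    import numpy as np

    def accuracy_report(mapped, reference, classes):
        """Confusion matrix plus standard map-accuracy metrics."""
        idx = {c: i for i, c in enumerate(classes)}
        cm = np.zeros((len(classes), len(classes)), dtype=int)
        for m, r in zip(mapped, reference):
            cm[idx[m], idx[r]] += 1  # rows: map label, columns: reference label
        overall = np.trace(cm) / cm.sum()
        users = np.diag(cm) / cm.sum(axis=1)      # 1 - commission error
        producers = np.diag(cm) / cm.sum(axis=0)  # 1 - omission error
        return cm, overall, users, producers

    classes = ["forest", "cropland", "urban"]
    mapped = ["forest", "forest", "cropland", "urban", "cropland", "urban"]
    reference = ["forest", "cropland", "cropland", "urban", "cropland", "forest"]
    cm, oa, ua, pa = accuracy_report(mapped, reference, classes)
    print(cm, oa, dict(zip(classes, ua.round(2))), dict(zip(classes, pa.round(2))))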
USING CFD TO ANALYZE NUCLEAR SYSTEMS BEHAVIOR: DEFINING THE VALIDATION REQUIREMENTS
DOE Office of Scientific and Technical Information (OSTI.GOV)
Richard Schultz
2012-09-01
A recommended protocol to formulate numeric tool specifications and validation needs in concert with practices accepted by regulatory agencies for advanced reactors is described. The protocol is based on the plant type and perceived transient and accident envelopes that translate to boundary conditions for a process that gives: (a) the key phenomena and figures-of-merit which must be analyzed to ensure that the advanced plant can be licensed, (b) the specification of the numeric tool capabilities necessary to perform the required analyses, including bounding calculational uncertainties, and (c) the specification of the validation matrices and experiments, including the desired validation data. The result of applying the process enables a complete program to be defined, including costs, for creating and benchmarking transient and accident analysis methods for advanced reactors. By following a process that is in concert with regulatory agency licensing requirements from start to finish, based on historical acceptance of past licensing submittals, the methods derived and validated have a high probability of regulatory agency acceptance.
An assessment of space shuttle flight software development processes
NASA Technical Reports Server (NTRS)
1993-01-01
In early 1991, the National Aeronautics and Space Administration's (NASA's) Office of Space Flight commissioned the Aeronautics and Space Engineering Board (ASEB) of the National Research Council (NRC) to investigate the adequacy of the current process by which NASA develops and verifies changes and updates to the Space Shuttle flight software. The Committee for Review of Oversight Mechanisms for Space Shuttle Flight Software Processes was convened in Jan. 1992 to accomplish the following tasks: (1) review the entire flight software development process from the initial requirements definition phase to final implementation, including object code build and final machine loading; (2) review and critique NASA's independent verification and validation process and mechanisms, including NASA's established software development and testing standards; (3) determine the acceptability and adequacy of the complete flight software development process, including the embedded validation and verification processes through comparison with (1) generally accepted industry practices, and (2) generally accepted Department of Defense and/or other government practices (comparing NASA's program with organizations and projects having similar volumes of software development, software maturity, complexity, criticality, lines of code, and national standards); (4) consider whether independent verification and validation should continue. An overview of the study, independent verification and validation of critical software, and the Space Shuttle flight software development process are addressed. Findings and recommendations are presented.
Moore, Amy Lawson; Miller, Terissa M
2018-01-01
The purpose of the current study is to evaluate the validity and reliability of the revised Gibson Test of Cognitive Skills, a computer-based battery of tests measuring short-term memory, long-term memory, processing speed, logic and reasoning, visual processing, as well as auditory processing and word attack skills. This study included 2,737 participants aged 5-85 years. A series of studies was conducted to examine the validity and reliability using the test performance of the entire norming group and several subgroups. The evaluation of the technical properties of the test battery included content validation by subject matter experts, item analysis and coefficient alpha, test-retest reliability, split-half reliability, and analysis of concurrent validity with the Woodcock Johnson III Tests of Cognitive Abilities and Tests of Achievement. Results indicated strong sources of evidence of validity and reliability for the test, including internal consistency reliability coefficients ranging from 0.87 to 0.98, test-retest reliability coefficients ranging from 0.69 to 0.91, split-half reliability coefficients ranging from 0.87 to 0.91, and concurrent validity coefficients ranging from 0.53 to 0.93. The Gibson Test of Cognitive Skills-2 is a reliable and valid tool for assessing cognition in the general population across the lifespan.
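Two of the reliability statistics reported above have simple closed forms. The sketch below, run on simulated item scores rather than the norming data, computes coefficient (Cronbach's) alpha and an odd-even split-half coefficient with the Spearman-Brown step-up correction.

    import numpy as np

    def cronbach_alpha(items):
        """items: (n_examinees, n_items) score matrix."""
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1)
        total_var = items.sum(axis=1).var(ddof=1)
        return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

    def split_half(items):
        """Odd-even split, corrected to full length via Spearman-Brown."""
        items = np.asarray(items, dtype=float)
        odd, even = items[:, ::2].sum(axis=1), items[:, 1::2].sum(axis=1)
        r = np.corrcoef(odd, even)[0, 1]
        return 2 * r / (1 + r)

    rng = np.random.default_rng(0)
    ability = rng.normal(size=(200, 1))
    scores = ability + rng.normal(scale=0.8, size=(200, 10))  # 10 correlated items
    print(cronbach_alpha(scores), split_half(scores))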
Folmsbee, Martha
2015-01-01
Approximately 97% of filter validation tests result in the demonstration of absolute retention of the test bacteria, and thus sterile filter validation failure is rare. However, while Brevundimonas diminuta (B. diminuta) penetration of sterilizing-grade filters is rarely detected, the observation that some fluids (such as vaccines and liposomal fluids) may lead to an increased incidence of bacterial penetration of sterilizing-grade filters by B. diminuta has been reported. The goal of the following analysis was to identify important drivers of filter validation failure in these rare cases. The identification of these drivers will hopefully serve the purpose of assisting in the design of commercial sterile filtration processes with a low risk of filter validation failure for vaccine, liposomal, and related fluids. Filter validation data for low-surface-tension fluids was collected and evaluated with regard to the effect of bacterial load (CFU/cm²), bacterial load rate (CFU/min/cm²), volume throughput (mL/cm²), and maximum filter flux (mL/min/cm²) on bacterial penetration. The data set (∼1162 individual filtrations) included all instances of process-specific filter validation failures performed at Pall Corporation, including those using other filter media, but did not include all successful retentive filter validation bacterial challenges. It was neither practical nor necessary to include all filter validation successes worldwide (Pall Corporation) to achieve the goals of this analysis. The percentage of failed filtration events for the selected total master data set was 27% (310/1162). Because it is heavily weighted with penetration events, this percentage is considerably higher than the actual rate of failed filter validations, but, as such, facilitated a close examination of the conditions that lead to filter validation failure. In agreement with our previous reports, two of the significant drivers of bacterial penetration identified were the total bacterial load and the bacterial load rate. In addition to these parameters, another three possible drivers of failure were also identified: volume throughput, maximum filter flux, and pressure. Of the data for which volume throughput information was available, 24% (249/1038) of the filtrations resulted in penetration. However, for the volume throughput range of 680-2260 mL/cm², only 9 out of 205 bacterial challenges (∼4%) resulted in penetration. Of the data for which flux information was available, 22% (212/946) resulted in bacterial penetration. However, in the maximum filter flux range from 7 to 18 mL/min/cm², only one out of 121 filtrations (0.6%) resulted in penetration. A slight increase in filter failure was observed in filter bacterial challenges with a differential pressure greater than 30 psid. When designing a commercial process for the sterile filtration of a low-surface-tension fluid (or any other potentially high-risk fluid), targeting the volume throughput range of 680-2260 mL/cm² or flux range of 7-18 mL/min/cm², and maintaining the differential pressure below 30 psid, could significantly decrease the risk of filter validation failure. However, it is important to keep in mind that these are general trends described in this study and some test fluids may not conform to them.
Ultimately, it is important to evaluate both filterability and bacterial retention of the test fluid under proposed process conditions prior to finalizing the manufacturing process to ensure successful process-specific filter validation of low-surface-tension fluids. An overwhelming majority of process-specific filter validation (qualification) tests result in the demonstration of absolute retention of test bacteria by sterilizing-grade membrane filters. As such, process-specific filter validation failure is rare. However, while bacterial penetration of sterilizing-grade filters during process-specific filter validation is rarely detected, some fluids (such as vaccines and liposomal fluids) have been associated with an increased incidence of bacterial penetration. The goal of the following analysis was to identify important drivers of process-specific filter validation failure. The identification of these drivers will possibly serve to assist in the design of commercial sterile filtration processes with a low risk of filter validation failure. Filter validation data for low-surface-tension fluids was collected and evaluated with regard to bacterial concentration and rates, as well as filtered fluid volume and rate (Pall Corporation). The master data set (∼1160 individual filtrations) included all recorded instances of process-specific filter validation failures but did not include all successful filter validation bacterial challenge tests. This allowed for a close examination of the conditions that lead to process-specific filter validation failure. As previously reported, two significant drivers of bacterial penetration were identified: the total bacterial load (the total number of bacteria per filter) and the bacterial load rate (the rate at which bacteria were applied to the filter). In addition to these parameters, another three possible drivers of failure were also identified: volumetric throughput, filter flux, and pressure. When designing a commercial process for the sterile filtration of a low-surface-tension fluid (or any other penetrative-risk fluid), targeting the identified bacterial challenge loads, volume throughput, and corresponding flux rates could decrease, and possibly eliminate, the risk of filter validation failure. However, it is important to keep in mind that these are general trends described in this study and some test fluids may not conform to them. Ultimately, it is important to evaluate both filterability and bacterial retention of the test fluid under proposed process conditions prior to finalizing the manufacturing process to ensure successful filter validation of low-surface-tension fluids. © PDA, Inc. 2015.
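The analysis described amounts to tabulating penetration rates inside candidate operating windows. A minimal sketch of that tabulation follows; the records and field names are hypothetical stand-ins, not the Pall data set.

    def penetration_rate(records, predicate=lambda r: True):
        """Count penetrations among records satisfying the predicate."""
        subset = [r for r in records if predicate(r)]
        fails = sum(r["penetration"] for r in subset)
        return fails, len(subset), fails / len(subset) if subset else float("nan")

    records = [  # hypothetical filtration runs
        {"throughput_ml_cm2": 900, "flux_ml_min_cm2": 12, "penetration": False},
        {"throughput_ml_cm2": 300, "flux_ml_min_cm2": 30, "penetration": True},
        {"throughput_ml_cm2": 1500, "flux_ml_min_cm2": 9, "penetration": False},
        {"throughput_ml_cm2": 2500, "flux_ml_min_cm2": 25, "penetration": True},
    ]
    print(penetration_rate(records))  # overall failure rate
    print(penetration_rate(records, lambda r: 680 <= r["throughput_ml_cm2"] <= 2260))
    print(penetration_rate(records, lambda r: 7 <= r["flux_ml_min_cm2"] <= 18))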
ERIC Educational Resources Information Center
Lindle, Jane Clark; Stalion, Nancy; Young, Lu
2005-01-01
Kentucky's accountability system includes a school-processes audit known as Standards and Indicators for School Improvement (SISI), which is in a nascent stage of validation. Content validity methods include comparison to instruments measuring similar constructs as well as other techniques such as job analysis. This study used a two-phase process…
Preparing for the Validation Visit--Guidelines for Optimizing the Experience.
ERIC Educational Resources Information Center
Osborn, Hazel A.
2003-01-01
Urges child care programs to seek accreditation from NAEYC's National Academy of Early Childhood Programs to increase program quality and provides information on the validation process. Includes information on the validation visit and the validator's role and background. Offers suggestions for preparing the director, staff, children, and families…
Development and Validation of the Physics Anxiety Rating Scale
ERIC Educational Resources Information Center
Sahin, Mehmet; Caliskan, Serap; Dilek, Ufuk
2015-01-01
This study reports the development and validation process for an instrument to measure university students' anxiety in physics courses. The development of the Physics Anxiety Rating Scale (PARS) included the following steps: Generation of scale items, content validation, construct validation, and reliability calculation. The results of construct…
Guidance on validation and qualification of processes and operations involving radiopharmaceuticals.
Todde, S; Peitl, P Kolenc; Elsinga, P; Koziorowski, J; Ferrari, V; Ocak, E M; Hjelstuen, O; Patt, M; Mindt, T L; Behe, M
2017-01-01
Validation and qualification activities are nowadays an integral part of the day-to-day routine work in a radiopharmacy. This document is meant as an Appendix of Part B of the EANM "Guidelines on Good Radiopharmacy Practice (GRPP)" issued by the Radiopharmacy Committee of the EANM, covering the qualification and validation aspects related to the small-scale "in house" preparation of radiopharmaceuticals. The aim is to provide more detailed and practice-oriented guidance to those who are involved in the small-scale preparation of radiopharmaceuticals which are not intended for commercial purposes or distribution. The present guideline covers the validation and qualification activities following the well-known "validation chain", which begins with editing the general Validation Master Plan document, includes all the required documentation (e.g. User Requirement Specification, Qualification protocols, etc.), and leads to the qualification of the equipment used in the preparation and quality control of radiopharmaceuticals, until the final step of Process Validation. Specific guidance on the qualification and validation activities of small-scale hospital/academia radiopharmacies is provided here. Additional information, including practical examples, is also available.
NASA Technical Reports Server (NTRS)
Hooker, Stanford B.; McClain, Charles R.; Mannino, Antonio
2007-01-01
The primary objective of this planning document is to establish a long-term capability for calibrating and validating oceanic biogeochemical satellite data. It is a pragmatic solution to a practical problem based primarily on the lessons learned from prior satellite missions. All of the plan's elements are seen to be interdependent, so a horizontal organizational scheme is anticipated wherein the overall leadership comes from the NASA Ocean Biology and Biogeochemistry (OBB) Program Manager and the entire enterprise is split into two components of equal stature: calibration and validation plus satellite data processing. The detailed elements of the activity are based on the basic tasks of the two main components plus the current objectives of the Carbon Cycle and Ecosystems Roadmap. The former is distinguished by an internal core set of responsibilities and the latter is facilitated through an external connecting-core ring of competed or contracted activities. The core elements for the calibration and validation component include a) publish protocols and performance metrics; b) verify uncertainty budgets; c) manage the development and evaluation of instrumentation; and d) coordinate international partnerships. The core elements for the satellite data processing component are e) process and reprocess multisensor data; f) acquire, distribute, and archive data products; and g) implement new data products. Both components have shared responsibilities for initializing and temporally monitoring satellite calibration. Connecting-core elements include (but are not restricted to) atmospheric correction and characterization, standards and traceability, instrument and analysis round robins, field campaigns and vicarious calibration sites, in situ database, bio-optical algorithm (and product) validation, satellite characterization and vicarious calibration, and image processing software. The plan also includes an accountability process, creating a Calibration and Validation Team (to help manage the activity), and a discussion of issues associated with the plan's scientific focus.
Mertens, Wilson C; Christov, Stefan C; Avrunin, George S; Clarke, Lori A; Osterweil, Leon J; Cassells, Lucinda J; Marquard, Jenna L
2012-11-01
Chemotherapy ordering and administration, in which errors have potentially severe consequences, was quantitatively and qualitatively evaluated by employing process formalism (or formal process definition), a technique derived from software engineering, to elicit and rigorously describe the process, after which validation techniques were applied to confirm the accuracy of the described process. The chemotherapy ordering and administration process, including exceptional situations and individuals' recognition of and responses to those situations, was elicited through informal, unstructured interviews with members of an interdisciplinary team. The process description (or process definition), written in a notation developed for software quality assessment purposes, guided process validation (which consisted of direct observations and semistructured interviews to confirm the elicited details for the treatment plan portion of the process). The overall process definition yielded 467 steps; 207 steps (44%) were dedicated to handling 59 exceptional situations. Validation yielded 82 unique process events (35 new expected but not yet described steps, 16 new exceptional situations, and 31 new steps in response to exceptional situations). Process participants actively altered the process as ambiguities and conflicts were discovered by the elicitation and validation components of the study. Chemotherapy error rates declined significantly during and after the project, which was conducted from October 2007 through August 2008. Each elicitation method and the subsequent validation discussions contributed uniquely to understanding the chemotherapy treatment plan review process, supporting rapid adoption of changes, improved communication regarding the process, and ensuing error reduction.
"La Clave Profesional": Validation of a Vocational Guidance Instrument
ERIC Educational Resources Information Center
Mudarra, Maria J.; Lázaro Martínez, Ángel
2014-01-01
Introduction: The current study demonstrates empirical and cultural validity of "La Clave Profesional" (Spanish adaptation of the Career Key, Jones's test based on Holland's RIASEC model). The process of providing validity evidence also includes a reflection on personal and career development and examines the relationships between RIASEC…
Construct Validity in TOEFL iBT Speaking Tasks: Insights from Natural Language Processing
ERIC Educational Resources Information Center
Kyle, Kristopher; Crossley, Scott A.; McNamara, Danielle S.
2016-01-01
This study explores the construct validity of speaking tasks included in the TOEFL iBT (e.g., integrated and independent speaking tasks). Specifically, advanced natural language processing (NLP) tools, MANOVA difference statistics, and discriminant function analyses (DFA) are used to assess the degree to which and in what ways responses to these…
ERIC Educational Resources Information Center
Setari, Anthony Philip
2016-01-01
The purpose of this study was to construct a holistic education school evaluation tool using Montessori Erdkinder principles, and begin the validation process of examining the proposed tool. This study addresses a vital need in the holistic education community for a school evaluation tool. The tool construction process included using Erdkinder…
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-31
... intended for transfusion, including recommendations for validation and quality control monitoring of the..., including recommendations for validation and quality control monitoring of the leukocyte reduction process... control number 0910-0052; the collections of information in 21 CFR 606.100(b), 606.100(c), and 606.121...
McAllister, Sue; Lincoln, Michelle; Ferguson, Allison; McAllister, Lindy
2013-01-01
Valid assessment of health science students' ability to perform in the real world of workplace practice is critical for promoting quality learning and ultimately certifying students as fit to enter the world of professional practice. Current practice in performance assessment in the health sciences field has been hampered by multiple issues regarding assessment content and process. Evidence for the validity of scores derived from assessment tools is usually evaluated against traditional validity categories, with reliability evidence privileged over validity, resulting in the paradoxical effect of compromising the assessment validity and learning processes the assessments seek to promote. Furthermore, the dominant statistical approaches used to validate scores from these assessments fall under the umbrella of classical test theory approaches. This paper reports on the successful national development and validation of measures derived from an assessment of Australian speech pathology students' performance in the workplace. Validation of these measures considered each of Messick's interrelated validity evidence categories and included using evidence generated through Rasch analyses to support score interpretation and related action. This research demonstrated that it is possible to develop an assessment of real, complex, work-based performance of speech pathology students that generates valid measures without compromising the learning processes the assessment seeks to promote. The process described provides a model for other health professional education programs to trial.
2018-01-01
Background: Any system applied to the control of parenteral nutrition (PN) ought to prove that the process meets the established requirements and include a repository of records to allow evaluation of the information about PN processes at any time. Objective: The goal of the research was to evaluate the mobile health (mHealth) app and validate its effectiveness in monitoring the management of the PN process. Methods: We studied the evaluation and validation of the general process of PN using an mHealth app. The units of analysis were the PN bags prepared and administered at the Son Espases University Hospital, Palma, Spain, from June 1 to September 6, 2016. For the evaluation of the app, we used the Poststudy System Usability Questionnaire and subsequent analysis with the Cronbach alpha coefficient. Validation was performed by checking the compliance of control for all operations on each of the stages (validation and transcription of the prescription, preparation, conservation, and administration) and by monitoring the operative control points and critical control points. Results: The results obtained from 387 bags were analyzed, with 30 interruptions of administration. The fulfillment of stages was 100%, including noncritical nonconformities in the storage control. The average deviation in the weight of the bags was less than 5%, and the infusion time did not present deviations greater than 1 hour. Conclusions: The developed app successfully passed the evaluation and validation tests and was implemented to perform the monitoring procedures for the overall PN process. A new mobile solution to manage the quality and traceability of sensitive medicines such as blood-derivative drugs and hazardous drugs derived from this project is currently being deployed. PMID:29615389
Egea-Valenzuela, Juan; González Suárez, Begoña; Sierra Bernal, Cristian; Juanmartiñena Fernández, José Francisco; Luján-Sanchís, Marisol; San Juan Acosta, Mileidis; Martínez Andrés, Blanca; Pons Beltrán, Vicente; Sastre Lozano, Violeta; Carretero Ribón, Cristina; de Vera Almenar, Félix; Sánchez Cuenca, Joaquín; Alberca de Las Parras, Fernando; Rodríguez de Miguel, Cristina; Valle Muñoz, Julio; Férnandez-Urién Sainz, Ignacio; Torres González, Carolina; Borque Barrera, Pilar; Pérez-Cuadrado Robles, Enrique; Alonso Lázaro, Noelia; Martínez García, Pilar; Prieto de Frías, César; Carballo Álvarez, Fernando
2018-05-01
Capsule endoscopy (CE) is the first-line investigation in cases of suspected Crohn's disease (CD) of the small bowel, but the factors associated with a higher diagnostic yield remain unclear. Our aim is to develop and validate a scoring index, based on biomarkers, to assess patient risk in this setting. Data on fecal calprotectin, C-reactive protein, and other biomarkers from a population of 124 patients with suspected CD of the small bowel studied by CE and included in a PhD study were used to build a scoring index. This was first tested on the same population (internal validation process) and subsequently on a different set of patients from a multicenter study (external validation process). An index was designed in which every biomarker is assigned a score. Three risk groups were established (low, intermediate, and high). In the internal validation analysis (124 individuals), patients had a 10, 46.5, and 81% probability of showing inflammatory lesions in CE in the low-risk, intermediate-risk, and high-risk groups, respectively. In the external validation analysis, including 410 patients from 12 Spanish hospitals, this probability was 15.8, 49.7, and 80.6% for the low-risk, intermediate-risk, and high-risk groups, respectively. Results from the internal validation process show that the scoring index is coherent, and results from the external validation process confirm its reliability. This index can be a useful tool for selecting patients before CE studies in cases of suspected CD of the small bowel.
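The abstract does not reproduce the published cut-offs, so the sketch below only illustrates the general shape of such an index: each biomarker contributes points, and the sum maps to a risk stratum. All thresholds and point values here are hypothetical placeholders, not the study's index.

    def risk_score(fecal_calprotectin, crp):
        """Toy biomarker scoring; thresholds are illustrative only."""
        score = 0
        if fecal_calprotectin > 100:    # ug/g, assumed cut-off
            score += 2
        elif fecal_calprotectin > 50:
            score += 1
        if crp > 5:                     # mg/L, assumed cut-off
            score += 1
        return score

    def risk_group(score):
        if score == 0:
            return "low"            # ~10-16% probability of lesions on CE
        if score <= 2:
            return "intermediate"   # ~47-50%
        return "high"               # ~81%

    print(risk_group(risk_score(fecal_calprotectin=250, crp=12)))  # -> high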
ERIC Educational Resources Information Center
Oreck, Barry A.; Owen, Steven V.; Baum, Susan M.
2003-01-01
The lack of valid, research-based methods to identify potential artistic talent hampers the inclusion of the arts in programs for the gifted and talented. The Talent Assessment Process in Dance, Music, and Theater (D/M/T TAP) was designed to identify potential performing arts talent in diverse populations, including bilingual and special education…
A Framework for Performing Verification and Validation in Reuse Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1997-01-01
Verification and Validation (V&V) is currently performed during application development for many systems, especially safety-critical and mission-critical systems. The V&V process is intended to discover errors, especially errors related to critical processing, as early as possible during the development process. The system application provides the context under which the software artifacts are validated. This paper describes a framework that extends V&V from an individual application system to a product line of systems that are developed within an architecture-based software engineering environment. This framework includes the activities of traditional application-level V&V, and extends these activities into domain engineering and into the transition between domain engineering and application engineering. The framework includes descriptions of the types of activities to be performed during each of the life-cycle phases, and provides motivation for the activities.
Atmospheric Science Data Center
2013-03-21
... The "Beta" designation means particle microphysical property validation is in progress, uncertainty envelopes on particle size distribution, ... UAE-2 campaign activities are part of the validation process, so two versions of the MISR aerosol products are included in this ...
On the validation of a code and a turbulence model appropriate to circulation control airfoils
NASA Technical Reports Server (NTRS)
Viegas, J. R.; Rubesin, M. W.; MacCormack, R. W.
1988-01-01
A computer code for calculating flow about a circulation control airfoil within a wind tunnel test section has been developed. This code is being validated for eventual use as an aid to design such airfoils. The concept of code validation being used is explained. The initial stages of the process have been accomplished. The present code has been applied to a low-subsonic, 2-D flow about a circulation control airfoil for which extensive data exist. Two basic turbulence models and variants thereof have been successfully introduced into the algorithm, the Baldwin-Lomax algebraic and the Jones-Launder two-equation models of turbulence. The variants include adding a history of the jet development for the algebraic model and adding streamwise curvature effects for both models. Numerical difficulties and difficulties in the validation process are discussed. Turbulence model and code improvements to proceed with the validation process are also discussed.
NASA Astrophysics Data System (ADS)
Irwanto; Rohaeti, Eli; LFX, Endang Widjajanti; Suyanta
2017-05-01
This research aims to develop an instrument and determine the characteristics of an integrated assessment instrument. The research uses the 4-D model, which includes define, design, develop, and disseminate. The primary product was validated by expert judgment, tested for readability by students, and assessed for feasibility by chemistry teachers. This research involved 246 students of grade XI from four senior high schools in Yogyakarta, Indonesia. Data collection techniques included interview, questionnaire, and test. Data collection instruments included an interview guideline, an item validation sheet, a users' response questionnaire, an instrument readability questionnaire, and an essay test. The results show that the integrated assessment instrument has an Aiken validity value of 0.95. Item reliability was 0.99 and person reliability was 0.69. Teachers' response to the integrated assessment instrument was very good. Therefore, the integrated assessment instrument is feasible for measuring students' analytical thinking and science process skills.
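The Aiken validity coefficient cited above has the closed form V = S / (n(c - 1)), where S sums each rater's score above the scale minimum, n is the number of raters, and c the number of rating categories. A short sketch with made-up ratings:

    def aikens_v(ratings, lo=1, hi=5):
        """Aiken's V for one item; ratings on a lo..hi scale."""
        n, c = len(ratings), hi - lo + 1
        s = sum(r - lo for r in ratings)
        return s / (n * (c - 1))

    print(aikens_v([5, 5, 4, 5]))  # 15 / (4 * 4) = 0.9375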
ERIC Educational Resources Information Center
Dornan, Tim; Muijtjens, Arno; Graham, Jennifer; Scherpbier, Albert; Boshuizen, Henny
2012-01-01
The drive to quality-manage medical education has created a need for valid measurement instruments. Validity evidence includes the theoretical and contextual origin of items, choice of response processes, internal structure, and interrelationship of a measure's variables. This research set out to explore the validity and potential utility of an…
Validation of the Chinese Expanded Euthanasia Attitude Scale
ERIC Educational Resources Information Center
Chong, Alice Ming-Lin; Fok, Shiu-Yeu
2013-01-01
This article reports the validation of the Chinese version of an expanded 31-item Euthanasia Attitude Scale. A 4-stage validation process included a pilot survey of 119 college students and a randomized household survey with 618 adults in Hong Kong. Confirmatory factor analysis confirmed a 4-factor structure of the scale, which can therefore be…
Mathew, S N; Field, W E; French, B F
2011-07-01
This article reports the use of an expert panel to perform content validation of an experimental assessment process for the safety of assistive technology (AT) adopted by farmers with disabilities. The validation process was conducted by a panel of six experts experienced in the subject matter, i.e., design, use, and assessment of AT for farmers with disabilities. The exercise included an evaluation session and two focus group sessions. The evaluation session consisted of using the assessment process under consideration by the panel to evaluate a set of nine ATs fabricated by a farmer on his farm site. The expert panel also participated in the focus group sessions conducted immediately before and after the evaluation session. The resulting data were analyzed using discursive analysis, and the results were incorporated into the final assessment process. The method and the results are presented with recommendations for the use of expert panels in research projects and validation of assessment tools.
Zamanzadeh, Vahid; Ghahramanian, Akram; Rassouli, Maryam; Abbaszadeh, Abbas; Alavi-Majd, Hamid; Nikanfar, Ali-Reza
2015-01-01
Introduction: The importance of content validity in instrument psychometrics and its relevance to reliability have made it an essential step in instrument development. This article attempts to give an overview of the content validity process and to explain the complexity of this process by introducing an example. Methods: We carried out a methodological study to examine the content validity of the patient-centered communication instrument through a two-step process (development and judgment). The first step comprised domain determination, sampling (item generation), and instrument formation; in the second step, the content validity ratio, content validity index, and modified kappa statistic were computed. Expert panel suggestions and item impact scores were used to examine the instrument's face validity. Results: From a set of 188 items, the content validity process identified seven dimensions: trust building (eight items), informational support (seven items), emotional support (five items), problem solving (seven items), patient activation (10 items), intimacy/friendship (six items), and spirituality strengthening (14 items). The content validity study revealed that this instrument enjoys an appropriate level of content validity. The overall content validity index of the instrument using the universal agreement approach was low; however, it can be advocated with respect to the high number of content experts, which makes consensus difficult, and the high value of the S-CVI with the average approach, which was equal to 0.93. Conclusion: This article illustrates acceptable quantitative indices for the content validity of a new instrument and outlines their use during the design and psychometric evaluation of a patient-centered communication instrument. PMID:26161370
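For readers unfamiliar with the indices named in this abstract, the sketch below computes Lawshe's content validity ratio, the item-level content validity index, and the chance-adjusted (modified) kappa from illustrative expert ratings; S-CVI/Ave is simply the mean of the I-CVIs.

    from math import comb

    def cvr(n_essential, n_experts):
        """Lawshe content validity ratio."""
        return (n_essential - n_experts / 2) / (n_experts / 2)

    def i_cvi(relevance_ratings):
        """Share of experts rating the item 3 or 4 on a 4-point scale."""
        return sum(r >= 3 for r in relevance_ratings) / len(relevance_ratings)

    def modified_kappa(icvi, n_experts):
        """I-CVI adjusted for chance agreement."""
        a = round(icvi * n_experts)                  # experts in agreement
        pc = comb(n_experts, a) * 0.5 ** n_experts   # chance probability
        return (icvi - pc) / (1 - pc)

    ratings = [4, 3, 4, 4, 3, 2, 4, 4, 3, 4]  # ten experts, one item
    icvi = i_cvi(ratings)
    print(cvr(9, 10), icvi, modified_kappa(icvi, 10))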
Rodriguez, Hayley; Kissell, Kellie; Lucas, Lloyd; Fisak, Brian
2017-11-01
Although negative beliefs have been found to be associated with worry symptoms and depressive rumination, negative beliefs have yet to be examined in relation to post-event processing and social anxiety symptoms. The purpose of the current study was to examine the psychometric properties of the Negative Beliefs about Post-Event Processing Questionnaire (NB-PEPQ). A large, non-referred undergraduate sample completed the NB-PEPQ along with validation measures, including a measure of post-event processing and social anxiety symptoms. Based on factor analysis, a single-factor model was obtained, and the NB-PEPQ was found to exhibit good validity, including positive associations with measures of post-event processing and social anxiety symptoms. These findings add to the literature on the metacognitive variables that may lead to the development and maintenance of post-event processing and social anxiety symptoms, and have relevant clinical applications.
Verifying and Validating Proposed Models for FSW Process Optimization
NASA Technical Reports Server (NTRS)
Schneider, Judith
2008-01-01
This slide presentation reviews Friction Stir Welding (FSW) and the attempts to model the process in order to optimize and improve the process. The studies are ongoing to validate and refine the model of metal flow in the FSW process. There are slides showing the conventional FSW process, a couple of weld tool designs and how the design interacts with the metal flow path. The two basic components of the weld tool are shown, along with geometries of the shoulder design. Modeling of the FSW process is reviewed. Other topics include (1) Microstructure features, (2) Flow Streamlines, (3) Steady-state Nature, and (4) Grain Refinement Mechanisms
Johnson, Brittany J; Zarnowiecki, Dorota; Hendrie, Gilly A; Golley, Rebecca K
2018-02-21
Children's intake of discretionary choices is excessive. This study aimed to develop a questionnaire measuring parents' attitudes and beliefs towards limiting provision of discretionary choices, using the Health Action Process Approach model. The questionnaire items were informed by the Health Action Process Approach model, which extends the Theory of Planned Behaviour to include both motivational (intention) and volitional (post-intention) factors that influence behaviour change. The questionnaire was piloted for content and face validity (expert panel, n = 5; parents, n = 4). Construct and predictive validity were examined in a sample of 178 parents of 4-7-year-old children who completed the questionnaire online. Statistical analyses included exploratory factor analyses, Cronbach's alpha and multiple linear regression. Pilot testing supported content and face validity. Principal component analyses identified constructs that aligned with the eight constructs of the Health Action Process Approach model. Internal consistencies were high for all subscales, in both the motivational (Cronbach's alpha 0.77-0.88) and volitional phase (Cronbach's alpha 0.85-0.92). Initial results from validation tests support the development of a new questionnaire for measuring parent attitudes and beliefs regarding provision of discretionary choices to their 4-7-year-old children within the home. This new questionnaire can be used to gain greater insight into parents' attitudes and beliefs that influence their ability to limit discretionary choice provision to children. Further research to expand understanding of the questionnaire's psychometric properties would be valuable, including confirmatory factor analysis and reproducibility. © 2018 Dietitians Association of Australia.
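A hedged sketch of the construct-validation step described above: exploratory factor analysis with varimax rotation, run here on simulated responses rather than the study's 178 parent questionnaires (scikit-learn's FactorAnalysis supports varimax rotation from version 0.24).

    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    rng = np.random.default_rng(1)
    latent = rng.normal(size=(178, 2))                    # two latent constructs
    loadings = np.array([[0.8, 0.0]] * 4 + [[0.0, 0.7]] * 4)
    responses = latent @ loadings.T + rng.normal(scale=0.5, size=(178, 8))

    fa = FactorAnalysis(n_components=2, rotation="varimax").fit(responses)
    print(np.round(fa.components_.T, 2))  # item-by-factor loading matrix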
Assessment Methodology for Process Validation Lifecycle Stage 3A.
Sayeed-Desta, Naheed; Pazhayattil, Ajay Babu; Collins, Jordan; Chen, Shu; Ingram, Marzena; Spes, Jana
2017-07-01
The paper introduces evaluation methodologies and associated statistical approaches for process validation lifecycle Stage 3A. The assessment tools proposed can be applied to newly developed and launched small molecule as well as bio-pharma products, where substantial process and product knowledge has been gathered. The following elements may be included in Stage 3A: determination of the number of Stage 3A batches; evaluation of critical material attributes, critical process parameters, and critical quality attributes; in vivo/in vitro correlation; estimation of inherent process variability (IPV) and the PaCS index; process capability and quality dashboard (PCQd); and enhanced control strategy. US FDA guidance on Process Validation: General Principles and Practices, January 2011 encourages applying previous credible experience with suitably similar products and processes. A complete Stage 3A evaluation is a valuable resource for product development and future risk mitigation of similar products and processes. Elements of the 3A assessment were developed to address industry and regulatory guidance requirements. The conclusions made provide sufficient information to make a scientific and risk-based decision on product robustness.
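One Stage 3A element named above, process capability, reduces to the familiar formulas Cp = (USL - LSL) / 6σ and Cpk = min(USL - μ, μ - LSL) / 3σ. A short sketch with invented batch assay values and specification limits:

    import numpy as np

    def capability(values, lsl, usl):
        """Cp and Cpk from a sample of batch results."""
        mu, sigma = np.mean(values), np.std(values, ddof=1)
        cp = (usl - lsl) / (6 * sigma)
        cpk = min(usl - mu, mu - lsl) / (3 * sigma)
        return cp, cpk

    assay = [99.1, 100.4, 99.8, 100.9, 99.5, 100.2, 99.9, 100.6]  # % label claim
    print(capability(assay, lsl=95.0, usl=105.0))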
Development and validation of an educational booklet for healthy eating during pregnancy
de Oliveira, Sheyla Costa; Lopes, Marcos Venícios de Oliveira; Fernandes, Ana Fátima Carvalho
2014-01-01
OBJECTIVE: to describe the validation process of an educational booklet for healthy eating in pregnancy using local and regional food. METHODS: methodological study, developed in three steps: construction of the educational booklet, validation of the educational material by judges, and by pregnant women. The validation process was conducted by 22 judges and 20 pregnant women, selected by convenience. We considered a p-value < 0.85 to validate the booklet's compliance and relevance, according to the six items of the instrument. As for content validation, items were considered valid when the item-level Content Validity Index (I-CVI) reached a minimum score of at least 0.80. RESULTS: five items were considered relevant by the judges. The mean I-CVI was 0.91. The pregnant women evaluated the booklet positively. The suggestions were accepted and included in the final version of the material. CONCLUSION: the booklet was validated in terms of content and relevance, and should be used by nurses for advice on healthy eating during pregnancy.
21 CFR 211.113 - Control of microbiological contamination.
Code of Federal Regulations, 2011 CFR
2011-04-01
... shall include validation of all aseptic and sterilization processes. [43 FR 45077, Sept. 29, 1978, as... Process Controls § 211.113 Control of microbiological contamination. (a) Appropriate written procedures...
21 CFR 211.113 - Control of microbiological contamination.
Code of Federal Regulations, 2012 CFR
2012-04-01
... shall include validation of all aseptic and sterilization processes. [43 FR 45077, Sept. 29, 1978, as... Process Controls § 211.113 Control of microbiological contamination. (a) Appropriate written procedures...
21 CFR 211.113 - Control of microbiological contamination.
Code of Federal Regulations, 2014 CFR
2014-04-01
... shall include validation of all aseptic and sterilization processes. [43 FR 45077, Sept. 29, 1978, as... Process Controls § 211.113 Control of microbiological contamination. (a) Appropriate written procedures...
21 CFR 211.113 - Control of microbiological contamination.
Code of Federal Regulations, 2013 CFR
2013-04-01
... shall include validation of all aseptic and sterilization processes. [43 FR 45077, Sept. 29, 1978, as... Process Controls § 211.113 Control of microbiological contamination. (a) Appropriate written procedures...
Validity evidence as a key marker of quality of technical skill assessment in OTL-HNS.
Labbé, Mathilde; Young, Meredith; Nguyen, Lily H P
2018-01-13
Quality monitoring of assessment practices should be a priority in all residency programs. Validity evidence is one of the main hallmarks of assessment quality and should be collected to support the interpretation and use of assessment data. Our objective was to identify, synthesize, and present the validity evidence reported supporting different technical skill assessment tools in otolaryngology-head and neck surgery (OTL-HNS). We performed a secondary analysis of data generated through a systematic review of all published tools for assessing technical skills in OTL-HNS (n = 16). For each tool, we coded validity evidence according to the five types of evidence described by the American Educational Research Association's interpretation of Messick's validity framework. Descriptive statistical analyses were conducted. All 16 tools included in our analysis were supported by internal structure and relationship to variables validity evidence. Eleven articles presented evidence supporting content. Response process was discussed only in one article, and no study reported on evidence exploring consequences. We present the validity evidence reported for 16 rater-based tools that could be used for work-based assessment of OTL-HNS residents in the operating room. The articles included in our review were consistently deficient in evidence for response process and consequences. Rater-based assessment tools that support high-stakes decisions that impact the learner and programs should include several sources of validity evidence. Thus, use of any assessment should be done with careful consideration of the context-specific validity evidence supporting score interpretation, and we encourage deliberate continual assessment quality-monitoring. NA. Laryngoscope, 2018. © 2018 The American Laryngological, Rhinological and Otological Society, Inc.
NASA Technical Reports Server (NTRS)
Generazio, Edward R. (Inventor)
2012-01-01
A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
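A common way to realize the hit/miss-versus-flaw-size analysis the patent describes is a logistic fit of probability of detection against log flaw size, inverted to find a90, the flaw size detected with 90% probability. The sketch below uses synthetic data and ordinary logistic regression; it does not reproduce the patented directed-DOE class-width logic.

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(2)
    size = rng.uniform(0.5, 5.0, 300)                 # flaw size, mm (synthetic)
    true_pod = 1 / (1 + np.exp(-(np.log(size) - np.log(1.5)) / 0.25))
    hit = rng.random(300) < true_pod                  # simulated hit/miss outcomes

    X = np.log(size).reshape(-1, 1)
    model = LogisticRegression(C=1e6).fit(X, hit)     # near-unpenalized fit
    b0, b1 = model.intercept_[0], model.coef_[0][0]
    a90 = np.exp((np.log(0.9 / 0.1) - b0) / b1)       # invert the logistic curve
    print(f"a90 ~ {a90:.2f} mm")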
Validation of contractor HMA testing data in the materials acceptance process.
DOT National Transportation Integrated Search
2010-08-01
"This study conducted an analysis of the SCDOT HMA specification. A Research Steering Committee comprised of SCDOT, FHWA, and Industry representatives provided oversight of the process. The research process included a literature review, a brief surve...
Cervera Peris, Mercedes; Alonso Rorís, Víctor Manuel; Santos Gago, Juan Manuel; Álvarez Sabucedo, Luis; Wanden-Berghe, Carmina; Sanz-Valero, Javier
2018-04-03
Any system applied to the control of parenteral nutrition (PN) ought to prove that the process meets the established requirements and include a repository of records to allow evaluation of the information about PN processes at any time. The goal of the research was to evaluate the mobile health (mHealth) app and validate its effectiveness in monitoring the management of the PN process. We studied the evaluation and validation of the general process of PN using an mHealth app. The units of analysis were the PN bags prepared and administered at the Son Espases University Hospital, Palma, Spain, from June 1 to September 6, 2016. For the evaluation of the app, we used the Poststudy System Usability Questionnaire and subsequent analysis with the Cronbach alpha coefficient. Validation was performed by checking the compliance of control for all operations on each of the stages (validation and transcription of the prescription, preparation, conservation, and administration) and by monitoring the operative control points and critical control points. The results obtained from 387 bags were analyzed, with 30 interruptions of administration. The fulfillment of stages was 100%, including noncritical nonconformities in the storage control. The average deviation in the weight of the bags was less than 5%, and the infusion time did not present deviations greater than 1 hour. The developed app successfully passed the evaluation and validation tests and was implemented to perform the monitoring procedures for the overall PN process. A new mobile solution to manage the quality and traceability of sensitive medicines such as blood-derivative drugs and hazardous drugs derived from this project is currently being deployed. ©Mercedes Cervera Peris, Víctor Manuel Alonso Rorís, Juan Manuel Santos Gago, Luis Álvarez Sabucedo, Carmina Wanden-Berghe, Javier Sanz-Valero. Originally published in JMIR Mhealth and Uhealth (http://mhealth.jmir.org), 03.04.2018.
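The two numeric controls reported (bag weight deviation under 5%, infusion time deviation within 1 hour) lend themselves to a simple conformity check of the kind the app presumably automates. A sketch with hypothetical field names:

    def check_bag(expected_weight_g, actual_weight_g,
                  planned_infusion_h, actual_infusion_h):
        """Flag the two nonconformities tracked in the study."""
        issues = []
        weight_dev = abs(actual_weight_g - expected_weight_g) / expected_weight_g
        if weight_dev >= 0.05:
            issues.append(f"weight deviation {weight_dev:.1%}")
        if abs(actual_infusion_h - planned_infusion_h) > 1.0:
            issues.append("infusion time deviation > 1 h")
        return issues or ["conforming"]

    print(check_bag(2000, 2080, planned_infusion_h=24, actual_infusion_h=24.5))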
Beno, Sarah M; Stasiewicz, Matthew J; Andrus, Alexis D; Ralyea, Robert D; Kent, David J; Martin, Nicole H; Wiedmann, Martin; Boor, Kathryn J
2016-12-01
Pathogen environmental monitoring programs (EMPs) are essential for food processing facilities of all sizes that produce ready-to-eat food products exposed to the processing environment. We developed, implemented, and evaluated EMPs targeting Listeria spp. and Salmonella in nine small cheese processing facilities, including seven farmstead facilities. Individual EMPs with monthly sample collection protocols were designed specifically for each facility. Salmonella was detected in only one facility, with likely introduction from the adjacent farm indicated by pulsed-field gel electrophoresis data. Listeria spp. were isolated from all nine facilities during routine sampling. The overall Listeria spp. (other than Listeria monocytogenes) and L. monocytogenes prevalences in the 4,430 environmental samples collected were 6.03 and 1.35%, respectively. Molecular characterization and subtyping data suggested persistence of a given Listeria spp. strain in seven facilities and persistence of L. monocytogenes in four facilities. To assess routine sampling plans, validation sampling for Listeria spp. was performed in seven facilities after at least 6 months of routine sampling. This validation sampling was performed by independent individuals and included collection of 50 to 150 samples per facility, based on statistical sample size calculations. Two of the facilities had a significantly higher frequency of detection of Listeria spp. during the validation sampling than during routine sampling, whereas two other facilities had significantly lower frequencies of detection. This study provides a model for a science- and statistics-based approach to developing and validating pathogen EMPs.
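The statistical sample size calculations mentioned above are typically detection-based: the number of samples n needed to find at least one positive with confidence C, assuming true prevalence p, satisfies 1 - (1 - p)^n >= C, i.e. n = ln(1 - C) / ln(1 - p). A short sketch; the 6% input echoes the overall Listeria spp. prevalence reported in the study.

    import math

    def detection_sample_size(prevalence, confidence=0.95):
        """Samples needed to detect >=1 positive with the given confidence."""
        return math.ceil(math.log(1 - confidence) / math.log(1 - prevalence))

    print(detection_sample_size(0.06))  # -> 49 samples at 6% prevalence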
Larrabee, Glenn J
2014-11-01
Literature on test validity and performance validity is reviewed to propose a framework for specification of an ability-focused battery (AFB). Factor analysis supports six domains of ability: (1) verbal symbolic; (2) visuoperceptual and visuospatial judgment and problem solving; (3) sensorimotor skills; (4) attention/working memory; (5) processing speed; and (6) learning and memory (which can be divided into verbal and visual subdomains). The AFB should include at least three measures for each of the six domains, selected based on various criteria for validity, including sensitivity to presence of disorder, sensitivity to severity of disorder, correlation with important activities of daily living, and containing embedded/derived measures of performance validity. Criterion groups should include moderate and severe traumatic brain injury, and Alzheimer's disease. Validation groups should also include patients with left and right hemisphere stroke, to determine measures sensitive to lateralized cognitive impairment and so that the moderating effects of auditory comprehension impairment and neglect can be analyzed on AFB measures. © The Author 2014. Published by Oxford University Press. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
Using 'big data' to validate claims made in the pharmaceutical approval process.
Wasser, Thomas; Haynes, Kevin; Barron, John; Cziraky, Mark
2015-01-01
Big Data in the healthcare setting refers to the storage, assimilation, and analysis of large quantities of information regarding patient care. These data can be collected and stored in a wide variety of ways including electronic medical records collected at the patient bedside, or through medical records that are coded and passed to insurance companies for reimbursement. When these data are processed it is possible to validate claims as a part of the regulatory review process regarding the anticipated performance of medications and devices. To properly analyze claims by manufacturers and others, there is a need to express claims in terms that are testable in a timeframe that is useful and meaningful to formulary committees. Claims for the comparative benefits and costs, including budget impact, of products and devices need to be expressed in measurable terms, ideally in the context of submission or validation protocols. Claims should be either consistent with accessible Big Data or able to support observational studies where Big Data identifies target populations. Protocols should identify, in disaggregated terms, key variables that would lead to direct or proxy validation. Once these variables are identified, Big Data can be used to query massive quantities of data in the validation process. Research can be passive or active in nature: passive, where the data are collected retrospectively; active, where the researcher prospectively looks for indicators of co-morbid conditions, side effects, or adverse events, testing these indicators to determine whether claims are within the desired ranges set forth by the manufacturer. Additionally, Big Data can be used to assess the effectiveness of therapy through health insurance records. This, for example, could indicate that disease or co-morbid conditions cease to be treated. Understanding the basic strengths and weaknesses of Big Data in the claim validation process provides a glimpse of the value that this research can provide to industry. Big Data can support a research agenda that focuses on the process of claims validation to support formulary submissions as well as inputs to ongoing disease area and therapeutic class reviews.
System design from mission definition to flight validation
NASA Technical Reports Server (NTRS)
Batill, S. M.
1992-01-01
Considerations related to the engineering systems design process and an approach taken to introduce undergraduate students to that process are presented. The paper includes details on a particular capstone design course. This course is a team-oriented aircraft design project which requires the students to participate in many phases of the system design process, from mission definition to validation of their design through flight testing. Accomplishing this in a single course requires special types of flight vehicles. Relatively small-scale, remotely piloted vehicles have provided the class of aircraft considered in this course.
Reeves, Todd D.; Marbach-Ad, Gili
2016-01-01
Most discipline-based education researchers (DBERs) were formally trained in the methods of scientific disciplines such as biology, chemistry, and physics, rather than social science disciplines such as psychology and education. As a result, DBERs may never have taken specific courses in social science research methodology, either quantitative or qualitative, on which their scholarship often relies so heavily. One particular aspect of (quantitative) social science research that differs markedly from disciplines such as biology and chemistry is the instrumentation used to quantify phenomena. In response, this Research Methods essay offers a contemporary social science perspective on test validity and the validation process. The instructional piece explores the concepts of test validity, the validation process, validity evidence, and key threats to validity. The essay also includes an in-depth example of a validity argument and validation approach for a test of student argument analysis. In addition to DBERs, this essay should benefit practitioners (e.g., lab directors, faculty members) in the development, evaluation, and/or selection of instruments for their work assessing students or evaluating pedagogical innovations. PMID:26903498
42 CFR 422.310 - Risk adjustment data.
Code of Federal Regulations, 2013 CFR
2013-10-01
... include financial penalties for failure to submit complete data. (e) Validation of risk adjustment data... records for the validation of risk adjustment data, as required by CMS. There may be penalties for... the prior December 31. (2) CMS allows a reconciliation process to account for late data submissions...
48 CFR 1852.234-1 - Notice of Earned Value Management System.
Code of Federal Regulations, 2014 CFR
2014-10-01
... according to paragraph (a); (vi) Provide documentation describing the process and results, including..., provide a schedule of events leading up to formal validation and Government acceptance of the Contractor's...) outlines the requirements for conducting a progress assistance visit and validation compliance review. (2...
48 CFR 1852.234-1 - Notice of Earned Value Management System.
Code of Federal Regulations, 2013 CFR
2013-10-01
... according to paragraph (a); (vi) Provide documentation describing the process and results, including..., provide a schedule of events leading up to formal validation and Government acceptance of the Contractor's...) outlines the requirements for conducting a progress assistance visit and validation compliance review. (2...
48 CFR 1852.234-1 - Notice of Earned Value Management System.
Code of Federal Regulations, 2012 CFR
2012-10-01
... according to paragraph (a); (vi) Provide documentation describing the process and results, including..., provide a schedule of events leading up to formal validation and Government acceptance of the Contractor's...) outlines the requirements for conducting a progress assistance visit and validation compliance review. (2...
48 CFR 1852.234-1 - Notice of Earned Value Management System.
Code of Federal Regulations, 2011 CFR
2011-10-01
... according to paragraph (a); (vi) Provide documentation describing the process and results, including..., provide a schedule of events leading up to formal validation and Government acceptance of the Contractor's...) outlines the requirements for conducting a progress assistance visit and validation compliance review. (2...
42 CFR 422.310 - Risk adjustment data.
Code of Federal Regulations, 2012 CFR
2012-10-01
... include financial penalties for failure to submit complete data. (e) Validation of risk adjustment data... records for the validation of risk adjustment data, as required by CMS. There may be penalties for... the prior December 31. (2) CMS allows a reconciliation process to account for late data submissions...
González-Chordá, Víctor M; Mena-Tudela, Desirée; Salas-Medina, Pablo; Cervera-Gasch, Agueda; Orts-Cortés, Isabel; Maciá-Soler, Loreto
2016-02-01
Writing a bachelor thesis (BT) is the last step to obtain a nursing degree. In order to perform an effective assessment of a nursing BT, reliable and valid tools are required. The aim was to develop and validate a 3-rubric system (drafting process, dissertation, and viva) to assess final-year nursing students' BTs, through a multi-disciplinary study of content validity and psychometric properties. The study was carried out between December 2014 and July 2015 in the Nursing Degree at Universitat Jaume I, Spain. Eleven experts (9 nursing professors and 2 education professors from 6 different universities) took part in the development and content validity stages. Fifty-two theses presented during the 2014-2015 academic year were included by consecutive sampling of cases in order to study the psychometric properties. First, a group of experts was created to validate the content of the assessment system based on three rubrics (drafting process, dissertation, and viva). Subsequently, a reliability and validity study of the rubrics was carried out on the 52 theses presented during the 2014-2015 academic year. The BT drafting process rubric has 8 criteria (S-CVI=0.93; α=0.837; ICC=0.614), the dissertation rubric has 7 criteria (S-CVI=0.9; α=0.893; ICC=0.74), and the viva rubric has 4 criteria (S-CVI=0.86; α=0.816; ICC=0.895). A nursing BT assessment system based on three rubrics (drafting process, dissertation, and viva) has thus been validated. This system may be transferred to other nursing degrees or degrees from other academic areas. It is necessary to continue the validation process, taking into account factors that may affect the results obtained.
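For readers unfamiliar with the statistics quoted above, the sketch below shows how Cronbach's alpha and an averaged scale content validity index (S-CVI/Ave) are typically computed. The data are randomly generated stand-ins, not the study's ratings.

    import numpy as np

    def cronbach_alpha(ratings):
        # Cronbach's alpha: rows = rated theses, columns = rubric criteria.
        ratings = np.asarray(ratings, dtype=float)
        k = ratings.shape[1]
        item_var = ratings.var(axis=0, ddof=1).sum()
        total_var = ratings.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_var / total_var)

    def s_cvi_ave(relevance):
        # S-CVI/Ave: each expert rates each criterion relevant (1) or not (0);
        # average the per-criterion agreement proportions.
        return np.asarray(relevance, dtype=float).mean(axis=0).mean()

    rng = np.random.default_rng(42)
    rubric_scores = rng.integers(1, 5, size=(52, 8))      # 52 theses x 8 criteria
    expert_relevance = rng.integers(0, 2, size=(11, 8))   # 11 experts x 8 criteria
    print(f"alpha = {cronbach_alpha(rubric_scores):.2f}, "
          f"S-CVI = {s_cvi_ave(expert_relevance):.2f}")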
Positron annihilation processes update
NASA Technical Reports Server (NTRS)
Guessoum, Nidhal; Skibo, Jeffrey G.; Ramaty, Reuven
1997-01-01
The present knowledge concerning positron annihilation processes is reviewed, with emphasis on cross-section data for the various processes of interest in astrophysical applications. Recent results are presented, including reaction rates and line widths, whose validity is verified.
Elfenbein, Hillary Anger; Jang, Daisung; Sharma, Sudeep; Sanchez-Burks, Jeffrey
2017-03-01
Emotional intelligence (EI) has captivated researchers and the public alike, but it has been challenging to establish its components as objective abilities. Self-report scales lack divergent validity from personality traits, and few ability tests have objectively correct answers. We adapt the Stroop task to introduce a new facet of EI called emotional attention regulation (EAR), which involves focusing emotion-related attention for the sake of information processing rather than for the sake of regulating one's own internal state. EAR includes 2 distinct components. First, tuning in to nonverbal cues involves identifying nonverbal cues while ignoring alternate content, that is, emotion recognition under conditions of distraction by competing stimuli. Second, tuning out of nonverbal cues involves ignoring nonverbal cues while identifying alternate content, that is, the ability to interrupt emotion recognition when needed to focus attention elsewhere. An auditory test of valence included positive and negative words spoken in positive and negative vocal tones. A visual test of approach-avoidance included green- and red-colored facial expressions depicting happiness and anger. The error rates for incongruent trials met the key criteria for establishing the validity of an EI test, in that the measure demonstrated test-retest reliability, convergent validity with other EI measures, divergent validity from factors such as general processing speed and, for the most part, personality, and predictive validity, in this case for well-being. By demonstrating that facets of EI can be validly theorized and empirically assessed, the results also speak to the validity of EI more generally.
Nair, Rahul; Ishaque, Sana; Spencer, Andrew John; Luzzi, Liana; Do, Loc Giang
2018-03-30
The aim was to review the validation process reported for oral healthcare satisfaction scales intended to measure general oral health care, not restricted to specific subspecialties or interventions. After preliminary searches, PUBMED and EMBASE were searched using a broad search strategy, followed by a snowball strategy using the references of the publications included from database searches. Titles and abstracts were screened for inclusion, followed by full-text screening of these publications. English-language publications on multi-item questionnaires that report on a scale measuring patient satisfaction with oral health care were included. Publications were excluded when they did not report any psychometric validation, or when the scales addressed specific treatments or subspecialties in oral health care. Fourteen instruments were identified from as many publications that report on their initial validation, while five more publications reported on further testing of the validity of these instruments. The number of items (range: 8-42) and the dimensions reported (range: 2-13) often differed between the assessed measurement instruments. There was also a lack of methodologies to incorporate patients' subjective perspectives. Along with limited reporting of the psychometric properties of instruments, cross-cultural adaptations were limited to translation processes. The extent of validity and reliability of the included instruments was largely unassessed, and there were no appropriate instruments for populations other than general adult populations.
48 CFR 1401.7001-4 - Acquisition performance measurement systems.
Code of Federal Regulations, 2013 CFR
2013-10-01
...-pronged approach that includes self assessment, statistical data for validation and flexible quality... regulations governing the acquisition process; and (3) Identify and implement changes necessary to improve the... through the review and oversight process. ...
48 CFR 1401.7001-4 - Acquisition performance measurement systems.
Code of Federal Regulations, 2014 CFR
2014-10-01
...-pronged approach that includes self assessment, statistical data for validation and flexible quality... regulations governing the acquisition process; and (3) Identify and implement changes necessary to improve the... through the review and oversight process. ...
48 CFR 1401.7001-4 - Acquisition performance measurement systems.
Code of Federal Regulations, 2011 CFR
2011-10-01
...-pronged approach that includes self assessment, statistical data for validation and flexible quality... regulations governing the acquisition process; and (3) Identify and implement changes necessary to improve the... through the review and oversight process. ...
48 CFR 1401.7001-4 - Acquisition performance measurement systems.
Code of Federal Regulations, 2012 CFR
2012-10-01
...-pronged approach that includes self assessment, statistical data for validation and flexible quality... regulations governing the acquisition process; and (3) Identify and implement changes necessary to improve the... through the review and oversight process. ...
Standards for Title VII Evaluations: Accommodation for Reality Constraints.
ERIC Educational Resources Information Center
Yap, Kim Onn
Two separate sets of minimum standards designed to guide the evaluation of bilingual projects are proposed. The first set relates to the process in which the evaluation activities are conducted. They include: validity of assessment procedures, validity and reliability of evaluation instruments, representativeness of findings, use of procedures for…
ERIC Educational Resources Information Center
Schneller, A. J.; Johnson, B.; Bogner, F. X.
2015-01-01
This paper describes the validation process of measuring children's attitudes and values toward the environment within a Mexican sample. We applied the Model of Ecological Values (2-MEV), which has been shown to be valid and reliable in 20 countries, including one Spanish speaking culture. Items were initially modified to fit the regional dialect,…
Validating the Inactivation Effectiveness of Chemicals on Ebola Virus.
Haddock, Elaine; Feldmann, Friederike
2017-01-01
While viruses such as Ebola virus must be handled in high-containment laboratories, there remains the need to process virus-infected samples for downstream research testing. This processing often includes removal to lower containment areas and therefore requires assurance of complete viral inactivation within the sample before removal from high-containment. Here we describe methods for the removal of chemical reagents used in inactivation procedures, allowing for validation of the effectiveness of various inactivation protocols.
Kismödi, Eszter; Kiragu, Karusa; Sawicki, Olga; Smith, Sally; Brion, Sophie; Sharma, Aditi; Mworeko, Lilian; Iovita, Alexandrina
2017-12-01
In 2014, the World Health Organization (WHO) initiated a process for validation of the elimination of mother-to-child transmission (EMTCT) of HIV and syphilis by countries. For the first time in such a process for the validation of disease elimination, WHO introduced norms and approaches that are grounded in human rights, gender equality, and community engagement. This human rights-based validation process can serve as a key opportunity to enhance accountability for human rights protection by evaluating EMTCT programs against human rights norms and standards, including in relation to gender equality and by ensuring the provision of discrimination-free quality services. The rights-based validation process also involves the assessment of participation of affected communities in EMTCT program development, implementation, and monitoring and evaluation. It brings awareness to the types of human rights abuses and inequalities faced by women living with, at risk of, or affected by HIV and syphilis, and commits governments to eliminate those barriers. This process demonstrates the importance and feasibility of integrating human rights, gender, and community into key public health interventions in a manner that improves health outcomes, legitimizes the participation of affected communities, and advances the human rights of women living with HIV.
A multi-site cognitive task analysis for biomedical query mediation.
Hruby, Gregory W; Rasmussen, Luke V; Hanauer, David; Patel, Vimla L; Cimino, James J; Weng, Chunhua
2016-09-01
To apply cognitive task analyses of the Biomedical query mediation (BQM) processes for EHR data retrieval at multiple sites towards the development of a generic BQM process model. We conducted semi-structured interviews with eleven data analysts from five academic institutions and one government agency, and performed cognitive task analyses on their BQM processes. A coding schema was developed through iterative refinement and used to annotate the interview transcripts. The annotated dataset was used to reconstruct and verify each BQM process and to develop a harmonized BQM process model. A survey was conducted to evaluate the face and content validity of this harmonized model. The harmonized process model is hierarchical, encompassing tasks, activities, and steps. The face validity evaluation concluded the model to be representative of the BQM process. In the content validity evaluation, out of the 27 tasks for BQM, 19 meet the threshold for semi-valid, including 3 fully valid: "Identify potential index phenotype," "If needed, request EHR database access rights," and "Perform query and present output to medical researcher", and 8 are invalid. We aligned the goals of the tasks within the BQM model with the five components of the reference interview. The similarity between the process of BQM and the reference interview is promising and suggests the BQM tasks are powerful for eliciting implicit information needs. We contribute a BQM process model based on a multi-site study. This model promises to inform the standardization of the BQM process towards improved communication efficiency and accuracy.
Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.
1998-01-01
A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from said scan are stored and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing each input, including a preset tolerance, against the initial average input. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs, and deviation checking the good inputs by comparing each good input, including a preset tolerance, against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor is flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation the inputs from each of all the sensors are compared against the last validated measurement and the value from the sensor input that deviates the least from the last valid measurement is displayed.
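The patent abstract above walks through a concrete algorithm, so a compact sketch may help. The function and variable names are illustrative, and the tolerance handling is simplified to a single symmetric bound; this is one reading of the described scheme, not the patented implementation.

    import numpy as np

    def validate_scan(inputs, tol, last_valid):
        # Pass 1: average all stored inputs and deviation-check each one.
        inputs = np.asarray(inputs, dtype=float)
        first_avg = inputs.mean()
        good = np.abs(inputs - first_avg) <= tol   # failures are flagged suspect
        if good.sum() >= 2:                        # need at least two good inputs
            # Pass 2: average only the good inputs and re-check them.
            second_avg = inputs[good].mean()
            if np.all(np.abs(inputs[good] - second_avg) <= tol):
                return second_avg, False           # validated measurement
        # Validation fault: display the input closest to the last valid value.
        closest = inputs[np.argmin(np.abs(inputs - last_valid))]
        return closest, True

    value, fault = validate_scan([10.1, 10.2, 13.0], tol=1.5, last_valid=10.0)
    print(value, fault)  # 10.15 False: the 13.0 reading is rejected as suspect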
Validity of juvenile idiopathic arthritis diagnoses using administrative health data.
Stringer, Elizabeth; Bernatsky, Sasha
2015-03-01
Administrative health databases are valuable sources of data for conducting research including disease surveillance, outcomes research, and processes of health care at the population level. There has been limited use of administrative data to conduct studies of pediatric rheumatic conditions and no studies validating case definitions in Canada. We report a validation study of incident cases of juvenile idiopathic arthritis in the Canadian province of Nova Scotia. Cases identified through administrative data algorithms were compared to diagnoses in a clinical database. The sensitivity of algorithms that included pediatric rheumatology specialist claims was 81-86%. However, 35-48% of cases that were identified could not be verified in the clinical database depending on the algorithm used. Our case definitions would likely lead to overestimates of disease burden. Our findings may be related to issues pertaining to the non-fee-for-service remuneration model in Nova Scotia, in particular, systematic issues related to the process of submitting claims.
NASA Technical Reports Server (NTRS)
Williams, Daniel M.
2006-01-01
Described is the research process that NASA researchers used to validate the Small Aircraft Transportation System (SATS) Higher Volume Operations (HVO) concept. The four phase building-block validation and verification process included multiple elements ranging from formal analysis of HVO procedures to flight test, to full-system architecture prototype that was successfully shown to the public at the June 2005 SATS Technical Demonstration in Danville, VA. Presented are significant results of each of the four research phases that extend early results presented at ICAS 2004. HVO study results have been incorporated into the development of the Next Generation Air Transportation System (NGATS) vision and offer a validated concept to provide a significant portion of the 3X capacity improvement sought after in the United States National Airspace System (NAS).
DOT National Transportation Integrated Search
2016-08-01
This study conducted an analysis of the SCDOT HMA specification. A Research Steering Committee provided oversight : of the process. The research process included extensive statistical analyses of test data supplied by SCDOT. : A total of 2,789 AC tes...
42 CFR 493.557 - Additional submission requirements.
Code of Federal Regulations, 2013 CFR
2013-10-01
... CMS determines are necessary for validation and assessment of the State's inspection process... the information specified in §§ 493.553 and 493.555, as part of the approval and review process, an... process, including, but not limited to the following: (i) The size and composition of individual...
42 CFR 493.557 - Additional submission requirements.
Code of Federal Regulations, 2014 CFR
2014-10-01
... CMS determines are necessary for validation and assessment of the State's inspection process... the information specified in §§ 493.553 and 493.555, as part of the approval and review process, an... process, including, but not limited to the following: (i) The size and composition of individual...
42 CFR 493.557 - Additional submission requirements.
Code of Federal Regulations, 2012 CFR
2012-10-01
... CMS determines are necessary for validation and assessment of the State's inspection process... the information specified in §§ 493.553 and 493.555, as part of the approval and review process, an... process, including, but not limited to the following: (i) The size and composition of individual...
42 CFR 493.557 - Additional submission requirements.
Code of Federal Regulations, 2011 CFR
2011-10-01
... CMS determines are necessary for validation and assessment of the State's inspection process... the information specified in §§ 493.553 and 493.555, as part of the approval and review process, an... process, including, but not limited to the following: (i) The size and composition of individual...
Validation of the TTM processes of change measure for physical activity in an adult French sample.
Bernard, Paquito; Romain, Ahmed-Jérôme; Trouillet, Raphael; Gernigon, Christophe; Nigg, Claudio; Ninot, Gregory
2014-04-01
Processes of change (POC) are constructs from the transtheoretical model that propose to examine how people engage in a behavior. However, there is no consensus about a leading model explaining POC, and there is no validated French POC scale in physical activity. This study aimed to compare the different existing models in order to validate a French POC scale. Three studies, with 748 subjects included in total, were carried out to translate the items and evaluate their clarity (study 1, n = 77), to assess the factorial validity (n = 200) and invariance/equivalence (study 2, n = 471), and to analyze the concurrent validity by stage × process analyses (study 3, n = 671). Two models displayed adequate fit to the data; however, based on the Akaike information criterion, the fully correlated five-factor model appeared the most appropriate to measure POC in physical activity. The invariance/equivalence was also confirmed across genders and student status. Four of the five existing factors discriminated pre-action and post-action stages. These data support the validation of the POC questionnaire in physical activity among a French sample. More research is needed to explore the longitudinal properties of this scale.
NASA Astrophysics Data System (ADS)
Anaperta, M.; Helendra, H.; Zulva, R.
2018-04-01
This study aims to describe the validity of a character-values-oriented physics module using a process skills approach for dynamic electricity material in high school (SMA/MA) and vocational school (SMK) physics. The study is development research. The module development model follows the model proposed by Plomp, which consists of (1) a preliminary research phase, (2) a prototyping phase, and (3) an assessment phase. This paper reports the initial investigation and design phases. Validity data were collected through observation and questionnaires. In the initial investigation phase, curriculum analysis, student analysis, and concept analysis were conducted. In the design and realization phase, the module was designed for SMA/MA and SMK subjects covering dynamic electricity material. This was followed by formative evaluation, including self-evaluation and prototyping (expert reviews, one-to-one, and small-group evaluation), at which stage validity was assessed. The research data were obtained through the module validation sheet, which resulted in a valid module.
SeaSat-A Satellite Scatterometer (SASS) Validation and Experiment Plan
NASA Technical Reports Server (NTRS)
Schroeder, L. C. (Editor)
1978-01-01
This plan was generated by the SeaSat-A satellite scatterometer experiment team to define the pre- and post-launch activities necessary to conduct sensor validation and geophysical evaluation. Details include an instrument and experiment description, performance requirements, success criteria, constraints, mission requirements, data processing requirements, and data analysis responsibilities.
An Examination of the Construct Validity of the Inventory of Classroom Management Style.
ERIC Educational Resources Information Center
Martin, Nancy K.; Baldwin, Beatrice
Confirmatory factor analysis was used to examine the construct validity of a new instrument measuring perceptions toward classroom management: the Inventory of Classroom Management Style (ICMS). Classroom management was defined as a multifaceted process that includes three broad dimensions: (1) what teachers believe about students as persons; (2)…
48 CFR 1852.245-73 - Financial reporting of NASA property in the custody of contractors.
Code of Federal Regulations, 2013 CFR
2013-10-01
... due. However, contractors' procedures must document the process for developing these estimates based... shall have formal policies and procedures, which address the validation of NF 1018 data, including data... validation is to ensure that information reported is accurate and in compliance with the NASA FAR Supplement...
48 CFR 1852.245-73 - Financial reporting of NASA property in the custody of contractors.
Code of Federal Regulations, 2012 CFR
2012-10-01
... due. However, contractors' procedures must document the process for developing these estimates based... shall have formal policies and procedures, which address the validation of NF 1018 data, including data... validation is to ensure that information reported is accurate and in compliance with the NASA FAR Supplement...
Tsimicalis, Argerie; Le May, Sylvie; Stinson, Jennifer; Rennick, Janet; Vachon, Marie-France; Louli, Julie; Bérubé, Sarah; Treherne, Stephanie; Yoon, Sunmoo; Nordby Bøe, Trude; Ruland, Cornelia
Sisom is an interactive tool designed to help children communicate their cancer symptoms. Important design issues relevant to other cancer populations remain unexplored. This single-site, descriptive, qualitative study was conducted to linguistically validate Sisom with a group of French-speaking children with cancer, their parents, and health care professionals. The linguistic validation process included 6 steps: (1) forward translation, (2) backward translation, (3) patient testing, (4) production of a Sisom French version, (5) patient testing this version, and (6) production of the final Sisom French prototype. Five health care professionals and 10 children and their parents participated in the study. Health care professionals oversaw the translation process providing clinically meaningful suggestions. Two rounds of patient testing, which included parental participation, resulted in the following themes: (1) comprehension, (2) suggestions for improving the translations, (3) usability, (4) parental engagement, and (5) overall impression. Overall, Sisom was well received by participants who were forthcoming with input and suggestions for improving the French translations. Our proposed methodology may be replicated for the linguistic validation of other e-health tools.
Dobecki, Marek
2012-01-01
This paper reviews the requirements for measurement methods for chemical agents in the air at workstations. European standards, which have the status of Polish standards, comprise requirements and information on sampling strategy, measuring techniques, types of samplers, sampling pumps, and methods of occupational exposure evaluation for a given technological process. Measurement methods, including air sampling and the analytical procedure in a laboratory, should be appropriately validated before intended use. In the validation process, selected methods are tested and a budget of uncertainty is set up. This paper presents the validation procedure that should be implemented in the laboratory, together with suitable statistical tools and the major components of uncertainty to be taken into consideration. Methods of quality control, including sampling and laboratory analyses, are discussed. The relative expanded uncertainty of each measurement, expressed as a percentage, should not exceed the limit values set depending on the type of occupational exposure (short-term or long-term) and the magnitude of exposure to chemical agents in the work environment.
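The "budget of uncertainty" mentioned above is usually assembled by combining component relative standard uncertainties in quadrature and multiplying by a coverage factor. A minimal sketch follows; the component names and values are invented, and the k = 2 coverage factor reflects common practice (roughly 95% confidence) rather than any specific clause of the standards cited.

    import math

    # Invented relative standard uncertainties (as fractions) for one method.
    components = {
        "sampling_flow_rate": 0.03,
        "sampled_air_volume": 0.02,
        "analytical_recovery": 0.04,
        "calibration": 0.025,
    }

    u_combined = math.sqrt(sum(u ** 2 for u in components.values()))
    U_expanded = 2 * u_combined   # coverage factor k = 2 (~95% confidence)
    print(f"relative expanded uncertainty: {100 * U_expanded:.1f} %")
    # The European standards referred to above cap this percentage, with the
    # limit depending on exposure type (short/long-term) and concentration range.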
Burkey, Matthew D.; Ghimire, Lajina; Adhikari, Ramesh P.; Kohrt, Brandon A.; Jordans, Mark J. D.; Haroz, Emily; Wissow, Lawrence
2017-01-01
Systematic processes are needed to develop valid measurement instruments for disruptive behavior disorders (DBDs) in cross-cultural settings. We employed a four-step process in Nepal to identify and select items for a culturally valid assessment instrument: 1) We extracted items from validated scales and local free-list interviews. 2) Parents, teachers, and peers (n=30) rated the perceived relevance and importance of behavior problems. 3) Highly rated items were piloted with children (n=60) in Nepal. 4) We evaluated internal consistency of the final scale. We identified 49 symptoms from 11 scales, and 39 behavior problems from free-list interviews (n=72). After dropping items for low ratings of relevance and severity and for poor item-test correlation, low frequency, and/or poor acceptability in pilot testing, 16 items remained for the Disruptive Behavior International Scale—Nepali version (DBIS-N). The final scale had good internal consistency (α=0.86). A 4-step systematic approach to scale development including local participation yielded an internally consistent scale that included culturally relevant behavior problems. PMID:28093575
Issues in developing valid assessments of speech pathology students' performance in the workplace.
McAllister, Sue; Lincoln, Michelle; Ferguson, Alison; McAllister, Lindy
2010-01-01
Workplace-based learning is a critical component of professional preparation in speech pathology. A validated assessment of this learning is seen to be 'the gold standard', but it is difficult to develop because of design and validation issues. These issues include the role and nature of judgement in assessment, challenges in measuring quality, and the relationship between assessment and learning. Valid assessment of workplace-based performance needs to capture the development of competence over time and account for both occupation-specific and generic competencies. This paper reviews important conceptual issues in the design of valid and reliable workplace-based assessments of competence, including assessment content, process, impact on learning, measurement issues, and validation strategies. It then goes on to share what has been learned about quality assessment and validation of a workplace-based performance assessment using competency-based ratings. The outcomes of a four-year national development and validation of an assessment tool are described. A literature review of issues in conceptualizing, designing, and validating workplace-based assessments was conducted. Key factors to consider in the design of a new tool were identified and built into the cycle of design, trialling, and data analysis in the validation stages of the development process. This paper provides an accessible overview of factors to consider in the design and validation of workplace-based assessment tools. It presents strategies used in the development and national validation of a tool, COMPASS, used in every speech pathology programme in Australia, New Zealand, and Singapore. The paper also describes Rasch analysis, a model-based statistical approach which is useful for establishing the validity and reliability of assessment tools. Through careful attention to conceptual and design issues in the development and trialling of workplace-based assessments, it has been possible to develop the world's first valid and reliable national assessment tool for the assessment of performance in speech pathology.
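Since the paper points to Rasch analysis, a brief reminder of the model may be useful: in its simplest dichotomous form, the probability of a successful response depends only on the gap between person ability and item difficulty on a logit scale. The sketch below shows that simplest form only; a competency rating tool such as the one described would in practice use a polytomous extension.

    import math

    def rasch_probability(theta, b):
        # Dichotomous Rasch model: P(success) given person ability theta
        # and item difficulty b, both on the same logit scale.
        return 1.0 / (1.0 + math.exp(-(theta - b)))

    # A person 1 logit above an item's difficulty succeeds ~73% of the time.
    print(round(rasch_probability(theta=1.0, b=0.0), 3))  # 0.731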
An adaptive management process for forest soil conservation.
Michael P. Curran; Douglas G. Maynard; Ronald L. Heninger; Thomas A. Terry; Steven W. Howes; Douglas M. Stone; Thomas Niemann; Richard E. Miller; Robert F. Powers
2005-01-01
Soil disturbance guidelines should be based on comparable disturbance categories adapted to specific local soil conditions, validated by monitoring and research. Guidelines, standards, and practices should be continually improved based on an adaptive management process, which is presented in this paper. Core components of this process include: reliable monitoring...
Hooshmand, Elaheh; Tourani, Sogand; Ravaghi, Hamid; Vafaee Najar, Ali; Meraji, Marziye; Ebrahimipour, Hossein
2015-04-08
The purpose of implementing a system such as Clinical Governance (CG) is to integrate, establish and globalize distinct policies in order to improve quality through increasing professional knowledge and the accountability of healthcare professionals toward providing clinical excellence. Since CG is related to change, and change requires money and time, CG implementation has to be focused on priority areas that are in more dire need of change. The purpose of the present study was to validate and determine the significance of items used for evaluating CG implementation. The present study was descriptive-quantitative in method and design. Items used for evaluating CG implementation were first validated by the Delphi method and then compared with one another and ranked based on the Analytical Hierarchy Process (AHP) model. The items that were validated for evaluating CG implementation in Iran include performance evaluation, training and development, personnel motivation, clinical audit, clinical effectiveness, risk management, resource allocation, policies and strategies, external audit, information system management, research and development, CG structure, implementation prerequisites, the management of patients' non-medical needs, complaints and patients' participation in the treatment process. The most important items based on their degree of significance were training and development, performance evaluation, and risk management. The least important items included the management of patients' non-medical needs, patients' participation in the treatment process and research and development. The fundamental requirements of CG implementation included having an effective policy at national level, avoiding perfectionism, using the expertise and potentials of the entire country and the coordination of this model with other models of quality improvement such as accreditation and patient safety.
Kim, Jeong-Eon; Park, Eun-Jun
2015-04-01
The purpose of this study was to validate the Korean version of the Ethical Leadership at Work questionnaire (K-ELW), which measures RNs' perceived ethical leadership of their nurse managers. The strong validation process suggested by Benson (1998), including a translation and cultural adaptation stage, a structural stage, and an external stage, was used. Participants were 241 RNs who rated their managers' ethical leadership using both the pre-version of the K-ELW and a previously established Ethical Leadership Scale, rated their managers' interactional justice, and reported their own demographics, organizational commitment, and organizational citizenship behavior. Data analyses included descriptive statistics, Pearson correlation coefficients, reliability coefficients, exploratory factor analysis, and confirmatory factor analysis. SPSS 19.0 and Amos 18.0 were used. A modified K-ELW was developed from the construct validity evidence and included 31 items in 7 domains: people orientation, task responsibility fairness, relationship fairness, power sharing, concern for sustainability, ethical guidance, and integrity. Convergent validity, discriminant validity, and concurrent validity were supported according to the correlation coefficients of the 7 domains with the other measures. The results of this study provide preliminary evidence that the modified K-ELW can be adopted in Korean nursing organizations, and reliable and valid ethical leadership scores can be expected.
Magnetic Field Satellite (Magsat) data processing system specifications
NASA Technical Reports Server (NTRS)
Berman, D.; Gomez, R.; Miller, A.
1980-01-01
The software specifications for the MAGSAT data processing system (MDPS) are presented. The MDPS is divided functionally into preprocessing of primary input data, data management, chronicle processing, and postprocessing. Data organization and validity, and checks of spacecraft and instrumentation, are discussed. Output products of the MDPS, including various plots and data tapes, are described. Formats for important tapes are presented. Discussions and mathematical formulations for coordinate transformations and field model coefficients are included.
Examining the validity of self-reports on scales measuring students' strategic processing.
Samuelstuen, Marit S; Bråten, Ivar
2007-06-01
Self-report inventories trying to measure strategic processing at a global level have been much used in both basic and applied research. However, the validity of global strategy scores is open to question because such inventories assess strategy perceptions outside the context of specific task performance. The primary aim was to examine the criterion-related and construct validity of the global strategy data obtained with the Cross-Curricular Competencies (CCC) scale. Additionally, we wanted to compare the validity of these data with the validity of data obtained with a task-specific self-report inventory focusing on the same types of strategies. The sample included 269 10th-grade students from 12 different junior high schools. Global strategy use as assessed with the CCC was compared with task-specific strategy use reported in three different reading situations. Moreover, relationships between scores on the CCC and scores on measures of text comprehension were examined and compared with relationships between scores on the task-specific strategy measure and the same comprehension measures. The comparison between the CCC strategy scores and the task-specific strategy scores suggested only modest criterion-related validity for the data obtained with the global strategy inventory. The CCC strategy scores were also not related to the text comprehension measures, indicating poor construct validity. In contrast, the task-specific strategy scores were positively related to the comprehension measures, indicating good construct validity. Attempts to measure strategic processing at a global level seem to have limited validity and utility.
Development and validation of nonthermal and advanced thermal food safety intervention technologies
USDA-ARS?s Scientific Manuscript database
Alternative nonthermal and thermal food safety interventions are gaining acceptance by the food processing industry and consumers. These technologies include high pressure processing, ultraviolet and pulsed light, ionizing radiation, pulsed and radiofrequency electric fields, cold atmospheric plasm...
Lee, Shin-Young; Lee, Eunice E
2015-02-01
The purpose of this study was to report the instrument modification and validation processes to make existing health belief model scales culturally appropriate for Korean Americans (KAs) regarding colorectal cancer (CRC) screening utilization. Instrument translation, individual interviews using cognitive interviewing, and expert reviews were conducted during the instrument modification phase, and a pilot test and a cross-sectional survey were conducted during the instrument validation phase. Data analyses of the cross-sectional survey included internal consistency and construct validity using exploratory and confirmatory factor analysis. The main issues identified during the instrument modification phase were (a) cultural and linguistic translation issues and (b) newly developed items reflecting Korean cultural barriers. Cross-sectional survey analyses during the instrument validation phase revealed that all scales demonstrate good internal consistency reliability (Cronbach's alpha=.72~.88). Exploratory factor analysis showed that susceptibility and severity loaded on the same factor, which may indicate a threat variable. Items with low factor loadings in the confirmatory factor analysis may relate to (a) lack of knowledge about fecal occult blood testing and (b) multiple dimensions of the subscales. Methodological, sequential processes of instrument modification and validation, including translation, individual interviews, expert reviews, pilot testing and a cross-sectional survey, were provided in this study. The findings indicate that existing instruments need to be examined for CRC screening research involving KAs.
Validation of learning assessments: A primer.
Peeters, Michael J; Martin, Beth A
2017-09-01
The Accreditation Council for Pharmacy Education's Standards 2016 has placed greater emphasis on validating educational assessments. In this paper, we describe validity, reliability, and validation principles, drawing attention to the conceptual change that highlights one validity with multiple evidence sources; to this end, we recommend abandoning historical (confusing) terminology associated with the term validity. Further, we describe and apply Kane's framework (scoring, generalization, extrapolation, and implications) for the process of validation, with its inferences and conclusions from varied uses of assessment instruments by different colleges and schools of pharmacy. We then offer five practical recommendations that can improve reporting of validation evidence in pharmacy education literature. We describe application of these recommendations, including examples of validation evidence in the context of pharmacy education. After reading this article, the reader should be able to understand the current concept of validation, and use a framework as they validate and communicate their own institution's learning assessments.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Smith, Kandler A; Santhanagopalan, Shriram; Yang, Chuanbo
Computer models are helping to accelerate the design and validation of next-generation batteries and provide valuable insights not possible through experimental testing alone. Validated 3-D physics-based models exist for predicting the electrochemical performance and the thermal and mechanical response of cells and packs under normal and abuse scenarios. The talk describes present efforts to make the models better suited for engineering design, including improving their computation speed, developing faster processes for model parameter identification (including under aging), and predicting the performance of a proposed electrode material recipe a priori using microstructure models.
Kurimoto, Shigeru; Suzuki, Mikako; Yamamoto, Michiro; Okui, Nobuyuki; Imaeda, Toshihiko; Hirata, Hitoshi
2011-11-01
The purpose of this study is to develop a short and valid measure for upper extremity disorders and to assess the effect of attached illustrations in the item reduction of a self-administered disability questionnaire while retaining psychometric properties. A validated questionnaire used to assess upper extremity disorders, the Hand20, was reduced to ten items using two item-reduction techniques. The psychometric properties of the abbreviated form, the Hand10, were evaluated on a sample independent of the one used for the shortening process. Validity, reliability, and responsiveness of the Hand10 were retained in the item reduction process. It is possible that the explanatory illustrations attached to the Hand10 helped with its reproducibility. The illustrations for the Hand10 promoted text comprehension and motivation to answer the items. These changes resulted in high acceptability; more than 99.3% of patients, including 98.5% of elderly patients, could complete the Hand10 properly. The illustrations had favorable effects on the item reduction process and made it possible to retain the precision of the instrument. The Hand10 is a reliable and valid instrument for individual-level applications, with the advantage of being compact and broadly applicable, even in elderly individuals.
Funding for the 2ND IAEA technical meeting on fusion data processing, validation and analysis
DOE Office of Scientific and Technical Information (OSTI.GOV)
Greenwald, Martin
The International Atomic Energy Agency (IAEA) will organize the second Technical Meeting on Fusion Data Processing, Validation and Analysis from 30 May to 02 June, 2017, in Cambridge, MA USA. The meeting will be hosted by the MIT Plasma Science and Fusion Center (PSFC). The objective of the meeting is to provide a platform where a set of topics relevant to fusion data processing, validation and analysis are discussed with the view of extrapolation needs to next step fusion devices such as ITER. The validation and analysis of experimental data obtained from diagnostics used to characterize fusion plasmas are crucial for a knowledge-based understanding of the physical processes governing the dynamics of these plasmas. The meeting will aim at fostering, in particular, discussions of research and development results that set out or underline trends observed in the current major fusion confinement devices. General information on the IAEA, including its mission and organization, can be found at the IAEA website. Meeting topics include: uncertainty quantification (UQ); model selection, validation, and verification (V&V); probability theory and statistical analysis; inverse problems and equilibrium reconstruction; integrated data analysis; real-time data analysis; machine learning; signal/image processing and pattern recognition; experimental design and synthetic diagnostics; and data management.
Demonstration of automated proximity and docking technologies
NASA Astrophysics Data System (ADS)
Anderson, Robert L.; Tsugawa, Roy K.; Bryan, Thomas C.
An autodock was demonstrated using straightforward techniques and real sensor hardware. A simulation testbed was established and validated. The sensor design was refined with improved optical performance and image processing noise mitigation techniques, and the sensor is ready for production from off-the-shelf components. The autonomous spacecraft architecture is defined. The areas of sensors, docking hardware, propulsion, and avionics are included in the design. The Guidance, Navigation and Control architecture and requirements are developed. Modular structures suitable for automated control are used. The spacecraft system manager functions, including configuration, resource, and redundancy management, are defined. The requirements for an autonomous spacecraft executive are defined; high-level decision-making, mission planning, and mission contingency recovery are a part of this. The next step is to do flight demonstrations. After the presentation the following question was asked: How do you define validation? There are two components to the validation definition: software simulation with formal and rigorous validation, and hardware and facility performance validated with respect to software already validated against an analytical profile.
Item validity vs. item discrimination index: a redundancy?
NASA Astrophysics Data System (ADS)
Panjaitan, R. L.; Irawati, R.; Sujana, A.; Hanifah, N.; Djuanda, D.
2018-03-01
In the literature on evaluation and test analysis, it is common to find calculations of item validity as well as an item discrimination index (D), with a different formula for each. Meanwhile, other resources state that the item discrimination index can be obtained by calculating the correlation between examinees' scores on a particular item and their scores on the overall test, which is actually the same concept as item validity. Some research reports, especially undergraduate theses, tend to include both item validity and the item discrimination index in the instrument analysis. These concepts may overlap, since both reflect the quality of a test in measuring examinees' ability. In this paper, examples of data processing results for item validity and the item discrimination index are compared, and we discuss whether item validity and the item discrimination index can be represented by only one of them, or whether it is better to present both calculations in simple test analysis, especially in undergraduate theses where test analyses are included.
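To make the comparison concrete, the sketch below computes both statistics on made-up dichotomous test data: item validity as the item-total (point-biserial) correlation, and the classical discrimination index D from upper and lower 27% groups. The data and group fraction are illustrative assumptions.

    import numpy as np

    rng = np.random.default_rng(0)
    # rows = examinees, columns = dichotomously scored items (1 = correct)
    scores = (rng.random((100, 10)) < np.linspace(0.3, 0.8, 10)).astype(int)
    total = scores.sum(axis=1)

    def item_validity(item, total):
        # Item validity as the item-total (point-biserial) correlation.
        return np.corrcoef(item, total)[0, 1]

    def discrimination_index(item, total, frac=0.27):
        # Classical D: proportion correct in the upper group minus the lower group.
        n = int(len(total) * frac)
        order = np.argsort(total)
        lower, upper = order[:n], order[-n:]
        return item[upper].mean() - item[lower].mean()

    for i in range(scores.shape[1]):
        r = item_validity(scores[:, i], total)
        d = discrimination_index(scores[:, i], total)
        print(f"item {i + 1}: r = {r:+.2f}, D = {d:+.2f}")
    # The two columns rank the items almost identically, which is the overlap
    # the paper discusses (note: r here includes the item in the total score;
    # a corrected item-total correlation would exclude it).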
DOT National Transportation Integrated Search
2010-03-01
Transportation corridor-planning processes are well understood, and consensus exists among practitioners : about common practices for stages and tasks included in traditional EIS approaches. However, traditional approaches do : not typically employ f...
Passos, Isis Pienta Batista Dias; Padoveze, Maria Clara; Roseira, Camila Eugênia; de Figueiredo, Rosely Moralez
2015-01-01
The aim was to adapt and validate, by expert consensus, a set of indicators used to assess the sterilization process of dental, medical and hospital supplies to be used in PHC services. This was a qualitative methodological study performed in two stages. The first stage included a focal group composed of experts to adapt the indicators to be used in PHC. In the second stage, the indicators were validated using a 4-point Likert scale, which was completed by judges. A Content Validity Index of ≥ 0.75 was considered to show approval of the indicators. The adaptations implemented by the focal group mainly referred to the physical structure, inclusion of dental care professionals, inclusion of chemical disinfection, and replacement of the hot air and moist heat sterilization methods. The validation stage resulted in an index of 0.96, which ranged from 0.90 to 1.00, for the components of the indicators. The judges considered the indicators after adaptation to be validated. Even though there may be differences among items processed around the world, there certainly are common characteristics, especially in countries with economic and cultural environments similar to Brazil. The inclusion of these indicators to assess the safety of healthcare supplies used in PHC services should be considered.
Development of the Clinical Teaching Effectiveness Questionnaire in the United States.
Wormley, Michelle E; Romney, Wendy; Greer, Anna E
2017-01-01
The purpose of this study was to develop a valid measure for assessing clinical teaching effectiveness within the field of physical therapy. The Clinical Teaching Effectiveness Questionnaire (CTEQ) was developed via a 4-stage process, including (1) initial content development, (2) content analysis with 8 clinical instructors with over 5 years of clinical teaching experience, (3) pilot testing with 205 clinical instructors from 2 universities in the Northeast of the United States, and (4) psychometric evaluation, including principal component analysis. The scale development process resulted in a 30-item questionnaire with 4 sections that relate to clinical teaching: learning experiences, learning environment, communication, and evaluation. The CTEQ provides a preliminary valid measure for assessing clinical teaching effectiveness in physical therapy practice.
McCue, J; Osborne, D; Dumont, J; Peters, R; Mei, B; Pierce, G F; Kobayashi, K; Euwart, D
2014-01-01
Recombinant factor IX Fc (rFIXFc) fusion protein is the first of a new class of bioengineered long-acting factors approved for the treatment and prevention of bleeding episodes in haemophilia B. The aim of this work was to describe the manufacturing process for rFIXFc, to assess product quality and to evaluate the capacity of the process to remove impurities and viruses. This manufacturing process utilized a transferable and scalable platform approach established for therapeutic antibody manufacturing and adapted for production of the rFIXFc molecule. rFIXFc was produced using a process free of human- and animal-derived raw materials and a host cell line derived from human embryonic kidney (HEK) 293H cells. The process employed multi-step purification and viral clearance processing, including use of a protein A affinity capture chromatography step, which binds to the Fc portion of the rFIXFc molecule with high affinity and specificity, and a 15 nm pore size virus removal nanofilter. Process validation studies were performed to evaluate identity, purity, activity and safety. The manufacturing process produced rFIXFc with consistent product quality and high purity. Impurity clearance validation studies demonstrated robust and reproducible removal of process-related impurities and adventitious viruses. The rFIXFc manufacturing process produces a highly pure product, free of non-human glycan structures. Validation studies demonstrate that this product is produced with consistent quality and purity. In addition, the scalability and transferability of this process are key attributes to ensure consistent and continuous supply of rFIXFc. PMID:24811361
PIV Data Validation Software Package
NASA Technical Reports Server (NTRS)
Blackshire, James L.
1997-01-01
A PIV data validation and post-processing software package was developed to provide semi-automated data validation and data reduction capabilities for Particle Image Velocimetry data sets. The software provides three primary capabilities including (1) removal of spurious vector data, (2) filtering, smoothing, and interpolating of PIV data, and (3) calculations of out-of-plane vorticity, ensemble statistics, and turbulence statistics information. The software runs on an IBM PC/AT host computer working either under Microsoft Windows 3.1 or Windows 95 operating systems.
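The spurious-vector removal step can be illustrated with a generic local median test, a common approach in PIV post-processing; this is a sketch of that general technique, not the algorithm of the package described above, and the threshold is an assumption:

```python
import numpy as np

def flag_spurious(u, threshold=4.0):
    """Flag velocity components that deviate strongly from the median of
    their 3x3 neighborhood, scaled by the neighborhood's median absolute
    deviation (MAD). Border cells are left unflagged for simplicity."""
    flagged = np.zeros_like(u, dtype=bool)
    for i in range(1, u.shape[0] - 1):
        for j in range(1, u.shape[1] - 1):
            neigh = np.delete(u[i-1:i+2, j-1:j+2].ravel(), 4)  # drop center
            med = np.median(neigh)
            mad = np.median(np.abs(neigh - med)) + 1e-9
            if abs(u[i, j] - med) / mad > threshold:
                flagged[i, j] = True
    return flagged

u = np.random.default_rng(0).standard_normal((8, 8))
u[4, 4] = 25.0  # plant an outlier
print(np.argwhere(flag_spurious(u)))  # the planted outlier should be flagged
```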
Dimension from covariance matrices.
Carroll, T L; Byers, J M
2017-02-01
We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
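The core comparison can be sketched as follows; this simplified illustration shows only the eigenvalue spectra being compared and omits the statistical test that yields the validity probability:

```python
import numpy as np

def embed(x, dim, delay=1):
    """Delay-embed a 1-D signal into an (n, dim) matrix."""
    n = len(x) - (dim - 1) * delay
    return np.column_stack([x[i * delay : i * delay + n] for i in range(dim)])

rng = np.random.default_rng(0)
signal = np.sin(np.linspace(0, 100, 2000)) + 0.1 * rng.standard_normal(2000)

dim = 5
emb = embed(signal, dim)
eig_signal = np.sort(np.linalg.eigvalsh(np.cov(emb.T)))[::-1]

# Reference: a Gaussian random process with the same dimension and length.
gauss = rng.standard_normal(emb.shape)
eig_gauss = np.sort(np.linalg.eigvalsh(np.cov(gauss.T)))[::-1]

print("signal eigenvalues:  ", np.round(eig_signal, 4))
print("Gaussian eigenvalues:", np.round(eig_gauss, 4))
```

A structured signal concentrates variance in a few leading eigenvalues, while the Gaussian reference spreads it nearly evenly; the paper's statistical test formalizes that contrast.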
Scattering of laser light - more than just smoke and mirrors
NASA Technical Reports Server (NTRS)
Davis, Anthony B.; Love, Stephen; Cahalan, Robert
2004-01-01
A short course on off-beam cloud lidar is given. Specific topics addressed include: motivation and goal of off-beam cloud lidar; diffusion physics; numerical analysis; and validity of the diffusion approximation. A demo of the process is included.
Digital Fly-By-Wire Flight Control Validation Experience
NASA Technical Reports Server (NTRS)
Szalai, K. J.; Jarvis, C. R.; Krier, G. E.; Megna, V. A.; Brock, L. D.; Odonnell, R. N.
1978-01-01
The experience gained in digital fly-by-wire technology through a flight test program being conducted by the NASA Dryden Flight Research Center in an F-8C aircraft is described. The system requirements are outlined, along with the requirements for flight qualification. The system is described, including the hardware components, the aircraft installation, and the system operation. The flight qualification experience is emphasized. The qualification process included the theoretical validation of the basic design, laboratory testing of the hardware and software elements, systems level testing, and flight testing. The most productive testing was performed on an iron bird aircraft, which used the actual electronic and hydraulic hardware and a simulation of the F-8 characteristics to provide the flight environment. The iron bird was used for sensor and system redundancy management testing, failure modes and effects testing, and stress testing in many cases with the pilot in the loop. The flight test program confirmed the quality of the validation process by achieving 50 flights without a known undetected failure and with no false alarms.
Vagos, Paula; Rijo, Daniel; Santos, Isabel M
2016-04-01
Relatively little is known about measures used to investigate the validity and applications of social information processing theory. The Scenes for Social Information Processing in Adolescence includes items built using a participatory approach to evaluate the attribution of intent, emotion intensity, response evaluation, and response decision steps of social information processing. We evaluated a sample of 802 Portuguese adolescents (61.5% female; mean age = 16.44 years old) using this instrument. Item analysis and exploratory and confirmatory factor analytic procedures were used for psychometric examination. Two measures for attribution of intent were produced (hostile and neutral), along with 3 emotion measures focused on negative emotional states, 8 response evaluation measures, and 4 response decision measures, including prosocial and impaired social behavior. All of these measures achieved good internal consistency values and fit indicators. Boys seemed to favor and choose overt and relational aggression behaviors more often; girls conveyed higher levels of neutral attribution, sadness, assertiveness, and passiveness. The Scenes for Social Information Processing in Adolescence achieved adequate psychometric results and seems a valuable alternative for evaluating social information processing, even if it is essential to continue investigation into its internal and external validity. (c) 2016 APA, all rights reserved.
Draft Plan for Characterizing Commercial Data Products in Support of Earth Science Research
NASA Technical Reports Server (NTRS)
Ryan, Robert E.; Terrie, Greg; Berglund, Judith
2006-01-01
This presentation introduces a draft plan for characterizing commercial data products for Earth science research. The general approach to the commercial product verification and validation includes focused selection of readily available commercial remote sensing products that support Earth science research. Ongoing product verification and characterization will question whether the product meets specifications and will examine its fundamental properties, potential and limitations. Validation will encourage product evaluation for specific science research and applications. Specific commercial products included in the characterization plan include high-spatial-resolution multispectral (HSMS) imagery and LIDAR data products. Future efforts in this process will include briefing NASA headquarters and modifying plans based on feedback, increased engagement with the science community and refinement of details, coordination with commercial vendors and The Joint Agency Commercial Imagery Evaluation (JACIE) for HSMS satellite acquisitions, acquiring waveform LIDAR data and performing verification and validation.
Methodological challenges when doing research that includes ethnic minorities: a scoping review.
Morville, Anne-Le; Erlandsson, Lena-Karin
2016-11-01
There are challenging methodological issues in obtaining valid and reliable results on which to base occupational therapy interventions for ethnic minorities. The aim of this scoping review is to describe the methodological problems within occupational therapy research when ethnic minorities are included. A thorough literature search yielded 21 articles obtained from the scientific databases PubMed, Cinahl, Web of Science and PsycINFO. Analysis followed Arksey and O'Malley's framework for scoping reviews, applying content analysis. The results showed methodological issues concerning the entire research process, from defining and recruiting samples, conceptual understanding, and the lack of appropriate instruments, to data collection using interpreters and data analysis. In order to avoid excluding ethnic minorities from adequate occupational therapy research and interventions, development of methods for the entire research process is needed. It is a costly and time-consuming process, but the results will be valid and reliable, and therefore more applicable in clinical practice.
How to Conduct Ethnographic Research
ERIC Educational Resources Information Center
Sangasubana, Nisaratana
2011-01-01
The purpose of this paper is to describe the process of conducting ethnographic research. Methodology definition and key characteristics are given. The stages of the research process are described including preparation, data gathering and recording, and analysis. Important issues such as reliability and validity are also discussed.
Measuring Advance Care Planning: Optimizing the Advance Care Planning Engagement Survey.
Sudore, Rebecca L; Heyland, Daren K; Barnes, Deborah E; Howard, Michelle; Fassbender, Konrad; Robinson, Carole A; Boscardin, John; You, John J
2017-04-01
A validated 82-item Advance Care Planning (ACP) Engagement Survey measures a broad range of behaviors. However, concise surveys are needed. The objective of this study was to validate shorter versions of the survey. The survey included 57 process (e.g., readiness) and 25 action items (e.g., discussions). For item reduction, we systematically eliminated questions based on face validity, item nonresponse, redundancy, ceiling effects, and factor analysis. We assessed internal consistency (Cronbach's alpha) and construct validity with cross-sectional correlations and the ability of the progressively shorter survey versions to detect change one week after exposure to an ACP intervention (Pearson correlation coefficients). Five hundred one participants (four Canadian and three US sites) were included in item reduction (mean age 69 years [±10], 41% nonwhite). Because of high correlations between readiness and action items, all action items were removed. Because of high correlations and ceiling effects, two process items were removed. Successive factor analysis then created 55-, 34-, 15-, nine-, and four-item versions; 664 participants (from three US ACP clinical trials) were included in validity analysis (age 65 years [±8], 72% nonwhite, 34% Spanish speaking). Cronbach's alphas were high for all versions (0.84 for the four-item version to 0.97 for the 55-item version). Compared with the original survey, cross-sectional correlations were high (four items, 0.85; 55 items, 0.97), as were delta correlations (four items, 0.68; 55 items, 0.93). Shorter versions of the ACP Engagement Survey are valid, internally consistent, and able to detect change across a broad range of ACP behaviors for English and Spanish speakers. Shorter ACP surveys can efficiently measure broad ACP behaviors in research and clinical settings. Published by Elsevier Inc.
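For readers unfamiliar with the internal-consistency statistic reported above, here is a self-contained sketch of Cronbach's alpha on hypothetical survey responses (the data below are invented, not from the study):

```python
import numpy as np

def cronbach_alpha(items):
    """items: array of shape (respondents, items)."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    sum_item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - sum_item_var / total_var)

rng = np.random.default_rng(1)
base = rng.integers(1, 6, size=(100, 1))                       # shared trait
responses = np.clip(base + rng.integers(-1, 2, size=(100, 4)), 1, 5)
print(f"alpha = {cronbach_alpha(responses):.2f}")
```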
ERIC Educational Resources Information Center
Indhraratana, Apinya; Kaemkate, Wannee
2012-01-01
The aim of this paper is to develop a reliable and valid tool to assess ethical decision-making ability of nursing students using rubrics. A proposed ethical decision making process, from reviewing related literature was used as a framework for developing the rubrics. Participants included purposive sample of 86 nursing students from the Royal…
ERIC Educational Resources Information Center
Brückner, Sebastian; Pellegrino, James W.
2016-01-01
The Standards for Educational and Psychological Testing indicate that validation of assessments should include analyses of participants' response processes. However, such analyses typically are conducted only to supplement quantitative field studies with qualitative data, and seldom are such data connected to quantitative data on student or item…
Comparison of C5 and C6 Aqua-MODIS Dark Target Aerosol Validation
NASA Technical Reports Server (NTRS)
Munchak, Leigh A.; Levy, Robert C.; Mattoo, Shana
2014-01-01
We compare Collection 5 (C5) and Collection 6 (C6) validation, evaluating the C6 10 km aerosol product against the well-validated and trusted C5 aerosol product on global and regional scales. Only the 10 km aerosol product is evaluated in this study; validation of the new C6 3 km aerosol product still needs to be performed. Not all of the time series has been processed yet for C5 or C6, and the years processed for the two products are not exactly the same (this work is preliminary!). To reduce the impact of outlier observations, MODIS is spatially averaged within 27.5 km of the AERONET site, and AERONET is temporally averaged within 30 minutes of the MODIS overpass time. Only high-quality (QA = 3 over land, QA greater than 0 over ocean) pixels are included in the mean.
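The matchup rule above (spatial mean within 27.5 km, temporal mean within 30 minutes, high-QA pixels only) can be sketched as below; the data layout is an assumption, and only the over-land QA rule (QA = 3) is shown:

```python
import numpy as np

EARTH_RADIUS_KM = 6371.0

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * EARTH_RADIUS_KM * np.arcsin(np.sqrt(a))

def matchup(modis_pixels, aeronet_obs, site_lat, site_lon, overpass_min):
    """modis_pixels: iterable of (lat, lon, aod, qa);
    aeronet_obs: iterable of (minutes_of_day, aod)."""
    sat = [aod for lat, lon, aod, qa in modis_pixels
           if qa == 3 and haversine_km(lat, lon, site_lat, site_lon) <= 27.5]
    grd = [aod for minutes, aod in aeronet_obs
           if abs(minutes - overpass_min) <= 30]
    return (np.mean(sat) if sat else np.nan,
            np.mean(grd) if grd else np.nan)
```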
The Hyper-X Flight Systems Validation Program
NASA Technical Reports Server (NTRS)
Redifer, Matthew; Lin, Yohan; Bessent, Courtney Amos; Barklow, Carole
2007-01-01
For the Hyper-X/X-43A program, the development of a comprehensive validation test plan played an integral part in the success of the mission. The goal was to demonstrate hypersonic propulsion technologies by flight testing an airframe-integrated scramjet engine. Preparation for flight involved both verification and validation testing. By definition, verification is the process of assuring that the product meets design requirements; whereas validation is the process of assuring that the design meets mission requirements for the intended environment. This report presents an overview of the program with emphasis on the validation efforts. It includes topics such as hardware-in-the-loop, failure modes and effects, aircraft-in-the-loop, plugs-out, power characterization, antenna pattern, integration, combined systems, captive carry, and flight testing. Where applicable, test results are also discussed. The report provides a brief description of the flight systems onboard the X-43A research vehicle and an introduction to the ground support equipment required to execute the validation plan. The intent is to provide validation concepts that are applicable to current, follow-on, and next generation vehicles that share the hybrid spacecraft and aircraft characteristics of the Hyper-X vehicle.
Fernández-Domínguez, Juan Carlos; Sesé-Abad, Albert; Morales-Asencio, Jose Miguel; Oliva-Pascual-Vaca, Angel; Salinas-Bueno, Iosune; de Pedro-Gómez, Joan Ernest
2014-12-01
Our goal is to compile and analyse the characteristics - especially validity and reliability - of all the existing international tools that have been used to measure evidence-based clinical practice in physiotherapy. A systematic review conducted with data from exclusively quantitative-type studies synthesized in narrative format. An in-depth search of the literature was conducted in two phases: an initial, structured, electronic search of databases and journals with summarized evidence, followed by a residual, directed search in the bibliographical references of the main articles found in the primary search procedure. The studies included were assigned to members of the research team, who acted as peer reviewers. Relevant information was extracted from each of the selected articles using a template that included the general characteristics of the instrument as well as an analysis of the quality of the validation processes carried out, following the criteria of Terwee. Twenty-four instruments were found to comply with the review screening criteria; however, in all cases, they were found to be limited as regards the 'constructs' included. Moreover, they were all found to be lacking in comprehensiveness in the validation process of the psychometric tests used. What constitutes a rigorously developed assessment instrument for EBP in physical therapy thus continues to be a challenge. © 2014 John Wiley & Sons, Ltd.
Schaefer, Cédric; Clicq, David; Lecomte, Clémence; Merschaert, Alain; Norrant, Edith; Fotiadu, Frédéric
2014-03-01
Pharmaceutical companies are progressively adopting and introducing the Process Analytical Technology (PAT) and Quality-by-Design (QbD) concepts promoted by the regulatory agencies, with the aim of building quality directly into the product by combining thorough scientific understanding and quality risk management. An analytical method based on near infrared (NIR) spectroscopy was developed as a PAT tool to control on-line an API (active pharmaceutical ingredient) manufacturing crystallization step during which the API and residual solvent contents need to be precisely determined to reach the predefined seeding point. An original methodology based on the QbD principles was designed to conduct the development and validation of the NIR method and to ensure that it is fit for its intended use. On this basis, partial least squares (PLS) models were developed and optimized using chemometric methods. The method was fully validated according to the ICH Q2(R1) guideline and using the accuracy profile approach. The dosing ranges were evaluated as 9.0-12.0% w/w for the API and 0.18-1.50% w/w for the residual methanol. As the variability of the sampling method and the reference method are by nature included in the variability obtained for the NIR method during the validation phase, a real-time process monitoring exercise was performed to prove its fitness for purpose. The implementation of this in-process control (IPC) method on the industrial plant from the launch of the new API synthesis process will enable automatic control of the final crystallization step in order to ensure a predefined quality level of the API. In addition, several valuable benefits are expected, including reduction of the process time and elimination of a rather difficult sampling step and tedious off-line analyses. © 2013 Published by Elsevier B.V.
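A minimal sketch of a PLS calibration of the kind described above, using synthetic stand-ins for the NIR spectra and reference API values; it illustrates the modeling step only and makes no claim about the authors' validated method (scikit-learn assumed available):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(2)
spectra = rng.standard_normal((60, 200))              # 60 samples, 200 wavelengths
api = 9.0 + 3.0 * rng.random(60)                      # reference values, % w/w
spectra += np.outer(api, np.linspace(0.0, 1.0, 200))  # embed a linear signal

pls = PLSRegression(n_components=3)
predicted = cross_val_predict(pls, spectra, api, cv=5).ravel()
rmsecv = np.sqrt(np.mean((predicted - api) ** 2))
print(f"RMSECV = {rmsecv:.3f} % w/w")
```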
Mears, Lisa; Stocks, Stuart M; Albaek, Mads O; Sin, Gürkan; Gernaey, Krist V
2017-03-01
A mechanistic model-based soft sensor is developed and validated for 550L filamentous fungus fermentations operated at Novozymes A/S. The soft sensor comprises a parameter estimation block based on a stoichiometric balance, coupled to a dynamic process model. The on-line parameter estimation block models the changing rates of formation of product, biomass, and water, and the rate of consumption of feed, using standard, available on-line measurements. This parameter estimation block is coupled to a mechanistic process model, which solves the current states of biomass, product, substrate, dissolved oxygen and mass, as well as other process parameters including kLa, viscosity and the partial pressure of CO2. State estimation at this scale requires a robust mass model including evaporation, which is a factor not often considered at smaller scales of operation. The model is developed using a historical data set of 11 batches from the fermentation pilot plant (550L) at Novozymes A/S. The model is then implemented on-line in 550L fermentation processes operated at Novozymes A/S in order to validate the state estimator model on 14 new batches utilizing a new strain. The product concentration in the validation batches was predicted with an average root mean sum of squared error (RMSSE) of 16.6%. In addition, calculation of the Janus coefficient for the validation batches shows a suitably calibrated model. The robustness of the model prediction is assessed with respect to the accuracy of the input data. Parameter estimation uncertainty analysis is also carried out. The application of this on-line state estimator allows for on-line monitoring of pilot scale batches, including real-time estimates of multiple parameters which cannot otherwise be monitored on-line. With successful application of a soft sensor at this scale, improved process monitoring becomes possible, as well as further possibilities for on-line control algorithms utilizing these on-line model outputs. Biotechnol. Bioeng. 2017;114: 589-599. © 2016 Wiley Periodicals, Inc.
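One plausible reading of the RMSSE figure above is a per-batch root-mean-squared prediction error normalized by the mean observed value; the sketch below uses that interpretation with invented numbers, since the paper's exact formula is not reproduced here:

```python
import numpy as np

def rmsse_percent(predicted, observed):
    """RMSE of predictions expressed as a percentage of the mean observation
    (one possible interpretation of the reported RMSSE)."""
    predicted, observed = np.asarray(predicted), np.asarray(observed)
    rmse = np.sqrt(np.mean((predicted - observed) ** 2))
    return 100.0 * rmse / np.mean(observed)

observed = [1.0, 1.4, 1.9, 2.5, 3.0]   # hypothetical product concentrations
predicted = [1.1, 1.3, 2.0, 2.4, 3.3]
print(f"RMSSE = {rmsse_percent(predicted, observed):.1f}%")
```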
Hayes, Brett K; Stephens, Rachel G; Ngo, Jeremy; Dunn, John C
2018-02-01
Three experiments examined the number of qualitatively different processing dimensions needed to account for inductive and deductive reasoning. In each study, participants were presented with arguments that varied in logical validity and consistency with background knowledge (believability), and evaluated them according to deductive criteria (whether the conclusion was necessarily true given the premises) or inductive criteria (whether the conclusion was plausible given the premises). We examined factors including working memory load (Experiments 1 and 2), individual working memory capacity (Experiments 1 and 2), and decision time (Experiment 3), which, according to dual-processing theories, modulate the contribution of heuristic and analytic processes to reasoning. A number of empirical dissociations were found. Argument validity affected deduction more than induction. Argument believability affected induction more than deduction. Lower working memory capacity reduced sensitivity to argument validity and increased sensitivity to argument believability, especially under induction instructions. Reduced decision time led to decreased sensitivity to argument validity. State-trace analyses of each experiment, however, found that only a single underlying dimension was required to explain patterns of inductive and deductive judgments. These results show that the dissociations, which have traditionally been seen as supporting dual-processing models of reasoning, are consistent with a single-process model that assumes a common evidentiary scale for induction and deduction. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
The development and testing of a qualitative instrument designed to assess critical thinking
NASA Astrophysics Data System (ADS)
Clauson, Cynthia Louisa
This study examined a qualitative approach to assess critical thinking. An instrument was developed that incorporates an assessment process based on Dewey's (1933) concepts of self-reflection and critical thinking as problem solving. The study was designed to pilot test the critical thinking assessment process with writing samples collected from a heterogeneous group of students. The pilot test included two phases. Phase 1 was designed to determine the validity and inter-rater reliability of the instrument using two experts in critical thinking, problem solving, and literacy development. Validity of the instrument was addressed by requesting both experts to respond to ten questions in an interview. The inter-rater reliability was assessed by analyzing the consistency of the two experts' scorings of the 20 writing samples to each other, as well as to my scoring of the same 20 writing samples. Statistical analyses included the Spearman Rho and the Kuder-Richardson (Formula 20). Phase 2 was designed to determine the validity and reliability of the critical thinking assessment process with seven science teachers. Validity was addressed by requesting the teachers to respond to ten questions in a survey and interview. Inter-rater reliability was addressed by comparing the seven teachers' scoring of five writing samples with my scoring of the same five writing samples. Again, the Spearman Rho and the Kuder-Richardson (Formula 20) were used to determine the inter-rater reliability. The validity results suggest that the instrument is helpful as a guide for instruction and provides a systematic method to teach and assess critical thinking while problem solving with students in the classroom. The reliability results show the critical thinking assessment instrument to possess fairly high reliability when used by the experts, but weak reliability when used by classroom teachers. A major conclusion was drawn that teachers, as well as students, would need to receive instruction in critical thinking and in how to use the assessment process in order to gain more consistent interpretations of the six problem-solving steps. Specific changes needing to be made in the instrument to improve the quality are included.
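The inter-rater reliability check described above reduces to correlating two raters' scores over the same writing samples; a minimal sketch with invented scores, assuming SciPy is available:

```python
from scipy.stats import spearmanr

# Hypothetical scores given by two raters to the same ten writing samples.
expert_scores = [4, 3, 5, 2, 4, 3, 5, 1, 2, 4]
researcher_scores = [4, 3, 4, 2, 5, 3, 5, 2, 2, 4]

rho, p_value = spearmanr(expert_scores, researcher_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.3f})")
```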
NASA Technical Reports Server (NTRS)
Starr, David
1999-01-01
The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include ASTER, CERES, MISR, MODIS and MOPITT. In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include validation of instrument calibration (vicarious calibration) experiments, instrument and cross-platform comparisons, routine collection of high quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for the field experiment and underflight activities, including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. The intensive validation activities planned for the first year of the Terra mission will be described, with emphasis on derived geophysical parameters of most relevance to the atmospheric radiation community. Detailed information about the EOS Terra validation program can be found on the EOS Validation Program homepage (http://ospso.gsfc.nasa.gov/validation/valpage.html).
Translating the short version of the Perinatal Grief Scale: process and challenges.
Capitulo, K L; Cornelio, M A; Lenz, E R
2001-08-01
Non-English-speaking populations may be excluded from rigorous clinical research because of the lack of reliable and valid instrumentation to measure psychosocial variables. The purpose of this article is to describe the process and challenges when translating a research instrument. The process will be illustrated in the project of translating into Spanish the Short Version of the Perinatal Grief Scale, extensively studied in English-speaking, primarily Caucasian populations. Translation methods, errors, and tips are included. Tools cannot be used in transcultural research and practice without careful and accurate translation and subsequent psychometric evaluation, which are essential to generate credible and valid findings. Copyright 2001 by W.B. Saunders Company
Converting the H. W. Wilson Company Indexes to an Automated System: A Functional Analysis.
ERIC Educational Resources Information Center
Regazzi, John J.
1984-01-01
Description of the computerized information system that supports the editorial and manufacturing processes involved in creation of Wilson's subject indexes and catalogs includes the major subsystems--online data entry, batch input processing, validation and release, file generation and database management, online and offline retrieval, publication…
Routine development of objectively derived search strategies.
Hausner, Elke; Waffenschmidt, Siw; Kaiser, Thomas; Simon, Michael
2012-02-29
Over the past few years, information retrieval has become more and more professionalized, and information specialists are considered full members of a research team conducting systematic reviews. Research groups preparing systematic reviews and clinical practice guidelines have been the driving force in the development of search strategies, but open questions remain regarding the transparency of the development process and the available resources. An empirically guided approach to the development of a search strategy provides a way to increase transparency and efficiency. Our aim in this paper is to describe the empirically guided development process for search strategies as applied by the German Institute for Quality and Efficiency in Health Care (Institut für Qualität und Wirtschaftlichkeit im Gesundheitswesen, or "IQWiG"). This strategy consists of the following steps: generation of a test set, as well as the development, validation and standardized documentation of the search strategy. We illustrate our approach by means of an example, that is, a search for literature on brachytherapy in patients with prostate cancer. For this purpose, a test set was generated, including a total of 38 references from 3 systematic reviews. The development set for the generation of the strategy included 25 references. After application of textual analytic procedures, a strategy was developed that included all references in the development set. To test the search strategy on an independent set of references, the remaining 13 references in the test set (the validation set) were used. The validation set was also completely identified. Our conclusion is that an objectively derived approach similar to that used in search filter development is a feasible way to develop and validate reliable search strategies. Besides creating high-quality strategies, the widespread application of this approach will result in a substantial increase in the transparency of the development process of search strategies.
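The validation step described above amounts to checking that the candidate strategy retrieves every reference in a held-out set; a minimal sketch with hypothetical record identifiers:

```python
def recall(retrieved_ids, reference_ids):
    """Fraction of the reference set found among the retrieved records."""
    hits = sum(1 for ref in reference_ids if ref in retrieved_ids)
    return hits / len(reference_ids)

validation_set = {"pmid:101", "pmid:102", "pmid:103"}   # held-out references
retrieved = {"pmid:101", "pmid:102", "pmid:103", "pmid:999"}
print(f"recall = {recall(retrieved, validation_set):.0%}")  # expect 100%
```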
NASA Technical Reports Server (NTRS)
Carr, Peter C.; Mckissick, Burnell T.
1988-01-01
A joint experiment to investigate simulator validation and cue fidelity was conducted by the Dryden Flight Research Facility of NASA Ames Research Center (Ames-Dryden) and NASA Langley Research Center. The primary objective was to validate the use of a closed-loop pilot-vehicle mathematical model as an analytical tool for optimizing the tradeoff between simulator fidelity requirements and simulator cost. The validation process includes comparing model predictions with simulation and flight test results to evaluate various hypotheses for differences in motion and visual cues and information transfer. A group of five pilots flew air-to-air tracking maneuvers in the Langley differential maneuvering simulator and visual motion simulator and in an F-14 aircraft at Ames-Dryden. The simulators used motion and visual cueing devices including a g-seat, a helmet loader, wide field-of-view horizon, and a motion base platform.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Drake, Richard R.
Vvtools is a suite of testing tools, with a focus on reproducible verification and validation. They are written in pure Python, and contain a test harness and an automated process management tool. Users of vvtools can develop suites of verification and validation tests and run them on small to large high performance computing resources in an automated and reproducible way. The test harness enables complex processes to be performed in each test and even supports a one-level parent/child dependency between tests. It includes a built in capability to manage workloads requiring multiple processors and platforms that use batch queueing systems.
Performing Verification and Validation in Reuse-Based Software Engineering
NASA Technical Reports Server (NTRS)
Addy, Edward A.
1999-01-01
The implementation of reuse-based software engineering not only introduces new activities to the software development process, such as domain analysis and domain modeling, it also impacts other aspects of software engineering. Other areas of software engineering that are affected include Configuration Management, Testing, Quality Control, and Verification and Validation (V&V). Activities in each of these areas must be adapted to address the entire domain or product line rather than a specific application system. This paper discusses changes and enhancements to the V&V process, in order to adapt V&V to reuse-based software engineering.
48 CFR 1401.7001-4 - Acquisition performance measurement systems.
Code of Federal Regulations, 2010 CFR
2010-10-01
...-pronged approach that includes self assessment, statistical data for validation and flexible quality... regulations governing the acquisition process; and (3) Identify and implement changes necessary to improve the...
GPM Ground Validation: Pre to Post-Launch Era
NASA Astrophysics Data System (ADS)
Petersen, Walt; Skofronick-Jackson, Gail; Huffman, George
2015-04-01
NASA GPM Ground Validation (GV) activities have transitioned from the pre- to the post-launch era. Prior to launch, direct validation networks and associated partner institutions were identified world-wide, covering a plethora of precipitation regimes. In the U.S., direct GV efforts focused on use of new operational products such as the NOAA Multi-Radar Multi-Sensor suite (MRMS) for TRMM validation and GPM radiometer algorithm database development. In the post-launch era, MRMS products including precipitation rate, accumulation, type and data quality are being routinely generated to facilitate statistical GV of instantaneous (e.g., Level II orbit) and merged (e.g., IMERG) GPM products. Toward assessing precipitation column impacts on product uncertainties, range-gate to pixel-level validation of both Dual-Frequency Precipitation Radar (DPR) and GPM microwave imager data is performed using GPM Validation Network (VN) ground radar and satellite data processing software. VN software ingests quality-controlled volumetric radar datasets and geo-matches those data to coincident DPR and radiometer level-II data. When combined, MRMS and VN datasets enable more comprehensive interpretation of both ground- and satellite-based estimation uncertainties. To support physical validation efforts, eight field campaigns were conducted in the pre-launch era and one has been conducted since launch. The campaigns span regimes from northern-latitude cold-season snow to warm tropical rain. Most recently, the Integrated Precipitation and Hydrology Experiment (IPHEx) took place in the mountains of North Carolina and involved combined airborne and ground-based measurements of orographic precipitation and hydrologic processes underneath the GPM Core satellite. One more U.S. GV field campaign (OLYMPEX) is planned for late 2015 and will address cold-season precipitation estimation, processes and hydrology in the orographic and oceanic domains of western Washington State. Finally, continuous direct and physical validation measurements are also being conducted at the NASA Wallops Flight Facility multi-radar, gauge and disdrometer facility located in coastal Virginia. This presentation will summarize the evolution of the NASA GPM GV program from the pre- to post-launch eras, with focus on evaluation of year-1 post-launch GPM satellite datasets including Level II GPROF, DPR and Combined algorithms, and Level III IMERG products.
NASA Astrophysics Data System (ADS)
Nir, A.; Doughty, C.; Tsang, C. F.
Validation methods developed in the context of the deterministic concepts of past generations often cannot be directly applied to environmental problems, which may be characterized by limited reproducibility of results and highly complex models. Instead, validation is interpreted here as a series of activities, including both theoretical and experimental tests, designed to enhance our confidence in the capability of a proposed model to describe some aspect of reality. We examine the validation process applied to a project concerned with heat and fluid transport in porous media, in which mathematical modeling, simulation, and results of field experiments are evaluated in order to determine the feasibility of a system for seasonal thermal energy storage in shallow unsaturated soils. Technical details of the field experiments are not included, but appear in previous publications. Validation activities are divided into three stages. The first stage, carried out prior to the field experiments, is concerned with modeling the relevant physical processes, optimization of the heat-exchanger configuration and the shape of the storage volume, and multi-year simulation. Subjects requiring further theoretical and experimental study are identified at this stage. The second stage encompasses the planning and evaluation of the initial field experiment. Simulations are made to determine the experimental time scale and optimal sensor locations. Soil thermal parameters and temperature boundary conditions are estimated using an inverse method. Then results of the experiment are compared with model predictions using different parameter values and modeling approximations. In the third stage, results of an experiment performed under different boundary conditions are compared to predictions made by the models developed in the second stage. Various aspects of this theoretical and experimental field study are described as examples of the verification and validation procedure. There is no attempt to validate a specific model, but several models of increasing complexity are compared with experimental results. The outcome is interpreted as a demonstration of the paradigm proposed by van der Heijde [26], that different constituencies have different objectives for the validation process and therefore their acceptance criteria differ also.
NASA Construction of Facilities Validation Processes - Total Building Commissioning (TBCx)
NASA Technical Reports Server (NTRS)
Hoover, Jay C.
2004-01-01
Key attributes include: a Total Quality Management (TQM) system that looks at all phases of a project; a team process that spans boundaries; a Commissioning Authority to lead the process; commissioning requirements in contracts; independent design review to verify compliance with Facility Project Requirements (FPR); a formal written Commissioning Plan with documented results; and functional performance testing (FPT) against the requirements document.
Electrohydraulic Forming of Near-Net Shape Automotive Panels
DOE Office of Scientific and Technical Information (OSTI.GOV)
Golovaschenko, Sergey F.
2013-09-26
The objective of this project was to develop the electrohydraulic forming (EHF) process as a near-net shape automotive panel manufacturing technology that simultaneously reduces the energy embedded in vehicles and the energy consumed while producing automotive structures. Pulsed pressure is created via a shockwave generated by the discharge of high voltage capacitors through a pair of electrodes in a liquid-filled chamber. The shockwave in the liquid, initiated by the expansion of the plasma channel formed between two electrodes, propagates towards the blank and causes the blank to be deformed into a one-sided die cavity. The numerical model of the EHF process was validated experimentally and was successfully applied to the design of the electrode system and to a multi-electrode EHF chamber for full scale validation of the process. The numerical model was able to predict stresses in the dies during pulsed forming and was validated by the experimental study of the die insert failure mode for corner filling operations. The electrohydraulic forming process and its major subsystems, including durable electrodes, an EHF chamber, a water/air management system, a pulse generator and integrated process controls, were validated to be capable of operating in a fully automated, computer controlled mode for forming of a portion of a full-scale sheet metal component in laboratory conditions. Additionally, the novel processes of electrohydraulic trimming and electrohydraulic calibration were demonstrated at a reduced-scale component level. Furthermore, a hybrid process combining conventional stamping with EHF was demonstrated as a laboratory process for a full-scale automotive panel formed out of AHSS material. The economic feasibility of the developed EHF processes was defined by developing a cost model of the EHF process in comparison to the conventional stamping process.
Validating Retinal Fundus Image Analysis Algorithms: Issues and a Proposal
Trucco, Emanuele; Ruggeri, Alfredo; Karnowski, Thomas; Giancardo, Luca; Chaum, Edward; Hubschman, Jean Pierre; al-Diri, Bashir; Cheung, Carol Y.; Wong, Damon; Abràmoff, Michael; Lim, Gilbert; Kumar, Dinesh; Burlina, Philippe; Bressler, Neil M.; Jelinek, Herbert F.; Meriaudeau, Fabrice; Quellec, Gwénolé; MacGillivray, Tom; Dhillon, Bal
2013-01-01
This paper concerns the validation of automatic retinal image analysis (ARIA) algorithms. For reasons of space and consistency, we concentrate on the validation of algorithms processing color fundus camera images, currently the largest section of the ARIA literature. We sketch the context (imaging instruments and target tasks) of ARIA validation, summarizing the main image analysis and validation techniques. We then present a list of recommendations focusing on the creation of large repositories of test data created by international consortia, easily accessible via moderated Web sites, including multicenter annotations by multiple experts, specific to clinical tasks, and capable of running submitted software automatically on the data stored, with clear and widely agreed-on performance criteria, to provide a fair comparison. PMID:23794433
ERIC Educational Resources Information Center
Kupersmidt, Janis B.; Stelter, Rebecca; Dodge, Kenneth A.
2011-01-01
The purpose of this study was to evaluate the psychometric properties of an audio computer-assisted self-interviewing Web-based software application called the Social Information Processing Application (SIP-AP) that was designed to assess social information processing skills in boys in 3rd through 5th grades. This study included a racially and…
21 CFR 807.35 - Notification of registrant.
Code of Federal Regulations, 2014 CFR
2014-04-01
... manufacture or process biological products (including devices licensed under section 351 of the Public Health... device activities described in § 807.20, validation of registration and the assignment of a device...
21 CFR 807.35 - Notification of registrant.
Code of Federal Regulations, 2013 CFR
2013-04-01
... manufacture or process biological products (including devices licensed under section 351 of the Public Health... device activities described in § 807.20, validation of registration and the assignment of a device...
Measuring adverse events in helicopter emergency medical services: establishing content validity.
Patterson, P Daniel; Lave, Judith R; Martin-Gill, Christian; Weaver, Matthew D; Wadas, Richard J; Arnold, Robert M; Roth, Ronald N; Mosesso, Vincent N; Guyette, Francis X; Rittenberger, Jon C; Yealy, Donald M
2014-01-01
We sought to create a valid framework for detecting adverse events (AEs) in the high-risk setting of helicopter emergency medical services (HEMS). We assembled a panel of 10 expert clinicians (n = 6 emergency medicine physicians and n = 4 prehospital nurses and flight paramedics) affiliated with a large multistate HEMS organization in the Northeast US. We used a modified Delphi technique to develop a framework for detecting AEs associated with the treatment of critically ill or injured patients. We used a widely applied measure, the content validity index (CVI), to quantify the validity of the framework's content. The expert panel of 10 clinicians reached consensus on a common AE definition and four-step protocol/process for AE detection in HEMS. The consensus-based framework is composed of three main components: (1) a trigger tool, (2) a method for rating proximal cause, and (3) a method for rating AE severity. The CVI findings isolate components of the framework considered content valid. We demonstrate a standardized process for the development of a content-valid framework for AE detection. The framework is a model for the development of a method for AE identification in other settings, including ground-based EMS.
Turró-Garriga, O; Hermoso Contreras, C; Olives Cladera, J; Mioshi, E; Pelegrín Valero, C; Olivera Pueyo, J; Garre-Olmo, J; Sánchez-Valle, R
2017-06-01
The Frontotemporal Dementia Rating Scale (FTD-FRS) is a tool designed to aid with clinical staging and assessment of the progression of frontotemporal dementia (FTD). We present a multicentre adaptation and validation study of a Spanish version of the FRS. The adapted version was created using 2 translation-back-translation processes (English to Spanish, Spanish to English) and verified by the scale's original authors. We validated the adapted version in a sample of consecutive patients diagnosed with FTD. The procedure included evaluating internal consistency, testing unidimensionality with the Rasch model, analysing construct validity and discriminant validity, and calculating the degree of agreement between the Clinical Dementia Rating scale (CDR) and the FTD-FRS for FTD cases. The study included 60 patients with FTD. The mean score on the FRS was 12.1 points (SD=6.5; range, 2-25) with inter-group differences (F=120.3; df=3; P<.001). Cronbach's alpha was 0.897 and principal component analysis of residuals delivered an acceptable eigenvalue for 5 contrasts (1.6-2.7) and 36.1% raw variance. The FRS was correlated with the Mini-Mental State Examination (r=0.572; P<.001) and functional capacity (DAD; r=0.790; P<.001). The FTD-FRS also showed a significant correlation with the CDR (r=-0.641; P<.001), but we did observe variability in the severity levels; cases appeared to be less severe according to the CDR than when measured with the FTD-FRS (kappa=0.055). This process of validating the Spanish translation of the FTD-FRS yielded satisfactory results for validity and unidimensionality (severity) in the assessment of patients with FTD. Copyright © 2016 Sociedad Española de Neurología. Publicado por Elsevier España, S.L.U. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Ekechukwu, A
Method validation is the process of evaluating whether an analytical method is acceptable for its intended purpose. For pharmaceutical methods, guidelines from the United States Pharmacopeia (USP), International Conference on Harmonisation (ICH), and the United States Food and Drug Administration (USFDA) provide a framework for performing such validations. In general, methods for regulatory compliance must include studies on specificity, linearity, accuracy, precision, range, detection limit, quantitation limit, and robustness. Elements of these guidelines are readily adapted to the issue of validation for beryllium sampling and analysis. This document provides a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers and books reviewed is given in the Appendix. Available validation documents and guides are listed therein; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of the validation process at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all referenced documents were published in English.
The SCALE Verified, Archived Library of Inputs and Data - VALID
DOE Office of Scientific and Technical Information (OSTI.GOV)
Marshall, William BJ J; Rearden, Bradley T
The Verified, Archived Library of Inputs and Data (VALID) at ORNL contains high quality, independently reviewed models and results that improve confidence in analysis. VALID is developed and maintained according to a procedure of the SCALE quality assurance (QA) plan. This paper reviews the origins of the procedure and its intended purpose, the philosophy of the procedure, some highlights of its implementation, and the future of the procedure and associated VALID library. The original focus of the procedure was the generation of high-quality models that could be archived at ORNL and applied to many studies. The review process associated with model generation minimized the chances of errors in these archived models. Subsequently, the scope of the library and procedure was expanded to provide high quality, reviewed sensitivity data files for deployment through the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). Sensitivity data files for approximately 400 such models are currently available. The VALID procedure and library continue fulfilling these multiple roles. The VALID procedure is based on the quality assurance principles of ISO 9001 and nuclear safety analysis. Some of these key concepts include: independent generation and review of information, generation and review by qualified individuals, use of appropriate references for design data and documentation, and retrievability of the models, results, and documentation associated with entries in the library. Some highlights of the detailed procedure are discussed to provide background on its implementation and to indicate limitations of data extracted from VALID for use by the broader community. Specifically, external users of data generated within VALID must take responsibility for ensuring that the files are used within the QA framework of their organization and that use is appropriate. The future plans for the VALID library include expansion to include additional experiments from the IHECSBE, to include experiments from areas beyond criticality safety, such as reactor physics and shielding, and to include application models. In the future, external SCALE users may also obtain qualification under the VALID procedure and be involved in expanding the library. The VALID library provides a pathway for the criticality safety community to leverage modeling and analysis expertise at ORNL.
On demand processing of climate station sensor data
NASA Astrophysics Data System (ADS)
Wöllauer, Stephan; Forteva, Spaska; Nauss, Thomas
2015-04-01
Large sets of climate stations with several sensors produce large amounts of fine-grained time series data. To gain value from these data, further processing and aggregation are needed. We present a flexible system to process the raw data on demand. Several aspects need to be considered to process the raw data in a way that scientists can use the processed data conveniently for their specific research interests. First of all, it is not feasible to pre-process the data in advance because of the great variety of ways it can be processed. Therefore, in this approach only the raw measurement data is archived in a database. When a scientist requires some time series, the system processes the required raw data according to the user-defined request. Based on the type of measurement sensor, some data validation is needed, because the climate station sensors may produce erroneous data. Currently, three validation methods are integrated in the on-demand processing system and are optionally selectable. The most basic validation method checks if measurement values are within a predefined range of possible values. For example, it may be assumed that an air temperature sensor measures values within a range of -40 °C to +60 °C. Values outside of this range are considered a measurement error by this validation method and consequently rejected. Another validation method checks for outliers in the stream of measurement values by defining a maximum change rate between subsequent measurement values. The third validation method compares measurement data to the average values of neighboring stations and rejects measurement values with a high variance. These quality checks are optional, because especially extreme climatic values may be valid but rejected by some quality check method. Another important task is the preparation of measurement data in terms of time. The observed stations measure values in intervals of minutes to hours. Often scientists need a coarser temporal resolution (days, months, years). Therefore, the interval of time aggregation is selectable for the processing. For some use cases it is desirable that the resulting time series are as continuous as possible. To meet these requirements, the processing system includes techniques to fill gaps of missing values by interpolating measurement values with data from adjacent stations, using available contemporaneous measurements from the respective stations as training datasets. Alongside the processing of sensor values, we created interactive visualization techniques to get a quick overview of a large amount of archived time series data.
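The three validation methods described above can be sketched as follows; the thresholds and the neighbor rule are illustrative assumptions, not the system's actual configuration:

```python
def range_check(values, lo=-40.0, hi=60.0):
    """Keep only values inside the physically plausible range."""
    return [v for v in values if lo <= v <= hi]

def rate_of_change_check(values, max_step=5.0):
    """Reject values that jump more than max_step from the previous reading."""
    kept = [values[0]]
    for prev, cur in zip(values, values[1:]):
        if abs(cur - prev) <= max_step:
            kept.append(cur)
    return kept

def neighbor_check(value, neighbor_values, max_dev=10.0):
    """Accept a value only if it stays close to the neighbor-station mean."""
    mean = sum(neighbor_values) / len(neighbor_values)
    return abs(value - mean) <= max_dev

temps = [12.0, 12.4, 80.0, 13.1, -55.0, 13.5]    # two faulty readings
print(rate_of_change_check(range_check(temps)))  # [12.0, 12.4, 13.1, 13.5]
```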
Determining Content Validity for the Transition Awareness and Possibilities Scale (TAPS)
ERIC Educational Resources Information Center
Ross, Melynda Burck
2011-01-01
The Transition Awareness & Possibilities Scale (TAPS) was crafted after an extensive review of literature was conducted to find research that examined and described specific aspects of transition programming: inputs, including supports and skill instruction; processes, including parent and support provider perceptions of the transition experience;…
Initial Retrieval Validation from the Joint Airborne IASI Validation Experiment (JAIVEx)
NASA Technical Reports Server (NTRS)
Zhou, Daniel K.; Liu, Xu; Smith, WIlliam L.; Larar, Allen M.; Taylor, Jonathan P.; Revercomb, Henry E.; Mango, Stephen A.; Schluessel, Peter; Calbet, Xavier
2007-01-01
The Joint Airborne IASI Validation Experiment (JAIVEx) was conducted during April 2007 mainly for validation of the Infrared Atmospheric Sounding Interferometer (IASI) on the MetOp satellite, but also included a strong component focusing on validation of the Atmospheric InfraRed Sounder (AIRS) aboard the AQUA satellite. The cross-validation of IASI and AIRS is important for the joint use of their data in the global Numerical Weather Prediction process. Initial inter-comparisons of geophysical products have been conducted from different aspects, such as using different measurements from airborne ultraspectral Fourier transform spectrometers (specifically, the NPOESS Airborne Sounder Testbed Interferometer (NAST-I) and the Scanning High-resolution Interferometer Sounder (S-HIS) aboard the NASA WB-57 aircraft), UK Facility for Airborne Atmospheric Measurements (FAAM) BAe146-301 aircraft in situ instruments, dedicated dropsondes, radiosondes, and ground-based Raman Lidar. An overview of the JAIVEx retrieval validation plan and some initial results of this field campaign are presented.
21 CFR 820.75 - Process validation.
Code of Federal Regulations, 2012 CFR
2012-04-01
... 21 Food and Drugs 8 2012-04-01 2012-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...
21 CFR 820.75 - Process validation.
Code of Federal Regulations, 2013 CFR
2013-04-01
... 21 Food and Drugs 8 2013-04-01 2013-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...
21 CFR 820.75 - Process validation.
Code of Federal Regulations, 2014 CFR
2014-04-01
... 21 Food and Drugs 8 2014-04-01 2014-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...
21 CFR 820.75 - Process validation.
Code of Federal Regulations, 2011 CFR
2011-04-01
... 21 Food and Drugs 8 2011-04-01 2011-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...
21 CFR 820.75 - Process validation.
Code of Federal Regulations, 2010 CFR
2010-04-01
... 21 Food and Drugs 8 2010-04-01 2010-04-01 false Process validation. 820.75 Section 820.75 Food and... QUALITY SYSTEM REGULATION Production and Process Controls § 820.75 Process validation. (a) Where the... validated with a high degree of assurance and approved according to established procedures. The validation...
Failure mode and effects analysis outputs: are they valid?
Shebl, Nada Atef; Franklin, Bryony Dean; Barber, Nick
2012-06-10
Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted: face validity, by comparing the FMEA participants' mapped processes with observational work; content validity, by presenting the FMEA findings to other healthcare professionals; criterion validity, by comparing the FMEA findings with data reported on the trust's incident report database; and construct validity, by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number (RPN). Face validity was positive, as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses, yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA's methodology for scoring failures, there were discrepancies between the teams' estimates and similar incidents reported on the trust's incident database. Furthermore, the concept of multiplying ordinal scales to prioritise failures is mathematically flawed. Until FMEA's validity is further explored, healthcare organisations should not solely depend on their FMEA results to prioritise patient safety issues.
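The construct-validity critique above concerns the RPN arithmetic. A small illustration (hypothetical failure modes and scores, not taken from the study) shows how multiplying ordinal scores can equate very different risks:

```python
# Hypothetical failure modes and scores (not from the study), illustrating
# the RPN arithmetic criticized above.
def rpn(severity: int, occurrence: int, detectability: int) -> int:
    """Classic FMEA risk priority number: the product of three ordinal scores."""
    return severity * occurrence * detectability

modes = {
    "wrong dose calculated": (10, 2, 6),  # catastrophic but rare
    "dose given late": (4, 6, 5),         # moderate but frequent
}
for name, (s, o, d) in modes.items():
    print(f"{name}: RPN = {rpn(s, o, d)}")
# Both print RPN = 120: multiplying ordinal ranks equates risks that differ
# markedly in character, which is the mathematical flaw noted above.
```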
Meeting the needs of an ever-demanding market.
Rigby, Richard
2002-04-01
Balancing cost and performance in packaging is critical. This article outlines techniques to assist in this whilst delivering added value and product differentiation. The techniques include a rigorous statistical process capable of delivering cost reduction and improved quality, and a computer modelling process that can save time when validating new packaging options.
Model Transformation for a System of Systems Dependability Safety Case
NASA Technical Reports Server (NTRS)
Murphy, Judy; Driskell, Steve
2011-01-01
The presentation reviews the dependability and safety effort of NASA's Independent Verification and Validation Facility. Topics include: safety engineering process, applications to non-space environment, Phase I overview, process creation, sample SRM artifact, Phase I end result, Phase II model transformation, fault management, and applying Phase II to individual projects.
Evaluation of Process Science Skills: From the Real World to the Ideal World.
ERIC Educational Resources Information Center
Lipowich, Shelley A.
State legislatures and others are recommending and, in some cases, mandating reforms in education including evaluating students' ability to meet stated objectives. This "ideal" situation poses a major problem concerning instruments needed to assess process skills. In the real world, educators do not yet have nationally recognized, valid,…
2003-12-05
KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis approaches the Vehicle Assembly Building (VAB). It is being towed from the Orbiter Processing Facility (OPF) to allow work to be performed in the bay that can only be accomplished while it is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-05
KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis is towed from the Orbiter Processing Facility (OPF) to the Vehicle Assembly Building (VAB). The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-05
KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis nears the Vehicle Assembly Building (VAB). It is being towed from the Orbiter Processing Facility (OPF) to allow work to be performed in the bay that can only be accomplished while it is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-05
KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis awaits transport from the Orbiter Processing Facility (OPF) to the Vehicle Assembly Building (VAB). The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.
Mission Life Thermal Analysis and Environment Correlation for the Lunar Reconnaissance Orbiter
NASA Technical Reports Server (NTRS)
Garrison, Matthew B.; Peabody, Hume
2012-01-01
Standard thermal analysis practices involve stacking worst-case conditions, including environmental heat loads, thermo-optical properties, and orbital beta angles. This results in the design being driven by a few bounding thermal cases, although those cases may represent only a very small portion of the actual mission life. The NASA Goddard Space Flight Center Thermal Branch developed a procedure to predict the flight temperatures over the entire mission life, assuming a known beta angle progression, variation in the thermal environment, and a degradation rate in the coatings. This was applied to the Global Precipitation Measurement core spacecraft. In order to assess the validity of this process, this work applies a similar process to the Lunar Reconnaissance Orbiter. A flight-correlated thermal model was exercised to give predictions of the thermal performance over the mission life. These results were then compared against flight data from the first two years of the spacecraft's use. This is used to validate the process and to suggest possible improvements for future analyses.
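As a rough illustration of the mission-life idea (not the GSFC procedure itself), the sketch below trends a sun-facing flat plate's radiative-equilibrium temperature as an assumed linear coating degradation raises solar absorptance; all coefficients are invented for the example:

```python
# Invented coefficients throughout; this is only a cartoon of the trend.
SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S_SUN = 1361.0     # solar constant at 1 AU, W m^-2

def plate_temperature(alpha: float, epsilon: float) -> float:
    """Equilibrium temperature (K) of a sun-facing plate radiating from one side."""
    return (alpha * S_SUN / (epsilon * SIGMA)) ** 0.25

alpha_bol, yearly_increase = 0.10, 0.02   # assumed coating degradation rate
for year in range(6):
    alpha = alpha_bol + yearly_increase * year
    print(year, round(plate_temperature(alpha, epsilon=0.85), 1))
```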
48 CFR 227.7102-4 - Contract clauses.
Code of Federal Regulations, 2014 CFR
2014-10-01
... processes. (2) Use the clause at 252.227-7015 with its Alternate I in solicitations and contracts, including..., Validation of Restrictive Markings on Technical Data, in solicitations and contracts using FAR part 12...
48 CFR 227.7102-4 - Contract clauses.
Code of Federal Regulations, 2013 CFR
2013-10-01
... processes. (2) Use the clause at 252.227-7015 with its Alternate I in solicitations and contracts, including..., Validation of Restrictive Markings on Technical Data, in solicitations and contracts using FAR part 12...
48 CFR 227.7102-3 - Contract clause.
Code of Federal Regulations, 2010 CFR
2010-10-01
... processes. Do not require the contractor to include this clause in its subcontracts. (2) Use the clause at... for commercial items or commercial components. (c) Use the clause at 252.227-7037, Validation of...
NASA Technical Reports Server (NTRS)
Hand, David W.; Crittenden, John C.; Ali, Anisa N.; Bulloch, John L.; Hokanson, David R.; Parrem, David L.
1996-01-01
This thesis includes the development and verification of an adsorption model for analysis and optimization of the adsorption processes within the International Space Station multifiltration beds. The fixed bed adsorption model includes multicomponent equilibrium and both external and intraparticle mass transfer resistances. Single solute isotherm parameters were used in the multicomponent equilibrium description to predict the competitive adsorption interactions occurring during the adsorption process. The multicomponent equilibrium description used the Fictive Component Analysis to describe adsorption in unknown background matrices. Multicomponent isotherms were used to validate the multicomponent equilibrium description. Column studies were used to develop and validate external and intraparticle mass transfer parameter correlations for compounds of interest. The fixed bed model was verified using a shower and handwash ersatz water which served as a surrogate to the actual shower and handwash wastewater.
2004-01-09
KENNEDY SPACE CENTER, FLA. -- Endeavour backs out of the Orbiter Processing Facility for temporary transfer to the Vehicle Assembly Building. The move allows work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the OPF includes annual validation of the bay’s cranes, work platforms, lifting mechanisms and jack stands. Endeavour will remain in the VAB for approximately 12 days, then return to the OPF.
Issues and approach to develop validated analysis tools for hypersonic flows: One perspective
NASA Technical Reports Server (NTRS)
Deiwert, George S.
1993-01-01
Critical issues concerning the modeling of low-density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools, and the activity in the NASA Ames Research Center's Aerothermodynamics Branch is described. Inherent in the process is a strong synergism between ground test and real-gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flowfield simulation codes are discussed. These models were partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse, and reliance must be placed on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground-based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high-enthalpy flow facilities, such as shock tubes and ballistic ranges.
A Decision Tree for Nonmetric Sex Assessment from the Skull.
Langley, Natalie R; Dudzik, Beatrix; Cloutier, Alesia
2018-01-01
This study uses five well-documented cranial nonmetric traits (glabella, mastoid process, mental eminence, supraorbital margin, and nuchal crest) and one additional trait (zygomatic extension) to develop a validated decision tree for sex assessment. The decision tree was built and cross-validated on a sample of 293 U.S. White individuals from the William M. Bass Donated Skeletal Collection. Ordinal scores from the six traits were analyzed using the partition modeling option in JMP Pro 12. A holdout sample of 50 skulls was used to test the model. The most accurate decision tree includes three variables: glabella, zygomatic extension, and mastoid process. This decision tree yielded 93.5% accuracy on the training sample, 94% on the cross-validated sample, and 96% on a holdout validation sample. Linear weighted kappa statistics indicate acceptable agreement among observers for these variables. Mental eminence should be avoided, and definitions and figures should be referenced carefully to score nonmetric traits. © 2017 American Academy of Forensic Sciences.
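The study built its tree with the partition modeling option in JMP Pro; for readers without JMP, a comparable cross-validated decision tree can be sketched in Python on synthetic ordinal scores (the data below are stand-ins, not the Bass Collection sample):

```python
# Synthetic stand-in data; the real study used JMP Pro partition modeling
# on scored crania from the Bass Collection.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)
n = 293
sex = rng.integers(0, 2, n)   # 0 = female, 1 = male
# Three ordinal 1-5 trait scores that shift upward for males.
X = np.column_stack([
    np.clip(rng.normal(2.0 + 1.5 * sex, 0.9).round(), 1, 5)
    for _ in range(3)
])

tree = DecisionTreeClassifier(max_depth=3, random_state=0)
print("10-fold CV accuracy:", cross_val_score(tree, X, sex, cv=10).mean())
```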
Development and validation of the Bush-Francis Catatonia Rating Scale - Brazilian version.
Nunes, Ana Letícia Santos; Filgueiras, Alberto; Nicolato, Rodrigo; Alvarenga, Jussara Mendonça; Silveira, Luciana Angélica Silva; Silva, Rafael Assis da; Cheniaux, Elie
2017-01-01
This article aims to describe the adaptation and translation process of the Bush-Francis Catatonia Rating Scale (BFCRS) and its reduced version, the Bush-Francis Catatonia Screening Instrument (BFCSI) for Brazilian Portuguese, as well as its validation. Semantic equivalence processes included four steps: translation, back translation, evaluation of semantic equivalence and a pilot-study. Validation consisted of simultaneous applications of the instrument in Portuguese by two examiners in 30 catatonic and 30 non-catatonic patients. Total scores averaged 20.07 for the complete scale and 7.80 for its reduced version among catatonic patients, compared with 0.47 and 0.20 among non-catatonic patients, respectively. Overall values of inter-rater reliability of the instruments were 0.97 for the BFCSI and 0.96 for the BFCRS. The scale's version in Portuguese proved to be valid and was able to distinguish between catatonic and non-catatonic patients. It was also reliable, with inter-evaluator reliability indexes as high as those of the original instrument.
Issues and approach to develop validated analysis tools for hypersonic flows: One perspective
NASA Technical Reports Server (NTRS)
Deiwert, George S.
1992-01-01
Critical issues concerning the modeling of low-density hypervelocity flows where thermochemical nonequilibrium effects are pronounced are discussed. Emphasis is on the development of validated analysis tools. A description of the activity in the Ames Research Center's Aerothermodynamics Branch is also given. Inherent in the process is a strong synergism between ground test and real-gas computational fluid dynamics (CFD). Approaches to develop and/or enhance phenomenological models and incorporate them into computational flow-field simulation codes are discussed. These models have been partially validated with experimental data for flows where the gas temperature is raised (compressive flows). Expanding flows, where temperatures drop, however, exhibit somewhat different behavior. Experimental data for these expanding flow conditions are sparse; reliance must be made on intuition and guidance from computational chemistry to model transport processes under these conditions. Ground-based experimental studies used to provide necessary data for model development and validation are described. Included are the performance characteristics of high-enthalpy flow facilities, such as shock tubes and ballistic ranges.
Topaz, Maxim; Lai, Kenneth; Dowding, Dawn; Lei, Victor J; Zisberg, Anna; Bowles, Kathryn H; Zhou, Li
2016-12-01
Electronic health records are increasingly used by nurses, with up to 80% of the health data recorded as free text. However, only a few studies have developed nursing-relevant tools that help busy clinicians identify the information they need at the point of care. This study developed and validated one of the first automated natural language processing applications to extract wound information (wound type, pressure ulcer stage, wound size, anatomic location, and wound treatment) from free-text clinical notes. First, two human annotators manually reviewed a purposeful training sample (n=360) and a random test sample (n=1100) of clinical notes (including 50% discharge summaries and 50% outpatient notes), identified wound cases, and created a gold standard dataset. We then trained and tested our natural language processing system (known as MTERMS) to process the wound information. Finally, we assessed our automated approach by comparing system-generated findings against the gold standard. We also compared the prevalence of wound cases identified from free-text data with coded diagnoses in the structured data. The testing dataset included 101 notes (9.2%) with wound information. The overall system performance was good (F-measure, a composite measure of the system's accuracy, of 92.7%), with the best results for wound treatment (F-measure of 95.7%) and the poorest results for wound size (F-measure of 81.9%). Only 46.5% of wound notes had a structured code for a wound diagnosis. The natural language processing system achieved good performance on a subset of randomly selected discharge summaries and outpatient notes. In more than half of the wound notes, there were no coded wound diagnoses, which highlights the significance of using natural language processing to enrich clinical decision making. Our future steps will include expansion of the application's information coverage to other relevant wound factors and validation of the model with external data. Copyright © 2016 Elsevier Ltd. All rights reserved.
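For reference, the F-measure reported above is conventionally the harmonic mean of precision and recall; a minimal sketch (with hypothetical counts) is:

```python
# Hypothetical counts; shows only the standard F-measure arithmetic.
def f_measure(tp: int, fp: int, fn: int) -> float:
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    return 2 * precision * recall / (precision + recall)

print(round(f_measure(tp=90, fp=4, fn=4), 3))   # 0.957, i.e. ~95.7%
```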
Measurement in Sensory Modulation: The Sensory Processing Scale Assessment
Miller, Lucy J.; Sullivan, Jillian C.
2014-01-01
OBJECTIVE. Sensory modulation issues have a significant impact on participation in daily life. Moreover, understanding phenotypic variation in sensory modulation dysfunction is crucial for research related to defining homogeneous groups and for clinical work in guiding treatment planning. We thus evaluated the new Sensory Processing Scale (SPS) Assessment. METHOD. Research included item development, behavioral scoring system development, test administration, and item analyses to evaluate reliability and validity across sensory domains. RESULTS. Items with adequate reliability (internal reliability >.4) and discriminant validity (p < .01) were retained. Feedback from the expert panel also contributed to decisions about retaining items in the scale. CONCLUSION. The SPS Assessment appears to be a reliable and valid measure of sensory modulation (scale reliability >.90; discrimination between group effect sizes >1.00). This scale has the potential to aid in differential diagnosis of sensory modulation issues. PMID:25184464
Multiscale and Multiphysics Modeling of Additive Manufacturing of Advanced Materials
NASA Technical Reports Server (NTRS)
Liou, Frank; Newkirk, Joseph; Fan, Zhiqiang; Sparks, Todd; Chen, Xueyang; Fletcher, Kenneth; Zhang, Jingwei; Zhang, Yunlu; Kumar, Kannan Suresh; Karnati, Sreekar
2015-01-01
The objective of this project is to research and develop a prediction tool for advanced additive manufacturing (AAM) processes for advanced materials, and to develop experimental methods to provide fundamental properties and establish validation data. Aircraft structures and engines demand materials that are stronger, usable at much higher temperatures, provide less acoustic transmission, and enable more aeroelastic tailoring than those currently used. Significant improvements in properties can only be achieved by processing the materials under nonequilibrium conditions, such as AAM processes. AAM processes encompass a class of processes that use a focused heat source to create a melt pool on a substrate; examples include Electron Beam Freeform Fabrication and Direct Metal Deposition. These types of additive processes enable fabrication of parts directly from CAD drawings. To achieve the desired material properties and geometries in the final structure, it is necessary to assess the impact of process parameters and to predict optimized conditions with numerical modeling as an effective prediction tool. The processing targets are multiple and lie at different spatial scales, and the associated physical phenomena are multiphysics and multiscale in nature. In this project, the research work modeled AAM processes with a multiscale and multiphysics approach. A macroscale model was developed to investigate the residual stresses and distortion in AAM processes. A sequentially coupled, thermomechanical, finite element model was developed and validated experimentally. The results showed the temperature distribution, residual stress, and deformation within the formed deposits and substrates. A mesoscale model was developed to include heat transfer, phase change with a mushy zone, incompressible free surface flow, solute redistribution, and surface tension. Because of the excessive computing time needed, a parallel computing approach was also tested. In addition, after investigating various methods, a Smoothed Particle Hydrodynamics (SPH) model was developed to model the wire feeding process; its computational efficiency and simple architecture make it more robust and flexible than other models. More research on material properties may be needed to model the AAM processes realistically. A microscale model was developed to investigate heterogeneous nucleation, dendritic grain growth, epitaxial growth of columnar grains, columnar-to-equiaxed transition, grain transport in the melt, and other properties. The orientations of the columnar grains were almost perpendicular to the direction of laser motion. Compared with similar studies in the literature, the multiple-grain morphology modeling result is of the same order of magnitude as the optical morphologies in the experiment. Experimental work was conducted to validate the different models. An infrared camera was incorporated as a process monitoring and validation tool to identify the solidus and mushy zones during deposition; the images were successfully processed to identify these regions. This research project investigated the multiscale and multiphysics nature of the complex AAM processes, leading to an advanced understanding of these processes. The project also developed several modeling tools and experimental validation tools that will be critical to the future qualification and certification of AAM processes.
A literature review of quantitative indicators to measure the quality of labor and delivery care.
Tripathi, Vandana
2016-02-01
Strengthening measurement of the quality of labor and delivery (L&D) care in low-resource countries requires an understanding of existing approaches. To identify quantitative indicators of L&D care quality and assess gaps in indicators. PubMed, CINAHL Plus, and Embase databases were searched for research published in English between January 1, 1990, and October 31, 2013, using structured terms. Studies describing indicators for L&D care quality assessment were included. Those whose abstracts contained inclusion criteria underwent full-text review. Study characteristics, including indicator selection and data sources, were extracted via a standard spreadsheet. The structured search identified 1224 studies. After abstract and full-text review, 477 were included in the analysis. Most studies selected indicators by using literature review, clinical guidelines, or expert panels. Few indicators were empirically validated; most studies relied on medical record review to measure indicators. Many quantitative indicators have been used to measure L&D care quality, but few have been validated beyond expert opinion. There has been limited use of clinical observation in quality assessment of care processes. The findings suggest the need for validated, efficient consensus indicators of the quality of L&D care processes, particularly in low-resource countries. Copyright © 2015 International Federation of Gynecology and Obstetrics. Published by Elsevier Ireland Ltd. All rights reserved.
Reeves, Todd D; Marbach-Ad, Gili
2016-01-01
Most discipline-based education researchers (DBERs) were formally trained in the methods of scientific disciplines such as biology, chemistry, and physics, rather than social science disciplines such as psychology and education. As a result, DBERs may have never taken specific courses in the social science research methodology--either quantitative or qualitative--on which their scholarship often relies so heavily. One particular aspect of (quantitative) social science research that differs markedly from disciplines such as biology and chemistry is the instrumentation used to quantify phenomena. In response, this Research Methods essay offers a contemporary social science perspective on test validity and the validation process. The instructional piece explores the concepts of test validity, the validation process, validity evidence, and key threats to validity. The essay also includes an in-depth example of a validity argument and validation approach for a test of student argument analysis. In addition to DBERs, this essay should benefit practitioners (e.g., lab directors, faculty members) in the development, evaluation, and/or selection of instruments for their work assessing students or evaluating pedagogical innovations. © 2016 T. D. Reeves and G. Marbach-Ad. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).
de By, Theo M M H; McDonald, Carl; Süßner, Susanne; Davies, Jill; Heng, Wee Ling; Jashari, Ramadan; Bogers, Ad J J C; Petit, Pieter
2017-11-01
Surgeons needing human cardiovascular tissue for implantation in their patients are confronted with cardiovascular tissue banks that use different methods to identify and decontaminate micro-organisms. To elucidate these differences, we compared the quality of the processing methods used in 20 tissue banks and 1 reference laboratory, in order to validate the results on which tissue is accepted or rejected. The comparison covered the decontamination methods used and the influence of antibiotic cocktails and residues, together with results and controls; the minor details of the processes were not included. To compare the outcomes of microbiological testing and decontamination methods of heart valve allografts in cardiovascular tissue banks, an international quality round was organized. Twenty cardiovascular tissue banks participated in this quality round. The quality round method was validated first and consisted of sending purposely contaminated human heart valve tissue samples with known micro-organisms to the participants. The participants identified the micro-organisms using their local decontamination methods. Seventeen of the 20 participants correctly identified the micro-organisms; if these samples had been heart valves to be released for implantation, 3 of the 20 participants would have decided to accept their result for release. Decontamination was shown not to be effective in 13 tissue banks because of growth of the organisms after decontamination. Articles in the literature revealed that antibiotics are effective at 36°C and not, or less so, at 2-8°C. The decontamination procedure, if it is validated, will ensure that the tissue contains no known micro-organisms. This study demonstrates that the quality round method of sending contaminated tissues and assessing the results of the microbiological cultures is an effective way of validating the processes of tissue banks. Only when harmonization, based on validated methods, has been achieved will surgeons be able to fully rely on the methods used and have confidence in the consistent sterility of the tissue grafts. Tissue banks should validate their methods so that all stakeholders can trust the outcomes. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.
Jones, Natalie; Schneider, Gary; Kachroo, Sumesh; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W
2012-01-01
The Food and Drug Administration's Mini-Sentinel pilot program initially aimed to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of pulmonary fibrosis and interstitial lung disease. PubMed and Iowa Drug Information Service Web searches were conducted to identify citations applicable to the pulmonary fibrosis/interstitial lung disease HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify pulmonary fibrosis and interstitial lung disease, including validation estimates of the coding algorithms. Our search revealed a deficiency of literature focusing on pulmonary fibrosis and interstitial lung disease algorithms and validation estimates. Only five studies provided codes; none provided validation estimates. Because interstitial lung disease includes a broad spectrum of diseases, including pulmonary fibrosis, the scope of these studies varied, as did the corresponding diagnostic codes used. Research needs to be conducted on designing validation studies to test pulmonary fibrosis and interstitial lung disease algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
Code of Federal Regulations, 2013 CFR
2013-01-01
... allowed in this section. Ginseng is eligible only if: (1) The ginseng includes stratified seeds for use as... used or put into place during the crop year; and (5) Possess a valid food processing license issued by...
Code of Federal Regulations, 2012 CFR
2012-01-01
... allowed in this section. Ginseng is eligible only if: (1) The ginseng includes stratified seeds for use as... used or put into place during the crop year; and (5) Possess a valid food processing licence issued by...
ERIC Educational Resources Information Center
Finn, Peter
This report records the development and validation by Abt Associates, Inc. of utilization materials developed to accompany the two U.S. Office of Education film series, Jackson Junior High and Dial A-L-C-O-H-O-L. The first section describes the process by which the nine project products were developed. These products include the following: (1) a…
NASA Astrophysics Data System (ADS)
Nasution, Derlina; Syahreni Harahap, Putri; Harahap, Marabangun
2018-03-01
This research aims to: (1) develop learning instruments (lesson plan, worksheet, student book, teacher's guide book, and test instrument) for physics instruction through a scientific inquiry learning model based on Batak culture, in order to improve students' science process skills and curiosity; and (2) describe the quality of the developed learning instruments for high school physics using the scientific inquiry learning model based on Batak culture (lesson plan, worksheet, student book, teacher's guide book, and test instrument) with respect to improving students' science process skills and curiosity. This is development research. The physics learning instruments were developed using a model adapted from the development model of Thiagarajan, Semmel, and Semmel. Its stages are followed until valid, practical, and effective physics learning instruments are obtained: (1) the definition phase, (2) the planning phase, and (3) the development phase. Testing included expert validation, small-group trials, and a limited classroom trial. The limited classroom trial was conducted in a grade X MIA class at SMAN 1 Padang Bolak. This research produced: (1) physics learning instruments on the static fluid topic for senior high school grade 10, consisting of a lesson plan, worksheet, student book, teacher's guide book, and test instrument, of a quality worthy of use in the learning process; and (2) learning instrument components that meet the criteria of being valid, practical, and effective for improving students' science process skills and curiosity.
Plant, Katherine L; Stanton, Neville A
2015-01-01
The perceptual cycle model (PCM) has been widely applied in ergonomics research in domains including road, rail and aviation. The PCM assumes that information processing occurs in a cyclical manner drawing on top-down and bottom-up influences to produce perceptual exploration and actions. However, the validity of the model has not been addressed. This paper explores the construct validity of the PCM in the context of aeronautical decision-making. The critical decision method was used to interview 20 helicopter pilots about critical decision-making. The data were qualitatively analysed using an established coding scheme, and composite PCMs for incident phases were constructed. It was found that the PCM provided a mutually exclusive and exhaustive classification of the information-processing cycles for dealing with critical incidents. However, a counter-cycle was also discovered which has been attributed to skill-based behaviour, characteristic of experts. The practical applications and future research questions are discussed. Practitioner Summary: This paper explores whether information processing, when dealing with critical incidents, occurs in the manner anticipated by the perceptual cycle model. In addition to the traditional processing cycle, a reciprocal counter-cycle was found. This research can be utilised by those who use the model as an accident analysis framework.
ERIC Educational Resources Information Center
Bernstein, Susan; And Others
This report contains 20 summary-descriptions of curriculum programs and materials selected by the Institute for possible inclusion in its synthesis and validation of a K-6 process-promoting curriculum. Each description includes information on the developer and publisher plus a list of references (mostly published descriptions and critiques).…
ERIC Educational Resources Information Center
Marson, Stephen M.; DeAngelis, Donna; Mittal, Nisha
2010-01-01
Objectives: The purpose of this article is to create transparency for the psychometric methods employed for the development of the Association of Social Work Boards' (ASWB) exams. Results: The article includes an assessment of the macro (political) and micro (statistical) environments of testing social work competence. The seven-step process used…
ERIC Educational Resources Information Center
Baker, Susan S.; Pearson, Meredith; Chipman, Helen
2009-01-01
The purpose of this project was to describe the process used for the development of core competencies for paraprofessional nutrition educators in Food Stamp Nutrition Education (FSNE). The development process included the efforts of an expert panel of state and multicounty FSNE leaders to draft the core competencies and the validation of those…
How Many Batches Are Needed for Process Validation under the New FDA Guidance?
Yang, Harry
2013-01-01
The newly updated FDA Guidance for Industry on Process Validation: General Principles and Practices ushers in a life cycle approach to process validation. While the guidance no longer considers the use of traditional three-batch validation appropriate, it does not prescribe the number of validation batches for a prospective validation protocol, nor does it provide specific methods to determine it. This potentially could leave manufacturers in a quandary. In this paper, I develop a Bayesian method to address the issue. By combining process knowledge gained from Stage 1 Process Design (PD) with expected outcomes of Stage 2 Process Performance Qualification (PPQ), the number of validation batches for PPQ is determined to provide a high level of assurance that the process will consistently produce future batches meeting quality standards. Several examples based on simulated data are presented to illustrate the use of the Bayesian method in helping manufacturers make risk-based decisions for Stage 2 PPQ, and they highlight the advantages of the method over traditional Frequentist approaches. The discussions in the paper lend support for a life cycle and risk-based approach to process validation recommended in the new FDA guidance.
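As one possible reading of the Bayesian reasoning described above (a simple Beta-Binomial sketch, not Yang's actual model), the number of PPQ batches can be chosen so that observing all of them conform pushes the posterior assurance past a target; the prior, thresholds, and Stage 1 counts below are illustrative only:

```python
# A Beta-Binomial sketch of the idea, not Yang's actual model; the prior,
# thresholds, and Stage 1 counts are illustrative only.
from scipy.stats import beta

def ppq_batches(stage1_pass: int, stage1_fail: int,
                p0: float = 0.90, assurance: float = 0.80,
                n_max: int = 30) -> int:
    """Smallest n such that n conforming PPQ batches, added to Stage 1
    evidence, give P(batch-conformance probability >= p0) >= assurance."""
    a, b = 1 + stage1_pass, 1 + stage1_fail   # Beta(1,1) prior + Stage 1 data
    for n in range(1, n_max + 1):
        if beta.sf(p0, a + n, b) >= assurance:   # posterior after n passes
            return n
    return n_max

print(ppq_batches(stage1_pass=10, stage1_fail=0))   # 5 under these settings
```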
The validation of the Supervision of Thesis Questionnaire (STQ).
Henricson, Maria; Fridlund, Bengt; Mårtensson, Jan; Hedberg, Berith
2018-06-01
The supervision process is characterized by differences between the supervisors' and the students' expectations, both before the start of writing a bachelor thesis and after its completion. A review of the literature did not reveal any scientifically tested questionnaire for evaluating nursing students' expectations of the supervision process when writing a bachelor thesis. The aim of the study was to determine the construct validity and internal consistency reliability of a questionnaire for measuring nursing students' expectations of the bachelor thesis supervision process. The study had a developmental and methodological design carried out in four steps: construction of the items, assessment of face validity, data collection, and data analysis, including statistical procedures for construct validity and internal consistency reliability. This study was conducted at a university in southern Sweden, where students on the "Nursing student thesis, 15 ECTS" course were consecutively selected for participation. Of the 512 questionnaires distributed, 327 were returned, a response rate of 64%. Five factors with a total variance of 74% and good communalities, ≥0.64, were extracted from the 10-item STQ. The internal consistency of the 10 items was 0.68. The five factors were labelled: the nature of the supervision process; the supervisor's role as a coach; the students' progression to self-support; the interaction between students and supervisor; and supervisor competence. A didactic, useful and sound questionnaire measuring nursing students' expectations of the bachelor thesis supervision process, based on three main forms of supervision, was created. Copyright © 2018 Elsevier Ltd. All rights reserved.
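Internal consistency figures like the 0.68 reported above are typically Cronbach's alpha; a generic computation (simulated responses, not the STQ data) looks like this:

```python
# Simulated responses, not the STQ data; shows the standard alpha formula.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(1)
latent = rng.normal(size=(327, 1))                        # shared trait
scores = latent + rng.normal(scale=1.0, size=(327, 10))   # 10 noisy items
print(round(cronbach_alpha(scores), 2))
```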
Designing Interactive Electronic Module in Chemistry Lessons
NASA Astrophysics Data System (ADS)
Irwansyah, F. S.; Lubab, I.; Farida, I.; Ramdhani, M. A.
2017-09-01
This research aims to design an electronic module (e-module) oriented to the development of students' chemical literacy on the topic of the colligative properties of solutions. The research comprises several stages, including concept analysis, discourse analysis, storyboard design, design development, product packaging, validation, and a feasibility test. Overall, the research undertakes three main stages: Define (a preliminary study); Design (designing the e-module); and Develop (including validation and a model trial). The concept presentation and visualization used in this e-module are oriented to chemical literacy skills, with the presentation ordered by the aspects of scientific context, process, content, and attitude. Chemists and multimedia experts carried out the validation to test the initial quality of the product and gave feedback for its improvement. The feasibility test results showed that the content presentation and the display are valid and feasible for use, with scores of 85.77% and 87.94%, respectively. These values indicate that this e-module, oriented to students' chemical literacy skills on the colligative properties of solutions, is feasible for use.
Smerecnik, Chris M R; Mesters, Ilse; Candel, Math J J M; De Vries, Hein; De Vries, Nanne K
2012-01-01
The role of information processing in understanding people's responses to risk information has recently received substantial attention. One limitation of this research concerns the unavailability of a validated questionnaire on information processing. This article presents two studies in which we describe the development and validation of the Information-Processing Questionnaire to meet that need. Study 1 describes the development and initial validation of the questionnaire. Participants were randomized to either a systematic processing or a heuristic processing condition, after which they completed a manipulation check and the initial 15-item questionnaire, and completed the questionnaire again two weeks later. The questionnaire was subjected to factor, reliability, and validity analyses at both measurement times for purposes of cross-validation of the results. A two-factor solution was observed, representing a systematic processing and a heuristic processing subscale. The resulting scale showed good reliability and validity, with the systematic condition scoring significantly higher on the systematic subscale and the heuristic processing condition significantly higher on the heuristic subscale. Study 2 sought to further validate the questionnaire in a field study. Results of the second study corresponded with those of Study 1 and provided further evidence of the validity of the Information-Processing Questionnaire. The availability of this information-processing scale will be a valuable asset for future research and may provide researchers with new research opportunities. © 2011 Society for Risk Analysis.
Measuring Adverse Events in Helicopter Emergency Medical Services: Establishing Content Validity
Patterson, P. Daniel; Lave, Judith R.; Martin-Gill, Christian; Weaver, Matthew D.; Wadas, Richard J.; Arnold, Robert M.; Roth, Ronald N.; Mosesso, Vincent N.; Guyette, Francis X.; Rittenberger, Jon C.; Yealy, Donald M.
2015-01-01
Introduction We sought to create a valid framework for detecting Adverse Events (AEs) in the high-risk setting of Helicopter Emergency Medical Services (HEMS). Methods We assembled a panel of 10 expert clinicians (n=6 emergency medicine physicians and n=4 prehospital nurses and flight paramedics) affiliated with a large multi-state HEMS organization in the Northeast U.S. We used a modified Delphi technique to develop a framework for detecting AEs associated with the treatment of critically ill or injured patients. We used a widely applied measure, the Content Validity Index (CVI), to quantify the validity of the framework’s content. Results The expert panel of 10 clinicians reached consensus on a common AE definition and four-step protocol/process for AE detection in HEMS. The consensus-based framework is composed of three main components: 1) a trigger tool, 2) a method for rating proximal cause, and 3) a method for rating AE severity. The CVI findings isolate components of the framework considered content valid. Conclusions We demonstrate a standardized process for the development of a content valid framework for AE detection. The framework is a model for the development of a method for AE identification in other settings, including ground-based EMS. PMID:24003951
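The Content Validity Index mentioned above is, in its widely used form, the proportion of experts rating an item 3 or 4 on a 4-point relevance scale; a small sketch with hypothetical ratings:

```python
# Hypothetical ratings from 10 experts on a 4-point relevance scale.
ratings = {
    "trigger item 1": [4, 4, 3, 4, 3, 4, 4, 3, 4, 4],
    "trigger item 2": [4, 3, 2, 4, 4, 3, 4, 4, 3, 4],
}

def item_cvi(scores):
    """Share of experts rating the item relevant (3 or 4)."""
    return sum(s >= 3 for s in scores) / len(scores)

i_cvis = {item: item_cvi(s) for item, s in ratings.items()}
s_cvi_ave = sum(i_cvis.values()) / len(i_cvis)   # scale-level CVI (average)
print(i_cvis, round(s_cvi_ave, 2))
```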
2003-12-16
KENNEDY SPACE CENTER, FLA. - The orbiter Atlantis is backed out of the Vehicle Assembly Building for transfer back to the Orbiter Processing Facility. Atlantis spent 10 days in the VAB to allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work included annual validation of the bay's cranes, work platforms, lifting mechanisms and jack stands. Work resumes to prepare Atlantis for launch in September 2004 on the first return-to-flight mission, STS-114.
2004-01-09
KENNEDY SPACE CENTER, FLA. -- Endeavour settles into place inside the Vehicle Assembly Building (VAB) where it has been moved for temporary storage. It left the Orbiter Processing Facility (OPF) to allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the OPF includes annual validation of the bay’s cranes, work platforms, lifting mechanisms and jack stands. Endeavour will remain in the VAB for approximately 12 days, then return to the OPF.
2004-01-09
KENNEDY SPACE CENTER, FLA. -- Endeavour begins rolling out of the Orbiter Processing Facility for temporary transfer to the Vehicle Assembly Building. The move allows work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the OPF includes annual validation of the bay’s cranes, work platforms, lifting mechanisms and jack stands. Endeavour will remain in the VAB for approximately 12 days, then return to the OPF.
2004-01-09
KENNEDY SPACE CENTER, FLA. -- Endeavour rolls into the Vehicle Assembly Building (VAB) for temporary storage. The orbiter has been moved from the Orbiter Processing Facility (OPF) to allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the OPF includes annual validation of the bay’s cranes, work platforms, lifting mechanisms and jack stands. Endeavour will remain in the VAB for approximately 12 days, then return to the OPF.
2004-01-09
KENNEDY SPACE CENTER, FLA. -- Endeavour is towed toward the Vehicle Assembly Building for temporary storage. The orbiter has been moved from the Orbiter Processing Facility (OPF) to allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the OPF includes annual validation of the bay’s cranes, work platforms, lifting mechanisms and jack stands. Endeavour will remain in the VAB for approximately 12 days, then return to the OPF.
2003-12-16
KENNEDY SPACE CENTER, FLA. - The orbiter Atlantis rolls into the Orbiter Processing Facility after spending 10 days in the Vehicle Assembly Building. The hiatus in the VAB allowed work to be performed in the OPF that can only be accomplished while the bay is empty. Work included annual validation of the bay's cranes, work platforms, lifting mechanisms and jack stands. Work resumes to prepare Atlantis for launch in September 2004 on the first return-to-flight mission, STS-114.
2004-01-09
KENNEDY SPACE CENTER, FLA. -- Endeavour is ready to be rolled out of the Orbiter Processing Facility for temporary transfer to the Vehicle Assembly Building. The move allows work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the OPF includes annual validation of the bay’s cranes, work platforms, lifting mechanisms and jack stands. Endeavour will remain in the VAB for approximately 12 days, then return to the OPF.
2003-12-16
KENNEDY SPACE CENTER, FLA. - The orbiter Atlantis is back inside the Orbiter Processing Facility after spending 10 days in the Vehicle Assembly Building. The hiatus in the VAB allowed work to be performed in the OPF that can only be accomplished while the bay is empty. Work included annual validation of the bay's cranes, work platforms, lifting mechanisms and jack stands. Work resumes to prepare Atlantis for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-16
KENNEDY SPACE CENTER, FLA. -- The orbiter Atlantis is backed away from the Vehicle Assembly Building for transfer back to the Orbiter Processing Facility. Atlantis spent 10 days in the VAB to allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work included annual validation of the bay's cranes, work platforms, lifting mechanisms and jack stands. Work resumes to prepare Atlantis for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-16
KENNEDY SPACE CENTER, FLA. - The orbiter Atlantis rolls toward the Orbiter Processing Facility after spending 10 days in the Vehicle Assembly Building. The hiatus in the VAB allowed work to be performed in the OPF that can only be accomplished while the bay is empty. Work included annual validation of the bay's cranes, work platforms, lifting mechanisms and jack stands. Work resumes to prepare Atlantis for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-16
KENNEDY SPACE CENTER, FLA. -- The orbiter Atlantis is backed out of the Vehicle Assembly Building for transfer back to the Orbiter Processing Facility. Atlantis spent 10 days in the VAB to allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work included annual validation of the bay's cranes, work platforms, lifting mechanisms and jack stands. Work resumes to prepare Atlantis for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-16
KENNEDY SPACE CENTER, FLA. - The orbiter Atlantis rolls out of the Vehicle Assembly Building for transfer back to the Orbiter Processing Facility. Atlantis spent 10 days in the VAB to allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work included annual validation of the bay's cranes, work platforms, lifting mechanisms and jack stands. Work resumes to prepare Atlantis for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-16
KENNEDY SPACE CENTER, FLA. - The orbiter Atlantis is towed back to the Orbiter Processing Facility after spending 10 days in the Vehicle Assembly Building. The hiatus in the VAB allowed work to be performed in the OPF that can only be accomplished while the bay is empty. Work included annual validation of the bay's cranes, work platforms, lifting mechanisms and jack stands. Work resumes to prepare Atlantis for launch in September 2004 on the first return-to-flight mission, STS-114.
Caro-Bautista, Jorge; Martín-Santos, Francisco Javier; Morales-Asencio, Jose Miguel
2014-06-01
To determine the psychometric properties and theoretical grounding of instruments that evaluate self-care behaviour or barriers in people with type 2 diabetes. There are many instruments designed to evaluate self-care behaviour or barriers in this population, but knowledge about their psychometric validation processes is lacking. Systematic review. We conducted a search for psychometric or validation studies published between January 1990 and December 2012. We carried out searches in Pubmed, CINAHL, PsycINFO, ProQuolid, BibliPRO and Google SCHOLAR to identify instruments that evaluated self-care behaviours or barriers to diabetes self-care. We conducted a systematic review with the following inclusion criteria: psychometric or clinimetric validation studies that included patients with type 2 diabetes (exclusively or partially) and that analysed self-care behaviour or barriers to self-care and proxies such as self-efficacy or empowerment, from a multidimensional approach. Language: Spanish or English. Two authors independently assessed the quality of the studies and extracted data using Terwee's proposed criteria: psychometric properties, dimensionality, theoretical grounding and the population used for validation for each included instrument. Sixteen instruments met the inclusion criteria for the review. We detected important methodological flaws in many of the selected instruments. Only the Self-management Profile for Type 2 Diabetes and the Problem Areas in Diabetes Scale met half of Terwee's quality criteria. There are no instruments for identifying self-care behaviours or barriers that were developed with a strong validation process. Further research should be carried out to provide patients, clinicians and researchers with valid and reliable instruments that are methodologically solid and theoretically grounded. © 2013 John Wiley & Sons Ltd.
Biophysics: for HTS hit validation, chemical lead optimization, and beyond.
Genick, Christine C; Wright, S Kirk
2017-09-01
There are many challenges to the drug discovery process, including the complexity of the target, its interactions, and how these factors play a role in causing the disease. Traditionally, biophysics has been used for hit validation and chemical lead optimization. With its increased throughput and sensitivity, biophysics is now being applied earlier in this process to empower target characterization and hit finding. Areas covered: In this article, the authors provide an overview of how biophysics can be utilized to assess the quality of the reagents used in screening assays, to validate potential tool compounds, to test the integrity of screening assays, and to create follow-up strategies for compound characterization. They also briefly discuss the utilization of different biophysical methods in hit validation to help avoid the resource consuming pitfalls caused by the lack of hit overlap between biophysical methods. Expert opinion: The use of biophysics early on in the drug discovery process has proven crucial to identifying and characterizing targets of complex nature. It also has enabled the identification and classification of small molecules which interact in an allosteric or covalent manner with the target. By applying biophysics in this manner and at the early stages of this process, the chances of finding chemical leads with novel mechanisms of action are increased. In the future, focused screens with biophysics as a primary readout will become increasingly common.
Buekenhout, Imke; Leitão, José; Gomes, Ana A
2018-05-24
Month ordering tasks have been used in experimental settings to obtain measures of working memory (WM) capacity in older/clinical groups based solely on their face validity. We sought to assess the appropriateness of using a month ordering task in other contexts, including clinical settings, as a psychometrically sound WM assessment. To this end, we constructed a month ordering task (ucMOT), studied its reliability (internal consistency and temporal stability), and gathered construct-related and criterion-related validity evidence for its use as a WM assessment. The ucMOT proved to be internally consistent and temporally stable, and analyses of the criterion-related validity evidence revealed that its scores predicted the efficiency of language comprehension processes known to depend crucially on WM resources, namely, processes involved in pronoun interpretation. Furthermore, all ucMOT items discriminated between younger and older age groups; the global scores were significantly correlated with scores on well-established WM tasks and presented lower correlations with instruments that evaluate different (although related) processes, namely, inhibition and processing speed. We conclude that the ucMOT possesses solid psychometric properties. Accordingly, we acquired normative data for the Portuguese population, which we present as a regression-based algorithm that yields z scores adjusted for age, gender, and years of formal education. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
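The regression-based norming mentioned above can be sketched generically: predict the expected score from demographics and standardize the residual. The coefficients below are invented placeholders, not the published Portuguese norms:

```python
# Invented placeholder coefficients, not the published Portuguese norms.
def adjusted_z(raw: float, age: float, female: int, education: float,
               b0: float = 8.0, b_age: float = -0.05,
               b_sex: float = 0.3, b_edu: float = 0.15,
               rmse: float = 2.0) -> float:
    """z score of the residual after regressing out demographics."""
    expected = b0 + b_age * age + b_sex * female + b_edu * education
    return (raw - expected) / rmse

print(round(adjusted_z(raw=7, age=70, female=1, education=12), 2))  # 0.2
```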
DeFelice, Thomas P.; Lloyd, D.; Meyer, D.J.; Baltzer, T. T.; Piraina, P.
2003-01-01
An atmospheric correction algorithm developed for the 1 km Advanced Very High Resolution Radiometer (AVHRR) global land dataset was modified to include a near real-time total column water vapour data input field to account for the natural variability of atmospheric water vapour. The real-time data input field used for this study is the Television Infrared Observation Satellite (TIROS) Operational Vertical Sounder (TOVS) Pathfinder A global total column water vapour dataset. It was validated prior to its use in the AVHRR atmospheric correction process using two North American AVHRR scenes, from 13 June and 28 November 1996. The validation results are consistent with those reported by others and entail a comparison between TOVS data and radiosonde, experimental sounding, microwave radiometer, and hand-held sunphotometer data. The use of this data layer as input to the AVHRR atmospheric correction process is discussed.
Haji Ali Afzali, Hossein; Gray, Jodi; Karnon, Jonathan
2013-04-01
Decision analytic models play an increasingly important role in the economic evaluation of health technologies. Given uncertainties around the assumptions used to develop such models, several guidelines have been published to identify and assess 'best practice' in the model development process, including general modelling approach (e.g., time horizon), model structure, input data and model performance evaluation. This paper focuses on model performance evaluation. In the absence of a sufficient level of detail around model performance evaluation, concerns regarding the accuracy of model outputs, and hence the credibility of such models, are frequently raised. Following presentation of its components, a review of the application and reporting of model performance evaluation is presented. Taking cardiovascular disease as an illustrative example, the review investigates the use of face validity, internal validity, external validity, and cross-model validity. As a part of the performance evaluation process, model calibration is also discussed and its use in applied studies investigated. The review found that the application and reporting of model performance evaluation across 81 studies of treatment for cardiovascular disease was variable. Cross-model validation was reported in 55% of the reviewed studies, though the level of detail provided varied considerably. We found that very few studies documented other types of validity, and only 6% of the reviewed articles reported a calibration process. Considering the above findings, we propose a comprehensive model performance evaluation framework (checklist), informed by a review of best-practice guidelines. This framework provides a basis for more accurate and consistent documentation of model performance evaluation. This will improve the peer review process and the comparability of modelling studies. Recognising the fundamental role of decision analytic models in informing public funding decisions, the proposed framework should usefully inform guidelines for preparing submissions to reimbursement bodies.
Social anxiety questionnaire (SAQ): Development and preliminary validation.
Łakuta, Patryk
2018-05-30
The Social Anxiety Questionnaire (SAQ) was designed to assess five dimensions of social anxiety as posited by Clark and Wells' (1995; Clark, 2001) cognitive model. The development of the SAQ involved generation of an item pool, followed by verification of content validity and of the theorized factor structure (Study 1). The final version of the SAQ was then assessed for reliability, temporal stability (test-retest reliability), and construct, criterion-related, and contrasted-group validity (Studies 2, 3, and 4). Following a systematic process, the results support the SAQ as a reliable and both theoretically and empirically valid measure. The five-factor structure of the SAQ, verified and replicated through confirmatory factor analyses, reflects five dimensions of social anxiety: negative self-processing; self-focused attention and self-monitoring; safety behaviours; somatic and cognitive symptoms; and anticipatory and post-event rumination. The results suggest that the SAQ possesses good psychometric properties, while recognizing that additional validation is a required future research direction. It is important to replicate these findings in diverse populations, including a large clinical sample. The SAQ is a promising measure that supports social anxiety as a multidimensional construct and the foundational role of self-focused cognitive processes in the generation and maintenance of social anxiety symptoms. The findings make a significant contribution to the literature; moreover, the SAQ is the first instrument that assesses all of the specific cognitive-affective, physiological, attitudinal, and attentional processes related to social anxiety proposed by the Clark-Wells model. Copyright © 2018 Elsevier B.V. All rights reserved.
2016-10-01
Study (PASS). We are in the process of evaluating these three biomarker panels in tissue, blood, and urine samples with well annotated clinical and...choice of therapy and decision-making during AS. The objective of the study is to utilize analytically validated assays that take into account tumor...Gleason 6 cancer to Gleason 7 or greater. The analysis plan was determined before specimens were selected for the study , and included 7 breaking
High-throughput neuroimaging-genetics computational infrastructure
Dinov, Ivo D.; Petrosyan, Petros; Liu, Zhizhong; Eggert, Paul; Hobel, Sam; Vespa, Paul; Woo Moon, Seok; Van Horn, John D.; Franco, Joseph; Toga, Arthur W.
2014-01-01
Many contemporary neuroscientific investigations face significant challenges in terms of data management, computational processing, data mining, and results interpretation. These four pillars define the core infrastructure necessary to plan, organize, orchestrate, validate, and disseminate novel scientific methods, computational resources, and translational healthcare findings. Data management includes protocols for data acquisition, archival, query, transfer, retrieval, and aggregation. Computational processing involves the necessary software, hardware, and networking infrastructure required to handle large amounts of heterogeneous neuroimaging, genetics, clinical, and phenotypic data and meta-data. Data mining refers to the process of automatically extracting data features, characteristics and associations, which are not readily visible by human exploration of the raw dataset. Results interpretation includes scientific visualization, community validation of findings, and reproducibility of findings. In this manuscript we describe the novel high-throughput neuroimaging-genetics computational infrastructure available at the Institute for Neuroimaging and Informatics (INI) and the Laboratory of Neuro Imaging (LONI) at the University of Southern California (USC). INI and LONI include ultra-high-field and standard-field MRI brain scanners along with an imaging-genetics database for storing the complete provenance of the raw and derived data and meta-data. In addition, the institute provides a large number of software tools for image and shape analysis, mathematical modeling, genomic sequence processing, and scientific visualization. A unique feature of this architecture is the Pipeline environment, which integrates data management, processing, transfer, and visualization. Through its client-server architecture, the Pipeline environment provides a graphical user interface for designing, executing, monitoring, validating, and disseminating complex protocols that utilize diverse suites of software tools and web-services. These pipeline workflows are represented as portable XML objects, which transfer the execution instructions and user specifications from the client user machine to remote pipeline servers for distributed computing. Using Alzheimer's and Parkinson's data, we provide several examples of translational applications using this infrastructure. PMID:24795619
System theory in industrial patient monitoring: an overview.
Baura, G D
2004-01-01
Patient monitoring refers to the continuous observation of repeating events of physiologic function to guide therapy or to monitor the effectiveness of interventions, and is used primarily in the intensive care unit and operating room. Commonly processed signals are the electrocardiogram, intraarterial blood pressure, arterial saturation of oxygen, and cardiac output. To this day, the majority of physiologic waveform processing in patient monitors is conducted using heuristic curve fitting. However, in the early 1990s, a few enterprising engineers and physicians began using system theory to improve their core processing. Applications included improvement of the signal-to-noise ratio, degraded either by low signal levels or by motion artifact, and improvement in feature detection. The goal of this mini-symposium is to review the early work in this emerging field, which has led to technologic breakthroughs. In this overview talk, the process of system theory algorithm research and development is discussed. Research for industrial monitors involves substantial data collection, with some data used for algorithm training and the remainder used for validation. Once the algorithms are validated, they are translated into detailed specifications. Development then translates these specifications into DSP code. The DSP code is verified and validated per the Good Manufacturing Practices mandated by the FDA.
Ruettger, Anke; Nieter, Johanna; Skrypnyk, Artem; Engelmann, Ines; Ziegler, Albrecht; Moser, Irmgard; Monecke, Stefan; Ehricht, Ralf; Sachse, Konrad
2012-07-01
Membrane-based spoligotyping has been converted to DNA microarray format to qualify it for high-throughput testing. We have shown the assay's validity and suitability for direct typing from tissue and detecting new spoligotypes. Advantages of the microarray methodology include rapidity, ease of operation, automatic data processing, and affordability.
Mission Operations of the Mars Exploration Rovers
NASA Technical Reports Server (NTRS)
Bass, Deborah; Lauback, Sharon; Mishkin, Andrew; Limonadi, Daniel
2007-01-01
A document describes a system of processes involved in planning, commanding, and monitoring operations of the rovers Spirit and Opportunity of the Mars Exploration Rover mission. The system is designed to minimize command turnaround time, given that inherent uncertainties in terrain conditions and in successful completion of planned landed spacecraft motions preclude planning of some spacecraft activities until the results of prior activities are known by the ground-based operations team. The processes are partitioned into those (designated as tactical) that must be tied to the Martian clock and those (designated strategic) that can, without loss, be completed in a more leisurely fashion. The tactical processes include assessment of downlinked data, refinement and validation of activity plans, sequencing of commands, and integration and validation of sequences. Strategic processes include communications planning and generation of long-term activity plans. The primary benefit of this partition is to enable the tactical portion of the team to focus solely on tasks that contribute directly to meeting the deadlines for commanding the rovers each sol (1 sol = 1 Martian day), achieving a turnaround time of 18 hours or less, while facilitating strategic team interactions with other organizations that do not work on a Mars time schedule.
2003-12-05
KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis moves into high bay 4 of the Vehicle Assembly Building (VAB). It was towed from the Orbiter Processing Facility (OPF) to allow work to be performed in the bay that can only be accomplished while it is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-05
KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis awaits a tow from the Orbiter Processing Facility (OPF) to the Vehicle Assembly Building (VAB). The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-05
KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis is turned into position outside the Orbiter Processing Facility (OPF) for its tow to the Vehicle Assembly Building (VAB). The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-05
KENNEDY SPACE CENTER, FLA. - Workers back the Space Shuttle orbiter Atlantis out of the Orbiter Processing Facility (OPF) for its move to the Vehicle Assembly Building (VAB). The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-05
KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis is moved into high bay 4 of the Vehicle Assembly Building (VAB). It was towed from the Orbiter Processing Facility (OPF) to allow work to be performed in the bay that can only be accomplished while it is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-05
KENNEDY SPACE CENTER, FLA. - Workers prepare to tow the Space Shuttle orbiter Atlantis from the Orbiter Processing Facility (OPF) to the Vehicle Assembly Building (VAB). The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-05
KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis is moments away from a tow from the Orbiter Processing Facility (OPF) to the Vehicle Assembly Building (VAB). The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-05
KENNEDY SPACE CENTER, FLA. - Workers monitor the Space Shuttle orbiter Atlantis as it is towed from the Orbiter Processing Facility (OPF) to the Vehicle Assembly Building (VAB). The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-05
KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis approaches the Vehicle Assembly Building (VAB) high bay 4. It is being towed from the Orbiter Processing Facility (OPF) to allow work to be performed in the bay that can only be accomplished while it is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-05
KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis approaches high bay 4 of the Vehicle Assembly Building (VAB). It was towed from the Orbiter Processing Facility (OPF) to allow work to be performed in the bay that can only be accomplished while it is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-05
KENNEDY SPACE CENTER, FLA. - Workers walk with Space Shuttle orbiter Atlantis from the Orbiter Processing Facility (OPF) to the Vehicle Assembly Building (VAB) high bay 4. The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-05
KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis backs out of the Orbiter Processing Facility (OPF) for its move to the Vehicle Assembly Building (VAB). The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-05
KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis arrives in high bay 4 of the Vehicle Assembly Building (VAB). It was towed from the Orbiter Processing Facility (OPF) to allow work to be performed in the bay that can only be accomplished while it is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-05
KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis is almost in position in high bay 4 of the Vehicle Assembly Building (VAB). It was towed from the Orbiter Processing Facility (OPF) to allow work to be performed in the bay that can only be accomplished while it is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.
2003-12-05
KENNEDY SPACE CENTER, FLA. - The Space Shuttle orbiter Atlantis is reflected in a rain puddle as it is towed from the Orbiter Processing Facility (OPF) to the Vehicle Assembly Building (VAB). The move will allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the processing facility includes annual validation of the bay's cranes, work platforms, lifting mechanisms, and jack stands. Atlantis will remain in the VAB for about 10 days, then return to the OPF as work resumes to prepare it for launch in September 2004 on the first return-to-flight mission, STS-114.
Kruithof, Nena; Van Cleef, Melanie Hubertina Maria; Rasquin, Sascha Maria Cornelia; Bovend'Eerdt, Thamar Johannes Henricus
2016-01-01
Our objective is to investigate the feasibility and validity of a new instrument to screen for determinants of poststroke fatigue during the rehabilitation process. This prospective cohort study was conducted within the stroke department of a rehabilitation center. The participants in the study were postacute adult stroke patients. The Detection List Fatigue (DLF) was administered 2 weeks after the start of the rehabilitation program and again 6 weeks later. To determine the construct validity, the Hospital Anxiety and Depression Scale, the Checklist Individual Strength subscale fatigue, and the Fatigue Severity Scale (7-item version) were administered. A fatigue rating scale was used to measure the patients' fatigue experience. Frequency analyses of the number of patients reporting poststroke fatigue determinants according to the DLF were performed. One hundred seven patients (mean age 60 years) without severe communication difficulties were included in the study. The DLF was easy to understand and quick to administer. The DLF showed good internal consistency (Cronbach's alpha: .79 and .87), high convergent validity (rs = .85 and rs = .79), and good divergent validity (rs = .31 and rs = .45). The majority of the patients (88.4%-90.2%) experienced at least 2 poststroke fatigue (PSF) determinants, of which "sleeping problem" was the most frequently reported. The DLF is a feasible and valid instrument for the screening of PSF determinants throughout the rehabilitation process in stroke patients. Future studies should investigate whether the use of the list in determining a treatment plan prevents the development of PSF.
The cross-cultural equivalence of participation instruments: a systematic review.
Stevelink, S A M; van Brakel, W H
2013-07-01
Concepts such as health-related quality of life, disability and participation may differ across cultures. Consequently, when assessing such a concept using a measure developed elsewhere, it is important to test its cultural equivalence. Previous research suggested a lack of cultural equivalence testing in several areas of measurement. This paper reviews the process of cross-cultural equivalence testing of instruments to measure participation in society. An existing cultural equivalence framework was adapted and used to assess participation instruments on five categories of equivalence: conceptual, item, semantic, measurement and operational equivalence. For each category, several aspects were rated, resulting in an overall category rating of 'minimal/none', 'partial' or 'extensive'. The best possible overall study rating was five 'extensive' ratings. Articles were included if the instruments focussed explicitly on measuring 'participation' and were theoretically grounded in the ICIDH(-2) or ICF. Cross-validation articles were only included if they concerned an adaptation of an instrument developed in a high- or middle-income country to a low-income country, or vice versa. Eight cross-cultural validation studies were included, in which five participation instruments were tested (Impact on Participation and Autonomy, London Handicap Scale, Perceived Impact and Problem Profile, Craig Handicap Assessment Reporting Technique, Participation Scale). Of these eight studies, only three received at least two 'extensive' ratings for the different categories of equivalence. The majority of the cultural equivalence ratings given were 'partial' and 'minimal/none'. The majority of the 'none/minimal' ratings were given for item and measurement equivalence. The cross-cultural equivalence testing of the participation instruments included leaves much to be desired. A detailed checklist is proposed for designing a cross-validation study. Once a study has been conducted, the checklist can be used to ensure comprehensive reporting of the validation (equivalence) testing process and its results.
• Participation instruments are often used in a different cultural setting than the one they were initially developed for.
• The conceptualization of participation may vary across cultures. Therefore, cultural equivalence, the extent to which an instrument is equally suitable for use in two or more cultures, is an important concept to address.
• This review showed that the process of cultural equivalence testing of the included participation instruments was often addressed insufficiently.
• Clinicians should be aware that applying participation instruments in a different culture than the one they were initially developed for requires prior testing of cultural validity in the new context.
A systematic review of validated sinus surgery simulators.
Stew, B; Kao, S S-T; Dharmawardana, N; Ooi, E H
2018-06-01
Simulation provides a safe and effective opportunity to develop surgical skills. A variety of endoscopic sinus surgery (ESS) simulators has been described in the literature. Validation of these simulators allows for their effective utilisation in training. To conduct a systematic review of the published literature to analyse the evidence for validated ESS simulation. Pubmed, Embase, Cochrane and Cinahl were searched from inception of the databases to 11 January 2017. Twelve thousand five hundred and sixteen articles were retrieved, of which 10,112 were screened following the removal of duplicates. Thirty-eight full-text articles were reviewed after meeting the search criteria. Evidence of face, content, construct, discriminant and predictive validity was extracted. Twenty articles were included in the analysis, describing 12 ESS simulators. Eleven of these simulators had undergone validation: 3 virtual reality, 7 physical bench models and 1 cadaveric simulator. Seven of the simulators were shown to have face validity, 7 had construct validity and 1 had predictive validity. None of the simulators demonstrated discriminant validity. This systematic review demonstrates that a number of ESS simulators have been comprehensively validated. Many of the validation processes, however, lack standardisation in outcome reporting, thus limiting a meta-analysis comparison between simulators. © 2017 John Wiley & Sons Ltd.
Causation and Validation of Nursing Diagnoses: A Middle Range Theory.
de Oliveira Lopes, Marcos Venícios; da Silva, Viviane Martins; Herdman, T Heather
2017-01-01
To describe a predictive middle range theory (MRT) that provides a process for validation and incorporation of nursing diagnoses in clinical practice. Literature review. The MRT includes definitions, a pictorial scheme, propositions, causal relationships, and translation to nursing practice. The MRT can be a useful alternative for education, research, and translation of this knowledge into practice. This MRT can assist clinicians in understanding clinical reasoning, based on temporal logic and spectral interaction among elements of nursing classifications. In turn, this understanding will improve the use and accuracy of nursing diagnosis, which is a critical component of the nursing process that forms a basis for nursing practice standards worldwide. © 2015 NANDA International, Inc.
Validation of the Chinese expanded Euthanasia Attitude Scale.
Chong, Alice Ming-Lin; Fok, Shiu-Yeu
2013-01-01
This article reports the validation of the Chinese version of an expanded 31-item Euthanasia Attitude Scale. A 4-stage validation process included a pilot survey of 119 college students and a randomized household survey with 618 adults in Hong Kong. Confirmatory factor analysis confirmed a 4-factor structure of the scale, which can therefore be used to examine attitudes toward general, active, passive, and non-voluntary euthanasia. The scale considers the role effect in decision-making about euthanasia requests and facilitates cross-cultural comparison of attitudes toward euthanasia. The new Chinese scale is more robust than its Western predecessors, both conceptually and in its measurement properties.
Failure mode and effects analysis outputs: are they valid?
2012-01-01
Background: Failure Mode and Effects Analysis (FMEA) is a prospective risk assessment tool that has been widely used within the aerospace and automotive industries and has been utilised within healthcare since the early 1990s. The aim of this study was to explore the validity of FMEA outputs within a hospital setting in the United Kingdom. Methods: Two multidisciplinary teams each conducted an FMEA for the use of vancomycin and gentamicin. Four different validity tests were conducted:
· Face validity: by comparing the FMEA participants' mapped processes with observational work.
· Content validity: by presenting the FMEA findings to other healthcare professionals.
· Criterion validity: by comparing the FMEA findings with data reported on the trust's incident report database.
· Construct validity: by exploring the relevant mathematical theories involved in calculating the FMEA risk priority number (RPN).
Results: Face validity was positive, as the researcher documented the same processes of care as mapped by the FMEA participants. However, other healthcare professionals identified potential failures missed by the FMEA teams. Furthermore, the FMEA groups failed to include failures related to omitted doses, yet these were the failures most commonly reported in the trust's incident database. Calculating the RPN by multiplying severity, probability and detectability scores was deemed invalid because it is based on calculations that breach the mathematical properties of the scales used. Conclusion: There are significant methodological challenges in validating FMEA. It is a useful tool to aid multidisciplinary groups in mapping and understanding a process of care; however, the results of our study cast doubt on its validity. FMEA teams are likely to need different sources of information, besides their personal experience and knowledge, to identify potential failures. As for FMEA's methodology for scoring failures, there were discrepancies between the teams' estimates and similar incidents reported on the trust's incident database. Furthermore, the concept of multiplying ordinal scales to prioritise failures is mathematically flawed. Until FMEA's validity is further explored, healthcare organisations should not solely depend on their FMEA results to prioritise patient safety issues. PMID:22682433
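The mathematical objection above is easy to demonstrate concretely. The sketch below (Python, with invented scores that are not from the study) shows the conventional RPN calculation and how two very different failure profiles collapse onto the same priority number:

```python
# Illustrative sketch of the RPN calculation criticised above: severity (S),
# occurrence/probability (O) and detectability (D) are ordinal 1-10 ratings,
# yet the Risk Priority Number multiplies them as if they were ratio-scale.
def rpn(severity: int, occurrence: int, detectability: int) -> int:
    for score in (severity, occurrence, detectability):
        if not 1 <= score <= 10:
            raise ValueError("FMEA scores are ordinal ratings from 1 to 10")
    return severity * occurrence * detectability

# Two very different failure profiles yield the same RPN, which is one
# reason multiplying ordinal scales is mathematically flawed.
catastrophic_but_rare = rpn(severity=10, occurrence=1, detectability=4)   # 40
minor_but_frequent    = rpn(severity=2,  occurrence=10, detectability=2)  # 40
assert catastrophic_but_rare == minor_but_frequent
```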
AlJohani, Khalid A; Kendall, Garth E; Snider, Paul D
2016-01-01
To translate and examine the psychometric properties of the Arabic version of the Summary of Diabetes Self-Care Activities. An instrument translation and validation study. A total of 243 participants (33 in the first sample and 210 in the second sample) diagnosed with type 2 diabetes mellitus were recruited from four primary health care centers in Saudi Arabia. The study was guided by the World Health Organization guidelines for the translation and validation of instruments. Translation indicators showed satisfactory outcomes for each process included in the forward-translation, expert panel, and back-translation stages. Reliability and validity outcomes were as follows: test-retest, r = .912 and p < .001; split-half = .9; and Cronbach's alpha (α) = .76. The alpha scores for the subscales were as follows: diet, .89; exercise, .83; blood glucose testing, .92; and foot care, .77. Principal component analysis revealed the presence of four components with eigenvalues greater than 1.0, explaining 34.4, 16.4, 15.4, and 11.2% of the variance in everyday practices for these items, respectively (cumulative = 77.1%). The translation and validation processes revealed acceptable psychometric properties. The instrument can evaluate diabetes self-care in Saudi Arabia and has the potential to be used in other Arabic-speaking populations. © The Author(s) 2014.
Hill, Jonathan C; Kang, Sujin; Benedetto, Elena; Myers, Helen; Blackburn, Steven; Smith, Stephanie; Hay, Elaine; Rees, Jonathan; Beard, David; Glyn-Jones, Sion; Barker, Karen; Ellis, Benjamin; Fitzpatrick, Ray; Price, Andrew
2016-01-01
Objectives: Current musculoskeletal outcome tools are fragmented across different healthcare settings and conditions. Our objectives were to develop and validate a single musculoskeletal outcome measure for use throughout the pathway and with patients with different musculoskeletal conditions: the Arthritis Research UK Musculoskeletal Health Questionnaire (MSK-HQ).
Setting: A consensus workshop with stakeholders from across the musculoskeletal community, plus workshops and individual interviews with a broad mix of musculoskeletal patients, identified and prioritised outcomes for MSK-HQ inclusion. Initial psychometric validation was conducted in four cohorts from community physiotherapy and secondary care orthopaedic hip, knee and shoulder clinics.
Participants: Stakeholders (n=29) included primary care, physiotherapy, orthopaedic and rheumatology patients (n=8); general practitioners, physiotherapists, orthopaedists, rheumatologists and pain specialists (n=7); patient and professional national body representatives (n=10); and researchers (n=4). The four validation cohorts included 570 participants (n=210 physiotherapy, n=150 hip, n=150 knee, n=60 shoulder patients).
Outcome measures: Outcomes included the MSK-HQ's acceptability, feasibility, comprehension, readability and responder burden. The validation cohort outcomes were the MSK-HQ's completion rate, test–retest reliability and convergent validity with reference standards (EQ-5D-5L; Oxford Hip, Knee and Shoulder Scores; and the Keele MSK-PROM).
Results: Musculoskeletal domains prioritised were pain severity, physical function, work interference, social interference, sleep, fatigue, emotional health, physical activity, independence, understanding, confidence to self-manage and overall impact. Patients reported MSK-HQ items to be 'highly relevant' and 'easy to understand'. Completion rates were high (94.2%), with scores normally distributed and no floor/ceiling effects. Test–retest reliability was excellent, and convergent validity was strong (correlations 0.81–0.88).
Conclusions: A new musculoskeletal outcome measure has been developed through a coproduction process with patients to capture prioritised outcomes for use throughout the pathway and with different musculoskeletal conditions. Four validation cohorts found that the MSK-HQ had high completion rates, excellent test–retest reliability and strong convergent validity with reference standards. Further validation studies are ongoing, including a cohort with rheumatoid/inflammatory arthritis. PMID:27496243
Vaucher, Paul; Cardoso, Isabel; Veldstra, Janet L.; Herzig, Daniela; Herzog, Michael; Mangin, Patrice; Favrat, Bernard
2014-01-01
When facing age-related cerebral decline, older adults are unequally affected by cognitive impairment, without us knowing why. To explore underlying mechanisms and find possible solutions to maintain life-space mobility, there is a need for a standardized behavioral test that relates to behaviors in natural environments. The aim of the project described in this paper was therefore to provide a free, reliable, transparent, computer-based instrument capable of detecting age-related changes in visual processing and cortical functions for the purposes of research into human behavior in computational transportation science. After establishing content validity and exploring the psychometric properties of the developed tasks, we derived the scoring method for measuring cerebral decline from 106 older drivers aged ≥70 years attending a driving refresher course organized by the Swiss Automobile Association, and tested the instrument's validity against their on-road driving performance (Study 1). We then validated the derived method on a new sample of 182 drivers (Study 2), measured the instrument's reliability by having 17 healthy, young volunteers repeat all tests included in the instrument five times (Study 3), and explored the instrument's underlying psychophysical functions in 47 older drivers (Study 4). Finally, we tested the instrument's responsiveness to alcohol and its effects on performance on a driving simulator in a randomized, double-blinded, placebo, crossover, dose-response validation trial including 20 healthy, young volunteers (Study 5). The developed instrument revealed good psychometric properties related to processing speed. It was reliable (ICC = 0.853), showed a reasonable association with on-road driving performance (R2 = 0.053), and responded to blood alcohol concentrations of 0.5 g/L (p = 0.008). Our results suggest that MedDrive is capable of detecting age-related changes that affect processing speed. These changes nevertheless do not necessarily affect driving behavior. PMID:25346674
[Validation and verification of microbiology methods].
Camaró-Sala, María Luisa; Martínez-García, Rosana; Olmos-Martínez, Piedad; Catalá-Cuenca, Vicente; Ocete-Mochón, María Dolores; Gimeno-Cardona, Concepción
2015-01-01
Clinical microbiologists should ensure, to the maximum level allowed by scientific and technical development, the reliability of their results. This implies that, in addition to meeting the technical criteria that ensure their validity, tests must be performed under conditions that allow comparable results to be obtained, regardless of the laboratory that performs them. In this sense, the use of recognized and accepted reference methods is the most effective tool to provide these guarantees. Activities related to the verification and validation of analytical methods have become very important, given the continuous development and updating of techniques, increasingly complex analytical equipment, and the interest of professionals in ensuring quality processes and results. The definitions of validation and verification are described, along with the different types of validation/verification, the types of methods, and the level of validation necessary depending on the degree of standardization. The situations in which validation/verification is mandatory and/or recommended are discussed, including those particularly related to validation in Microbiology. The importance of promoting the use of reference strains as controls in Microbiology and the use of standard controls is stressed, as well as the importance of participation in External Quality Assessment programs to demonstrate technical competence. Emphasis is placed on how to calculate some of the parameters required for validation/verification, such as accuracy and precision. The development of these concepts can be found in the microbiological process SEIMC number 48: «Validation and verification of microbiological methods» (www.seimc.org/protocols/microbiology). Copyright © 2013 Elsevier España, S.L.U. y Sociedad Española de Enfermedades Infecciosas y Microbiología Clínica. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linard, Joshua; Campbell, Sam
This event included annual sampling of groundwater and surface water locations at the Gunnison, Colorado, Processing Site. Sampling and analyses were conducted as specified in the Sampling and Analysis Plan for U.S. Department of Energy Office of Legacy Management Sites. Samples were collected in April from 28 monitoring wells, three domestic wells, and six surface locations at the processing site, as specified in the 2010 Ground Water Compliance Action Plan for the Gunnison, Colorado, Processing Site. Domestic wells 0476 and 0477 were sampled in July because the homes were unoccupied in April and the wells were not in use. Duplicate samples were collected from locations 0113, 0248, and 0477. One equipment blank was collected during this sampling event. Water levels were measured at all monitoring wells that were sampled. No issues were identified during the data validation process that require additional action or follow-up.
DOE Office of Scientific and Technical Information (OSTI.GOV)
None
2011-02-25
There are many voices calling for a future of abundant clean energy. The choices are difficult and the challenges daunting. How will we get there? The National Renewable Energy Laboratory integrates the entire spectrum of innovation including fundamental science, market relevant research, systems integration, testing and validation, commercialization and deployment. The innovation process at NREL is interdependent and iterative. Many scientific breakthroughs begin in our own laboratories, but new ideas and technologies come to NREL at any point along the innovation spectrum to be validated and refined for commercial use.
[Failure modes and effects analysis in the prescription, validation and dispensing process].
Delgado Silveira, E; Alvarez Díaz, A; Pérez Menéndez-Conde, C; Serna Pérez, J; Rodríguez Sagrado, M A; Bermejo Vicedo, T
2012-01-01
To apply a failure modes and effects analysis to the prescription, validation and dispensing process for hospitalised patients. A work group analysed all of the stages included in the process from prescription to dispensing, identifying the most critical errors and establishing potential failure modes that could produce an error. The possible causes, their potential effects, and the existing control systems were analysed to try to stop them from developing. The Hazard Score was calculated, and failure modes scoring ≥ 8 were chosen; those with a Severity Index = 4 were selected independently of the Hazard Score value. Corrective measures and an implementation plan were proposed. A flow diagram describing the whole process was obtained. A risk analysis was conducted of the chosen critical points, indicating: failure mode, cause, effect, severity, probability, Hazard Score, suggested preventative measure and the strategy to achieve it. The failure modes chosen were: prescription on the nurse's form, progress notes or treatment order (paper); prescription to the incorrect patient; transcription error by nursing staff and pharmacist; and error preparing the trolley. By applying a failure modes and effects analysis to the prescription, validation and dispensing process, we have been able to identify critical aspects, the stages in which errors may occur, and their causes. It has allowed us to analyse the effects on the safety of the process and to establish measures to prevent or reduce them. Copyright © 2010 SEFH. Published by Elsevier España. All rights reserved.
76 FR 1138 - Enhanced Assessment Instruments
Federal Register 2010, 2011, 2012, 2013, 2014
2011-01-07
... priorities: (a) Collaborating with institutions of higher education, other research institutions, or other... a member State may hold); (2) The consortium's method and process (e.g., consensus, majority) for... available on an ongoing basis for research, including for prospective linking, validity, and program...
ERIC Educational Resources Information Center
Plum, Terry; Smalley, Topsy N.
1994-01-01
Discussion of humanities research focuses on the humanist patron as author of the text. Highlights include the research process; style of expression; interpretation; multivocality; reflexivity; social validation; repatriation; the image of the library for the author; patterns of searching behavior; and reference librarian responses. (37…
PERCLOS: A Valid Psychophysiological Measure of Alertness As Assessed by Psychomotor Vigilance
DOT National Transportation Integrated Search
2002-04-01
The Logical Architecture is based on a Computer Aided Systems Engineering (CASE) model of the requirements for the flow of data and control through the various functions included in Intelligent Transportation Systems (ITS). Process Specifications pro...
Newgard, Craig D.; Zive, Dana; Jui, Jonathan; Weathers, Cody; Daya, Mohamud
2011-01-01
Objectives: To compare case ascertainment, agreement, validity, and missing values for clinical research data obtained, processed, and linked electronically from electronic health records (EHR), compared to "manual" data processing and record abstraction in a cohort of out-of-hospital trauma patients.
Methods: This was a secondary analysis of two sets of data collected for a prospective, population-based, out-of-hospital trauma cohort evaluated by 10 emergency medical services (EMS) agencies transporting to 16 hospitals, from January 1, 2006 through October 2, 2007. Eighteen clinical, operational, procedural, and outcome variables were collected and processed separately and independently using two parallel data processing strategies, by personnel blinded to patients in the other group. The electronic approach included electronic health record data exports from EMS agencies, reformatting, and probabilistic linkage to outcomes from local trauma registries and state discharge databases. The manual data processing approach included chart matching, data abstraction, and data entry by a trained abstractor. Descriptive statistics, measures of agreement, and validity were used to compare the two approaches to data processing.
Results: During the 21-month period, 418 patients underwent both data processing methods and formed the primary cohort. Agreement was good to excellent (kappa 0.76 to 0.97; intraclass correlation coefficient 0.49 to 0.97), with exact agreement in 67% to 99% of cases, and a median difference of zero for all continuous and ordinal variables. The proportions of missing out-of-hospital values were similar between the two approaches, although electronic processing generated more missing outcomes (87 out of 418, 21%, 95% CI = 17% to 25%) than the manual approach (11 out of 418, 3%, 95% CI = 1% to 5%). Case ascertainment of eligible injured patients was greater using electronic methods (n = 3,008) compared to manual methods (n = 629).
Conclusions: In this sample of out-of-hospital trauma patients, an all-electronic data processing strategy identified more patients and generated values with good agreement and validity compared to traditional data collection and processing methods. PMID:22320373
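For readers unfamiliar with the agreement statistic reported here, Cohen's kappa corrects raw percent agreement for the agreement expected by chance alone. A minimal sketch with made-up data (not the study's), for one binary variable coded by both the electronic and manual strategies:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two categorical codings of the same cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    # Chance agreement: product of each coder's marginal proportions.
    expected = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (observed - expected) / (1 - expected)

# Hypothetical binary field (e.g., "procedure performed") from both pipelines.
electronic = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
manual     = [1, 1, 0, 1, 0, 1, 1, 1, 0, 0]
print(round(cohens_kappa(electronic, manual), 2))  # 0.58 on this toy data
```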
DOE Office of Scientific and Technical Information (OSTI.GOV)
Erickson, Paul A.; Liao, Chang-hsien
2007-11-15
A passive flow disturbance has been proven to enhance the conversion of fuel in a methanol-steam reformer. This study presents a statistical validation of the experiment based on a standard 2^k factorial experiment design and the resulting empirical model of the enhanced hydrogen producing process. A factorial experiment design was used to statistically analyze the effects and interactions of various input factors in the experiment. Three input factors, including the number of flow disturbers, catalyst size, and reactant flow rate, were investigated for their effects on the fuel conversion in the steam-reformation process. Based on the experimental results, an empirical model was developed and further evaluated with an uncertainty analysis and interior point data. (author)
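A 2^k factorial design varies each of k factors between a low (-1) and high (+1) coded level and runs every combination, so main effects and interactions can be estimated from simple contrasts. The sketch below builds the 2^3 design implied by the three factors named above; the response values are placeholders, not the study's data:

```python
from itertools import product

factors = ["flow_disturbers", "catalyst_size", "reactant_flow_rate"]

# Full 2^3 design matrix in coded units: every combination of low/high levels.
design = list(product([-1, +1], repeat=len(factors)))

def main_effect(responses, column):
    """Average response at the high level minus average at the low level."""
    high = [y for run, y in zip(design, responses) if run[column] == +1]
    low  = [y for run, y in zip(design, responses) if run[column] == -1]
    return sum(high) / len(high) - sum(low) / len(low)

# Placeholder fuel-conversion responses (%) for the 8 runs, one per design row.
responses = [61.0, 64.5, 58.9, 66.2, 70.1, 74.8, 69.5, 77.0]
for i, name in enumerate(factors):
    print(f"{name}: {main_effect(responses, i):+.2f}")
```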
Setup, Validation and Quality Control of a Centralized WGS laboratory - Lessons Learned.
Arnold, Cath; Edwards, Kirstin; Desai, Meeta; Platt, Steve; Green, Jonathan; Conway, David
2018-04-25
Routine use of whole genome sequencing (WGS) analysis for infectious diseases can inform various scenarios pertaining to public health, including identification of microbial pathogens; relating individual cases to an outbreak of infectious disease; establishing an association between an outbreak of food poisoning and a specific food vehicle; inferring drug susceptibility; source tracing of contaminants; and studying how variations in the genome affect pathogenicity/virulence. We describe the setup, validation and ongoing verification of a centralised WGS laboratory to carry out the sequencing for these public health functions for the National Infection Services, Public Health England in the UK. The performance characteristics and quality control metrics measured during validation and verification of the entire end-to-end process (accuracy, precision, reproducibility and repeatability) are described, and include information regarding the automated pass and release of data to service users without intervention. © Crown copyright 2018.
Wild, Diane; Furtado, Tamzin; Angalakuditi, Mallik
2012-01-01
Background: The Child Behavior Checklist (CBCL) is a caregiver rating scale for assessing the behavioral profile of children. It was developed in the US and has been extensively translated and used in a large number of studies internationally.
Objective: The objective of this study was to translate the CBCL into six languages using a rigorous translation methodology, placing particular emphasis on cultural adaptation and ensuring that the measure has content validity with carers of children with epilepsy.
Methods: A rigorous translation and cultural adaptation methodology was used. This is a process which includes two forward translations, reconciliation, two back-translations, and cognitive debriefing interviews with five carers of children with epilepsy in each country. In addition, a series of open-ended questions were asked of the carers in order to provide evidence of content validity.
Results: A number of cultural adaptations were made during the translation process. This included adaptations to the examples of sports and hobbies. An addition of "milk delivery" was made to the job examples in the Malayalam translation. In addition, two sexual problem items were removed from the Hebrew translation for Israel.
Conclusion: An additional six translations of the CBCL are now available for use in multinational studies. These translations have evidence of content validity for use with parents of children with epilepsy and have been appropriately culturally adapted so that they are acceptable for use in the target countries. The study highlights the importance of a rigorous translation process and the process of cultural adaptation. PMID:22715318
Brief International Cognitive Assessment for MS (BICAMS): international standards for validation.
Benedict, Ralph H B; Amato, Maria Pia; Boringa, Jan; Brochet, Bruno; Foley, Fred; Fredrikson, Stan; Hamalainen, Paivi; Hartung, Hans; Krupp, Lauren; Penner, Iris; Reder, Anthony T; Langdon, Dawn
2012-07-16
An international expert consensus committee recently recommended a brief battery of tests for cognitive evaluation in multiple sclerosis. The Brief International Cognitive Assessment for MS (BICAMS) battery includes tests of mental processing speed and memory. Recognizing that resources for validation will vary internationally, the committee identified validation priorities, to facilitate international acceptance of BICAMS. Practical matters pertaining to implementation across different languages and countries were discussed. Five steps to achieve optimal psychometric validation were proposed. In Step 1, test stimuli should be standardized for the target culture or language under consideration. In Step 2, examiner instructions must be standardized and translated, including all information from manuals necessary for administration and interpretation. In Step 3, samples of at least 65 healthy persons should be studied for normalization, matched to patients on demographics such as age, gender and education. The objective of Step 4 is test-retest reliability, which can be investigated in a small sample of MS and/or healthy volunteers over 1-3 weeks. Finally, in Step 5, criterion validity should be established by comparing MS and healthy controls. At this time, preliminary studies are underway in a number of countries as we move forward with this international assessment tool for cognition in MS.
Takahashi, Renata Ferreira; Gryschek, Anna Luíza F P L; Izumi Nichiata, Lúcia Yasuko; Lacerda, Rúbia Aparecida; Ciosak, Suely Itsuko; Gir, Elucir; Padoveze, Maria Clara
2010-05-01
There is growing demand for the adoption of qualification systems for health care practices. This study is aimed at describing the development and validation of indicators for evaluation of biologic occupational risk control programs. The study involved 3 stages: (1) setting up a research team, (2) development of indicators, and (3) validation of the indicators by a team of specialists recruited to validate each attribute of the developed indicators. The content validation method was used for the validation, and a psychometric scale was developed for the specialists' assessment. A consensus technique was used, and every attribute that obtained a Content Validity Index of at least 0.75 was approved. Eight indicators were developed for the evaluation of the biologic occupational risk prevention program, with emphasis on accidents caused by sharp instruments and occupational tuberculosis prevention. The indicators included evaluation of the structure, process, and results at the prevention and biologic risk control levels. The majority of indicators achieved a favorable consensus regarding all validated attributes. The developed indicators were considered validated, and the method used for construction and validation proved to be effective. Copyright (c) 2010 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
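The Content Validity Index used in this study is straightforward to compute: each specialist rates an attribute's relevance, the ratings are dichotomized into relevant/not relevant, and the CVI is the proportion rated relevant, with 0.75 as the approval threshold here. A minimal sketch with invented ratings (the common 4-point relevance scale dichotomized at 3-4 is an assumption, not stated in the abstract):

```python
APPROVAL_THRESHOLD = 0.75  # consensus cut-off reported in the study

def content_validity_index(ratings, relevant_levels=(3, 4)):
    """Proportion of specialists rating an attribute as relevant
    (assumed 4-point scale dichotomized at levels 3-4)."""
    relevant = sum(r in relevant_levels for r in ratings)
    return relevant / len(ratings)

# Hypothetical ratings from 8 specialists for one indicator attribute.
ratings = [4, 3, 4, 2, 4, 3, 4, 4]
cvi = content_validity_index(ratings)
print(f"CVI = {cvi:.2f}, approved: {cvi >= APPROVAL_THRESHOLD}")  # 0.88, True
```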
Preliminary data on validity of the Drug Addiction Treatment Efficacy Questionnaire.
Kastelic, Andrej; Mlakar, Janez; Pregelj, Peter
2013-09-01
This study describes the validation process for the Slovenian version of the Drug Addiction Treatment Efficacy Questionnaire (DATEQ). The DATEQ was constructed from the questionnaires used at the Centre for the Treatment of Drug Addiction, Ljubljana University Psychiatric Hospital, and within the network of Centres for the Prevention and Treatment of Drug Addiction in Slovenia during the past 14 years. The Slovenian version of the DATEQ was translated to English using the 'forward-backward' procedure by its authors and their co-workers. The validation process included 100 male and female patients with established addiction to illicit drugs who had been prescribed opioid substitution therapy. The DATEQ questionnaire was used in the study, together with clinical evaluation, to measure psychological state and to evaluate the efficacy of treatment in the last year. To determine the validity of the DATEQ, its correlation with the clinical assessments of the outcome was calculated using one-way ANOVA. The F value was 44.4, p<0.001 (sum of squares: between groups 210.4, df=2; within groups 229.7, df=97; total 440.1, df=99). At the cut-off of 4, the sensitivity is 81% and the specificity 83%. The validation process for the Slovenian DATEQ version shows metric properties similar to those found in international studies of similar questionnaires, suggesting that it measures the same constructs in the same way as similar questionnaires. However, the relatively low sensitivity and specificity suggest caution when using the DATEQ as the only measure of outcome.
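Sensitivity and specificity at a cut-off, as reported for the DATEQ, come straight from a 2x2 table against the clinical criterion. A minimal sketch with toy data (not the study's 100 patients; the assumption that scores at or above the cut-off indicate a poor outcome is ours):

```python
def sensitivity_specificity(scores, criterion, cutoff):
    """Classify score >= cutoff as 'poor outcome' and compare with the
    clinical criterion (True = poor outcome)."""
    tp = sum(s >= cutoff and c for s, c in zip(scores, criterion))
    fn = sum(s < cutoff and c for s, c in zip(scores, criterion))
    tn = sum(s < cutoff and not c for s, c in zip(scores, criterion))
    fp = sum(s >= cutoff and not c for s, c in zip(scores, criterion))
    return tp / (tp + fn), tn / (tn + fp)

# Toy questionnaire scores and clinical outcomes, at the paper's cut-off of 4.
scores    = [2, 6, 5, 1, 7, 3, 4, 8, 2, 5]
criterion = [False, True, True, False, True, False, False, True, False, True]
sens, spec = sensitivity_specificity(scores, criterion, cutoff=4)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
```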
Griesinger, Claudius; Desprez, Bertrand; Coecke, Sandra; Casey, Warren; Zuang, Valérie
This chapter explores the concepts, processes, tools and challenges relating to the validation of alternative methods for toxicity and safety testing. In general terms, validation is the process of assessing the appropriateness and usefulness of a tool for its intended purpose. Validation is routinely used in various contexts in science, technology, the manufacturing and services sectors. It serves to assess the fitness-for-purpose of devices, systems, software up to entire methodologies. In the area of toxicity testing, validation plays an indispensable role: "alternative approaches" are increasingly replacing animal models as predictive tools and it needs to be demonstrated that these novel methods are fit for purpose. Alternative approaches include in vitro test methods, non-testing approaches such as predictive computer models up to entire testing and assessment strategies composed of method suites, data sources and decision-aiding tools. Data generated with alternative approaches are ultimately used for decision-making on public health and the protection of the environment. It is therefore essential that the underlying methods and methodologies are thoroughly characterised, assessed and transparently documented through validation studies involving impartial actors. Importantly, validation serves as a filter to ensure that only test methods able to produce data that help to address legislative requirements (e.g. EU's REACH legislation) are accepted as official testing tools and, owing to the globalisation of markets, recognised on international level (e.g. through inclusion in OECD test guidelines). Since validation creates a credible and transparent evidence base on test methods, it provides a quality stamp, supporting companies developing and marketing alternative methods and creating considerable business opportunities. Validation of alternative methods is conducted through scientific studies assessing two key hypotheses, reliability and relevance of the test method for a given purpose. Relevance encapsulates the scientific basis of the test method, its capacity to predict adverse effects in the "target system" (i.e. human health or the environment) as well as its applicability for the intended purpose. In this chapter we focus on the validation of non-animal in vitro alternative testing methods and review the concepts, challenges, processes and tools fundamental to the validation of in vitro methods intended for hazard testing of chemicals. We explore major challenges and peculiarities of validation in this area. Based on the notion that validation per se is a scientific endeavour that needs to adhere to key scientific principles, namely objectivity and appropriate choice of methodology, we examine basic aspects of study design and management, and provide illustrations of statistical approaches to describe predictive performance of validated test methods as well as their reliability.
Review of surface steam sterilization for validation purposes.
van Doornmalen, Joost; Kopinga, Klaas
2008-03-01
Sterilization is an essential step in the process of producing sterile medical devices. To guarantee sterility, the process of sterilization must be validated. Because there is no direct way to measure sterility, the techniques applied to validate the sterilization process are based on statistical principles. Steam sterilization is the most frequently applied sterilization method worldwide and can be validated either by indicators (chemical or biological) or by physical measurements. The steam sterilization conditions are described in the literature. Starting from these conditions, criteria for the validation of steam sterilization are derived and can be described in terms of physical parameters. Physical validation of steam sterilization appears to be an adequate and efficient validation method that could be considered as an alternative to indicator-based validation. Moreover, physical validation can be used for effective troubleshooting in steam sterilizing processes.
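The review describes validation criteria in terms of physical parameters. A standard way to condense a measured temperature profile into one such parameter is the F0 lethality integral (equivalent minutes at 121.1 °C, with z = 10 °C); the sketch below is a generic illustration of that calculation, not the review's specific criteria.

```python
import numpy as np

def f0_minutes(temps_c, dt_s, t_ref=121.1, z=10.0):
    """Equivalent sterilization time F0 (minutes at t_ref) from a
    temperature log sampled every dt_s seconds; z is the temperature
    rise (deg C) that multiplies the lethal rate tenfold."""
    lethal_rate = 10.0 ** ((np.asarray(temps_c, float) - t_ref) / z)
    return lethal_rate.sum() * dt_s / 60.0

# Example log: 5-min heat-up, 15-min plateau at 121.5 degC, 5-min
# cool-down, sampled once per second.
cycle = np.concatenate([
    np.linspace(90.0, 121.5, 300),
    np.full(900, 121.5),
    np.linspace(121.5, 90.0, 300),
])
print(f"F0 = {f0_minutes(cycle, dt_s=1.0):.1f} min")
```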
Gorgulho, B M; Pot, G K; Marchioni, D M
2017-05-01
The aim of this study was to evaluate the validity and reliability of the Main Meal Quality Index when applied to the UK population. The indicator was developed to assess meal quality in different populations and is composed of 10 components: fruit, vegetables (excluding potatoes), ratio of animal protein to total protein, fiber, carbohydrate, total fat, saturated fat, processed meat, sugary beverages and desserts, and energy density, resulting in a score range of 0-100 points. The performance of the indicator was measured using strategies for assessing content validity, construct validity, discriminant validity and reliability, including principal component analysis, linear regression models and Cronbach's alpha. The indicator showed good reliability. The Main Meal Quality Index has been shown to be valid for use as an instrument to evaluate, monitor and compare the quality of meals consumed by adults in the United Kingdom.
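The abstract reports Cronbach's alpha as its reliability statistic. As a minimal sketch (with made-up component scores, not the study's data), alpha for a 10-component index can be computed as:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_subjects, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_var_sum = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var_sum / total_var)

# Synthetic example: 10 component scores (0-10 each) for 50 meals, so
# totals fall on the 0-100 range described above.
rng = np.random.default_rng(1)
latent_quality = rng.normal(5, 2, size=(50, 1))
scores = np.clip(latent_quality + rng.normal(0, 1.5, (50, 10)), 0, 10)
print(f"alpha = {cronbach_alpha(scores):.2f}")
```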
Espinoza-Venegas, Maritza; Sanhueza-Alvarado, Olivia; Ramírez-Elizondo, Noé; Sáez-Carrillo, Katia
2015-01-01
OBJECTIVE: The current study aimed to assess the construct validity and reliability of an emotional intelligence scale. METHOD: The Trait Meta-Mood Scale-24 was applied to 349 nursing students. The process included content validation, which involved expert reviews, pilot testing, measurement of reliability using Cronbach's alpha, and factor analysis to corroborate the validity of the theoretical model's construct. RESULTS: Adequate Cronbach coefficients were obtained for all three dimensions, and factor analysis confirmed the scale's dimensions (perception, comprehension, and regulation). CONCLUSION: The Trait Meta-Mood Scale is a reliable and valid tool to measure the emotional intelligence of nursing students. Its use allows for accurate determination of individuals' abilities to interpret and manage emotions. At the same time, this new construct is of potential importance for measurements in nursing leadership; educational, organizational, and personal improvements; and the establishment of effective relationships with patients. PMID:25806642
Kelley, Brian D; Tannatt, Molly; Magnusson, Robert; Hagelberg, Sigrid; Booth, James
2004-08-05
An affinity chromatography step was developed for purification of recombinant B-Domain Deleted Factor VIII (BDDrFVIII) using a peptide ligand selected from a phage display library. The peptide library had variegated residues located both within a disulfide bond-constrained ring and flanking the ring. The peptide ligand binds to BDDrFVIII with a dissociation constant of approximately 1 microM, both in free solution and when immobilized on a chromatographic resin. The peptide is chemically synthesized, and the affinity resin is produced by coupling the peptide to an agarose matrix preactivated with N-hydroxysuccinimide. Coupling conditions were optimized to give consistent and complete ligand incorporation and were validated with a robustness study that tested various combinations of processing limits. The peptide affinity chromatographic operation employs conditions very similar to an immunoaffinity chromatography step currently in use for BDDrFVIII manufacture. The process step provides excellent recovery of BDDrFVIII from a complex feed stream and reduces host cell protein and DNA by 3-4 logs. Process validation studies established resin reuse over 26 cycles without changes in product recovery or purity. A robustness study using a factorial design was performed and showed that the step was insensitive to small changes in process conditions that represent normal variation in commercial manufacturing. A scaled-down model of the process step was qualified and used for virus removal studies. A validation package addressing the safety of the leached peptide included leaching rate measurements under process conditions, testing of peptide levels in product pools, demonstration of robust removal downstream by spiking studies, end product testing, and toxicological profiling of the ligand. The peptide ligand affinity step was scaled up for cGMP production of BDDrFVIII for clinical trials.
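The clearance figures quoted above are log reduction values (LRVs). As a hedged illustration (the loads are hypothetical), the calculation is simply the log10 ratio of impurity load entering and leaving the step:

```python
import math

def log_reduction(load_in, load_out):
    """Log10 reduction value (LRV) of an impurity across a process step."""
    return math.log10(load_in / load_out)

# Hypothetical host-cell protein loads (ng per batch) before and after the
# affinity step; LRVs of 3.0-4.0 correspond to the 3-4 logs cited above.
print(f"HCP LRV = {log_reduction(5.0e6, 1.2e3):.1f}")
```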
Dyrlund, Thomas F; Poulsen, Ebbe T; Scavenius, Carsten; Sanggaard, Kristian W; Enghild, Jan J
2012-09-01
Data processing and analysis of proteomics data are challenging and time consuming. In this paper, we present MS Data Miner (MDM) (http://sourceforge.net/p/msdataminer), a freely available web-based software solution aimed at minimizing the time required for the analysis, validation, data comparison, and presentation of data files generated in MS software, including Mascot (Matrix Science), Mascot Distiller (Matrix Science), and ProteinPilot (AB Sciex). The program was developed to significantly decrease the time required to process large proteomic data sets for publication. This open-source system includes a spectra validation system and an automatic screenshot generation tool for Mascot-assigned spectra. In addition, a Gene Ontology term analysis function and a tool for generating comparative Excel data reports are included. We illustrate the benefits of MDM during a proteomics study comprising more than 200 LC-MS/MS analyses recorded on an AB Sciex TripleTOF 5600, identifying more than 3000 unique proteins and 3.5 million peptides. © 2012 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
The GPM Ground Validation Program: Pre to Post-Launch
NASA Astrophysics Data System (ADS)
Petersen, W. A.
2014-12-01
NASA GPM Ground Validation (GV) activities have transitioned from the pre- to the post-launch era. Prior to launch, direct validation networks and associated partner institutions were identified worldwide, covering a plethora of precipitation regimes. In the U.S., direct GV efforts focused on the use of new operational products such as the NOAA Multi-Radar Multi-Sensor suite (MRMS) for TRMM validation and GPM radiometer algorithm database development. In the post-launch era, MRMS products, including precipitation rate, type and data quality, are being routinely generated to facilitate statistical GV of instantaneous and merged GPM products. To assess precipitation column impacts on product uncertainties, range-gate to pixel-level validation of both Dual-Frequency Precipitation Radar (DPR) and GPM microwave imager data is performed using GPM Validation Network (VN) ground radar and satellite data processing software. VN software ingests quality-controlled volumetric radar datasets and geo-matches those data to coincident DPR and radiometer level-II data. When combined, MRMS and VN datasets enable more comprehensive interpretation of ground-satellite estimation uncertainties. To support physical validation efforts, eight field campaigns were conducted in the pre-launch era and one has been conducted post-launch. The campaigns span regimes from northern-latitude cold-season snow to warm tropical rain. Most recently, the Integrated Precipitation and Hydrology Experiment (IPHEx) took place in the mountains of North Carolina and involved combined airborne and ground-based measurements of orographic precipitation and hydrologic processes underneath the GPM Core satellite. One more U.S. GV field campaign (OLYMPEX) is planned for late 2015 and will address cold-season precipitation estimation, processes and hydrology in the orographic and oceanic domains of western Washington State. Finally, continuous direct and physical validation measurements are also being conducted at the NASA Wallops Flight Facility multi-radar, gauge and disdrometer facility located in coastal Virginia. This presentation will summarize the evolution of the NASA GPM GV program from the pre- to the post-launch era and highlight early evaluations of GPM satellite datasets.
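The VN geo-matching described above is volumetric and considerably more involved; the sketch below shows only the core idea of pairing satellite footprints with nearby ground-radar samples by great-circle distance (all names and thresholds are illustrative).

```python
import numpy as np

def haversine_km(lat1, lon1, lat2, lon2, r=6371.0):
    """Great-circle distance in kilometres."""
    lat1, lon1, lat2, lon2 = map(np.radians, (lat1, lon1, lat2, lon2))
    a = (np.sin((lat2 - lat1) / 2) ** 2
         + np.cos(lat1) * np.cos(lat2) * np.sin((lon2 - lon1) / 2) ** 2)
    return 2 * r * np.arcsin(np.sqrt(a))

def geo_match(sat_pts, radar_pts, max_km=2.5):
    """Pair each satellite footprint with the nearest radar sample."""
    pairs = []
    for i, (slat, slon) in enumerate(sat_pts):
        d = haversine_km(slat, slon, radar_pts[:, 0], radar_pts[:, 1])
        j = int(d.argmin())
        if d[j] <= max_km:
            pairs.append((i, j, float(d[j])))
    return pairs

sat = [(37.95, -75.47), (38.10, -75.20)]
radar = np.array([[37.94, -75.46], [38.50, -75.00]])
print(geo_match(sat, radar))   # only the first footprint finds a match
```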
NASA Technical Reports Server (NTRS)
Starr, David
2000-01-01
The EOS Terra mission will be launched in July 1999. This mission has great relevance to the atmospheric radiation community and global change issues. Terra instruments include the Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER), Clouds and the Earth's Radiant Energy System (CERES), Multi-Angle Imaging Spectroradiometer (MISR), Moderate Resolution Imaging Spectroradiometer (MODIS) and Measurements of Pollution in the Troposphere (MOPITT). In addition to the fundamental radiance data sets, numerous global science data products will be generated, including various Earth radiation budget, cloud and aerosol parameters, as well as land surface, terrestrial ecology, ocean color, and atmospheric chemistry parameters. Significant investments have been made in on-board calibration to ensure the quality of the radiance observations. A key component of the Terra mission is the validation of the science data products. This is essential for a mission focused on global change issues and the underlying processes. The Terra algorithms have been subject to extensive pre-launch testing with field data whenever possible. Intensive efforts will be made to validate the Terra data products after launch. These include vicarious calibration experiments to validate instrument calibration, instrument and cross-platform comparisons, routine collection of high-quality correlative data from ground-based networks, such as AERONET, and intensive sites, such as the SGP ARM site, as well as a variety of field experiments, cruises, etc. Airborne simulator instruments have been developed for field experiment and underflight activities, including the MODIS Airborne Simulator (MAS), AirMISR, MASTER (MODIS-ASTER), and MOPITT-A. All are integrated on the NASA ER-2, though low-altitude platforms are more typically used for MASTER. MATR is an additional sensor used for MOPITT algorithm development and validation. The intensive validation activities planned for the first year of the Terra mission will be described, with emphasis on derived geophysical parameters of most relevance to the atmospheric radiation community.
A new dataset validation system for the Planetary Science Archive
NASA Astrophysics Data System (ADS)
Manaud, N.; Zender, J.; Heather, D.; Martinez, S.
2007-08-01
The Planetary Science Archive is the official archive for the Mars Express mission. It received its first data by the end of 2004. These data are delivered by the PI teams to the PSA team as datasets, which are formatted in conformance with the Planetary Data System (PDS). The PI teams are responsible for analyzing and calibrating the instrument data as well as for the production of reduced and calibrated data. They are also responsible for the scientific validation of these data. ESA is responsible for the long-term data archiving and distribution to the scientific community and must ensure, in this regard, that all archived products meet quality standards. To do so, an archive peer review is used to control the quality of the Mars Express science data archiving process. However, a full validation of its content is missing. An independent review board recently recommended that the completeness of the archive as well as the consistency of the delivered data should be validated following well-defined procedures. A new validation software tool is being developed to complete the overall data quality control system functionality. This new tool aims to improve the quality of data and services provided to the scientific community through the PSA, and shall allow anomalies to be tracked and the completeness of datasets to be controlled. It shall ensure that PSA end-users: (1) can rely on the results of their queries, (2) will get data products that are suitable for scientific analysis, and (3) can find all science data acquired during a mission. We defined dataset validation as the verification and assessment process that checks the dataset content against pre-defined top-level criteria, which represent the general characteristics of good-quality datasets. The dataset content that is checked includes the data and all types of information that are essential in the process of deriving scientific results and those interfacing with the PSA database. The validation software tool is a multi-mission tool that has been designed to provide the user with the flexibility of defining and implementing various types of validation criteria, to iteratively and incrementally validate datasets, and to generate validation reports.
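As a toy illustration of checking dataset content against pre-defined top-level criteria, the sketch below validates a PDS3-style product label; the required keywords and rules are invented for the example and are not the PSA's actual criteria.

```python
# Hypothetical top-level criteria for one product label.
REQUIRED_KEYS = {"DATA_SET_ID", "INSTRUMENT_ID", "PRODUCT_ID",
                 "TARGET_NAME", "START_TIME", "STOP_TIME"}

def validate_label(label: dict) -> list[str]:
    """Return a list of anomaly messages for one product label."""
    issues = [f"missing keyword: {k}"
              for k in sorted(REQUIRED_KEYS - label.keys())]
    if {"START_TIME", "STOP_TIME"} <= label.keys():
        if label["START_TIME"] > label["STOP_TIME"]:
            issues.append("START_TIME is after STOP_TIME")
    return issues

print(validate_label({"DATA_SET_ID": "MEX-M-HRSC-3-RDR-V1.0",
                      "START_TIME": "2004-01-10T14:52:20"}))
```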
The teamwork in assertive community treatment (TACT) scale: development and validation.
Wholey, Douglas R; Zhu, Xi; Knoke, David; Shah, Pri; Zellmer-Bruhn, Mary; Witheridge, Thomas F
2012-11-01
Team design is meticulously specified for assertive community treatment (ACT) teams, yet performance can vary across ACT teams, even those with high fidelity. By developing and validating the Teamwork in Assertive Community Treatment (TACT) scale, investigators examined the role of team processes in ACT performance. The TACT scale measuring ACT teamwork was developed from a conceptual model grounded in organizational research and adapted for the ACT and mental health context. TACT subscales were constructed after exploratory and confirmatory factor analyses. The reliability, discriminant validity, predictive validity, temporal stability, internal consistency, and within-team agreement were established with surveys from approximately 300 members of 26 Minnesota ACT teams who completed the questionnaire three times, at six-month intervals. Nine TACT subscales emerged from the analyses: exploration, exploitation of new and existing knowledge, psychological safety, goal agreement, conflict, constructive controversy, information accessibility, encounter preparedness, and consumer-centered care. These nine subscales demonstrated fit and temporal stability (confirmatory factor analysis), high internal consistency (Cronbach's alpha), and within-team agreement and between-team differences (rwg and intraclass correlations). Correlational analyses of the subscales revealed that they measure related yet distinctive aspects of ACT team processes, and regression analyses demonstrated predictive validity (encounter preparedness is related to staff outcomes). The TACT scale demonstrated high reliability and validity and can be included in research and evaluation of teamwork in ACT and mental health teams.
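Within-team agreement and between-team differences are reported via rwg and intraclass correlations. A minimal sketch of ICC(1) from a one-way ANOVA decomposition (with made-up team scores, not the study's data) is:

```python
import numpy as np

def icc1(groups):
    """ICC(1): share of score variance attributable to team membership."""
    groups = [np.asarray(g, float) for g in groups]
    k = np.mean([len(g) for g in groups])            # average team size
    grand = np.concatenate(groups).mean()
    msb = (sum(len(g) * (g.mean() - grand) ** 2 for g in groups)
           / (len(groups) - 1))
    msw = (sum(((g - g.mean()) ** 2).sum() for g in groups)
           / sum(len(g) - 1 for g in groups))
    return (msb - msw) / (msb + (k - 1) * msw)

# Example: one subscale rated by members of three teams
teams = [[4.1, 4.3, 3.9, 4.4], [2.8, 3.1, 2.6, 3.0], [3.6, 3.4, 3.8, 3.5]]
print(f"ICC(1) = {icc1(teams):.2f}")
```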
The quality of instruments to assess the process of shared decision making: A systematic review.
Gärtner, Fania R; Bomhof-Roordink, Hanna; Smith, Ian P; Scholl, Isabelle; Stiggelbout, Anne M; Pieterse, Arwen H
2018-01-01
To inventory instruments assessing the process of shared decision making and appraise their measurement quality, taking into account the methodological quality of their validation studies. In a systematic review we searched seven databases (PubMed, Embase, Emcare, Cochrane, PsycINFO, Web of Science, Academic Search Premier) for studies investigating instruments measuring the process of shared decision making. Per identified instrument, we assessed the level of evidence separately for 10 measurement properties following a three-step procedure: 1) appraisal of the methodological quality using the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) checklist, 2) appraisal of the psychometric quality of the measurement property using three possible quality scores, 3) best-evidence synthesis based on the number of studies, their methodological and psychometric quality, and the direction and consistency of the results. The study protocol was registered at PROSPERO: CRD42015023397. We included 51 articles describing the development and/or evaluation of 40 shared decision-making process instruments: 16 patient questionnaires, 4 provider questionnaires, 18 coding schemes and 2 instruments measuring multiple perspectives. There is an overall lack of evidence for their measurement quality, either because validation is missing or because methods are poor. The best-evidence synthesis indicated positive results for a majority of instruments for content validity (50%) and structural validity (53%) where these were evaluated, but negative results for a majority of instruments where inter-rater reliability (47%) and hypotheses testing (59%) were evaluated. Due to the lack of evidence on measurement quality, the choice of the most appropriate instrument can best be based on the instrument's content and characteristics, such as the perspective that it assesses. We recommend refinement and validation of existing instruments, and the use of COSMIN guidelines to help guarantee high-quality evaluations.
Body Fluids as a Source of Diagnostic Biomarkers: Prostate — EDRN Public Portal
Recent advances in high-throughput protein expression profiling of bodily fluids have generated great enthusiasm and hope for this approach as a potent diagnostic tool. At the center of these efforts is the application of SELDI-TOF-MS and artificial intelligence algorithms by the EDRN BDL site at Eastern Virginia Medical School and the DMCC, respectively. When the expression profiling process was applied to sera from individuals with prostate cancer (N=197), BPH (N=92) or from otherwise healthy donors (N=97), we achieved an overall sensitivity of 90%. Since this represents a noticeable improvement over the current clinical approach, we propose to embark upon a validation process. The described studies are designed to address validation issues and include three phases. Phase 1: Synchronization of SELDI Output within the EDRN-Prostate-SELDI Investigational Collaboration (EPSIC), addressing portability: (A) synchronize SELDI instrumentation and robotic sample processing across the EPSIC using pooled serum (QC); (B) establish the portability and reproducibility of the SELDI protein profiling approach within the EPSIC using normal and prostate cancer patients' serum from a single site; (C) establish robustness of the approach toward geographic, sample collection and processing differences within EPSIC using case and control serum from five different sites. Phase 2: Population Validation: establish geographic variability and robustness in a large cross-sectional study among different sample populations. Phase 3: Clinical Validation: validate the serum protein expression profiling coupled with a learning algorithm as a means for early detection of prostate cancer using longitudinal PCPT samples. We have assembled a cohesive multi-institutional team for completing these studies in a timely and efficient manner. The team consists of five EDRN laboratories, the DMCC and CBI, and the proposed budget reflects the total involvement.
Validation of a home food inventory among low-income Spanish- and Somali-speaking families.
Hearst, Mary O; Fulkerson, Jayne A; Parke, Michelle; Martin, Lauren
2013-07-01
To refine and validate an existing home food inventory (HFI) for low-income Somali- and Spanish-speaking families. Formative assessment was conducted using two focus groups, followed by revisions of the HFI, translation of written materials and instrument validation in participants’ homes. Twin Cities Metropolitan Area, Minnesota, USA. Thirty low-income families with children of pre-school age (fifteen Spanish-speaking; fifteen Somali-speaking) completed the HFI simultaneously with, but independently of, a trained staff member. Analysis consisted of calculation of both item-specific and average food group kappa coefficients, specificity, sensitivity and Spearman’s correlation between participants’ and staff scores as a means of assessing criterion validity of individual items, food categories and the obesogenic score. The formative assessment revealed the need for few changes/additions for food items typically found in Spanish-speaking households. Somali-speaking participants requested few additions, but many deletions, including frozen processed food items, non-perishable produce and many sweets as they were not typical food items kept in the home. Generally, all validity indices were within an acceptable range, with the exception of values associated with items such as ‘whole wheat bread’ (k = 0.16). The obesogenic score (presence of high-fat, high-energy foods) had high criterion validity with k = 0.57, sensitivity = 91.8%, specificity = 70.6% and Spearman correlation = 0.78. The revised HFI is a valid assessment tool for use among Spanish and Somali households. This instrument refinement and validation process can be replicated with other population groups.
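For the item-level agreement statistics above, Cohen's kappa corrects raw participant-staff agreement for chance; a minimal sketch (with invented checklist data) follows. sklearn.metrics.cohen_kappa_score offers an equivalent off-the-shelf implementation.

```python
import numpy as np

def cohen_kappa(a, b):
    """Chance-corrected agreement for two binary present/absent ratings."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    p_obs = (a == b).mean()
    p_chance = a.mean() * b.mean() + (1 - a.mean()) * (1 - b.mean())
    return (p_obs - p_chance) / (1 - p_chance)

# Invented example: participant vs. staff checklist for one food item
# across 30 homes.
participant = np.array([1] * 18 + [0] * 12, bool)
staff = np.array([1] * 16 + [0] * 2 + [1] * 2 + [0] * 10, bool)
print(f"kappa = {cohen_kappa(participant, staff):.2f}")
```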
Khoury, Joseph D; Wang, Wei-Lien; Prieto, Victor G; Medeiros, L Jeffrey; Kalhor, Neda; Hameed, Meera; Broaddus, Russell; Hamilton, Stanley R
2018-02-01
Biomarkers that guide therapy selection are gaining unprecedented importance as targeted therapy options increase in scope and complexity. In conjunction with high-throughput molecular techniques, therapy-guiding biomarker assays based upon immunohistochemistry (IHC) have a critical role in cancer care in that they inform about the expression status of a protein target. Here, we describe the validation procedures for four clinical IHC biomarker assays (PTEN, RB, MLH1, and MSH2) for use as integral biomarkers in the nationwide NCI-Molecular Analysis for Therapy Choice (NCI-MATCH) EAY131 clinical trial. Validation procedures were developed through an iterative process based on collective experience and adaptation of broad guidelines from the FDA. The steps included primary antibody selection; assay optimization; development of assay interpretation criteria incorporating biological considerations; and expected staining patterns, including indeterminate results, orthogonal validation, and tissue validation. Following assay lockdown, patient samples and cell lines were used for analytic and clinical validation. The assays were then approved as laboratory-developed tests and used for clinical trial decisions for treatment selection. Calculations of sensitivity and specificity were undertaken using various definitions of gold-standard references, and external validation was required for the PTEN IHC assay. In conclusion, validation of IHC biomarker assays critical for guiding therapy in clinical trials is feasible using comprehensive preanalytic, analytic, and postanalytic steps. Implementation of standardized guidelines provides a useful framework for validating IHC biomarker assays that allow for reproducibility across institutions for routine clinical use. Clin Cancer Res; 24(3); 521-31. ©2017 AACR.
Estimation of Particulate Mass and Manganese Exposure Levels among Welders
Hobson, Angela; Seixas, Noah; Sterling, David; Racette, Brad A.
2011-01-01
Background: Welders are frequently exposed to manganese (Mn), which may increase the risk of neurological impairment. Historical exposure estimates for welding-exposed workers are needed for epidemiological studies evaluating the relationship between welding and neurological or other health outcomes. The objective of this study was to develop and validate a multivariate model to estimate quantitative levels of welding fume exposures based on welding particulate mass and Mn concentrations reported in the published literature. Methods: Articles that described welding particulate and Mn exposures during field welding activities were identified through a comprehensive literature search. Summary measures of exposure and related determinants such as year of sampling, welding process performed, type of ventilation used, degree of enclosure, base metal, and location of sampling filter were extracted from each article. The natural log of the reported arithmetic mean exposure level was used as the dependent variable in model building, while the independent variables included the exposure determinants. Cross-validation was performed to aid in model selection and to evaluate the generalizability of the models. Results: A total of 33 particulate and 27 Mn means were included in the regression analysis. The final model explained 76% of the variability in the mean exposures and included welding process and degree of enclosure as predictors. There was very little change in the explained variability and root mean squared error between the final model and its cross-validation model, indicating the final model is robust given the available data. Conclusions: This model may be improved with more detailed exposure determinants; however, the relatively large amount of variance explained by the final model, along with the positive generalizability results of the cross-validation, increases the confidence that the estimates derived from this model can be used for estimating welder exposures in the absence of individual measurement data. PMID:20870928
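A hedged reconstruction of the modeling approach described above: regress the natural log of reported mean exposures on categorical determinants, then check generalizability by cross-validation. The data frame below is fabricated for illustration; only the general method follows the abstract.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_predict

df = pd.DataFrame({
    "mean_mn_mg_m3": [0.12, 0.31, 0.05, 0.45, 0.09, 0.22, 0.18, 0.60],
    "process": ["MIG", "stick", "MIG", "stick", "TIG", "stick", "MIG", "stick"],
    "enclosure": ["open", "enclosed", "open", "enclosed",
                  "open", "open", "enclosed", "enclosed"],
})
X = pd.get_dummies(df[["process", "enclosure"]], drop_first=True)
y = np.log(df["mean_mn_mg_m3"])        # ln(arithmetic mean), per the paper

model = LinearRegression().fit(X, y)
loo_pred = cross_val_predict(LinearRegression(), X, y, cv=LeaveOneOut())
loo_r2 = 1 - ((y - loo_pred) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(f"in-sample R^2 = {model.score(X, y):.2f}, LOO R^2 = {loo_r2:.2f}")
```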
DOE Office of Scientific and Technical Information (OSTI.GOV)
Sanchez, A; Little, K; Chung, J
Purpose: To validate the use of a Channelized Hotelling Observer (CHO) model for guiding image processing parameter selection and enable improved nodule detection in digital chest radiography. Methods: In a previous study, an anthropomorphic chest phantom was imaged with and without PMMA simulated nodules using a GE Discovery XR656 digital radiography system. The impact of image processing parameters was then explored using a CHO with 10 Laguerre-Gauss channels. In this work, we validate the CHO's trend in nodule detectability as a function of two processing parameters by conducting a signal-known-exactly, multi-reader-multi-case (MRMC) ROC observer study. Five naive readers scored confidence of nodule visualization in 384 images with 50% nodule prevalence. The image backgrounds were regions-of-interest extracted from 6 normal patient scans, and the digitally inserted simulated nodules were obtained from phantom data in previous work. Each patient image was processed with both a near-optimal and a worst-case parameter combination, as determined by the CHO for nodule detection. The same 192 ROIs were used for each image processing method, with 32 randomly selected lung ROIs per patient image. Finally, the MRMC data was analyzed using the freely available iMRMC software of Gallas et al. Results: The image processing parameters which were optimized for the CHO led to a statistically significant improvement (p=0.049) in human observer AUC from 0.78 to 0.86, relative to the image processing implementation which produced the lowest CHO performance. Conclusion: Differences in user-selectable image processing methods on a commercially available digital radiography system were shown to have a marked impact on performance of human observers in the task of lung nodule detection. Further, the effect of processing on humans was similar to the effect on CHO performance. Future work will expand this study to include a wider range of detection/classification tasks and more observers, including experienced chest radiologists.
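For readers unfamiliar with the observer model used here, the sketch below implements a Laguerre-Gauss CHO in the usual form (channelize the images, then apply a Hotelling discriminant in channel space); the channel width, image size and data are assumptions, not the study's settings.

```python
import numpy as np
from numpy.polynomial.laguerre import lagval

def lg_channels(size, n_channels=10, a=15.0):
    """Rotationally symmetric Laguerre-Gauss channel templates."""
    y, x = np.indices((size, size)) - size // 2
    u = 2 * np.pi * (x**2 + y**2) / a**2
    cols = []
    for n in range(n_channels):
        coef = np.zeros(n + 1)
        coef[n] = 1.0
        c = np.exp(-u / 2) * lagval(u, coef)
        cols.append(c.ravel() / np.linalg.norm(c))
    return np.stack(cols, axis=1)             # (pixels, channels)

def cho_snr(signal_imgs, noise_imgs):
    """Hotelling detectability (SNR) from channelized image samples."""
    U = lg_channels(signal_imgs.shape[-1])
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ U
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ U
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))   # average channel covariance
    dmean = vs.mean(0) - vn.mean(0)
    w = np.linalg.solve(S, dmean)             # Hotelling template
    return dmean @ w / np.sqrt(w @ S @ w)

rng = np.random.default_rng(2)
sig = rng.normal(0, 1, (200, 64, 64))
sig[:, 28:36, 28:36] += 0.3                   # crude low-contrast 'nodule'
noi = rng.normal(0, 1, (200, 64, 64))
print(f"CHO SNR = {cho_snr(sig, noi):.2f}")
```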
A REMOTE SENSING AND GIS-ENABLED HIGHWAY ASSET MANAGEMENT SYSTEM PHASE 2
DOT National Transportation Integrated Search
2018-02-02
The objective of this project is to validate the use of commercial remote sensing and spatial information (CRS&SI) technologies, including emerging 3D line laser imaging technology, mobile light detection and ranging (LiDAR), image processing algorit...
A remote sensing and GIS-enabled highway asset management system : final report.
DOT National Transportation Integrated Search
2016-04-01
The objective of this project is to validate the use of commercial remote sensing and spatial information : (CRS&SI) technologies, including emerging 3D line laser imaging technology, mobile LiDAR, image : processing algorithms, and GPS/GIS technolog...
49 CFR 229.211 - Processing of petitions.
Code of Federal Regulations, 2012 CFR
2012-10-01
... to the U.S. Department of Transportation Docket Operations (M-30), West Building Ground Floor, Room... requires additional information to appropriately consider the petition, FRA will conduct a hearing on the..., or both which may include validated computer modeling, structural crush analysis, component testing...
49 CFR 229.211 - Processing of petitions.
Code of Federal Regulations, 2013 CFR
2013-10-01
... to the U.S. Department of Transportation Docket Operations (M-30), West Building Ground Floor, Room... requires additional information to appropriately consider the petition, FRA will conduct a hearing on the..., or both which may include validated computer modeling, structural crush analysis, component testing...
49 CFR 229.211 - Processing of petitions.
Code of Federal Regulations, 2014 CFR
2014-10-01
... to the U.S. Department of Transportation Docket Operations (M-30), West Building Ground Floor, Room... requires additional information to appropriately consider the petition, FRA will conduct a hearing on the..., or both which may include validated computer modeling, structural crush analysis, component testing...
Non-Technical Skills for Surgeons (NOTSS): Critical appraisal of its measurement properties.
Jung, James J; Borkhoff, Cornelia M; Jüni, Peter; Grantcharov, Teodor P
2018-02-17
To critically appraise the development and measurement properties, including sensibility, reliability, and validity, of the Non-Technical Skills for Surgeons (NOTSS) system. Articles that described the development process of the NOTSS system were identified. Relevant primary studies that presented evidence of reliability and validity were identified through a comprehensive literature review. NOTSS was developed through robust item generation and reduction strategies. It was shown to have good content validity, acceptability, and feasibility. Inter-rater reliability increased with greater expertise and number of assessors. Studies demonstrated evidence of cross-sectional construct validity, in that the tool was able to differentiate known groups of varied non-technical skill levels. Evidence of longitudinal construct validity also existed, demonstrating that NOTSS detected changes in non-technical skills before and after targeted training. In the populations and settings presented in our critical appraisal, NOTSS provided reliable and valid measurements of intraoperative non-technical skills of surgeons. Copyright © 2018 Elsevier Inc. All rights reserved.
Karapetyan, Karen; Batchelor, Colin; Sharpe, David; Tkachenko, Valery; Williams, Antony J
2015-01-01
There are presently hundreds of online databases hosting millions of chemical compounds and associated data. As a result of the number of cheminformatics software tools that can be used to produce the data, subtle differences between the various cheminformatics platforms, and the naivety of some software users, a myriad of issues can exist with chemical structure representations online. In order to help facilitate validation and standardization of chemical structure datasets from various sources, we have delivered a freely available internet-based platform to the community for the processing of chemical compound datasets. The chemical validation and standardization platform (CVSP) both validates and standardizes chemical structure representations according to sets of systematic rules. The chemical validation algorithms detect issues with submitted molecular representations using pre-defined or user-defined dictionary-based molecular patterns that are chemically suspicious or potentially require manual review. Each identified issue is assigned one of three levels of severity (Information, Warning, and Error) to conveniently inform the user of the need to browse and review subsets of their data. The validation process includes validation of atoms and bonds (e.g., flagging query atoms and bonds), valences, and stereochemistry. The standard form for submission of collections of data, the SDF file, allows the user to map the data fields to predefined CVSP fields for the purpose of cross-validating associated SMILES and InChIs with the connection tables contained within the SDF file. This platform has been applied to the analysis of a large number of datasets prepared for deposition to our ChemSpider database and in preparation of data for the Open PHACTS project. In this work we review the results of the automated validation of the DrugBank dataset, a popular drug and drug target database utilized by the community, and the ChEMBL 17 dataset. The CVSP web site is located at http://cvsp.chemspider.com/. A platform for the validation and standardization of chemical structure representations of various formats has been developed and made available to the community to assist and encourage the processing of chemical structure files to produce more homogeneous compound representations for exchange and interchange between online databases. While the CVSP platform is designed with flexibility inherent in the rules that can be used for processing the data, we have produced a recommended rule set based on our own experiences with large datasets such as DrugBank, ChEMBL, and datasets from ChemSpider.
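A minimal validation-rule sketch in the spirit of CVSP, written here with RDKit rather than the CVSP codebase (the SMARTS rule and severity wording are illustrative assumptions):

```python
from rdkit import Chem

# A common drawing error: four-coordinate nitrogen left uncharged.
NEUTRAL_4COORD_N = Chem.MolFromSmarts("[#7;X4;+0]")

def validate_record(smiles, claimed_inchi):
    """Return CVSP-style severity-tagged messages for one record."""
    mol = Chem.MolFromSmiles(smiles)   # sanitization catches valence errors
    if mol is None:
        return ["Error: SMILES failed to parse or sanitize"]
    issues = []
    if mol.HasSubstructMatch(NEUTRAL_4COORD_N):
        issues.append("Warning: uncharged four-coordinate nitrogen")
    if Chem.MolToInchi(mol) != claimed_inchi:
        issues.append("Error: SMILES and stated InChI disagree")
    return issues or ["Information: no issues found"]

print(validate_record("C[N+](C)(C)C",
                      "InChI=1S/C4H12N/c1-5(2,3)4/h1-4H3/q+1"))
```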
NASA Technical Reports Server (NTRS)
Pholsiri, Chalongrath; English, James; Seberino, Charles; Lim, Yi-Je
2010-01-01
The Excavator Design Validation tool verifies excavator designs by automatically generating control systems and modeling their performance in an accurate simulation of their expected environment. Part of this software design includes interfacing with human operators, who can be included in simulation-based studies and validation. This is essential for assessing productivity, versatility, and reliability. This software combines automatic control system generation from CAD (computer-aided design) models, rapid validation of complex mechanism designs, and detailed models of the environment, including soil, dust, temperature, remote supervision, and communication latency, to create a system of high value. Unique algorithms have been created for controlling and simulating complex robotic mechanisms automatically from just a CAD description. These algorithms are implemented as a commercial cross-platform C++ software toolkit that is configurable using the Extensible Markup Language (XML). The algorithms work with virtually any mobile robotic mechanism using module descriptions that adhere to the XML standard. In addition, high-fidelity, real-time physics-based simulation algorithms have also been developed that include models of internal forces and the forces produced when a mechanism interacts with the outside world. This capability is combined with an innovative organization of simulation algorithms, new regolith simulation methods, and a unique control and study architecture to make powerful tools with the potential to transform the way NASA verifies and compares excavator designs. Energid's Actin software has been leveraged for this design validation. The architecture includes parametric and Monte Carlo studies tailored for validation of excavator designs and their control by remote human operators. It also includes the ability to interface with third-party software and human-input devices. Two types of simulation models have been adapted: high-fidelity discrete element models and fast analytical models. By using the first to establish parameters for the second, a system has been created that can be executed in real time, or faster than real time, on a desktop PC. This allows Monte Carlo simulations to be performed on a computer platform available to all researchers, and it allows human interaction to be included in a real-time simulation process. Metrics on excavator performance are established that work with the simulation architecture. Both static and dynamic metrics are included.
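The parametric and Monte Carlo studies mentioned above can be pictured as a loop that samples design and environment parameters and aggregates a performance metric; everything in the sketch below (the simulate() stand-in, parameter ranges, metric) is a placeholder, not Energid's implementation.

```python
import random

def simulate(bucket_width_m, soil_density_kg_m3, comm_latency_s):
    """Placeholder for one physics-based dig simulation; returns kg/hr."""
    base = 120 * bucket_width_m / (1 + 2 * comm_latency_s)
    return random.gauss(base, 0.05 * base) * 1500 / soil_density_kg_m3

def monte_carlo(n=1000):
    rates = sorted(
        simulate(bucket_width_m=random.uniform(0.4, 0.8),
                 soil_density_kg_m3=random.uniform(1300, 1700),
                 comm_latency_s=random.uniform(0.0, 5.0))
        for _ in range(n))
    return rates[int(0.05 * n)], sum(rates) / n   # 5th percentile, mean

p5, mean = monte_carlo()
print(f"mean excavation rate {mean:.1f} kg/hr (5th percentile {p5:.1f})")
```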
Ares I-X Range Safety Simulation Verification and Analysis Independent Validation and Verification
NASA Technical Reports Server (NTRS)
Merry, Carl M.; Tarpley, Ashley F.; Craig, A. Scott; Tartabini, Paul V.; Brewer, Joan D.; Davis, Jerel G.; Dulski, Matthew B.; Gimenez, Adrian; Barron, M. Kyle
2011-01-01
NASA's Ares I-X vehicle launched on a suborbital test flight from the Eastern Range in Florida on October 28, 2009. To obtain approval for launch, a range safety final flight data package was generated to meet the data requirements defined in the Air Force Space Command Manual 91-710 Volume 2. The delivery included products such as a nominal trajectory, trajectory envelopes, stage disposal data and footprints, and a malfunction turn analysis. The Air Force's 45th Space Wing uses these products to ensure public and launch area safety. Due to the criticality of these data, an independent validation and verification effort was undertaken to ensure data quality and adherence to requirements. As a result, the product package was delivered with the confidence that independent organizations using separate simulation software generated data to meet the range requirements and yielded consistent results. This document captures the Ares I-X final flight data package verification and validation analysis, including the methodology used to validate and verify simulation inputs, execution, and results, and presents lessons learned during the process.
Lev, Sagit; Ayalon, Liat
2018-03-01
Despite the significance of ethical issues faced by social workers, research on moral distress among social workers has been extremely limited. The aim of the current study is to describe the development and content validation of a unique questionnaire to measure moral distress among social workers in long-term care facilities for older adults in Israel. The construction of the questionnaire was based on a secondary analysis of a qualitative study that addressed the moral dilemmas of social workers in nursing homes in Israel. Content validation included review and evaluation by two experts, a cognitive interview with a nursing home social worker, and three focus groups of experts and the target population. The initial questionnaire consisted of 25 items. After the content validation process, the final version of the questionnaire consisted of 17 items and included two scales measuring the frequency of morally loaded events and the intensity of the distress that followed them. We believe that the questionnaire can contribute by broadening and deepening ethics discourse and research with regard to social workers' obligation dilemmas and conflicts.
Molecular epidemiology: new rules for new tools?
Merlo, Domenico Franco; Sormani, Maria Pia; Bruzzi, Paolo
2006-08-30
Molecular epidemiology combines biological markers and epidemiological observations in the study of the environmental and genetic determinants of cancer and other diseases. The potential advantages associated with biomarkers are manifold and include: (a) increased sensitivity and specificity to carcinogenic exposures; (b) more precise evaluation of the interplay between genetic and environmental determinants of cancer; (c) earlier detection of carcinogenic effects of exposure; (d) characterization of disease subtype-etiology patterns; (e) evaluation of primary prevention measures. These, in turn, may translate into better tools for etiologic research, individual risk assessment and, ultimately, primary and secondary prevention. An area that has not received sufficient attention concerns the validation of these biomarkers as surrogate endpoints for cancer risk. Validation of a candidate biomarker's surrogacy is the demonstration that it possesses the properties required for its use as a substitute for a true endpoint. The principles underlying the validation process underwent remarkable development and discussion in therapeutic research. However, the challenges posed by the application of these principles to epidemiological research, where the basic tool for this validation (i.e., the randomized study) is seldom possible, have not been thoroughly explored. The validation process for surrogacy must be applied rigorously to intermediate biomarkers of cancer risk before using them as risk predictors at the individual as well as at the population level.
Selection of Surrogate Bacteria for Use in Food Safety Challenge Studies: A Review.
Hu, Mengyi; Gurtler, Joshua B
2017-09-01
Nonpathogenic surrogate bacteria are prevalently used in a variety of food challenge studies in place of foodborne pathogens such as Listeria monocytogenes, Salmonella, Escherichia coli O157:H7, and Clostridium botulinum because of safety and sanitary concerns. Surrogate bacteria should have growth characteristics and/or inactivation kinetics similar to those of target pathogens under given conditions in challenge studies. It is of great importance to carefully select and validate potential surrogate bacteria when verifying microbial inactivation processes. A validated surrogate responds similar to the targeted pathogen when tested for inactivation kinetics, growth parameters, or survivability under given conditions in agreement with appropriate statistical analyses. However, a considerable number of food studies involving putative surrogate bacteria lack convincing validation sources or adequate validation processes. Most of the validation information for surrogates in these studies is anecdotal and has been collected from previous publications but may not be sufficient for given conditions in the study at hand. This review is limited to an overview of select studies and discussion of the general criteria and approaches for selecting potential surrogate bacteria under given conditions. The review also includes a list of documented bacterial pathogen surrogates and their corresponding food products and treatments to provide guidance for future studies.
Eye-Tracking as a Tool in Process-Oriented Reading Test Validation
ERIC Educational Resources Information Center
Solheim, Oddny Judith; Uppstad, Per Henning
2011-01-01
The present paper addresses the continuous need for methodological reflection on how to validate inferences made on the basis of test scores. Validation is a process that requires many lines of evidence. In this article we discuss the potential of eye tracking methodology in process-oriented reading test validation. Methodological considerations…
Translated Versions of Voice Handicap Index (VHI)-30 across Languages: A Systematic Review
SEIFPANAHI, Sadegh; JALAIE, Shohreh; NIKOO, Mohammad Reza; SOBHANI-RAD, Davood
2015-01-01
Background: In this systematic review, the aim is to investigate different VHI-30 versions across languages regarding their validity, reliability and translation process. Methods: Articles were extracted systematically from some of the prime databases, including Cochrane, Google Scholar, MEDLINE (via the PubMed gateway), ScienceDirect and Web of Science, and from their reference lists, using the keyword 'Voice Handicap Index', limited only by title and time of publication (from 1997 to 2014). The other limitations (e.g. excluding non-English papers and other VHI versions) were applied manually after studying the papers. To appraise the methodology of the papers, three authors used the 12-item diagnostic test checklist from the 'Critical Appraisal Skills Programme' (CASP) site. After applying all of the screenings, the papers that met the study eligibility criteria, such as reporting the translation, validity and reliability processes, were included in this review. Results: The remaining non-repeated articles were 12, from different languages. All of them reported the validity, reliability and translation method, which are presented in detail in this review. Conclusion: The preferred method for translation in the gathered papers was Brislin's classic back-translation model (1970); although the procedure was usually not performed completely, it was more prominent than other translation procedures. High test-retest reliability, high internal consistency and moderate construct validity across languages with regard to all 3 VHI-30 domains confirm the applicability of translated VHI-30 versions across languages. PMID:26056664
NASA Astrophysics Data System (ADS)
Herrera-Basurto, R.; Mercader-Trejo, F.; Muñoz-Madrigal, N.; Juárez-García, J. M.; Rodriguez-López, A.; Manzano-Ramírez, A.
2016-07-01
The main goal of method validation is to demonstrate that the method is suitable for its intended purpose. One of the advantages of analytical method validation is that it translates into a level of confidence in the measurement results reported to satisfy a specific objective. Elemental composition determination by wavelength dispersive spectrometer (WDS) microanalysis has been used over extremely wide areas, mainly in the field of materials science and in impurity determinations in geological, biological and food samples. However, little information is reported about the validation of the applied methods. Herein, results of the in-house method validation for elemental composition determination by WDS are shown. SRM 482, a binary Cu-Au alloy of different compositions, was used during the validation protocol, following the recommendations for method validation proposed by Eurachem. This paper can be taken as a reference for the evaluation of the validation parameters most frequently requested for accreditation under the requirements of the ISO/IEC 17025 standard: selectivity, limit of detection, linear interval, sensitivity, precision, trueness and uncertainty. A model for uncertainty estimation was proposed, including systematic and random errors. In addition, parameters evaluated during the validation process were also considered as part of the uncertainty model.
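The uncertainty model combining systematic and random errors can be sketched GUM-style as a root-sum-square of components with a coverage factor; the component values below are illustrative, not the paper's budget.

```python
import math

def combined_uncertainty(u_random, u_systematic, k=2):
    """Root-sum-square combination with coverage factor k (approx. 95%)."""
    u_c = math.sqrt(sum(u ** 2 for u in u_random + u_systematic))
    return u_c, k * u_c

u_rand = [0.15]          # e.g. repeatability of the WDS measurement (wt %)
u_sys = [0.10, 0.08]     # e.g. reference-material and calibration terms
u_c, U = combined_uncertainty(u_rand, u_sys)
print(f"u_c = {u_c:.2f} wt %, expanded U (k=2) = {U:.2f} wt %")
```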
Farnbach, Sara; Evans, John; Eades, Anne-Marie; Gee, Graham; Fernando, Jamie; Hammond, Belinda; Simms, Matty; DeMasi, Karrina; Hackett, Maree
2017-11-03
Process evaluations are conducted alongside research projects to identify the context, impact and consequences of research, to determine whether it was conducted per protocol and to understand how, why and for whom an intervention is effective. We present a process evaluation protocol for the Getting it Right research project, which aims to determine the validity of a culturally adapted depression screening tool for use by Aboriginal and Torres Strait Islander people. In this process evaluation, we aim to: (1) explore the context, impact and consequences of conducting Getting it Right, (2) explore primary healthcare staff and community representatives' experiences with the research project, (3) determine if it was conducted per protocol and (4) explore experiences with the depression screening tool, including perceptions about how it could be implemented into practice (if found to be valid). We also describe the partnerships established to conduct this process evaluation and how the national Values and Ethics: Guidelines for Ethical Conduct in Aboriginal and Torres Strait Islander Health Research is met. Realist and grounded theory approaches are used. Qualitative data include semistructured interviews with primary healthcare staff and community representatives involved with Getting it Right. Iterative data collection and analysis will inform a coding framework. Interviews will continue until saturation of themes is reached, or until all participants are considered. Data will be triangulated against administrative data and patient feedback. An Aboriginal and Torres Strait Islander Advisory Group guides this research. Researchers will be blinded to validation data outcomes for as long as is feasible. The University of Sydney Human Research Ethics Committee, the Aboriginal Health and Medical Research Council of New South Wales and six state ethics committees have approved this research. Findings will be submitted to academic journals and presented at conferences. ACTRN12614000705684. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
The Role of Structural Models in the Solar Sail Flight Validation Process
NASA Technical Reports Server (NTRS)
Johnston, John D.
2004-01-01
NASA is currently soliciting proposals via the New Millennium Program ST-9 opportunity for a potential Solar Sail Flight Validation (SSFV) experiment to develop and operate in space a deployable solar sail that can be steered and provides measurable acceleration. The approach planned for this experiment is to test and validate models and processes for solar sail design, fabrication, deployment, and flight. These models and processes would then be used to design, fabricate, and operate scalable solar sails for future space science missions. There are six validation objectives planned for the ST-9 SSFV experiment: 1) validate solar sail design tools and fabrication methods; 2) validate controlled deployment; 3) validate in-space structural characteristics (the focus of this poster); 4) validate solar sail attitude control; 5) validate solar sail thrust performance; 6) characterize the sail's electromagnetic interaction with the space environment. This poster presents a top-level assessment of the role of structural models in the validation process for in-space structural characteristics.
Integrating Model-Based Transmission Reduction into a multi-tier architecture
NASA Astrophysics Data System (ADS)
Straub, J.
A multi-tier architecture consists of numerous craft distributed across the system's orbital, aerial, and surface tiers. Each tier is able to collect progressively greater levels of information. Generally, craft from lower-level tiers are deployed to a target of interest based on its identification by a higher-level craft. While the architecture promotes significant amounts of science being performed in parallel, this may overwhelm the computational and transmission capabilities of higher-tier craft and links (particularly the deep space link back to Earth). Because of this, a new paradigm in in-situ data processing is required. Model-based transmission reduction (MBTR) is such a paradigm. Under MBTR, each node (whether a single spacecraft in orbit of the Earth or another planet, or a member of a multi-tier network) is given an a priori model of the phenomenon that it is assigned to study. It performs activities to validate this model. If the model is found to be erroneous, corrective changes are identified, assessed to ensure their significance for being passed on, and prioritized for transmission. A limited amount of verification data is sent with each MBTR assertion message to allow those that might rely on the data to validate the correct operation of the spacecraft and the MBTR engine onboard. Integrating MBTR with a multi-tier framework creates an MBTR hierarchy. Higher levels of the MBTR hierarchy task lower levels with the data collection and assessment tasks required to validate or correct elements of its model. A model of the expected conditions is sent to the lower-level craft, which then engages its own MBTR engine to validate or correct the model. This may include tasking a yet lower level of craft to perform activities. When the MBTR engine at a given level receives all of its component data (whether directly collected or from delegation), it randomly chooses some to validate (by reprocessing the validation data), performs analysis and sends its own results (validation and/or changes of model elements and supporting validation data) to its upstream node. This constrains data transmission to only significant information (either because it includes a change or is validation data critical for assessing overall performance) and reduces the processing requirements (by not having to process insignificant data) at higher-level nodes. This paper presents a framework for multi-tier MBTR and two demonstration mission concepts: an Earth sensornet and a mission to Mars. These multi-tier MBTR concepts are compared to a traditional mission approach.
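To make the data flow concrete, the sketch below gives one hypothetical shape for an MBTR assertion message passed up the hierarchy; the field names and threshold are invented for illustration and do not come from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class ModelCorrection:
    element: str            # model element found to be erroneous
    new_value: float
    significance: float     # used to prioritize transmission

@dataclass
class MBTRAssertion:
    node_id: str
    model_valid: bool
    corrections: list[ModelCorrection] = field(default_factory=list)
    validation_samples: list[bytes] = field(default_factory=list)  # spot-check data

    def payload(self, threshold=0.5):
        """Keep only corrections significant enough to pass upstream."""
        return [c for c in self.corrections if c.significance >= threshold]

msg = MBTRAssertion("surface-rover-3", model_valid=False,
                    corrections=[ModelCorrection("soil_albedo", 0.21, 0.8),
                                 ModelCorrection("dust_opacity", 0.05, 0.1)])
print([c.element for c in msg.payload()])   # -> ['soil_albedo']
```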
Brydges, Ryan; Hatala, Rose; Zendejas, Benjamin; Erwin, Patricia J; Cook, David A
2015-02-01
To examine the evidence supporting the use of simulation-based assessments as surrogates for patient-related outcomes assessed in the workplace. The authors systematically searched MEDLINE, EMBASE, Scopus, and key journals through February 26, 2013. They included original studies that assessed health professionals and trainees using simulation and then linked those scores with patient-related outcomes assessed in the workplace. Two reviewers independently extracted information on participants, tasks, validity evidence, study quality, patient-related and simulation-based outcomes, and magnitude of correlation. All correlations were pooled using random-effects meta-analysis. Of 11,628 potentially relevant articles, the 33 included studies enrolled 1,203 participants, including postgraduate physicians (n = 24 studies), practicing physicians (n = 8), medical students (n = 6), dentists (n = 2), and nurses (n = 1). The pooled correlation for provider behaviors was 0.51 (95% confidence interval [CI], 0.38 to 0.62; n = 27 studies); for time behaviors, 0.44 (95% CI, 0.15 to 0.66; n = 7); and for patient outcomes, 0.24 (95% CI, -0.02 to 0.47; n = 5). Most reported validity evidence was favorable, though studies often included only correlational evidence. Validity evidence of internal structure (n = 13 studies), content (n = 12), response process (n = 2), and consequences (n = 1) were reported less often. Three tools showed large pooled correlations and favorable (albeit incomplete) validity evidence. Simulation-based assessments often correlate positively with patient-related outcomes. Although these surrogates are imperfect, tools with established validity evidence may replace workplace-based assessments for evaluating select procedural skills.
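The pooled correlations above come from a random-effects meta-analysis; a standard recipe (Fisher z-transform, DerSimonian-Laird between-study variance, back-transform) is sketched below on made-up study-level correlations.

```python
import numpy as np

def pooled_correlation(r, n):
    """Random-effects pooled r with a 95% CI (DerSimonian-Laird)."""
    z = np.arctanh(np.asarray(r, float))      # Fisher z per study
    v = 1 / (np.asarray(n, float) - 3)        # within-study variance
    w = 1 / v
    z_fixed = (w * z).sum() / w.sum()
    q = (w * (z - z_fixed) ** 2).sum()
    c = w.sum() - (w ** 2).sum() / w.sum()
    tau2 = max(0.0, (q - (len(z) - 1)) / c)   # between-study variance
    w_re = 1 / (v + tau2)
    z_re = (w_re * z).sum() / w_re.sum()
    se = np.sqrt(1 / w_re.sum())
    return np.tanh([z_re - 1.96 * se, z_re, z_re + 1.96 * se])

lo, r_pooled, hi = pooled_correlation(r=[0.60, 0.45, 0.50, 0.35],
                                      n=[40, 25, 60, 30])
print(f"pooled r = {r_pooled:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```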
Power Plant Model Validation Tool
DOE Office of Scientific and Technical Information (OSTI.GOV)
The PPMV is used to validate generator models using disturbance recordings. The PPMV tool contains a collection of power plant models and model validation studies, as well as disturbance recordings from a number of historic grid events. The user can import data from a new disturbance into the database, which converts PMU and SCADA data into GE PSLF format, and then run the tool to validate (or invalidate) the model for a specific power plant against its actual performance. The PNNL PPMV tool enables the automation of the process of power plant model validation using disturbance recordings. The tool uses PMU and SCADA measurements as input information. The tool automatically adjusts all required EPCL scripts and interacts with GE PSLF in batch mode. The main tool features include: interaction with GE PSLF; use of the GE PSLF Play-In Function for generator model validation; databases of projects (model validation studies), historic events, and power plants; advanced visualization capabilities; and automatic report generation.
NASA Astrophysics Data System (ADS)
Yu, Long; Druckenbrod, Markus; Greve, Martin; Wang, Ke-qi; Abdel-Maksoud, Moustafa
2015-10-01
A fully automated optimization process is provided for the design of ducted propellers under open-water conditions, including 3D geometry modeling, meshing, optimization algorithms and CFD analysis techniques. The developed process allows the direct integration of a RANSE solver in the design stage. A practical ducted propeller design case study was carried out for validation. Numerical simulations and open-water tests were carried out and proved that the optimized ducted propeller improves hydrodynamic performance as predicted.
Detection of delamination defects in CFRP materials using ultrasonic signal processing.
Benammar, Abdessalem; Drai, Redouane; Guessoum, Abderrezak
2008-12-01
In this paper, signal processing techniques are tested for their ability to resolve echoes associated with delaminations in carbon fiber-reinforced polymer multi-layered composite materials (CFRP) detected by ultrasonic methods. These methods include split spectrum processing (SSP) and the expectation-maximization (EM) algorithm. A simulation study on defect detection was performed, and results were validated experimentally on CFRP with and without delamination defects taken from aircraft. Comparison of the methods for their ability to resolve echoes are made.
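For orientation, split-spectrum processing passes one A-scan through a bank of narrow sub-bands and recombines them nonlinearly (here with the minimum of the sub-band envelopes, one common variant); coherent echoes survive across bands while grain noise decorrelates. All parameters below are illustrative, not the paper's settings.

```python
import numpy as np
from scipy.signal import hilbert

def ssp_min(ascan, fs, f_lo=4e6, f_hi=6e6, n_bands=16, rel_bw=0.15):
    """Minimum across Gaussian sub-band envelopes of one A-scan."""
    spec = np.fft.rfft(ascan)
    freqs = np.fft.rfftfreq(len(ascan), 1 / fs)
    envelopes = []
    for fc in np.linspace(f_lo, f_hi, n_bands):
        gauss = np.exp(-0.5 * ((freqs - fc) / (rel_bw * fc)) ** 2)
        band = np.fft.irfft(spec * gauss, len(ascan))
        envelopes.append(np.abs(hilbert(band)))
    return np.min(envelopes, axis=0)

# Synthetic A-scan: two 5 MHz echoes in noise, sampled at 100 MHz.
fs = 100e6
t = np.arange(2000) / fs
echo = lambda t0: np.sin(2 * np.pi * 5e6 * (t - t0)) * np.exp(-((t - t0) / 0.3e-6) ** 2)
rng = np.random.default_rng(3)
ascan = echo(4e-6) + 0.6 * echo(7e-6) + 0.3 * rng.normal(size=t.size)
enhanced = ssp_min(ascan, fs)
print(f"strongest echo at {t[enhanced.argmax()] * 1e6:.1f} us")
```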
[Computerized system validation of clinical researches].
Yan, Charles; Chen, Feng; Xia, Jia-lai; Zheng, Qing-shan; Liu, Daniel
2015-11-01
Validation is a documented process that provides a high degree of assurance that the computer system does exactly and consistently what it is designed to do, in a controlled manner, throughout its life cycle. The validation process begins with the system proposal/requirements definition and continues through application and maintenance until system retirement and retention of the e-records required by regulatory rules. The objective is to clearly demonstrate that each application of information technology fulfills its purpose. Computer system validation (CSV) is essential in clinical studies under the GCP standard, ensuring that a product meets its pre-determined specifications for quality, safety, and traceability. This paper describes how to perform the validation process and identify the relevant stakeholders within an organization in light of validation SOPs. Although specific responsibilities in the implementation of the validation process may be outsourced, ultimate responsibility for the CSV remains with the business process owner, the sponsor. In order to show that the compliance of the system validation has been properly attained, it is essential to set up comprehensive validation procedures and maintain adequate documentation as well as training records. The quality of the system validation should be controlled using both QC and QA means.
Allen, James; Fok, Carlotta Ching Ting; Henry, David; Skewes, Monica
2012-09-01
Concerns in some settings regarding the accuracy and ethics of employing direct questions about alcohol use suggest a need for alternative assessment approaches with youth. Umyuangcaryaraq is a Yup'ik Alaska Native word meaning "Reflecting." The Reflective Processes Scale was developed as a youth measure tapping awareness and thinking over the potential negative consequences of alcohol misuse as a protective factor that includes cultural elements often shared by many other Alaska Native and American Indian cultures. This study assessed multidimensional structure, item functioning, and validity. Responses from 284 rural Alaska Native youth allowed bifactor analysis to assess structure, estimates of location and discrimination parameters, and convergent and discriminant validity. A bifactor model of the scale items with three content factors provided excellent fit to the observed data. Item response theory analysis suggested a binary response format as optimal. Evidence of convergent and discriminant validity was established. The measure provides an assessment of the reflective processes about alcohol that Alaska Native youth engage in when thinking about reasons not to drink. The concept of reflective processes has potential to extend understandings of cultural variation in mindfulness, alcohol expectancies research, and culturally mediated protective factors in Alaska Native and American Indian youth.
The Dispositions for Culturally Responsive Pedagogy Scale
ERIC Educational Resources Information Center
Whitaker, Manya C.; Valtierra, Kristina Marie
2018-01-01
Purpose: The purpose of this study is to develop and validate the dispositions for culturally responsive pedagogy scale (DCRPS). Design/methodology/approach: Scale development consisted of a six-step process including item development, expert review, exploratory factor analysis, factor interpretation, confirmatory factor analysis and convergent…
40 CFR 1043.41 - EIAPP certification process.
Code of Federal Regulations, 2014 CFR
2014-07-01
... test engine you provide must include appropriate manifolds, aftertreatment devices, electronic control... CONTROLS CONTROL OF NOX, SOX, AND PM EMISSIONS FROM MARINE ENGINES AND VESSELS SUBJECT TO THE MARPOL... application for an EIAPP certificate for each engine family. An EIAPP certificate is valid starting with the...
40 CFR 1043.41 - EIAPP certification process.
Code of Federal Regulations, 2010 CFR
2010-07-01
... test engine you provide must include appropriate manifolds, aftertreatment devices, electronic control... CONTROLS CONTROL OF NOX, SOX, AND PM EMISSIONS FROM MARINE ENGINES AND VESSELS SUBJECT TO THE MARPOL... application for an EIAPP certificate for each engine family. An EIAPP certificate is valid starting with the...
40 CFR 1043.41 - EIAPP certification process.
Code of Federal Regulations, 2012 CFR
2012-07-01
... test engine you provide must include appropriate manifolds, aftertreatment devices, electronic control... CONTROLS CONTROL OF NOX, SOX, AND PM EMISSIONS FROM MARINE ENGINES AND VESSELS SUBJECT TO THE MARPOL... application for an EIAPP certificate for each engine family. An EIAPP certificate is valid starting with the...
40 CFR 1043.41 - EIAPP certification process.
Code of Federal Regulations, 2013 CFR
2013-07-01
... test engine you provide must include appropriate manifolds, aftertreatment devices, electronic control... CONTROLS CONTROL OF NOX, SOX, AND PM EMISSIONS FROM MARINE ENGINES AND VESSELS SUBJECT TO THE MARPOL... application for an EIAPP certificate for each engine family. An EIAPP certificate is valid starting with the...
40 CFR 1043.41 - EIAPP certification process.
Code of Federal Regulations, 2011 CFR
2011-07-01
... test engine you provide must include appropriate manifolds, aftertreatment devices, electronic control... CONTROLS CONTROL OF NOX, SOX, AND PM EMISSIONS FROM MARINE ENGINES AND VESSELS SUBJECT TO THE MARPOL... application for an EIAPP certificate for each engine family. An EIAPP certificate is valid starting with the...
ERIC Educational Resources Information Center
Weber, Robert J.; Dixon, Stacey
1989-01-01
Gain analysis is applied to the invention of the sewing needle as well as different sewing implements and modes of sewing. The analysis includes a two-subject experiment. To validate the generality of gain heuristics and underlying switching processes, the invention of the assembly line is also analyzed. (TJH)
2003-12-16
KENNEDY SPACE CENTER, FLA. - Workers accompany the orbiter Atlantis as it is towed back to the Orbiter Processing Facility after spending 10 days in the Vehicle Assembly Building. The hiatus in the VAB allowed work to be performed in the OPF that can only be accomplished while the bay is empty. Work included annual validation of the bay's cranes, work platforms, lifting mechanisms and jack stands. Work resumes to prepare Atlantis for launch in September 2004 on the first return-to-flight mission, STS-114.
2004-01-09
KENNEDY SPACE CENTER, FLA. -- Endeavour is towed in front of the Vehicle Assembly Building (VAB) where it is going for temporary storage. The orbiter has been moved from the Orbiter Processing Facility (OPF) to allow work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the OPF includes annual validation of the bay’s cranes, work platforms, lifting mechanisms and jack stands. Endeavour will remain in the VAB for approximately 12 days, then return to the OPF.
Validation of a 30-year-old process for the manufacture of L-asparaginase from Erwinia chrysanthemi.
Gervais, David; Allison, Nigel; Jennings, Alan; Jones, Shane; Marks, Trevor
2013-04-01
A 30-year-old manufacturing process for the biologic product L-asparaginase from the plant pathogen Erwinia chrysanthemi was rigorously qualified and validated, with a high level of agreement between validation data and the 6-year process database. L-Asparaginase exists in its native state as a tetrameric protein and is used as a chemotherapeutic agent in the treatment regimen for Acute Lymphoblastic Leukaemia (ALL). The manufacturing process involves fermentation of the production organism, extraction and purification of the L-asparaginase to make drug substance (DS), and finally formulation and lyophilisation to generate drug product (DP). The extensive manufacturing experience with the product was used to establish ranges for all process parameters and product quality attributes. The product and in-process intermediates were rigorously characterised, and new assays, such as size-exclusion and reversed-phase UPLC, were developed, validated, and used to analyse several pre-validation batches. Finally, three prospective process validation batches were manufactured and product quality data generated using both the existing and the new analytical methods. These data demonstrated the process to be robust, highly reproducible and consistent, and the validation was successful, contributing to the granting of an FDA product license in November 2011.
King, Gillian; Shepherd, Tracy A; Servais, Michelle; Willoughby, Colleen; Bolack, Linda; Strachan, Deborah; Moodie, Sheila; Baldwin, Patricia; Knickle, Kerry; Parker, Kathryn; Savage, Diane; McNaughton, Nancy
2016-10-01
To describe the creation and validation of six simulations concerned with effective listening and interpersonal communication in pediatric rehabilitation. The simulations involved clinicians from various disciplines, were based on clinical scenarios related to client issues, and reflected core aspects of listening/communication. Each simulation had a key learning objective, thus focusing clinicians on specific listening skills. The article outlines the process used to turn written scenarios into digital video simulations, including steps taken to establish content validity and authenticity, and to establish a series of videos based on the complexity of their learning objectives, given contextual factors and associated macrocognitive processes that influence the ability to listen. A complexity rating scale was developed and used to establish a gradient of easy/simple, intermediate, and hard/complex simulations. The development process exemplifies an evidence-based, integrated knowledge translation approach to the teaching and learning of listening and communication skills.
A high-fidelity weather time series generator using the Markov Chain process on a piecewise level
NASA Astrophysics Data System (ADS)
Hersvik, K.; Endrerud, O.-E. V.
2017-12-01
A method is developed for generating a set of unique weather time-series based on an existing weather series. The method allows statistically valid weather variations to take place within repeated simulations of offshore operations. The numerous generated time series need to share the same statistical qualities as the original time series. Statistical qualities here refer mainly to the distribution of weather windows available for work, including durations and frequencies of such weather windows, and seasonal characteristics. The method is based on the Markov chain process. The core new development lies in how the Markov Process is used, specifically by joining small pieces of random length time series together rather than joining individual weather states, each from a single time step, which is a common solution found in the literature. This new Markov model shows favorable characteristics with respect to the requirements set forth and all aspects of the validation performed.
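The core idea, splicing random-length pieces so each joint respects the current weather state, can be shown compactly. A sketch under stated assumptions: the matching tolerance, segment-length bounds, and synthetic significant-wave-height series are invented, and the seasonal stratification the paper's method preserves is omitted for brevity:

```python
import numpy as np

rng = np.random.default_rng(0)

def generate_series(hs, n_out, min_len=24, max_len=168, tol=0.25):
    """Piecewise Markov-style resampling of an hourly Hs series: splice
    random-length segments whose first value is close to the value where
    the output left off, so weather-window statistics are preserved."""
    out = [hs[rng.integers(len(hs))]]
    while len(out) < n_out:
        seg_len = int(rng.integers(min_len, max_len))
        # candidate start indices whose state matches the current value
        cand = np.flatnonzero(np.abs(hs - out[-1]) < tol)
        cand = cand[cand + seg_len < len(hs)]
        if cand.size == 0:
            tol *= 1.5          # relax matching until a segment is found
            continue
        i = rng.choice(cand)
        out.extend(hs[i + 1 : i + seg_len])
    return np.asarray(out[:n_out])

# Illustrative input: three years of hourly Hs with seasonality + noise.
hours = np.arange(3 * 365 * 24)
hs = 2 + 1.5 * np.sin(2 * np.pi * hours / (365 * 24)) \
       + 0.5 * np.abs(rng.standard_normal(hours.size))
synthetic = generate_series(hs, n_out=365 * 24)
```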
Jeon, Joonryong
2017-01-01
In this paper, a data compression technology-based intelligent data acquisition (IDAQ) system was developed for structural health monitoring of civil structures, and its validity was tested using random signals (El-Centro seismic waveform). The IDAQ system was structured to include a high-performance CPU with large dynamic memory for multi-input and output in a radio frequency (RF) manner. In addition, embedded software technology (EST) was applied to implement the diverse logic needed to acquire, process, and transmit data. In order to utilize the IDAQ system for the structural health monitoring of civil structures, this study developed an artificial filter bank by which structural dynamic responses (acceleration) were efficiently acquired, and also optimized it on the random El-Centro seismic waveform. All techniques developed in this study have been embedded in our system. The data compression technology-based IDAQ system was shown to acquire valid signals in a compressed size. PMID:28704945
Heo, Gwanghee; Jeon, Joonryong
2017-07-12
In this paper, a data compression technology-based intelligent data acquisition (IDAQ) system was developed for structural health monitoring of civil structures, and its validity was tested using random signals (El-Centro seismic waveform). The IDAQ system was structured to include a high-performance CPU with large dynamic memory for multi-input and output in a radio frequency (RF) manner. In addition, embedded software technology (EST) was applied to implement the diverse logic needed to acquire, process, and transmit data. In order to utilize the IDAQ system for the structural health monitoring of civil structures, this study developed an artificial filter bank by which structural dynamic responses (acceleration) were efficiently acquired, and also optimized it on the random El-Centro seismic waveform. All techniques developed in this study have been embedded in our system. The data compression technology-based IDAQ system was shown to acquire valid signals in a compressed size.
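One way to picture an artificial filter bank for acceleration acquisition is as a set of band-pass filters isolating structural modes before compression; the band edges, filter order, and synthetic two-mode record below are illustrative assumptions, not the authors' design:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def filter_bank(acc, fs, bands):
    """Decompose an acceleration record into modal frequency bands,
    a sketch of the filter-bank idea (band edges are assumptions)."""
    out = {}
    for f1, f2 in bands:
        sos = butter(4, [f1, f2], btype="bandpass", fs=fs, output="sos")
        out[(f1, f2)] = sosfiltfilt(sos, acc)  # zero-phase band isolation
    return out

# Illustrative record: two structural modes plus broadband noise.
fs = 200.0
t = np.arange(0, 60, 1 / fs)
acc = (np.sin(2 * np.pi * 1.2 * t) + 0.4 * np.sin(2 * np.pi * 3.5 * t)
       + 0.2 * np.random.randn(t.size))
modes = filter_bank(acc, fs, bands=[(0.8, 1.6), (3.0, 4.0)])
```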
NASA Astrophysics Data System (ADS)
Choi, Jin-Ha; Lee, Jaewon; Shin, Woojung; Choi, Jeong-Woo; Kim, Hyun Jung
2016-10-01
Nanotechnology and bioengineering have converged over the past decades, through which the application of multi-functional nanoparticles (NPs) has emerged in clinical and biomedical fields. NPs primed to detect disease-specific biomarkers or to deliver biopharmaceutical compounds have been validated in conventional in vitro culture models, including two-dimensional (2D) cell cultures and 3D organoid models. However, a lack of experimental models with strong human physiological relevance has hampered accurate validation of the safety and functionality of NPs. Alternatively, biomimetic human "Organs-on-Chips" microphysiological systems recapitulate the mechanically dynamic 3D tissue interface of the human organ microenvironment, in which the transport, cytotoxicity, biocompatibility, and therapeutic efficacy of NPs and their conjugates may be more accurately validated. Finally, integration of NP-guided diagnostic detection and targeted nanotherapeutics in conjunction with human organs-on-chips can provide a novel avenue to accelerate the NP-based drug development process as well as the rapid detection of cellular secretomes associated with pathophysiological processes.
NASA Astrophysics Data System (ADS)
Kassem, A.; Sawan, M.; Boukadoum, M.; Haidar, A.
2005-12-01
We are concerned with the design, implementation, and validation of a perception SoC based on an ultrasonic array of sensors. The proposed SoC is dedicated to ultrasonic echography applications. A rapid prototyping platform is used to implement and validate the new architecture of the digital signal processing (DSP) core. The proposed DSP core efficiently integrates all of the necessary ultrasonic B-mode processing modules. It includes digital beamforming, quadrature demodulation of RF signals, digital filtering, and envelope detection of the received signals. This system handles 128 scan lines and 6400 samples per scan line with a […] angle of view span. The design uses a minimum-size lookup memory to store the initial scan information. Rapid prototyping using an ARM/FPGA combination is used to validate the operation of the described system. This system offers significant advantages of portability and a rapid time to market.
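The B-mode chain named here (quadrature demodulation, filtering, envelope detection) is a textbook pipeline that can be sketched per scan line; the center frequency, cutoff, dynamic range, and synthetic RF line below are assumptions, and the hardware beamforming stage is omitted:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def bmode_line(rf, fs, f0, f_cut=2e6, dyn_range=60.0):
    """One B-mode scan line: quadrature demodulation at the probe
    center frequency, low-pass filtering, envelope detection, and
    log compression (textbook chain; parameters are illustrative)."""
    t = np.arange(rf.size) / fs
    iq = rf * np.exp(-2j * np.pi * f0 * t)            # mix to baseband
    sos = butter(4, f_cut, btype="lowpass", fs=fs, output="sos")
    iq = sosfiltfilt(sos, iq.real) + 1j * sosfiltfilt(sos, iq.imag)
    env = np.abs(iq)                                   # envelope
    env /= env.max() + 1e-12
    db = 20 * np.log10(env + 1e-6)                     # log compression
    return np.clip(db, -dyn_range, 0) + dyn_range      # 0..dyn_range for display

# Illustrative RF line: a 5 MHz pulse echo sampled at 40 MHz.
fs, f0 = 40e6, 5e6
t = np.arange(4000) / fs
rf = np.sin(2 * np.pi * f0 * t) * np.exp(-((t - 50e-6) / 2e-6) ** 2)
line = bmode_line(rf, fs, f0)
```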
Benedict, Ralph HB; DeLuca, John; Phillips, Glenn; LaRocca, Nicholas; Hudson, Lynn D; Rudick, Richard
2017-01-01
Cognitive and motor performance measures are commonly employed in multiple sclerosis (MS) research, particularly when the purpose is to determine the efficacy of treatment. The increasing focus of new therapies on slowing progression or reversing neurological disability makes the utilization of sensitive, reproducible, and valid measures essential. Processing speed is a basic elemental cognitive function that likely influences downstream processes such as memory. The Multiple Sclerosis Outcome Assessments Consortium (MSOAC) includes representatives from advocacy organizations, Food and Drug Administration (FDA), European Medicines Agency (EMA), National Institute of Neurological Disorders and Stroke (NINDS), academic institutions, and industry partners along with persons living with MS. Among the MSOAC goals is acceptance and qualification by regulators of performance outcomes that are highly reliable and valid, practical, cost-effective, and meaningful to persons with MS. A critical step for these neuroperformance metrics is elucidation of clinically relevant benchmarks, well-defined degrees of disability, and gradients of change that are deemed clinically meaningful. This topical review provides an overview of research on one particular cognitive measure, the Symbol Digit Modalities Test (SDMT), recognized as being particularly sensitive to slowed processing of information that is commonly seen in MS. The research in MS clearly supports the reliability and validity of this test and recently has supported a responder definition of SDMT change approximating 4 points or 10% in magnitude. PMID:28206827
Benedict, Ralph Hb; DeLuca, John; Phillips, Glenn; LaRocca, Nicholas; Hudson, Lynn D; Rudick, Richard
2017-04-01
Cognitive and motor performance measures are commonly employed in multiple sclerosis (MS) research, particularly when the purpose is to determine the efficacy of treatment. The increasing focus of new therapies on slowing progression or reversing neurological disability makes the utilization of sensitive, reproducible, and valid measures essential. Processing speed is a basic elemental cognitive function that likely influences downstream processes such as memory. The Multiple Sclerosis Outcome Assessments Consortium (MSOAC) includes representatives from advocacy organizations, Food and Drug Administration (FDA), European Medicines Agency (EMA), National Institute of Neurological Disorders and Stroke (NINDS), academic institutions, and industry partners along with persons living with MS. Among the MSOAC goals is acceptance and qualification by regulators of performance outcomes that are highly reliable and valid, practical, cost-effective, and meaningful to persons with MS. A critical step for these neuroperformance metrics is elucidation of clinically relevant benchmarks, well-defined degrees of disability, and gradients of change that are deemed clinically meaningful. This topical review provides an overview of research on one particular cognitive measure, the Symbol Digit Modalities Test (SDMT), recognized as being particularly sensitive to slowed processing of information that is commonly seen in MS. The research in MS clearly supports the reliability and validity of this test and recently has supported a responder definition of SDMT change approximating 4 points or 10% in magnitude.
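The responder definition cited, a change of roughly 4 points or 10%, is simple enough to state as code. A sketch using those thresholds from the abstract (the exact operationalization in the literature may differ, e.g. regarding direction of change):

```python
def sdmt_responder(baseline, follow_up):
    """Flag clinically meaningful SDMT change per the responder
    definition in the review: about 4 points or 10% of baseline."""
    change = abs(follow_up - baseline)
    return change >= 4 or (baseline > 0 and change / baseline >= 0.10)

print(sdmt_responder(50, 45))  # True: 5-point (10%) change
print(sdmt_responder(50, 48))  # False: 2-point change, below both thresholds
```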
Collecting the data but missing the point: validity of hand hygiene audit data.
Jeanes, A; Coen, P G; Wilson, A P; Drey, N S; Gould, D J
2015-06-01
Monitoring of hand hygiene compliance (HHC) by observation has been used in healthcare for more than a decade to provide assurance of infection control practice. The validity of this information is rarely tested. To examine the process and validity of collecting and reporting HHC data based on direct observation of compliance. Five years of HHC data routinely collected in one large National Health Service hospital trust were examined. The data collection process was reviewed by survey and interview of the auditors. HHC data collected for other research purposes undertaken during this period were compared with the organizational data set. After an initial increase, the reported HHC remained unchanged close to its intended target throughout this period. Examination of the data collection process revealed changes, including local interpretations of the data collection system, which invalidated the results. A minority of auditors had received formal training in observation and feedback of results. Whereas observation of HHC is the current gold standard, unless data collection definitions and methods are unambiguous, published, carefully supervised, and regularly monitored, variations may occur which affect the validity of the data. If the purpose of HHC monitoring is to improve practice and minimize transmission of infection, then a focus on progressively improving performance rather than on achieving a target may offer greater opportunities to achieve this. Copyright © 2015 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
The Multitheoretical List of Therapeutic Interventions - 30 items (MULTI-30).
Solomonov, Nili; McCarthy, Kevin S; Gorman, Bernard S; Barber, Jacques P
2018-01-16
To develop a brief version of the Multitheoretical List of Therapeutic Interventions (MULTI-60) in order to decrease the completion-time burden by approximately half, while maintaining content coverage. Study 1 aimed to select 30 items. Study 2 aimed to examine the reliability and internal consistency of the MULTI-30. Study 3 aimed to validate the MULTI-30 and ensure content coverage. In Study 1, the sample included 186 therapist and 255 patient MULTI ratings, and 164 ratings of sessions coded by trained observers. Internal consistency (Cronbach's alpha and McDonald's omega) was calculated and confirmatory factor analysis was conducted. Psychotherapy experts rated content relevance. Study 2 included a sample of 644 patient and 522 therapist ratings, and 793 codings of psychotherapy sessions. In Study 3, the sample included 33 codings of sessions. A series of regression analyses was conducted to examine replication of previously published findings using the MULTI-30. The MULTI-30 was found valid, reliable, and internally consistent across the 2,564 ratings examined in the three studies presented. The MULTI-30 is a brief and reliable process measure. Future studies are required for further validation.
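Of the two internal-consistency statistics named, Cronbach's alpha has a closed form that is easy to sketch (McDonald's omega requires a fitted factor model and is omitted here); the rating matrix below is invented:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total)."""
    items = np.asarray(items, float)
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_var / total_var)

# Illustrative ratings: 6 respondents x 4 items on a 1-5 scale.
scores = np.array([[4, 5, 4, 4], [2, 2, 3, 2], [5, 5, 4, 5],
                   [3, 3, 3, 4], [1, 2, 1, 2], [4, 4, 5, 4]])
print(f"alpha = {cronbach_alpha(scores):.2f}")
```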
ERIC Educational Resources Information Center
Teeter, Phyllis Anne; Smith, Philip L.
The final report of the 2-year project describes the development and validation of microcomputer software to help assess reading disabled elementary grade children and to provide basic reading instruction. Accomplishments of the first year included: design of the STAR Neuro-Cognitive Assessment Program which includes a reproduction of…
Ackerman, Sara L; Gourley, Gato; Le, Gem; Williams, Pamela; Yazdany, Jinoos; Sarkar, Urmimala
2018-03-14
The aim of the study was to develop standards for tracking patient safety gaps in ambulatory care in safety net health systems. Leaders from five California safety net health systems were invited to participate in a modified Delphi process sponsored by the Safety Promotion Action Research and Knowledge Network (SPARKNet) and the California Safety Net Institute in 2016. During each of the three Delphi rounds, the feasibility and validity of 13 proposed patient safety measures were discussed and prioritized. Surveys and transcripts from the meetings were analyzed to understand the decision-making process. The Delphi process included eight panelists. Consensus was reached to adopt 9 of 13 proposed measures. All 9 measures were unanimously considered valid, but concern was expressed about the feasibility of implementing several of the measures. Although safety net health systems face high barriers to standardized measurement, our study demonstrates that consensus can be reached on acceptable and feasible methods for tracking patient safety gaps in safety net health systems. If accompanied by the active participation of key stakeholder groups, including patients, clinicians, staff, data system professionals, and health system leaders, the consensus measures reported here represent one step toward improving ambulatory patient safety in safety net health systems.
Seismology software: state of the practice
NASA Astrophysics Data System (ADS)
Smith, W. Spencer; Zeng, Zheng; Carette, Jacques
2018-05-01
We analyzed the state of practice for software development in the seismology domain by comparing 30 software packages on four aspects: product, implementation, design, and process. We found room for improvement in most seismology software packages. The principal areas of concern include a lack of adequate requirements and design specification documents, a lack of test data to assess reliability, a lack of examples to get new users started, and a lack of technological tools to assist with managing the development process. To assist going forward, we provide recommendations for a document-driven development process that includes a problem statement, development plan, requirement specification, verification and validation (V&V) plan, design specification, code, V&V report, and a user manual. We also provide advice on tool use, including issue tracking, version control, code documentation, and testing tools.
Seismology software: state of the practice
NASA Astrophysics Data System (ADS)
Smith, W. Spencer; Zeng, Zheng; Carette, Jacques
2018-02-01
We analyzed the state of practice for software development in the seismology domain by comparing 30 software packages on four aspects: product, implementation, design, and process. We found room for improvement in most seismology software packages. The principal areas of concern include a lack of adequate requirements and design specification documents, a lack of test data to assess reliability, a lack of examples to get new users started, and a lack of technological tools to assist with managing the development process. To assist going forward, we provide recommendations for a document-driven development process that includes a problem statement, development plan, requirement specification, verification and validation (V&V) plan, design specification, code, V&V report, and a user manual. We also provide advice on tool use, including issue tracking, version control, code documentation, and testing tools.
P-8A Poseidon strategy for modeling & simulation verification validation & accreditation (VV&A)
NASA Astrophysics Data System (ADS)
Kropp, Derek L.
2009-05-01
One of the first challenges in addressing the need for Modeling & Simulation (M&S) Verification, Validation, & Accreditation (VV&A) is to develop an approach for applying structured and formalized VV&A processes. The P-8A Poseidon Multi-Mission Maritime Aircraft (MMA) Program Modeling and Simulation Accreditation Strategy documents the P-8A program's approach to VV&A. The P-8A strategy tailors a risk-based approach and leverages existing bodies of knowledge, such as the Defense Modeling and Simulation Office Recommended Practice Guide (DMSO RPG), to make the process practical and efficient. As the program progresses, the M&S team must continue to look for ways to streamline the process, add supplemental steps to enhance the process, and identify and overcome procedural, organizational, and cultural challenges. This paper includes some of the basics of the overall strategy, examples of specific approaches that have worked well, and examples of challenges that the M&S team has faced.
The NIH analytical methods and reference materials program for dietary supplements.
Betz, Joseph M; Fisher, Kenneth D; Saldanha, Leila G; Coates, Paul M
2007-09-01
Quality of botanical products is a great uncertainty that consumers, clinicians, regulators, and researchers face. Definitions of quality abound, and include specifications for sanitation, adventitious agents (pesticides, metals, weeds), and content of natural chemicals. Because dietary supplements (DS) are often complex mixtures, they pose analytical challenges and method validation may be difficult. In response to product quality concerns and the need for validated and publicly available methods for DS analysis, the US Congress directed the Office of Dietary Supplements (ODS) at the National Institutes of Health (NIH) to accelerate an ongoing methods validation process, and the Dietary Supplements Methods and Reference Materials Program was created. The program was constructed from stakeholder input and incorporates several federal procurement and granting mechanisms in a coordinated and interlocking framework. The framework facilitates validation of analytical methods, analytical standards, and reference materials.
Schneider, Gary; Kachroo, Sumesh; Jones, Natalie; Crean, Sheila; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W
2012-01-01
The Food and Drug Administration's Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest from administrative and claims data. This article summarizes the process and findings of the algorithm review of anaphylaxis. PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the anaphylaxis health outcome of interest. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify anaphylaxis and including validation estimates of the coding algorithms. Our search revealed limited literature focusing on anaphylaxis that provided administrative and claims data-based algorithms and validation estimates. Only four studies identified via literature searches provided validated algorithms; however, two additional studies were identified by Mini-Sentinel collaborators and were incorporated. The International Classification of Diseases, Ninth Revision, codes varied, as did the positive predictive value, depending on the cohort characteristics and the specific codes used to identify anaphylaxis. Research needs to be conducted on designing validation studies to test anaphylaxis algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
Sawatzky, Richard; Chan, Eric K H; Zumbo, Bruno D; Ahmed, Sara; Bartlett, Susan J; Bingham, Clifton O; Gardner, William; Jutai, Jeffrey; Kuspinar, Ayse; Sajobi, Tolulope; Lix, Lisa M
2017-09-01
Obtaining the patient's view about the outcome of care is an essential component of patient-centered care. Many patient-reported outcome (PRO) instruments for different purposes have been developed since the 1960s. Measurement validation is fundamental in the development, evaluation, and use of PRO instruments. This paper provides a review of modern perspectives of measurement validation in relation to the following three questions as applied to PROs: (1) What evidence is needed to warrant comparisons between groups and individuals? (2) What evidence is needed to warrant comparisons over time? and (3) What are the value implications, including personal and societal consequences, of using PRO scores? Measurement validation is an ongoing process that involves the accumulation of evidence regarding the justification of inferences, actions, and decisions based on measurement scores. These include inferences pertaining to comparisons between groups and comparisons over time, as well as consideration of the value implications of using PRO scores. Personal and societal consequences must be examined as part of a comprehensive approach to measurement validation. The answers to these three questions are fundamental to the validity of different types of inferences, actions, and decisions made on PRO scores in health research, health care administration, and clinical practice. Copyright © 2016 Elsevier Inc. All rights reserved.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Grassberger, C; Paganetti, H
Purpose: To develop a model that incorporates the process of resistance development into the treatment optimization process for schedules that include targeted therapies. Further, to validate the approach using clinical data and to apply the model to assess the optimal induction period with targeted agents before curative treatment with chemo-radiation in stage III lung cancer. Methods: Growth of the tumor and its subpopulations is modeled by Gompertzian growth dynamics, and resistance induction as a stochastic process. Chemotherapy-induced cell kill is modeled by log-cell-kill dynamics; targeted agents similarly, but restricted to the sensitive population. Radiation-induced cell kill is assumed to follow the linear-quadratic model. The validation patient data consist of a cohort of lung cancer patients treated with tyrosine kinase inhibitors that had longitudinal imaging data available. Results: The resistance induction model was successfully validated using clinical trial data from 49 patients treated with targeted agents. The observed recurrence kinetics, with tumors progressing from 1.4–63 months, result in tumor growth equaling a median volume doubling time of 92 days [34–248] and a median fraction of pre-existing resistance of 0.035 [0–0.22], in agreement with previous clinical studies. The model revealed widely varying optimal time points for the use of curative therapy, ranging from ∼1 month to >6 months depending on the patient's growth rate and amount of pre-existing resistance. This demonstrates the importance of patient-specific treatment schedules when targeted agents are incorporated into the treatment. Conclusion: We developed a model including evolutionary dynamics of resistant sub-populations with traditional chemotherapy and radiation cell-kill models. Fitting to clinical data yielded patient-specific growth rates and resistant fractions in agreement with previous studies. Further application of the model demonstrated how proper timing of chemo-radiation could minimize the probability of resistance, increasing tumor control significantly.
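The model's two main ingredients, Gompertzian growth and linear-quadratic radiation cell kill, have simple closed forms. A sketch with assumed parameters (only the 0.035 median pre-existing resistant fraction is taken from the abstract; the growth rate, carrying capacity, radiosensitivities, and the 2-log targeted-agent kill are invented):

```python
import numpy as np

def gompertz(n0, K, alpha, t):
    """Closed-form Gompertz growth: N(t) = K * (N0/K) ** exp(-alpha * t)."""
    return K * (n0 / K) ** np.exp(-alpha * t)

def lq_surviving_fraction(d, n_fx, a=0.3, b=0.03):
    """Linear-quadratic model: SF = exp(-n_fx * (a*d + b*d**2)) for
    n_fx fractions of d Gy; a and b are illustrative, not fitted."""
    return np.exp(-n_fx * (a * d + b * d ** 2))

# Assumed parameters; only the 0.035 resistant fraction comes from the abstract.
K, alpha = 1e12, 0.005                      # carrying capacity, growth rate / day
n = gompertz(1e9, K, alpha, t=90.0)         # tumor burden after 90 days of growth
resistant, sensitive = 0.035 * n, 0.965 * n
sensitive *= 1e-2                           # targeted agent: 2-log kill, sensitive only
sf_rt = lq_surviving_fraction(2.0, 30)      # 30 x 2 Gy hits both subpopulations
print(f"residual burden: {(sensitive + resistant) * sf_rt:.3g} cells")
```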
Pernambuco, Leandro; Espelt, Albert; Magalhães, Hipólito Virgílio; Lima, Kenio Costa de
2017-06-08
To present a guide with recommendations for the translation, adaptation, elaboration, and validation of tests in Speech and Language Pathology. The recommendations were based on international guidelines with a focus on the process of elaboration, translation, cross-cultural adaptation, and validation of tests. The recommendations were grouped into two charts, one with procedures for translation and transcultural adaptation and the other for obtaining evidence of validity, reliability, and measures of accuracy of the tests. A guide with norms for the organization and systematization of the process of elaboration, translation, cross-cultural adaptation, and validation of tests in Speech and Language Pathology was created.
Ownby, Raymond L; Acevedo, Amarilis; Waldrop-Valverde, Drenna; Jacobs, Robin J; Caballero, Joshua; Davenport, Rosemary; Homs, Ana-Maria; Czaja, Sara J; Loewenstein, David
2013-01-01
Current measures of health literacy have been criticized on a number of grounds, including use of a limited range of content, development on small and atypical patient groups, and poor psychometric characteristics. In this paper, we report the development and preliminary validation of a new computer-administered and -scored health literacy measure addressing these limitations. Items in the measure reflect a wide range of content related to health promotion and maintenance as well as care for diseases. The development process has focused on creating a measure that will be useful in both Spanish and English, while not requiring substantial time for clinician training and individual administration and scoring. The items incorporate several formats, including questions based on brief videos, which allow for the assessment of listening comprehension and the skills related to obtaining information on the Internet. In this paper, we report the interim analyses detailing the initial development and pilot testing of the items (phase 1 of the project) in groups of Spanish and English speakers. We then describe phase 2, which included a second round of testing of the items, in new groups of Spanish and English speakers, and evaluation of the new measure's reliability and validity in relation to other measures. Data are presented that show that four scales (general health literacy, numeracy, conceptual knowledge, and listening comprehension), developed through a process of item and factor analyses, have significant relations to existing measures of health literacy.
NASA Astrophysics Data System (ADS)
Wrożyna, Andrzej; Pernach, Monika; Kuziak, Roman; Pietrzyk, Maciej
2016-04-01
Due to their exceptional strength properties combined with good workability, Advanced High-Strength Steels (AHSS) are commonly used in the automotive industry. Manufacturing of these steels is a complex process which requires precise control of technological parameters during thermo-mechanical treatment. Design of these processes can be significantly improved by numerical models of phase transformations. The objective of the paper was to evaluate the predictive capabilities of such models with respect to their applicability in simulating thermal cycles for AHSS. Two models were considered. The former was an upgrade of the JMAK equation, while the latter was an upgrade of the Leblond model. The models can be applied to any AHSS, though the examples quoted in the paper refer to Dual Phase (DP) steel. Three series of experimental simulations were performed. The first included various thermal cycles going beyond the limitations of continuous annealing lines; the objective was to validate the models' behavior in more complex cooling conditions. The second set of tests included experimental simulations of the thermal cycle characteristic of continuous annealing lines, and the capability of the models to properly describe phase transformations in this process was evaluated. The third set included data from an industrial continuous annealing line. Validation and verification of the models confirmed their good predictive capabilities. Since it does not require application of the additivity rule, the upgrade of the Leblond model was selected as the better one for simulation of industrial processes in AHSS production.
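For reference, the isothermal JMAK (Avrami) kinetics that the first model builds on can be written in a few lines; the rate constant and exponent below are illustrative, not fitted to DP steel:

```python
import numpy as np

def jmak_fraction(t, k, n):
    """JMAK (Avrami) transformed fraction X(t) = 1 - exp(-(k*t)**n),
    the isothermal kernel that is extended to non-isothermal cycles
    via the additivity rule (k and n here are illustrative)."""
    return 1.0 - np.exp(-(k * t) ** n)

# Illustrative isothermal kinetics: transformed fraction vs. time.
t = np.linspace(0.0, 100.0, 201)            # seconds
x = jmak_fraction(t, k=0.03, n=2.5)
print(f"X(30 s) = {jmak_fraction(30.0, 0.03, 2.5):.2f}")
```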
Saidel-Goley, Isaac N; Albiero, Erin E; Flannery, Kathleen A
2012-02-01
Dissociation is a mental process resulting in the disruption of memory, perception, and sometimes identity. At a nonclinical level, only mild dissociative experiences occur. The nature of nonclinical dissociation is disputed in the literature, with some asserting that it is a beneficial information processing style and others positing that it is a psychopathological phenomenon. The purpose of this study was to further the understanding of nonclinical dissociation with respect to memory and attention, by including a more ecologically valid virtual reality (VR) memory task along with standard neuropsychological tasks. Forty-five undergraduate students from a small liberal arts college in the northeast participated for course credit. The participants completed a battery of tasks including two standard memory tasks, a standard attention task, and an experimental VR memory task; the VR task included immersion in a virtual apartment, followed by incidental object-location recall for objects in the virtual apartment. Support for the theoretical model portraying nonclinical dissociation as a beneficial information processing style was found in this study. Dissociation scores were positively correlated with working memory scores and attentional processing scores on the standard neuropsychological tasks. In terms of the VR task, dissociation scores were positively correlated with more false positive memories that could be the result of a tendency of nonclinical highly dissociative individuals to create more elaborative schemas. This study also demonstrates that VR paradigms add to the prediction of cognitive functioning in testing protocols using standard neuropsychological tests, while simultaneously increasing ecological validity.
Wesson, Jacqueline; Clemson, Lindy; Crawford, John D; Kochan, Nicole A; Brodaty, Henry; Reppermund, Simone
2017-05-01
To explore the validity of the Large Allen's Cognitive Level Screen-5 (LACLS-5) as a performance-based measure of functional cognition, representing an ability to perform complex everyday activities in older adults with mild cognitive impairment (MCI) and mild dementia living in the community. Using cross-sectional data from the Sydney Memory and Ageing Study, 160 community-dwelling older adults with normal cognition (CN; N = 87), MCI (N = 43), or dementia (N = 30) were studied. Functional cognition (LACLS-5), complex everyday activities (Disability Assessment for Dementia [DAD], Assessment of Motor and Process Skills [AMPS]), and neuropsychological measures were used. Participants with dementia performed worse than CN on all clinical measures, and MCI participants were intermediate. Correlational analyses showed that the LACLS-5 was most strongly related to AMPS Process scores, the DAD instrumental activities of daily living subscale, the Mini-Mental State Exam, Block Design, Logical Memory, and Trail Making Test B. Multiple regression analysis indicated that both cognitive (Block Design) and functional measures (AMPS Process score) and sex predicted LACLS-5 performance. Finally, the LACLS-5 was able to adequately discriminate between CN and dementia and between MCI and dementia but was unable to reliably distinguish between CN and MCI. Construct validity, including convergent and discriminative validity, was supported. The LACLS-5 is a valid performance-based measure for evaluating functional cognition. Discriminative validity is acceptable for identifying mild dementia but requires further refinement for detecting MCI. Copyright © 2017 American Association for Geriatric Psychiatry. Published by Elsevier Inc. All rights reserved.
Development and validation of the pro-environmental behaviour scale for women's health.
Kim, HyunKyoung
2017-05-01
This study aimed to develop and test the Pro-environmental Behavior Scale for Women's Health. Women adopt sustainable behaviours and alter their lifestyles to protect the environment and their health from environmental pollution. The conceptual framework of pro-environmental behaviours was based on Rogers' protection motivation theory and Weinstein's precaution adoption process model. A cross-sectional design was used for instrument development. The instrument development process consisted of a literature review, personal in-depth interviews, and focus group interviews. The sample comprised 356 adult women recruited in April-May 2012 in South Korea using quota sampling. For construct validity, exploratory factor analysis was conducted to examine the factor structure, after which convergent and discriminant validity and known-group comparisons were tested. Principal component analysis yielded 17 items with four factors, including 'women's health protection,' 'chemical exposure prevention,' 'alternative consumption,' and 'community-oriented behaviour'. The Cronbach's α was 0.81. Convergent and discriminant validity were supported by correlations with other environmental-health and health-behaviour measures. Nursing professionals can reliably use the instrument to assess women's behaviours that protect their health and the environment. © 2016 John Wiley & Sons Ltd.
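The extraction step behind a principal-component solution like the four-factor structure reported here can be sketched directly from the item correlation matrix; the respondent-by-item matrix below is random placeholder data, and the rotation and validity checks the study performed are omitted:

```python
import numpy as np

def pca_loadings(X, n_factors=4):
    """Unrotated principal-component loadings from the item correlation
    matrix: eigenvectors scaled by the square roots of the eigenvalues."""
    R = np.corrcoef(np.asarray(X, float), rowvar=False)
    eigval, eigvec = np.linalg.eigh(R)
    order = np.argsort(eigval)[::-1][:n_factors]   # largest components first
    return eigvec[:, order] * np.sqrt(eigval[order])

# Illustrative data: 200 respondents x 17 items (random, so the loadings
# are meaningless except to show the shape of the output).
rng = np.random.default_rng(1)
loadings = pca_loadings(rng.integers(1, 5, size=(200, 17)), n_factors=4)
print(loadings.shape)  # (17, 4)
```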
Analysis of SSME HPOTP rotordynamics subsynchronous whirl
NASA Technical Reports Server (NTRS)
1984-01-01
The causes and remedies of vibration and subsynchronous whirl problems encountered in the Space Shuttle Main Engine (SSME) turbomachinery are analyzed. Because the nonlinear and linearized models of the turbopumps play such an important role in the analysis process, the main emphasis is concentrated on the verification and improvement of these tools. It has been the goal of our work to validate the equations of motion used in the models, including the assumptions upon which they are based. Verification of the SSME rotordynamics simulation, and of the developed enhancements, is emphasized.
IVHM for the 3rd Generation RLV Program: Technology Development
NASA Technical Reports Server (NTRS)
Kahle, Bill
2000-01-01
The objective behind the Integrated Vehicle Health Management (IVHM) project is to develop and integrate technologies which can provide a continuous, intelligent, and adaptive health state of a vehicle and use this information to improve safety and reduce the costs of operations. Technological areas discussed include: developing, validating, and transferring next-generation IVHM technologies to near-term industry and government reusable launch systems; focusing NASA on next-generation, highly advanced sensor and software technologies; and validating the IVHM systems engineering design process for future programs.
Validation of gamma irradiator controls for quality and regulatory compliance
NASA Astrophysics Data System (ADS)
Harding, Rorry B.; Pinteric, Francis J. A.
1995-09-01
Since 1978 the U.S. Food and Drug Administration (FDA) has had both the legal authority and the Current Good Manufacturing Practice (CGMP) regulations in place to require irradiator owners who process medical devices to produce evidence of Irradiation Process Validation. One of the key components of Irradiation Process Validation is the validation of the irradiator controls. However, it is only recently that FDA audits have focused on this component of the process validation. What is Irradiator Control System Validation? What constitutes evidence of control? How do owners obtain evidence? What is the irradiator supplier's role in validation? How does the ISO 9000 Quality Standard relate to the FDA's CGMP requirement for evidence of Control System Validation? This paper presents answers to these questions based on the recent experiences of Nordion's engineering and product management staff who have worked with several US-based irradiator owners. This topic — Validation of Irradiator Controls — is a significant regulatory compliance and operations issue within the irradiator suppliers' and users' community.
Southern Regional Center for Lightweight Innovative Design
DOE Office of Scientific and Technical Information (OSTI.GOV)
Wang, Paul T.
The Southern Regional Center for Lightweight Innovative Design (SRCLID) has developed an experimentally validated cradle-to-grave modeling and simulation effort to optimize automotive components in order to decrease weight and cost, yet increase performance and safety in crash scenarios. In summary, the three major objectives of this project were accomplished: to develop experimentally validated cradle-to-grave modeling and simulation tools to optimize automotive and truck components for lightweighting materials (aluminum, steel, and Mg alloys and polymer-based composites) with consideration of uncertainty, to decrease weight and cost yet increase performance and safety in impact scenarios; to develop multiscale computational models that quantify microstructure-property relations by evaluating various length scales, from the atomic through component levels, for each step of the manufacturing process for vehicles; and to develop an integrated K-12 educational program to educate students on lightweighting designs and impact scenarios. In this final report, we divided the content into two parts: the first part contains the development of building blocks for the project, including materials and process models, the process-structure-property (PSP) relationship, and experimental validation capabilities; the second part presents the demonstration task for Mg front-end work associated with USAMP projects.
Kupersmidt, Janis B; Stelter, Rebecca; Dodge, Kenneth A
2011-12-01
The purpose of this study was to evaluate the psychometric properties of an audio computer-assisted self-interviewing Web-based software application called the Social Information Processing Application (SIP-AP) that was designed to assess social information processing skills in boys in 3rd through 5th grades. This study included a racially and ethnically diverse sample of 244 boys ages 8 through 12 (M = 9.4) from public elementary schools in 3 states. The SIP-AP includes 8 videotaped vignettes, filmed from the first-person perspective, that depict common misunderstandings among boys. Each vignette shows a negative outcome for the victim and ambiguous intent on the part of the perpetrator. Boys responded to 16 Web-based questions representing the 5 social information processing mechanisms, after viewing each vignette. Parents and teachers completed measures assessing boys' antisocial behavior. Confirmatory factor analyses revealed that a model positing the original 5 cognitive mechanisms fit the data well when the items representing prosocial cognitions were included on their own factor, creating a 6th factor. The internal consistencies for each of the 16 individual cognitions as well as for the 6 cognitive mechanism scales were excellent. Boys with elevated scores on 5 of the 6 cognitive mechanisms exhibited more antisocial behavior than boys whose scores were not elevated. These findings highlight the need for further research on the measurement of prosocial cognitions or cognitive strengths in boys in addition to assessing cognitive deficits. Findings suggest that the SIP-AP is a reliable and valid tool for use in future research of social information processing skills in boys.
Kupersmidt, Janis B.; Stelter, Rebecca; Dodge, Kenneth A.
2013-01-01
The purpose of this study was to evaluate the psychometric properties of an audio computer-assisted self-interviewing Web-based software application called the Social Information Processing Application (SIP-AP) that was designed to assess social information processing skills in boys in 3rd through 5th grades. This study included a racially and ethnically diverse sample of 244 boys ages 8 through 12 (M = 9.4) from public elementary schools in 3 states. The SIP-AP includes 8 videotaped vignettes, filmed from the first-person perspective, that depict common misunderstandings among boys. Each vignette shows a negative outcome for the victim and ambiguous intent on the part of the perpetrator. Boys responded to 16 Web-based questions representing the 5 social information processing mechanisms, after viewing each vignette. Parents and teachers completed measures assessing boys’ antisocial behavior. Confirmatory factor analyses revealed that a model positing the original 5 cognitive mechanisms fit the data well when the items representing prosocial cognitions were included on their own factor, creating a 6th factor. The internal consistencies for each of the 16 individual cognitions as well as for the 6 cognitive mechanism scales were excellent. Boys with elevated scores on 5 of the 6 cognitive mechanisms exhibited more antisocial behavior than boys whose scores were not elevated. These findings highlight the need for further research on the measurement of prosocial cognitions or cognitive strengths in boys in addition to assessing cognitive deficits. Findings suggest that the SIP-AP is a reliable and valid tool for use in future research of social information processing skills in boys. PMID:21534693
Remote Sensing Image Quality Assessment Experiment with Post-Processing
NASA Astrophysics Data System (ADS)
Jiang, W.; Chen, S.; Wang, X.; Huang, Q.; Shi, H.; Man, Y.
2018-04-01
This paper briefly describes a post-processing influence assessment experiment comprising three steps: physical simulation, image processing, and image quality assessment. The physical simulation models the sampled imaging system in the laboratory; the imaging system parameters are tested, and the digital images serving as image-processing input are produced by this imaging system with the same imaging system parameters. The gathered optically sampled images with the tested imaging parameters are processed by three digital image processes: calibration pre-processing, lossy compression with different compression ratios, and image post-processing with different cores. The image quality assessment method used is just-noticeable-difference (JND) subjective assessment based on ISO 20462; through subjective assessment of the gathered and processed images, the influence of different imaging parameters and of post-processing on image quality can be found. The six JND subjective assessment datasets can be validated against each other. The main conclusions are: image post-processing can improve image quality; it can do so even with lossy compression, although image quality improves less at higher compression ratios than at lower ones; and with our image post-processing method, image quality is better when the camera MTF lies within a small range.
DOE Office of Scientific and Technical Information (OSTI.GOV)
Jernigan, Dann A.; Blanchat, Thomas K.
It is necessary to improve understanding and to develop temporally and spatially resolved integral-scale validation data of the heat flux incident to a complex object, in addition to measuring the thermal response of that object within the fire plume, for the validation of the SIERRA/FUEGO/SYRINX fire and SIERRA/CALORE codes. To meet this objective, a complex calorimeter with sufficient instrumentation to allow validation of the coupling between FUEGO/SYRINX/CALORE has been designed, fabricated, and tested in the Fire Laboratory for Accreditation of Models and Experiments (FLAME) facility. Validation experiments are specifically designed for direct comparison with the computational predictions. Making meaningful comparison between the computational and experimental results requires careful characterization and control of the experimental features or parameters used as inputs into the computational model. Validation experiments must be designed to capture the essential physical phenomena, including all relevant initial and boundary conditions. This report presents the data validation steps and processes, the results of the penlight radiant heat experiments (for the purpose of validating the CALORE heat transfer modeling of the complex calorimeter), and the results of the fire tests in FLAME.
Küçükdeveci, Ayse A; Sahin, Hülya; Ataman, Sebnem; Griffiths, Bridget; Tennant, Alan
2004-02-15
Guidelines have been established for cross-cultural adaptation of outcome measures. However, invariance across cultures must also be demonstrated through analysis of Differential Item Functioning (DIF). This is tested in the context of a Turkish adaptation of the Health Assessment Questionnaire (HAQ). Internal construct validity of the adapted HAQ is assessed by Rasch analysis; reliability, by internal consistency and the intraclass correlation coefficient; external construct validity, by association with impairments and American College of Rheumatology functional stages. Cross-cultural validity is tested through DIF by comparison with data from the UK version of the HAQ. The adapted version of the HAQ demonstrated good internal construct validity through fit of the data to the Rasch model (mean item fit 0.205; SD 0.998). Reliability was excellent (alpha = 0.97) and external construct validity was confirmed by expected associations. DIF for culture was found in only 1 item. Cross-cultural validity was found to be sufficient for use in international studies between the UK and Turkey. Future adaptation of instruments should include analysis of DIF at the field testing stage in the adaptation process.
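Rasch analysis and the DIF comparison reported here rest on the same item response model: at a given ability, the probability of endorsing an item depends only on the item's difficulty, so a culture-specific shift in difficulty at equal ability is DIF. A dichotomous sketch (the HAQ items are actually polytomous, and the 0.6-logit shift is an invented illustration):

```python
import numpy as np

def rasch_prob(theta, b):
    """Dichotomous Rasch model: P(endorse) = 1 / (1 + exp(-(theta - b)))."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

# DIF in sketch form: the same item needs a different difficulty b in
# each group, so endorsement probabilities differ at equal ability.
theta = np.linspace(-3, 3, 7)
p_uk = rasch_prob(theta, b=0.0)      # illustrative UK item difficulty
p_tr = rasch_prob(theta, b=0.6)      # same item, shifted for Turkey -> DIF
print(np.round(p_uk - p_tr, 2))      # nonzero gap at equal ability = DIF
```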
Federal Register 2010, 2011, 2012, 2013, 2014
2013-12-10
... for, the manufacture, preproduction design validation (including a process to assess the performance... requirements governing the design, manufacture, packing, labeling, storage, installation, and servicing of all... for Quality Assurance in Design/Development, Production, Installation, and Servicing.'' The CGMP/QS...
Federal Register 2010, 2011, 2012, 2013, 2014
2010-10-18
... facilities and controls used for, the manufacture, preproduction design validation (including a process to... requirements governing the design, manufacture, packing, labeling, storage, installation, and servicing of all... Model for Quality Assurance in Design/Development, Production, Installation, and Servicing.'' The CGMP...
User Authentication and Authorization Challenges in a Networked Library Environment.
ERIC Educational Resources Information Center
Machovec, George S.
1997-01-01
Discusses computer user authentication and authorization issues when libraries need to let valid users access databases and information services without making the process too difficult for either party. Common solutions are explained, including filtering, passwords, and kerberos (cryptographic authentication scheme for secure use over public…
Space Shuttle Propulsion System Reliability
NASA Technical Reports Server (NTRS)
Welzyn, Ken; VanHooser, Katherine; Moore, Dennis; Wood, David
2011-01-01
This session includes the following presentations: (1) External Tank (ET) System Reliability and Lessons; (2) Space Shuttle Main Engine (SSME) Reliability Validated by a Million Seconds of Testing; (3) Reusable Solid Rocket Motor (RSRM) Reliability via Process Control; and (4) Solid Rocket Booster (SRB) Reliability via Acceptance and Testing.
Osmosis and Diffusion Conceptual Assessment
ERIC Educational Resources Information Center
Fisher, Kathleen M.; Williams, Kathy S.; Lineback, Jennifer Evarts
2011-01-01
Biology student mastery regarding the mechanisms of diffusion and osmosis is difficult to achieve. To monitor comprehension of these processes among students at a large public university, we developed and validated an 18-item Osmosis and Diffusion Conceptual Assessment (ODCA). This assessment includes two-tiered items, some adopted or modified…
Hill, Jonathan C; Kang, Sujin; Benedetto, Elena; Myers, Helen; Blackburn, Steven; Smith, Stephanie; Dunn, Kate M; Hay, Elaine; Rees, Jonathan; Beard, David; Glyn-Jones, Sion; Barker, Karen; Ellis, Benjamin; Fitzpatrick, Ray; Price, Andrew
2016-08-05
Current musculoskeletal outcome tools are fragmented across different healthcare settings and conditions. Our objectives were to develop and validate a single musculoskeletal outcome measure for use throughout the pathway and with patients with different musculoskeletal conditions: the Arthritis Research UK Musculoskeletal Health Questionnaire (MSK-HQ). A consensus workshop with stakeholders from across the musculoskeletal community, workshops and individual interviews with a broad mix of musculoskeletal patients identified and prioritised outcomes for MSK-HQ inclusion. Initial psychometric validation was conducted in four cohorts from community physiotherapy, and secondary care orthopaedic hip, knee and shoulder clinics. Stakeholders (n=29) included primary care, physiotherapy, orthopaedic and rheumatology patients (n=8); general practitioners, physiotherapists, orthopaedists, rheumatologists and pain specialists (n=7); patient and professional national body representatives (n=10); and researchers (n=4). The four validation cohorts included 570 participants (n=210 physiotherapy, n=150 hip, n=150 knee, n=60 shoulder patients). Outcomes included the MSK-HQ's acceptability, feasibility, comprehension, readability and responder burden. The validation cohort outcomes were the MSK-HQ's completion rate, test-retest reliability and convergent validity with reference standards (EQ-5D-5L, Oxford Hip, Knee, Shoulder Scores, and the Keele MSK-PROM). Musculoskeletal domains prioritised were pain severity, physical function, work interference, social interference, sleep, fatigue, emotional health, physical activity, independence, understanding, confidence to self-manage and overall impact. Patients reported MSK-HQ items to be 'highly relevant' and 'easy to understand'. Completion rates were high (94.2%), with scores normally distributed, and no floor/ceiling effects. Test-retest reliability was excellent, and convergent validity was strong (correlations 0.81-0.88). A new musculoskeletal outcome measure has been developed through a coproduction process with patients to capture prioritised outcomes for use throughout the pathway and with different musculoskeletal conditions. Four validation cohorts found that the MSK-HQ had high completion rates, excellent test-retest reliability and strong convergent validity with reference standards. Further validation studies are ongoing, including a cohort with rheumatoid/inflammatory arthritis. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
Bountziouka, V; Bathrellou, E; Giotopoulou, A; Katsagoni, C; Bonou, M; Vallianou, N; Barbetseas, J; Avgerinos, P C; Panagiotakos, D B
2012-08-01
The aim of this work was to evaluate the repeatability and the validity of a food frequency questionnaire (FFQ), and to discuss the methodological framework of such procedures. The semi-quantitative FFQ included 69 questions regarding the frequency of consumption of all main food groups and beverages usually consumed and 7 questions regarding eating behaviors. Five hundred individuals (37 ± 15 yrs, 38% males) were recruited for the repeatability process, while another 432 (46 ± 16 yrs, 40% males) also completed 3-Day Diaries (3DD) for the validation process. The repeatability of the FFQ was adequate for all food items tested (Kendall's tau-b: 0.26-0.67, p < 0.05) and for energy and macronutrient intake (energy-adjusted correlation coefficients ranged from 0.56 to 0.69, p < 0.05). Moderate validity of the FFQ was observed for "dairy products", "fruit", "alcohol" and "stimulants" (tau-b: 0.31-0.60, p < 0.05), whereas low agreement was shown for "starchy products", "legumes", "vegetables", "meat", "fish", "sweets", "eggs", and "fats and oils" (tau-b < 0.30, p < 0.05). The FFQ was also valid regarding energy and macronutrient intake. Sensitivity analyses by sex and BMI category (< or ≥25 kg/m(2)) showed similar validity of the FFQ for all food groups (apart from "fats and oils" intake), as well as energy and nutrient intake. The proposed FFQ has proven repeatable and relatively valid for food intake, and could therefore be used for nutritional assessment purposes. Copyright © 2010 Elsevier B.V. All rights reserved.
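For readers unfamiliar with the repeatability statistic used above: Kendall's tau-b compares the rank ordering of paired test-retest responses while correcting for ties, which matters for coarse frequency categories. A minimal sketch, with simulated categories standing in for the real FFQ data:

    import numpy as np
    from scipy.stats import kendalltau

    rng = np.random.default_rng(1)
    test = rng.integers(0, 6, size=500)      # frequency category at visit 1
    retest = np.clip(test + rng.integers(-1, 2, size=500), 0, 5)  # mostly stable

    tau_b, p = kendalltau(test, retest)      # scipy's default handles ties (tau-b)
    print(f"tau-b = {tau_b:.2f}, p = {p:.3g}")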
Masucci, Giuseppe V; Cesano, Alessandra; Hawtin, Rachael; Janetzki, Sylvia; Zhang, Jenny; Kirsch, Ilan; Dobbin, Kevin K; Alvarez, John; Robbins, Paul B; Selvan, Senthamil R; Streicher, Howard Z; Butterfield, Lisa H; Thurin, Magdalena
2016-01-01
Immunotherapies have emerged as one of the most promising approaches to treat patients with cancer. Recently, there have been many clinical successes using checkpoint receptor blockade, including T cell inhibitory receptors such as cytotoxic T-lymphocyte-associated antigen 4 (CTLA-4) and programmed cell death-1 (PD-1). Despite demonstrated successes in a variety of malignancies, responses only typically occur in a minority of patients in any given histology. Additionally, treatment is associated with inflammatory toxicity and high cost. Therefore, determining which patients would derive clinical benefit from immunotherapy is a compelling clinical question. Although numerous candidate biomarkers have been described, there are currently three FDA-approved assays based on PD-1 ligand expression (PD-L1) that have been clinically validated to identify patients who are more likely to benefit from a single-agent anti-PD-1/PD-L1 therapy. Because of the complexity of the immune response and tumor biology, it is unlikely that a single biomarker will be sufficient to predict clinical outcomes in response to immune-targeted therapy. Rather, the integration of multiple tumor and immune response parameters, such as protein expression, genomics, and transcriptomics, may be necessary for accurate prediction of clinical benefit. Before a candidate biomarker and/or new technology can be used in a clinical setting, several steps are necessary to demonstrate its clinical validity. Although regulatory guidelines provide general roadmaps for the validation process, their applicability to biomarkers in the cancer immunotherapy field is somewhat limited. Thus, Working Group 1 (WG1) of the Society for Immunotherapy of Cancer (SITC) Immune Biomarkers Task Force convened to address this need. In this two volume series, we discuss pre-analytical and analytical (Volume I) as well as clinical and regulatory (Volume II) aspects of the validation process as applied to predictive biomarkers for cancer immunotherapy. To illustrate the requirements for validation, we discuss examples of biomarker assays that have shown preliminary evidence of an association with clinical benefit from immunotherapeutic interventions. The scope includes only those assays and technologies that have established a certain level of validation for clinical use (fit-for-purpose). Recommendations to meet challenges and strategies to guide the choice of analytical and clinical validation design for specific assays are also provided.
SAM 2 and SAGE data management and processing
NASA Technical Reports Server (NTRS)
Osborn, M. T.; Trepte, C. R.
1987-01-01
The data management and processing supplied by ST Systems Corporation (STX) for the Stratospheric Aerosol Measurement 2 (SAM 2) and Stratospheric Aerosol and Gas Experiment (SAGE) experiments for the years 1983 to 1986 are described. Included are discussions of data validation, documentation, and scientific analysis, as well as the archival schedule met by the operational reduction of SAM 2 and SAGE data. Work under this contract resulted in the archiving of the first seven years of SAM 2 data and all three years of SAGE data. A list of publications and presentations supported was also included.
Singh, Varun Pratap; Singh, Rajkumar
2014-03-01
The aim of this study was to develop a reliable and valid Nepali version of the Psychosocial Impact of Dental Aesthetic Questionnaire (PIDAQ). Cross-sectional descriptive validation study. B.P. Koirala Institute of Health Sciences, Dharan, Nepal. A rigorous translation process including conceptual and semantic evaluation, translation, back translation and pre-testing was carried out. Two hundred and fifty-two undergraduates, including equal numbers of males and females with an age ranging from 18 to 29 years (mean age: 22·33±2·114 years), participated in this study. Reliability was assessed by Cronbach's alpha coefficient and the coefficient of correlation was used to assess correlation between items and test-retest reliability. The construct validity was tested by factorial analysis. Convergent construct validity was tested by comparison of PIDAQ scores with the aesthetic component of the index of orthodontic treatment needs (IOTN-AC) and perception of occlusion scale (POS), respectively. Discriminant construct validity was assessed by differences in score for those who demand treatment and those who did not. The response rate was 100%. One hundred and twenty-three individuals had a demand for orthodontic treatment. The Nepali PIDAQ had excellent reliability with Cronbach's alpha of 0·945, corrected item correlation between 0·525 and 0·790 and overall test-retest reliability of 0·978. The construct validity was good with formation of a new sub-domain 'Dental self-consciousness'. The scale had good correlation with IOTN-AC and POS fulfilling convergent construct validity. The discriminant construct validity was proved by significant differences in scores for subjects with demand and without demand for treatment. To conclude, Nepali version of PIDAQ has good psychometric properties and can be used effectively in this population group for further research.
2008-01-01
PDA Technical Report No. 14 has been written to provide current best practices, such as application of risk-based decision making, based in sound science to provide a foundation for the validation of column-based chromatography processes and to expand upon information provided in Technical Report No. 42, Process Validation of Protein Manufacturing. The intent of this technical report is to provide an integrated validation life-cycle approach that begins with the use of process development data for the definition of operational parameters as a basis for validation, confirmation, and/or minor adjustment to these parameters at manufacturing scale during production of conformance batches and maintenance of the validated state throughout the product's life cycle.
NASA Astrophysics Data System (ADS)
Kaiser, C.; Roll, K.; Volk, W.
2017-09-01
In the automotive industry, the manufacturing of automotive outer panels requires hemming processes in which two sheet metal parts are joined together by bending the flange of the outer part over the inner part. Because of decreasing development times and the steadily growing number of vehicle derivatives, an efficient digital product and process validation is necessary. Commonly used simulations, which are based on the finite element method, demand significant modelling effort, which results in disadvantages especially in the early product development phase. To increase the efficiency of designing hemming processes, this paper presents a hemming-specific metamodel approach. The approach includes a part analysis in which the outline of the automotive outer panels is initially split into individual segments. By parametrizing each of the segments and assigning basic geometric shapes, the outline of the part is approximated. Based on this, the hemming parameters such as flange length, roll-in, wrinkling and plastic strains are calculated for each of the geometric basic shapes by performing a metamodel-based segmental product validation. The metamodel is based on an element-similar formulation that includes a reference dataset of various geometric basic shapes. A random automotive outer panel can now be analysed and optimized based on the hemming-specific database. By implementing this approach into a planning system, an efficient optimization of designing hemming processes will be enabled. Furthermore, valuable time and cost benefits can be realized in a vehicle's development process.
NASA Astrophysics Data System (ADS)
Astuti, Sri Rejeki Dwi; Suyanta, LFX, Endang Widjajanti; Rohaeti, Eli
2017-05-01
The demands on assessment in the learning process have been affected by policy changes. Nowadays, assessment emphasizes not only knowledge, but also skills and attitudes. However, in reality there are many obstacles to measuring them. This paper aimed to describe how to develop an integrated assessment instrument and to verify the instrument's validity, including content validity and construct validity. The instrument development used the test development model by McIntire. Development process data were acquired at each step of test development. The initial product was reviewed by three peer reviewers and six expert judges (two subject matter experts, two evaluation experts and two chemistry teachers) to establish content validity. This research involved 376 first grade students of two Senior High Schools in Bantul Regency to establish construct validity. Content validity was analyzed using Aiken's formula. Construct validity was analyzed by exploratory factor analysis using SPSS ver 16.0. The results show that all constructs in the integrated assessment instrument are valid according to content validity and construct validity. Therefore, the integrated assessment instrument is suitable for measuring critical thinking abilities and science process skills of senior high school students on electrolyte solution matter.
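Aiken's formula, used above for content validity, rescales each judge's rating by its distance from the lowest scale point and normalizes by the maximum possible sum. A minimal sketch, assuming a 1-5 rating scale and six judges (both are illustrative assumptions, not details taken from the study):

    def aikens_v(ratings, lo=1, hi=5):
        """Aiken's V for one item given the expert judges' ratings."""
        n, c = len(ratings), hi - lo + 1
        s = sum(r - lo for r in ratings)
        return s / (n * (c - 1))

    print(aikens_v([5, 4, 5, 4, 5, 4]))  # -> 0.875, above common validity cutoffs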
Clinical intuition in the nursing process and decision-making-A mixed-studies review.
Melin-Johansson, Christina; Palmqvist, Rebecca; Rönnberg, Linda
2017-12-01
To review what is characteristic of registered nurses' intuition in clinical settings, in relationships and in the nursing process. Intuition is a controversial concept and nurses believe that there are difficulties in how they should explain their nursing actions or decisions based on intuition. Much of the evidence from the body of research indicates that nurses value their intuition in a variety of clinical settings. More information on how nurses integrate intuition as a core element in daily clinical work would contribute to an improved understanding of how they go about this. Intuition deserves a place in evidence-based activities, where intuition is an important component associated with the nursing process. An integrative review strengthened with a mixed-studies review. Literature searches were conducted in the databases CINAHL, PubMed and PsycINFO, and literature published 1985-2016 was included. The findings in the studies were analysed with content analysis, and the synthesis process entailed reasoning between the authors. After a quality assessment, 16 studies were included. The analysis and synthesis resulted in three categories. The characteristics of intuition in the nurse's daily clinical activities include application, assertiveness and experience; in relationships with patients, intuition involves unique connections, mental and bodily responses, and personal qualities; and in the nursing process, intuition provides support and guidance, serves as a component of and gives clues in decision-making, and validates decisions. Intuition is more than simply a "gut feeling"; it is a process based on knowledge and care experience and has a place beside research-based evidence. Nurses integrate both analysis and synthesis of intuition alongside objective data when making decisions. They should rely on their intuition and use this knowledge in clinical practice as a support in decision-making, which increases the quality and safety of patient care. We find that intuition plays a key role in more or less all of the steps in the nursing process as a base for decision-making that supports safe patient care, and is a validated component of nursing clinical care expertise. © 2017 John Wiley & Sons Ltd.
Modified Confidence Intervals for the Mean of an Autoregressive Process.
1985-08-01
There are several standard methods of setting confidence intervals in simulations, including the regenerative method, batch means, and time series methods. This report focuses on improved (modified) confidence intervals for the mean of an autoregressive process, covering the validity of the method, the zero-order pivot, and the derivation of corrections.
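The batch-means method named above is simple to illustrate: cut an autocorrelated output series into batches long enough that the batch means are approximately independent, then form an ordinary t-interval from those means. A hedged sketch on a simulated AR(1) series (all parameters invented):

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(2)
    phi, n = 0.7, 10_000
    x = np.empty(n)
    x[0] = rng.normal()
    for t in range(1, n):                      # AR(1): x_t = phi*x_{t-1} + eps_t
        x[t] = phi * x[t - 1] + rng.normal()

    b = 50                                     # 50 batches of 200 observations
    means = x.reshape(b, n // b).mean(axis=1)  # batch means, roughly independent
    half = stats.t.ppf(0.975, b - 1) * means.std(ddof=1) / np.sqrt(b)
    print(f"95% CI for the mean: {means.mean():.3f} +/- {half:.3f}")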
NASA SMD Airborne Science Capabilities for Development and Testing of New Instruments
NASA Technical Reports Server (NTRS)
Fladeland, Matthew
2015-01-01
The SMD NASA Airborne Science Program operates and maintains a fleet of highly modified aircraft to support instrument development, satellite instrument calibration, data product validation and earth science process studies. This poster will provide an overview of aircraft available to NASA researchers including performance specifications and modifications for instrument support, processes for requesting aircraft time and developing cost estimates for proposals, and policies and procedures required to ensure safety of flight.
NASA Technical Reports Server (NTRS)
Colle, Brian A.; Molthan, Andrew L.
2013-01-01
The representation of clouds in climate and weather models is a driver of forecast uncertainty. Cloud microphysics parameterizations are challenged by having to represent a diverse range of ice species. Key characteristics of predicted ice species include habit and fall speed, and complex interactions that result from mixed-phase processes like riming. Our proposed activity leverages Global Precipitation Measurement (GPM) Mission ground validation studies to improve these parameterizations.
ERIC Educational Resources Information Center
Burton, Laura J.; Mazerolle, Stephanie M.
2011-01-01
Context: Instrument validation is an important facet of survey research methods and athletic trainers must be aware of the important underlying principles. Objective: To discuss the process of survey development and validation, specifically the process of construct validation. Background: Athletic training researchers frequently employ the use of…
Intelligent Sensors for Integrated Systems Health Management (ISHM)
NASA Technical Reports Server (NTRS)
Schmalzel, John L.
2008-01-01
IEEE 1451 Smart Sensors contribute to a number of ISHM goals, including cost reduction achieved through: a) improved configuration management (TEDS); and b) plug-and-play re-configuration. Intelligent Sensors are an adaptation of Smart Sensors to include ISHM algorithms; this offers further benefits: a) sensor validation; b) confidence assessment of measurements; and c) distributed ISHM processing. Space-qualified intelligent sensors are possible, given constraints on: a) size, mass, and power; and b) bus structure/protocol.
Issues in the Intellectual Assessment of Hearing Impaired Children
ERIC Educational Resources Information Center
Hughes, Deana; Sapp, Gary L.; Kohler, Maxie P.
2006-01-01
The assessment of hearing impaired children is fraught with a number of problems. These include lack of valid assessment measures, faulty theoretical assumptions, lack of knowledge regarding the functioning of cognitive processes of these children, and biases against these children. This article briefly considers these issues and describes a study…
Qualitative Analysis on Stage: Making the Research Process More Public.
ERIC Educational Resources Information Center
Anfara, Vincent A., Jr.; Brown, Kathleen M.
The increased use of qualitative research methods has spurred interest in developing formal standards for assessing its validity. These standards, however, fall short if they do not include public disclosure of methods as a criterion. The researcher must be accountable in documenting the actions associated with establishing internal validity…
Student Off-Task Electronic Multitasking Predictors: Scale Development and Validation
ERIC Educational Resources Information Center
Qian, Yuxia; Li, Li
2017-01-01
In an attempt to better understand factors contributing to students' off-task electronic multitasking behavior in class, the research included two studies that developed a scale of students' off-task electronic multitasking predictors (the SOTEMP scale), and explored relationships between the scale and various classroom communication processes and…
If Anything Can Go Wrong, Maybe It Will.
ERIC Educational Resources Information Center
Wager, Jane C.; Rayner, Gail T.
Thirty personnel involved in various stages of the Training Extension Course (TEC) design, development, and distribution process were interviewed by telephone to determine the major problems perceived within each stage of the program, which provides validated extension training wherever U.S. soldiers are stationed. Those interviewed included TEC…
Is the Brain Stuff Still the Right (or Left) Stuff?
ERIC Educational Resources Information Center
Lynch, Dudley
1986-01-01
The author presents evidence that supports the argument for the validity of right brain-left brain theories. Discusses the brain's "sense of the future," what the brain does with new information, and altering the brain's ability to process change. A bibliography of further readings is included. (CT)
Student Misbehaviors in Online Classrooms: Scale Development and Validation
ERIC Educational Resources Information Center
Li, Li; Titsworth, Scott
2015-01-01
The current program of research included two studies that developed the Student Online Misbehaviors (SOMs) scale and explored relationships between the SOMs and various classroom communication processes and outcomes. The first study inductively developed initial SOM typologies and tested factor structure via an exploratory factor analysis.…
Cognitive Predictors of Rapid Picture Naming
ERIC Educational Resources Information Center
Decker, Scott L.; Roberts, Alycia M.; Englund, Julia A.
2013-01-01
Deficits in rapid automatized naming (RAN) have been found to be a sensitive cognitive marker for children with dyslexia. However, there is a lack of consensus regarding the construct validity and theoretical neuro-cognitive processes involved in RAN. Additionally, most studies investigating RAN include a narrow range of cognitive measures. The…
Tailoring the Interview Process for More Effective Personnel Selection.
ERIC Educational Resources Information Center
Saville, Anthony
Structuring the initial teacher employment interview adds validity to selection and appropriately utilizes human resources. Five aspects of an effective interview program include: (1) developing a job analysis plan; (2) reviewing the applications; (3) planning for the interview; (4) the interview instrument; and (5) legal implications. An…
Development and Validation of the Poverty Attributions Survey
ERIC Educational Resources Information Center
Bennett, Robert M.; Raiz, Lisa; Davis, Tamara S.
2016-01-01
This article describes the process of developing and testing the Poverty Attribution Survey (PAS), a measure of poverty attributions. The PAS is theory based and includes original items as well as items from previously tested poverty attribution instruments. The PAS was electronically administered to a sample of state-licensed professional social…
Unver, Vesile; Basak, Tulay; Watts, Penni; Gaioso, Vanessa; Moss, Jacqueline; Tastan, Sevinc; Iyigun, Emine; Tosun, Nuran
2017-02-01
The purpose of this study was to adapt the "Student Satisfaction and Self-Confidence in Learning Scale" (SCLS), "Simulation Design Scale" (SDS), and "Educational Practices Questionnaire" (EPQ) developed by Jeffries and Rizzolo into Turkish and to establish the reliability and validity of the translated scales. A sample of 87 nursing students participated in this study. The scales were cross-culturally adapted through a process including translation, comparison with the original version, back translation, and pretesting. Construct validity was evaluated by factor analysis, and criterion validity was evaluated using the Perceived Learning Scale, Patient Intervention Self-confidence/Competency Scale, and Educational Belief Scale. Cronbach's alpha values were 0.77-0.85 for the SCLS, 0.73-0.86 for the SDS, and 0.61-0.86 for the EPQ. The results of this study show that the Turkish versions of all three scales are valid and reliable measurement tools.
NREL Spectrum of Clean Energy Innovation (Brochure)
DOE Office of Scientific and Technical Information (OSTI.GOV)
Not Available
2011-09-01
This brochure describes the NREL Spectrum of Clean Energy Innovation, which includes analysis and decision support, fundamental science, market relevant research, systems integration, testing and validation, commercialization and deployment. Through deep technical expertise and an unmatched breadth of capabilities, the National Renewable Energy Laboratory (NREL) leads an integrated approach across the spectrum of renewable energy innovation. From scientific discovery to accelerating market deployment, NREL works in partnership with private industry to drive the transformation of our nation's energy systems. NREL integrates the entire spectrum of innovation, including fundamental science, market relevant research, systems integration, testing and validation, commercialization, and deployment. Our world-class analysis and decision support informs every point on the spectrum. The innovation process at NREL is inter-dependent and iterative. Many scientific breakthroughs begin in our own laboratories, but new ideas and technologies may come to NREL at any point along the innovation spectrum to be validated and refined for commercial use.
Bjelland, Mona; Hausken, Solveig E S; Sleddens, Ester F C; Andersen, Lene F; Lie, Hanne C; Finset, Arnstein; Maes, Lea; Melbye, Elisabeth L; Glavin, Kari; Hanssen-Bauer, Merete W; Lien, Nanna
2014-10-15
There is a need for valid and comprehensive measures of parental influence on children's energy balance-related behaviours (EBRB). Such measures should be based on a theoretical framework, acknowledging the dynamic and complex nature of interactions occurring within a family. The aim of the Family & Dietary habits (F&D) project was to develop a conceptual framework identifying important and changeable family processes influencing dietary behaviours of 13-15 year olds. A second aim was to develop valid and reliable questionnaires for adolescents and their parents (both mothers and fathers) measuring these processes. A stepwise approach was used: (1) preparation of scope and structure, (2) development of the F&D questionnaires, (3) the conducting of pilot studies and (4) the conducting of validation studies (assessing internal reliability, test-retest reliability and confirmatory factor analysis) using data from a cross-sectional study. The conceptual framework includes psychosocial concepts such as family functioning, cohesion, conflicts, communication, work-family stress, parental practices and parental style. The physical characteristics of the home environment include accessibility and availability of different food items, while family meals are the sociocultural setting included. Individual characteristics measured are dietary intake (vegetables and sugar-sweetened beverages) and adolescents' impulsivity. The F&D questionnaires developed were tested in a test-retest (54 adolescents and 44 of their parents) and in a cross-sectional survey including 440 adolescents (13-15 year olds), 242 mothers and 155 fathers. The samples appear to be relatively representative of Norwegian adolescents and parents. For adolescents, mothers and fathers, the test-retest reliability of the dietary intake, frequencies of (family) meals, work-family stress and communication variables was satisfactory (ICC: 0.53-0.99). The Barratt Impulsiveness Scale-Brief (BIS-Brief) was included to assess adolescents' impulsivity. The internal reliability (Cronbach's alphas: 0.77/0.82) and test-retest reliability values (ICC: 0.74/0.77) of the BIS-Brief were good. The conceptual framework developed may be a useful tool in guiding measurement and assessment of the home food environment and family processes related to adolescents' dietary habits in particular, and to EBRBs more generally. The results support the use of the F&D questionnaires as psychometrically sound tools to assess family characteristics and adolescents' impulsivity.
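The ICC values reported above for test-retest reliability are conventionally two-way random-effects intraclass correlations. The sketch below derives ICC(2,1) from its ANOVA decomposition; it is an illustration on simulated data, not the study's analysis pipeline.

    import numpy as np

    def icc_2_1(Y: np.ndarray) -> float:
        """Two-way random-effects, single-measure ICC. Y: (subjects, occasions)."""
        n, k = Y.shape
        grand = Y.mean()
        msr = k * ((Y.mean(axis=1) - grand) ** 2).sum() / (n - 1)  # between subjects
        msc = n * ((Y.mean(axis=0) - grand) ** 2).sum() / (k - 1)  # between occasions
        sse = ((Y - grand) ** 2).sum() - msr * (n - 1) - msc * (k - 1)
        mse = sse / ((n - 1) * (k - 1))
        return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

    rng = np.random.default_rng(3)
    true = rng.normal(size=(60, 1))
    Y = true + 0.3 * rng.normal(size=(60, 2))  # test and retest for 60 respondents
    print(round(icc_2_1(Y), 2))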
NASA Astrophysics Data System (ADS)
Well, Reinhard; Böttcher, Jürgen; Butterbach-Bahl, Klaus; Dannenmann, Michael; Deppe, Marianna; Dittert, Klaus; Dörsch, Peter; Horn, Marcus; Ippisch, Olaf; Mikutta, Robert; Müller, Carsten; Müller, Christoph; Senbayram, Mehmet; Vogel, Hans-Jörg; Wrage-Mönnig, Nicole
2016-04-01
Robust denitrification data suitable to validate soil N2 fluxes in denitrification models are scarce due to methodical limitations and the extreme spatio-temporal heterogeneity of denitrification in soils. Numerical models have become essential tools to predict denitrification at different scales. Model performance could either be tested for total gaseous flux (NO + N2O + N2), individual denitrification products (e.g. N2O and/or NO) or for the effect of denitrification factors (e.g. C-availability, respiration, diffusivity, anaerobic volume, etc.). While there are numerous examples of validating N2O fluxes, there are neither robust field data of N2 fluxes nor sufficiently resolved measurements of control factors used as state variables in the models. To the best of our knowledge, there has to date been only one published validation of modelled soil N2 flux, using a laboratory data set to validate an ecosystem model. Hence there is a need for validation data at both the mesocosm and the field scale, including validation of individual denitrification controls. Here we present the concept for collecting model validation data, which is part of the DFG research unit "Denitrification in Agricultural Soils: Integrated Control and Modelling at Various Scales (DASIM)" starting this year. We will use novel approaches including analysis of stable isotopes, microbial communities, pore structure and organic matter fractions to provide denitrification data sets comprising as much detail on activity and regulation as possible, as a basis to validate existing and calibrate new denitrification models that are applied and/or developed by DASIM subprojects. The basic idea is to simulate "field-like" conditions as far as possible in an automated mesocosm system without plants, in order to mimic processes in the soil parts not significantly influenced by the rhizosphere (rhizosphere soils are studied by other DASIM projects). Hence, to allow model testing under a wide range of conditions, denitrification control factors will be varied in the initial settings (pore volume, plant residues, mineral N, pH) but also over time, where moisture, temperature, and mineral N will be manipulated according to typical time patterns in the field. This will be realized by including precipitation events, fertilization (via irrigation), drainage (via water potential) and temperature changes in the course of incubations. Moreover, oxygen concentration will be varied to simulate anaerobic events. These data will be used to calibrate the newly developed DASIM models as well as existing denitrification models. One goal of DASIM is to create a public database as a joint basis for model testing by denitrification modellers. We therefore invite contributions of suitable data sets from the scientific community; requirements will be briefly outlined.
Atmospheric stability and complex terrain: comparing measurements and CFD
NASA Astrophysics Data System (ADS)
Koblitz, T.; Bechmann, A.; Berg, J.; Sogachev, A.; Sørensen, N.; Réthoré, P.-E.
2014-12-01
For wind resource assessment, the wind industry is increasingly relying on Computational Fluid Dynamics models that focus on modeling the airflow in a neutrally stratified surface layer. So far, physical processes that are specific to the atmospheric boundary layer, for example the Coriolis force, buoyancy forces and heat transport, are mostly ignored in state-of-the-art flow solvers. In order to decrease the uncertainty of wind resource assessment, the effect of thermal stratification on the atmospheric boundary layer should be included in such models. The present work focuses on non-neutral atmospheric flow over complex terrain including physical processes like stability and Coriolis force. We examine the influence of these effects on the whole atmospheric boundary layer using the DTU Wind Energy flow solver EllipSys3D. To validate the flow solver, measurements from Benakanahalli hill, a field experiment that took place in India in early 2010, are used. The experiment was specifically designed to address the combined effects of stability and Coriolis force over complex terrain, and provides a dataset to validate flow solvers. Including those effects into EllipSys3D significantly improves the predicted flow field when compared against the measurements.
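To make the stability effect concrete: in the atmospheric surface layer, thermal stratification enters the log-law wind profile through a Monin-Obukhov correction term psi_m. The sketch below uses the common Businger-Dyer form for stable conditions; it is a textbook illustration, not EllipSys3D code, and every parameter value is invented.

    import numpy as np

    def wind_speed(z, u_star=0.4, z0=0.05, L=200.0, kappa=0.4):
        """u(z) = (u*/kappa) * (ln(z/z0) - psi_m); stable side: psi_m = -5*z/L."""
        psi_m = -5.0 * z / L
        return (u_star / kappa) * (np.log(z / z0) - psi_m)

    for z in (10.0, 40.0, 80.0):               # heights in metres
        print(f"z = {z:>4} m: {wind_speed(z):.2f} m/s")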
Auditory processing disorders: an update for speech-language pathologists.
DeBonis, David A; Moncrieff, Deborah
2008-02-01
Unanswered questions regarding the nature of auditory processing disorders (APDs), how best to identify at-risk students, how best to diagnose and differentiate APDs from other disorders, and concerns about the lack of valid treatments have resulted in ongoing confusion and skepticism about the diagnostic validity of this label. This poses challenges for speech-language pathologists (SLPs) who are working with school-age children and whose scope of practice includes APD screening and intervention. The purpose of this article is to address some of the questions commonly asked by SLPs regarding APDs in school-age children. This article is also intended to serve as a resource for SLPs to be used in deciding what role they will or will not play with respect to APDs in school-age children. The methodology used in this article included a computerized database review of the latest published information on APD, with an emphasis on the work of established researchers and expert panels, including articles from the American Speech-Language-Hearing Association and the American Academy of Audiology. The article concludes with the authors' recommendations for continued research and their views on the appropriate role of the SLP in performing careful screening, making referrals, and supporting intervention.
Oosterhaven, Jart A F; Schuttelaar, Marie L A; Apfelbacher, Christian; Diepgen, Thomas L; Ofenloch, Robert F
2017-08-01
There is a need for well-developed and validated questionnaires to measure patient reported outcomes. The Quality of Life in Hand Eczema Questionnaire (QOLHEQ) is such a validated instrument measuring disease-specific health-related quality of life in hand eczema patients. A re-validation of measurement properties is required before an instrument is used in a new population. With the objective of arriving at a guideline for translation and national validation of the QOLHEQ, we have developed the design of a reference study on how to adequately assess measurement properties of the QOLHEQ based on interdisciplinary discussions and current standards. We present a step-by-step guideline to assess translation (including cross-cultural adaptation), scale structure, validity, reproducibility, responsiveness, and interpretability. We describe which outcomes should be reported for each measurement property, and give advice on how to calculate these. It is also specified which sample size is needed, how to deal with missing data, and which cutoff values should be applied for the measurement properties assessed during the validation process. In conclusion, this guideline, presenting a reference validation study for the QOLHEQ, creates the possibility to harmonize the national validation of the various language versions of the QOLHEQ. © 2017 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.
Hoben, Matthias; Bär, Marion; Mahler, Cornelia; Berger, Sarah; Squires, Janet E; Estabrooks, Carole A; Kruse, Andreas; Behrens, Johann
2014-01-31
To study the association between organizational context and research utilization in German residential long term care (LTC), we translated three Canadian assessment instruments: the Alberta Context Tool (ACT), Estabrooks' Kinds of Research Utilization (RU) items and the Conceptual Research Utilization Scale. Target groups for the tools were health care aides (HCAs), registered nurses (RNs), allied health professionals (AHPs), clinical specialists and care managers. Through a cognitive debriefing process, we assessed response process validity, an initial stage of validity necessary before more advanced validity assessment. We included 39 participants (16 HCAs, 5 RNs, 7 AHPs, 5 specialists and 6 managers) from five residential LTC facilities. We created lists of questionnaire items containing problematic items plus items randomly selected from the pool of remaining items. After participants completed the questionnaires, we conducted individual semi-structured cognitive interviews using verbal probing. We asked participants to reflect on their answers for list items in detail. Participants' answers were compared to concept maps defining the instrument concepts in detail. If at least two participants gave answers not matching concept map definitions, items were revised and re-tested with new target group participants. Cognitive debriefings started with HCAs. Based on the first round, we modified 4 of 58 ACT items, 1 ACT item stem and all 8 items of the RU tools. All items were understood by participants after another two rounds. We included revised HCA ACT items in the questionnaires for the other provider groups. In the RU tools for the other provider groups, we used different wording than the HCA version, as was done in the original English instruments. Only one cognitive debriefing round was needed with each of the other provider groups. Cognitive debriefing is essential to detect and respond to problematic instrument items, particularly when translating instruments for heterogeneous, less well educated provider groups such as HCAs. Cognitive debriefing is an important step in research tool development and a vital component of establishing response process validity evidence. Publishing cognitive debriefing results helps researchers to determine potentially critical elements of the translated tools and assists with interpreting scores.
[Development and validation of quality standards for colonoscopy].
Sánchez Del Río, Antonio; Baudet, Juan Salvador; Naranjo Rodríguez, Antonio; Campo Fernández de Los Ríos, Rafael; Salces Franco, Inmaculada; Aparicio Tormo, Jose Ramón; Sánchez Muñoz, Diego; Llach, Joseph; Hervás Molina, Antonio; Parra-Blanco, Adolfo; Díaz Acosta, Juan Antonio
2010-01-30
Before starting programs for colorectal cancer screening, it is necessary to evaluate the quality of colonoscopy. Our objectives were to develop a group of easily applicable quality indicators for colonoscopy and to determine the variability of their achievement. After reviewing the bibliography, we prepared 21 potential quality indicators that were submitted to a selection process in which we measured their facial validity, content validity, reliability and the viability of their measurement. We estimated the variability of their achievement by means of the coefficient of variability (CV) and the variability of the achievement of the standards by means of χ². Six indicators passed the selection process: informed consent, medication administered, completed colonoscopy, complications, every polyp removed and recovered, and adenoma detection rate in patients older than 50 years. 1928 colonoscopies were included from eight endoscopy units. Every unit included the same number of colonoscopies, selected by means of simple random sampling with substitution. There was an important variability in the achievement of some indicators and standards: medication administered (CV 43%, p<0.01), complications registered (CV 37%, p<0.01), every polyp removed and recovered (CV 12%, p<0.01) and adenoma detection rate in those older than fifty years (CV 2%, p<0.01). We have validated six quality indicators for colonoscopy which are easily measurable. An important variability exists in the achievement of some indicators and standards. Our data highlight the importance of developing continuous quality improvement programmes for colonoscopy before starting colorectal cancer screening. Copyright (c) 2009 Elsevier España, S.L. All rights reserved.
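Both variability measures used above are easy to reproduce. A minimal sketch with invented per-unit counts (the abstract does not give the raw data): the coefficient of variation of achievement rates across the eight units, and a chi-squared test of whether the units' achievement proportions differ.

    import numpy as np
    from scipy.stats import chi2_contingency

    achieved = np.array([230, 198, 241, 150, 220, 205, 188, 235])  # hypothetical
    total = 241                                   # 1928 colonoscopies / 8 units
    rate = achieved / total
    cv = rate.std(ddof=1) / rate.mean() * 100     # percent CV across units
    chi2, p, *_ = chi2_contingency(np.c_[achieved, total - achieved])
    print(f"CV = {cv:.0f}%, chi2 = {chi2:.1f}, p = {p:.2g}")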
Tomoaia-Cotisel, Andrada; Scammon, Debra L; Waitzman, Norman J; Cronholm, Peter F; Halladay, Jacqueline R; Driscoll, David L; Solberg, Leif I; Hsu, Clarissa; Tai-Seale, Ming; Hiratsuka, Vanessa; Shih, Sarah C; Fetters, Michael D; Wise, Christopher G; Alexander, Jeffrey A; Hauser, Diane; McMullen, Carmit K; Scholle, Sarah Hudson; Tirodkar, Manasi A; Schmidt, Laura; Donahue, Katrina E; Parchman, Michael L; Stange, Kurt C
2013-01-01
We aimed to advance the internal and external validity of research by sharing our empirical experience and recommendations for systematically reporting contextual factors. Fourteen teams conducting research on primary care practice transformation retrospectively considered contextual factors important to interpreting their findings (internal validity) and transporting or reinventing their findings in other settings/situations (external validity). Each team provided a table or list of important contextual factors and interpretive text included as appendices to the articles in this supplement. Team members identified the most important contextual factors for their studies. We grouped the findings thematically and developed recommendations for reporting context. The most important contextual factors sorted into 5 domains: (1) the practice setting, (2) the larger organization, (3) the external environment, (4) implementation pathway, and (5) the motivation for implementation. To understand context, investigators recommend (1) engaging diverse perspectives and data sources, (2) considering multiple levels, (3) evaluating history and evolution over time, (4) looking at formal and informal systems and culture, and (5) assessing the (often nonlinear) interactions between contextual factors and both the process and outcome of studies. We include a template with tabular and interpretive elements to help study teams engage research participants in reporting relevant context. These findings demonstrate the feasibility and potential utility of identifying and reporting contextual factors. Involving diverse stakeholders in assessing context at multiple stages of the research process, examining their association with outcomes, and consistently reporting critical contextual factors are important challenges for a field interested in improving the internal and external validity and impact of health care research.
Validation of a pulsed electric field process to pasteurize strawberry puree
USDA-ARS?s Scientific Manuscript database
An inexpensive data acquisition method was developed to validate the exact number and shape of the pulses applied during pulsed electric fields (PEF) processing. The novel validation method was evaluated in conjunction with developing a pasteurization PEF process for strawberry puree. Both buffered...
Risk-based Methodology for Validation of Pharmaceutical Batch Processes.
Wiles, Frederick
2013-01-01
In January 2011, the U.S. Food and Drug Administration published new process validation guidance for pharmaceutical processes. The new guidance debunks the long-held industry notion that three consecutive validation batches or runs are all that are required to demonstrate that a process is operating in a validated state. Instead, the new guidance now emphasizes that the level of monitoring and testing performed during process performance qualification (PPQ) studies must be sufficient to demonstrate statistical confidence both within and between batches. In some cases, three qualification runs may not be enough. Nearly two years after the guidance was first published, little has been written defining a statistical methodology for determining the number of samples and qualification runs required to satisfy Stage 2 requirements of the new guidance. This article proposes using a combination of risk assessment, control charting, and capability statistics to define the monitoring and testing scheme required to show that a pharmaceutical batch process is operating in a validated state. In this methodology, an assessment of process risk is performed through application of a process failure mode, effects, and criticality analysis (PFMECA). The output of PFMECA is used to select appropriate levels of statistical confidence and coverage which, in turn, are used in capability calculations to determine when significant Stage 2 (PPQ) milestones have been met. The achievement of Stage 2 milestones signals the release of batches for commercial distribution and the reduction of monitoring and testing to commercial production levels. Individuals, moving range, and range/sigma charts are used in conjunction with capability statistics to demonstrate that the commercial process is operating in a state of statistical control. The new process validation guidance published by the U.S. Food and Drug Administration in January of 2011 indicates that the number of process validation batches or runs required to demonstrate that a pharmaceutical process is operating in a validated state should be based on sound statistical principles. The old rule of "three consecutive batches and you're done" is no longer sufficient. The guidance, however, does not provide any specific methodology for determining the number of runs required, and little has been published to augment this shortcoming. The paper titled "Risk-based Methodology for Validation of Pharmaceutical Batch Processes" describes a statistically sound methodology for determining when a statistically valid number of validation runs has been acquired based on risk assessment and calculation of process capability.
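As an illustration of the kind of monitoring the article proposes, the sketch below computes individuals-chart control limits from the average moving range (within-batch sigma estimated as MR-bar/1.128, the d2 constant for subgroups of two) and a Ppk capability index against specification limits. The assay values and limits are hypothetical, not the article's worked example.

    import numpy as np

    x = np.array([99.2, 100.1, 99.8, 100.4, 99.5, 100.0,
                  99.7, 100.3, 99.9, 100.2, 99.6, 100.1])  # assay results, %
    mr_bar = np.abs(np.diff(x)).mean()            # average moving range
    sigma_w = mr_bar / 1.128                      # d2 for n = 2
    ucl, lcl = x.mean() + 3 * sigma_w, x.mean() - 3 * sigma_w

    lsl, usl = 98.0, 102.0                        # hypothetical spec limits
    ppk = min(usl - x.mean(), x.mean() - lsl) / (3 * x.std(ddof=1))
    print(f"I-chart limits: [{lcl:.2f}, {ucl:.2f}], Ppk = {ppk:.2f}")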
Lock, Irina; Seele, Peter
2017-11-01
Credibility is central to communication but often jeopardized by "credibility gaps." This is especially true for communication about corporate social responsibility (CSR). To date, no tool has been available to analyze stakeholders' credibility perceptions of CSR communication. This article presents a series of studies conducted to develop a scale to assess the perceived credibility of CSR reports, one of CSR communication's most important tools. The scale provides a novel operationalization of credibility using validity claims of Habermas's ideal speech situation as subdimensions. The scale development process, carried out in five studies including a literature review, a Delphi study, and three validation studies applying confirmatory factor analysis, resulted in the 16-item Perceived Credibility (PERCRED) scale. The scale shows convergent, discriminant, concurrent, and nomological validity and is the first validated measure for analyzing credibility perceptions of CSR reports.
Development of an interprofessional lean facilitator assessment scale.
Bravo-Sanchez, Cindy; Dorazio, Vincent; Denmark, Robert; Heuer, Albert J; Parrott, J Scott
2018-05-01
High reliability is important for optimising quality and safety in healthcare organisations. Reliability efforts include interprofessional collaborative practice (IPCP) and Lean quality/process improvement strategies, which require skilful facilitation. Currently, no validated Lean facilitator assessment tool for interprofessional collaboration exists. This article describes the development and pilot evaluation of such a tool; the Interprofessional Lean Facilitator Assessment Scale (ILFAS), which measures both technical and 'soft' skills, which have not been measured in other instruments. The ILFAS was developed using methodologies and principles from Lean/Shingo, IPCP, metacognition research and Bloom's Taxonomy of Learning Domains. A panel of experts confirmed the initial face validity of the instrument. Researchers independently assessed five facilitators, during six Lean sessions. Analysis included quantitative evaluation of rater agreement. Overall inter-rater agreement of the assessment of facilitator performance was high (92%), and discrepancies in the agreement statistics were analysed. Face and content validity were further established, and usability was evaluated, through primary stakeholder post-pilot feedback, uncovering minor concerns, leading to tool revision. The ILFAS appears comprehensive in the assessment of facilitator knowledge, skills, abilities, and may be useful in the discrimination between facilitators of different skill levels. Further study is needed to explore instrument performance and validity.
Gruzelier, John H
2014-07-01
As a continuation of a review of evidence of the validity of cognitive/affective gains following neurofeedback in healthy participants, including correlations in support of the gains being mediated by feedback learning (Gruzelier, 2014a), the focus here is on the impact on creativity, especially in the performing arts including music, dance and acting. The majority of research involves alpha/theta (A/T), sensory-motor rhythm (SMR) and heart rate variability (HRV) protocols. There is evidence of reliable benefits from A/T training with advanced musicians especially for creative performance, and reliable benefits from both A/T and SMR training for novice music performance in adults and in a school study with children with impact on creativity, communication/presentation and technique. Making the SMR ratio training context ecologically relevant for actors enhanced creativity in stage performance, with added benefits from the more immersive training context. A/T and HRV training have benefitted dancers. The neurofeedback evidence adds to the rapidly accumulating validation of neurofeedback, while performing arts studies offer an opportunity for ecological validity in creativity research for both creative process and product. Copyright © 2013 Elsevier Ltd. All rights reserved.
Viirs Land Science Investigator-Led Processing System
NASA Astrophysics Data System (ADS)
Devadiga, S.; Mauoka, E.; Roman, M. O.; Wolfe, R. E.; Kalb, V.; Davidson, C. C.; Ye, G.
2015-12-01
The objective of NASA's Suomi National Polar Orbiting Partnership (S-NPP) Land Science Investigator-led Processing System (Land SIPS), housed at the NASA Goddard Space Flight Center (GSFC), is to produce high quality land products from the Visible Infrared Imaging Radiometer Suite (VIIRS) to extend the Earth System Data Records (ESDRs) developed from NASA's heritage Earth Observing System (EOS) Moderate Resolution Imaging Spectroradiometer (MODIS) onboard the EOS Terra and Aqua satellites. In this paper we will present the functional description and capabilities of the S-NPP Land SIPS, including system development phases and production schedules, the timeline for processing, and delivery of land science products based on coordination with the S-NPP Land science team members. The Land SIPS processing stream is expected to be operational by December 2016, generating land products either using the NASA science team delivered algorithms, or the "best-of" science algorithms currently in operation at NASA's Land Product Evaluation and Algorithm Testing Element (PEATE). In addition to generating the standard land science products through processing of NASA's VIIRS Level 0 data record, the Land SIPS processing system is also used to produce a suite of near-real-time products for NASA's application community. Land SIPS will also deliver the standard products, ancillary data sets, software and supporting documentation (ATBDs) to the assigned Distributed Active Archive Centers (DAACs) for archival and distribution. Quality assessment and validation will be an integral part of the Land SIPS processing system; the former being performed at the Land Data Operational Product Evaluation (LDOPE) facility, and the latter under the auspices of the CEOS Working Group on Calibration & Validation (WGCV) Land Product Validation (LPV) Subgroup, adopting the best practices and tools used to assess the quality of heritage EOS-MODIS products generated at the MODIS Adaptive Processing System (MODAPS).
CFD Code Development for Combustor Flows
NASA Technical Reports Server (NTRS)
Norris, Andrew
2003-01-01
During the lifetime of this grant, work was performed in the areas of model development, code development, code validation and code application. Model development included the PDF combustion module, chemical kinetics based on thermodynamics, neural network storage of chemical kinetics, ILDM chemical kinetics and assumed-PDF work. Many of these models were then implemented in the code, and in addition many improvements were made to the code, including the addition of new chemistry integrators, property evaluation schemes, new chemistry models and turbulence-chemistry interaction methodology. Validation of all new models and code improvements was also performed, and the code was applied to the ZCET program and the NPSS GEW combustor program. Several important items remain under development, including the NOx post-processing, assumed-PDF model development and chemical kinetic development. It is expected that this work will continue under the new grant.
Erdodi, Laszlo A; Sagar, Sanya; Seke, Kristian; Zuccato, Brandon G; Schwartz, Eben S; Roth, Robert M
2018-06-01
This study was designed to develop performance validity indicators embedded within the Delis-Kaplan Executive Function System (D-KEFS) version of the Stroop task. Archival data from a mixed clinical sample of 132 patients (50% male; M Age = 43.4; M Education = 14.1) clinically referred for neuropsychological assessment were analyzed. Criterion measures included the Warrington Recognition Memory Test-Words and 2 composites based on several independent validity indicators. An age-corrected scaled score ≤6 on any of the 4 trials reliably differentiated psychometrically defined credible and noncredible response sets with high specificity (.87-.94) and variable sensitivity (.34-.71). An inverted Stroop effect was less sensitive (.14-.29), but comparably specific (.85-.90), to invalid performance. Aggregating the newly developed D-KEFS Stroop validity indicators further improved classification accuracy. Failing the validity cutoffs was unrelated to self-reported depression or anxiety. However, it was associated with elevated somatic symptom report. In addition to processing speed and executive function, the D-KEFS version of the Stroop task can function as a measure of performance validity. A multivariate approach to performance validity assessment is generally superior to univariate models. (PsycINFO Database Record (c) 2018 APA, all rights reserved).
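Classification accuracy for an embedded validity cutoff is evaluated like any binary test. A hedged sketch with simulated credible and noncredible groups (the score distributions are invented, not the study's data), applying the paper's scaled-score cutoff of 6 or below:

    import numpy as np

    rng = np.random.default_rng(4)
    credible = rng.normal(10, 3, 200).round()     # scaled scores, credible group
    noncredible = rng.normal(6, 3, 60).round()    # scores under invalid effort

    cutoff = 6
    sens = (noncredible <= cutoff).mean()         # proportion of invalid sets flagged
    spec = (credible > cutoff).mean()             # proportion of valid sets passed
    print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")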
Effective virus inactivation and removal by steps of Biotest Pharmaceuticals IGIV production process
Dichtelmüller, Herbert O.; Flechsig, Eckhard; Sananes, Frank; Kretschmar, Michael; Dougherty, Christopher J.
2012-01-01
The virus validation of three steps of the Biotest Pharmaceuticals IGIV production process is described here. The steps validated are precipitation and removal of fraction III of the cold ethanol fractionation process, solvent/detergent treatment, and 35 nm virus filtration. Virus validation was performed under combined worst-case conditions. These validated steps achieve sufficient virus inactivation/removal, resulting in a virus-safe product. PMID:24371563
Validation and Comprehension: An Integrated Overview
ERIC Educational Resources Information Center
Kendeou, Panayiota
2014-01-01
In this article, I review and discuss the work presented in this special issue while focusing on a number of issues that warrant further investigation in validation research. These issues pertain to the nature of the validation processes, the processes and mechanisms that support validation during comprehension, the factors that influence…
NASA Technical Reports Server (NTRS)
Call, Jared A.; Kwok, John H.; Fisher, Forest W.
2013-01-01
This innovation is a tool used to verify and validate spacecraft sequences at the predicted events file (PEF) level for the GRAIL (Gravity Recovery and Interior Laboratory, see http://www.nasa.gov/mission_pages/grail/main/index.html) mission as part of the Multi-Mission Planning and Sequencing Team (MPST) operations process, reducing the possibility of errors. The tool catches sequence-related errors or issues immediately after seqgen modeling to streamline downstream processes. This script verifies and validates the seqgen modeling for the GRAIL MPST process. A PEF is provided as input, and dozens of checks are performed on it to verify and validate the command products, including command content, command ordering, flight-rule violations, modeling boundary consistency, resource limits, and ground commanding consistency. By performing as many checks as early in the process as possible, grl_pef_check streamlines the MPST task of generating GRAIL command and modeled products on an aggressive schedule. By enumerating each check being performed, and clearly stating the criteria and assumptions made at each step, grl_pef_check can be used as a manual checklist as well as an automated tool. This helper script was written with a focus on giving users the information they need to evaluate a sequence quickly and efficiently, while keeping them informed and active in the overall sequencing process. grl_pef_check verifies and validates the modeling and sequence content before any further effort is invested in the build. There are dozens of items in a modeling run that need to be checked, which is a time-consuming and error-prone task, and no other software provides this functionality. Compared to a manual process, this script reduces human error and saves considerable man-hours by automating and streamlining the mission planning and sequencing task for the GRAIL mission.
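The abstract does not describe grl_pef_check's internals; the sketch below only illustrates the enumerated-checklist pattern it describes. The PEF record format, the check names, and the power limit are all hypothetical illustrations, not the mission tool's actual logic.

    # Sketch of an enumerated sequence-validation checklist in the spirit of
    # grl_pef_check. Record format, checks, and the 450 W limit are invented.
    def check_command_ordering(events):
        """Flag any event whose timestamp precedes that of the previous event."""
        return [f"out-of-order event at t={b['t']}"
                for a, b in zip(events, events[1:]) if b["t"] < a["t"]]

    def check_resource_limits(events, max_power_w=450.0):
        """Flag events whose modeled power draw exceeds the assumed limit."""
        return [f"power limit exceeded at t={e['t']}: {e['power']} W"
                for e in events if e.get("power", 0.0) > max_power_w]

    CHECKS = [check_command_ordering, check_resource_limits]  # dozens in practice

    def run_checklist(events):
        """Run every check, numbering the output so it doubles as a manual checklist."""
        passed = True
        for i, check in enumerate(CHECKS, start=1):
            findings = check(events)
            print(f"[{i:02d}] {check.__name__}: {'PASS' if not findings else 'FAIL'}")
            for finding in findings:
                passed = False
                print(f"     - {finding}")
        return passed

    # Toy PEF: the second event violates the power limit, the third is out of order.
    run_checklist([{"t": 0.0, "power": 120.0}, {"t": 5.0, "power": 480.0}, {"t": 3.0}])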
Mangueira, Suzana de Oliveira; Lopes, Marcos Venícios de Oliveira
2016-10-01
To evaluate clinical indicators of validity for the nursing diagnosis of dysfunctional family processes related to alcohol abuse. Alcoholism is a chronic disease that negatively affects family relationships. Studies on the nursing diagnosis of dysfunctional family processes are scarce in the literature. This diagnosis is currently composed of 115 defining characteristics, hindering its use in practice and highlighting the need for clinical validation. This was a diagnostic accuracy study. A sample of 110 alcoholics admitted to a reference centre for alcohol treatment was assessed during the second half of 2013 for the presence or absence of the defining characteristics of the diagnosis. Operational definitions were created for each defining characteristic based on concept analysis, and experts evaluated the content of these definitions. Diagnostic accuracy measures were calculated from latent class models with random effects. All 89 clinical indicators assessed were found in the sample, and a set of 24 clinical indicators was identified as clinically valid for diagnostic screening for family dysfunction based on the reports of alcoholics. Main clinical indicators with high specificity included sexual abuse, disturbance in academic performance in children, and manipulation. The main indicators that showed high sensitivity values were distress, loss, anxiety, low self-esteem, confusion, embarrassment, insecurity, anger, loneliness, deterioration in family relationships, and disturbance in family dynamics. Eighteen clinical indicators showed a high capacity for diagnostic screening of alcoholics (high sensitivity) and six indicators can be used for confirmatory diagnosis (high specificity). © 2016 John Wiley & Sons Ltd.
The Development and Validation of a New Land Surface Model for Regional and Global Climate Modeling
NASA Astrophysics Data System (ADS)
Lynch-Stieglitz, Marc
1995-11-01
A new land-surface scheme intended for use in mesoscale and global climate models has been developed and validated. The ground scheme consists of 6 soil layers. Diffusion and a modified tipping-bucket model govern heat and water flow, respectively. A 3-layer snow model has been incorporated into a modified BEST vegetation scheme. TOPMODEL equations and Digital Elevation Model data are used to generate baseflow, which supports lowland saturated zones. Soil moisture heterogeneity represented by saturated lowlands subsequently impacts watershed evapotranspiration, the partitioning of surface fluxes, and the development of the storm hydrograph. Five years of meteorological and hydrological data from the Sleepers River watershed, located in the eastern highlands of Vermont where winter snow cover is significant, were then used to drive and validate the new scheme. Site validation data were sufficient to evaluate model performance with regard to various aspects of the watershed water balance, including snowpack growth/ablation, the spring snowmelt hydrograph, storm hydrographs, and the seasonal development of watershed evapotranspiration and soil moisture. By including topographic effects, not only are the main spring hydrographs and individual storm hydrographs adequately resolved, but the mechanisms generating runoff are consistent with current views of hydrologic processes. The seasonal movement of the mean water table depth and the saturated area of the watershed are consistent with site data, and the overall model hydroclimatology, including the surface fluxes, seems reasonable.
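The abstract does not reproduce the TOPMODEL equations it refers to. In the standard published form (not necessarily the exact variant implemented in the scheme above), baseflow is tied to the catchment-mean water table depth through the topographic index:

    \lambda = \frac{1}{A}\int_{A} \ln\!\left(\frac{a}{\tan\beta}\right) dA,
    \qquad
    Q_b = Q_0 \, e^{-\bar{z}/m},

where a is the upslope area draining through a point per unit contour length, tan β is the local slope, z̄ is the catchment-mean water table depth, m sets the exponential decay of transmissivity with depth, and Q₀ is the baseflow at zero mean depth.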
DOE Office of Scientific and Technical Information (OSTI.GOV)
Linard, Joshua; Campbell, Sam
This event included annual sampling of groundwater and surface water locations at the Gunnison, Colorado, Processing Site. Sampling and analyses were conducted as specified in the Sampling and Analysis Plan for US Department of Energy Office of Legacy Management Sites (LMS/PRO/S04351, continually updated, http://energy.gov/lm/downloads/sampling-and-analysis-plan-us-department-energy-office-legacy-management-sites). Samples were collected from 28 monitoring wells, three domestic wells, and six surface locations in April at the processing site as specified in the draft 2010 Ground Water Compliance Action Plan for the Gunnison, Colorado, Processing Site. Planned monitoring locations are shown in Attachment 1, Sampling and Analysis Work Order. Domestic wells 0476 and 0477 were sampled in June because the homes were unoccupied in April and the wells were not in use. Duplicate samples were collected from locations 0126, 0477, and 0780. One equipment blank was collected during this sampling event. Water levels were measured at all monitoring wells that were sampled. See Attachment 2, Trip Reports, for additional details. The analytical data and associated qualifiers can be viewed in environmental database reports and are also available for viewing with dynamic mapping via the GEMS (Geospatial Environmental Mapping System) website at http://gems.lm.doe.gov/#. No issues were identified during the data validation process that require additional action or follow-up. An assessment of anomalous data is included in Attachment 3. Interpretation and presentation of results, including an assessment of the natural flushing compliance strategy, will be reported in the upcoming 2016 Verification Monitoring Report.
Atmospheric Science Data Center
2014-06-30
... Reports: TES Data Versions: TES Validation Report Version 6.0 (PDF) R13 processing version; F07_10 file versions TES Validation Report Version 5.0 (PDF) R12 processing version; F06_08, F06_09 file ...
The quality of instruments to assess the process of shared decision making: A systematic review
Bomhof-Roordink, Hanna; Smith, Ian P.; Scholl, Isabelle; Stiggelbout, Anne M.; Pieterse, Arwen H.
2018-01-01
Objective To inventory instruments assessing the process of shared decision making and appraise their measurement quality, taking into account the methodological quality of their validation studies. Methods In a systematic review we searched seven databases (PubMed, Embase, Emcare, Cochrane, PsycINFO, Web of Science, Academic Search Premier) for studies investigating instruments measuring the process of shared decision making. Per identified instrument, we assessed the level of evidence separately for 10 measurement properties following a three-step procedure: 1) appraisal of the methodological quality using the COnsensus-based Standards for the selection of health status Measurement INstruments (COSMIN) checklist, 2) appraisal of the psychometric quality of the measurement property using three possible quality scores, 3) best-evidence synthesis based on the number of studies, their methodological and psychometric quality, and the direction and consistency of the results. The study protocol was registered at PROSPERO: CRD42015023397. Results We included 51 articles describing the development and/or evaluation of 40 shared decision-making process instruments: 16 patient questionnaires, 4 provider questionnaires, 18 coding schemes and 2 instruments measuring multiple perspectives. There is an overall lack of evidence for their measurement quality, either because validation is missing or methods are poor. The best-evidence synthesis indicated positive results for a large proportion of instruments for content validity (50%) and structural validity (53%) where these were evaluated, but negative results for a large proportion of instruments where inter-rater reliability (47%) and hypotheses testing (59%) were evaluated. Conclusions Due to the lack of evidence on measurement quality, the choice of the most appropriate instrument can best be based on the instrument's content and characteristics such as the perspective that it assesses. We recommend refinement and validation of existing instruments, and the use of COSMIN guidelines to help guarantee high-quality evaluations. PMID:29447193
NASA Astrophysics Data System (ADS)
Trendafiloski, G.; Gaspa Rebull, O.; Ewing, C.; Podlaha, A.; Magee, B.
2012-04-01
Calibration and validation are crucial steps in the production of catastrophe models for the insurance industry, needed to assure a model's reliability and to quantify its uncertainty. Calibration is needed in all components of model development, including hazard and vulnerability. Validation is required to ensure that the losses calculated by the model match those observed in past events and those which could occur in the future. Impact Forecasting, the catastrophe modelling development centre of excellence within Aon Benfield, has recently launched its earthquake model for Algeria as part of the earthquake model for the Maghreb region. The earthquake model went through a detailed calibration process including: (1) calibration of the seismic intensity attenuation model using macroseismic observations and maps from past earthquakes in Algeria; (2) calculation of country-specific vulnerability modifiers using past damage observations in the country. The Benouar (1994) ground motion prediction relationship proved the most appropriate for our model. Calculation of the regional vulnerability modifiers for the country led to 10% to 40% larger vulnerability indexes for different building types compared with average European indexes. The country-specific damage models also included aggregate damage models for residential, commercial and industrial properties, considering the description of the building stock given by the World Housing Encyclopaedia and local rebuilding cost factors equal to 10% for damage grade 1, 20% for damage grade 2, 35% for damage grade 3, 75% for damage grade 4 and 100% for damage grade 5. The damage grades comply with the European Macroseismic Scale (EMS-1998). The model was validated by means of "as-if" historical scenario simulations of three past earthquake events in Algeria: the M6.8 2003 Boumerdes, M7.3 1980 El-Asnam and M7.3 1856 Djidjelli earthquakes. The calculated return periods of the losses for the client market portfolio align with the repeatability of such catastrophe losses in the country. The validation process also included collaboration between Aon Benfield and its client in order to account for the insurance market penetration in Algeria, estimated at approximately 5%. Thus, we believe that the applied approach led to an earthquake model for Algeria that is scientifically sound and reliable on the one hand, and market- and client-oriented on the other.
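The rebuilding-cost factors quoted above enter the loss calculation as weights on the modeled probabilities of each damage grade. The sketch below shows that combination; the grade probabilities are invented for illustration.

    # Mean damage ratio from EMS-98 damage-grade probabilities and the
    # rebuilding-cost factors quoted in the abstract. The probability
    # distribution over grades is invented, not from the model itself.
    COST_FACTOR = {1: 0.10, 2: 0.20, 3: 0.35, 4: 0.75, 5: 1.00}

    def mean_damage_ratio(grade_probs):
        """Expected repair cost as a fraction of the rebuilding value."""
        return sum(p * COST_FACTOR[g] for g, p in grade_probs.items())

    # Hypothetical grade probabilities for one building class at one intensity
    # (the remaining 5% of buildings are undamaged):
    probs = {1: 0.30, 2: 0.25, 3: 0.20, 4: 0.15, 5: 0.05}
    print(f"mean damage ratio: {mean_damage_ratio(probs):.4f}")  # -> 0.3125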
Walters, Stephen John; Stern, Cindy; Robertson-Malt, Suzanne
2016-04-01
There is a growing call by consumers and governments for healthcare to adopt systems and approaches to care that improve patient safety. Collaboration within healthcare settings is an important factor for improving systems of care. By using validated measurement instruments a standardized approach to assessing collaboration is possible; otherwise it is only an assumption that collaboration is occurring in any healthcare setting. The objective of this review was to evaluate and compare measurement properties of instruments that measure collaboration within healthcare settings, specifically those which have been psychometrically tested and validated. Participants could be healthcare professionals, the patient or any non-professional who contributes to a patient's care, for example, family members, chaplains or orderlies. The term participant type means the designation of any one participant, for example 'nurse', 'social worker' or 'administrator'. The inclusion of more than two participant types was mandatory. The focus of this review was the validity of tools used to measure collaboration within healthcare settings. The types of studies considered for inclusion were validation studies, but quantitative study designs such as randomized controlled trials, controlled trials and case studies were also eligible for inclusion. Studies that focused on Interprofessional Education, were published as an abstract only, contained patient self-reporting only or were not about care delivery were excluded. The outcome of interest was validation and interpretability of the instrument being assessed and included content validity, construct validity and reliability. Interpretability is characterized by statistics such as mean and standard deviation which can be translated to a qualitative meaning. The search strategy aimed to find both published and unpublished studies. A three-step search strategy was utilized in this review. The databases searched included PubMed, CINAHL, Embase, Cochrane Central Register of Controlled Trials, Emerald Fulltext, MD Consult Australia, PsycARTICLES, Psychology and Behavioural Sciences Collection, PsycINFO, Informit Health Databases, Scopus, UpToDate and Web of Science. The search for unpublished studies included EThOS (Electronic Thesis Online Service), Index to Theses and ProQuest Dissertations and Theses. The assessment of methodological quality of the included studies was undertaken using the COSMIN checklist, which is a validated tool that assesses the process of design and validation of healthcare measurement instruments. An Excel spreadsheet version of COSMIN was developed for data collection, which included a worksheet for extracting participant characteristics and interpretability data. Statistical pooling of data was not possible for this review; therefore, the findings are presented in narrative form, including tables and figures to aid in data presentation. To synthesize the assessments of methodological quality across the different studies, each instrument was rated by accounting for the number of studies performed with the instrument, the appraisal of methodological quality and the consistency of results between studies. Twenty-one studies of 12 instruments were included in the review. The studies were diverse in their theoretical underpinnings, target population/setting and measurement objectives.
Measurement objectives included: investigating beliefs, behaviors, attitudes, perceptions and relationships associated with collaboration; measuring collaboration between different levels of care or within a multi-rater/target group; assessing collaboration across teams; or assessing internal participation of both teams and patients. Studies produced validity or interpretability data, but none of the studies assessed all validity and reliability properties. However, most of the included studies produced a factor structure or referred to prior factor analysis. A narrative synthesis of the individual study factor structures was generated consisting of nine headings: organizational settings, support structures, purpose and goals; communication; reflection on process; cooperation; coordination; role interdependence and partnership; relationships; newly created professional activities; and professional flexibility. Among the many instruments that measure collaboration within healthcare settings, the quality of each instrument varies; instruments are designed for specific populations and purposes, and are validated in various settings. Selecting an instrument requires careful consideration of the qualities of each. Therefore, referring to systematic reviews of measurement properties of instruments may be helpful to clinicians or researchers in instrument selection. Systematic reviews of measurement properties of instruments are valuable in aiding in instrument selection. This systematic review may be useful in instrument selection for the measurement of collaboration within healthcare settings with a complex mix of participant types. Evaluating collaboration provides important information on the strengths and limitations of different healthcare settings and the opportunities for continuous improvement via any remedial actions initiated. Development of a tool that can be used to measure collaboration within teams of healthcare professionals and non-professionals is important for practice. The use of different statistical modelling techniques, such as Item Response Theory modelling and the translation of models into Computer Adaptive Tests, may prove useful. Measurement equivalence is an important consideration for future instrument development and validation. Further development of the COSMIN tool should include appraisal for measurement equivalence. Researchers developing and validating measurement tools should consider multi-method research designs.
Validity in the hiring and evaluation process.
Gregg, Robert E
2006-01-01
Validity means "based on sound principles." Hiring decisions, discharges, and layoffs are often challenged in court. Unfortunately the employer's defenses are too often found "invalid." The Americans With Disabilities Act requires the employer to show a "validated" hiring process. Defense of discharges or layoffs often focuses on validity of the employer's decision. This article explains the elements of validity needed for sound and defendable employment decisions.
Zhang, Ti; Cai, Shuang; Forrest, Wai Chee; Mohr, Eva; Yang, Qiuhong; Forrest, M Laird
2016-09-01
Cisplatin, a platinum chemotherapeutic, is one of the most commonly used chemotherapeutic agents for many solid tumors. In this work, we developed and validated an inductively coupled plasma mass spectrometry (ICP-MS) method for quantitative determination of platinum levels in rat urine, plasma, and tissue matrices including liver, brain, lungs, kidney, muscle, heart, spleen, bladder, and lymph nodes. The tissues were processed using a microwave accelerated reaction system (MARS) prior to analysis on an Agilent 7500 ICP-MS. According to the Food and Drug Administration guidance for industry, bioanalytical validation parameters of the method, such as selectivity, accuracy, precision, recovery, and stability, were evaluated in rat biological samples. Our data suggested that the method was selective for platinum without interference from other elements present, and the lower limit of quantification was 0.5 ppb. The accuracy and precision of the method were within 15% variation, and the recoveries of platinum for all tissue matrices examined were 85-115% of the theoretical values. The stability of the platinum-containing solutions, including calibration standards, stock solutions, and processed samples in rat biological matrices, was investigated. Results indicated that the samples were stable after three cycles of freeze-thaw and for up to three months. © The Author(s) 2016.
Motor assessment using the NIH Toolbox
Magasi, Susan; McCreath, Heather E.; Bohannon, Richard W.; Wang, Ying-Chih; Bubela, Deborah J.; Rymer, William Z.; Beaumont, Jennifer; Rine, Rose Marie; Lai, Jin-Shei; Gershon, Richard C.
2013-01-01
Motor function involves complex physiologic processes and requires the integration of multiple systems, including neuromuscular, musculoskeletal, and cardiopulmonary, and neural motor and sensory-perceptual systems. Motor-functional status is indicative of current physical health status, burden of disease, and long-term health outcomes, and is integrally related to daily functioning and quality of life. Given its importance to overall neurologic health and function, motor function was identified as a key domain for inclusion in the NIH Toolbox for Assessment of Neurological and Behavioral Function (NIH Toolbox). We engaged in a 3-stage developmental process to: 1) identify key subdomains and candidate measures for inclusion in the NIH Toolbox, 2) pretest candidate measures for feasibility across the age span of people aged 3 to 85 years, and 3) validate candidate measures against criterion measures in a sample of healthy individuals aged 3 to 85 years (n = 340). Based on extensive literature review and input from content experts, the 5 subdomains of dexterity, strength, balance, locomotion, and endurance were recommended for inclusion in the NIH Toolbox motor battery. Based on our validation testing, valid and reliable measures that are simultaneously low-cost and portable have been recommended to assess each subdomain, including the 9-hole peg board for dexterity, grip dynamometry for upper-extremity strength, standing balance test, 4-m walk test for gait speed, and a 2-minute walk test for endurance. PMID:23479547
Duz, Marco; Marshall, John F; Parkin, Tim
2017-06-29
The use of electronic medical records (EMRs) offers opportunity for clinical epidemiological research. With large EMR databases, automated analysis processes are necessary but require thorough validation before they can be routinely used. The aim of this study was to validate a computer-assisted technique using commercially available content analysis software (SimStat-WordStat v.6 (SS/WS), Provalis Research) for mining free-text EMRs. The dataset used for the validation process included life-long EMRs from 335 patients (17,563 rows of data), selected at random from a larger dataset (141,543 patients, ~2.6 million rows of data) and obtained from 10 equine veterinary practices in the United Kingdom. The ability of the computer-assisted technique to detect rows of data (cases) of colic, renal failure, right dorsal colitis, and non-steroidal anti-inflammatory drug (NSAID) use in the population was compared with manual classification. The first step of the computer-assisted analysis process was the definition of inclusion dictionaries to identify cases, including terms identifying a condition of interest. Words in inclusion dictionaries were selected from the list of all words in the dataset obtained in SS/WS. The second step consisted of defining an exclusion dictionary, including combinations of words to remove cases erroneously classified by the inclusion dictionary alone. The third step was the definition of a reinclusion dictionary to reinclude cases that had been erroneously classified by the exclusion dictionary. Finally, cases obtained by the exclusion dictionary were removed from cases obtained by the inclusion dictionary, and cases from the reinclusion dictionary were subsequently reincluded using R v3.0.2 (R Foundation for Statistical Computing, Vienna, Austria). Manual analysis was performed as a separate process by a single experienced clinician reading through the dataset once and classifying each row of data based on the interpretation of the free-text notes. Validation was performed by comparison of the computer-assisted method with manual analysis, which was used as the gold standard. Sensitivity, specificity, negative predictive values (NPVs), positive predictive values (PPVs), and F values of the computer-assisted process were calculated by comparing them with the manual classification. Lowest sensitivity, specificity, PPVs, NPVs, and F values were 99.82% (1128/1130), 99.88% (16410/16429), 94.6% (223/239), 100.00% (16410/16412), and 99.0% (100×2×0.983×0.998/[0.983+0.998]), respectively. The computer-assisted process required a few seconds to run, although an estimated 30 h were required for dictionary creation. Manual classification required approximately 80 man-hours. The critical step in this work is the creation of accurate and inclusive dictionaries to ensure that no potential cases are missed. It is significantly easier to remove false positive terms from an SS/WS-selected subset of a large database than to search that original database for potential false negatives. The benefits of using this method are proportional to the size of the dataset to be analyzed. ©Marco Duz, John F Marshall, Tim Parkin. Originally published in JMIR Medical Informatics (http://medinform.jmir.org), 29.06.2017.
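The three-dictionary selection logic described above reduces to simple set operations once each dictionary has been applied. The sketch below illustrates the pattern with invented terms; the study itself built its dictionaries in SimStat/WordStat and combined the resulting case sets in R.

    # Sketch of inclusion -> exclusion -> reinclusion case selection.
    # All dictionary terms and records here are invented examples.
    INCLUSION = {"colic", "colicky"}
    EXCLUSION = {"no signs of colic", "colic ruled out"}
    REINCLUSION = {"colic ruled out initially but"}  # hypothetical phrase

    def matches(text, terms):
        t = text.lower()
        return any(term in t for term in terms)

    def select_cases(records):
        included = {i for i, r in enumerate(records) if matches(r, INCLUSION)}
        excluded = {i for i in included if matches(records[i], EXCLUSION)}
        reincluded = {i for i in excluded if matches(records[i], REINCLUSION)}
        # remove exclusions from inclusions, then restore reinclusions
        return (included - excluded) | reincluded

    rows = ["Presented with colic overnight.",
            "Examined; no signs of colic.",
            "Routine dental rasp."]
    print(sorted(select_cases(rows)))  # -> [0]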
Onboard Processing and Autonomous Operations on the IPEX Cubesat
NASA Technical Reports Server (NTRS)
Chien, Steve; Doubleday, Joshua; Ortega, Kevin; Flatley, Tom; Crum, Gary; Geist, Alessandro; Lin, Michael; Williams, Austin; Bellardo, John; Puig-Suari, Jordi;
2012-01-01
IPEX is a 1U CubeSat sponsored by the NASA Earth Science Technology Office (ESTO), the goals of which are to: (1) flight validate high-performance flight computing; (2) flight validate onboard instrument data processing and product generation software; (3) flight validate autonomous operations for instrument processing; and (4) enhance NASA outreach and university ties.
Perry, Cary; LeMay, Nancy; Rodway, Greg; Tracy, Allison; Galer, Joan
2005-01-01
Background This article describes the validation of an instrument to measure work group climate in public health organizations in developing countries. The instrument, the Work Group Climate Assessment Tool (WCA), was applied in Brazil, Mozambique, and Guinea to assess the intermediate outcomes of a program to develop leadership for performance improvement. Data were collected from 305 individuals in 42 work groups, who completed a self-administered questionnaire. Methods The WCA was initially validated using Cronbach's alpha reliability coefficient and exploratory factor analysis. This article presents the results of a second validation study to refine the initial analyses to account for nested data, to provide item-level psychometrics, and to establish construct validity. Analyses included eigenvalue decomposition analysis, confirmatory factor analysis, and validity and reliability analyses. Results This study confirmed the validity and reliability of the WCA across work groups with different demographic characteristics (gender, education, management level, and geographical location). The study showed that there is agreement between the theoretical construct of work climate and the items in the WCA tool across different populations. The WCA captures a single perception of climate rather than individual sub-scales of clarity, support, and challenge. Conclusion The WCA is useful for comparing the climates of different work groups, tracking the changes in climate in a single work group over time, or examining differences among individuals' perceptions of their work group climate. Application of the WCA before and after a leadership development process can help work groups hold a discussion about current climate and select a target for improvement. The WCA provides work groups with a tool to take ownership of their own group climate through a process that is simple and objective and that protects individual confidentiality. PMID:16223447
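For readers unfamiliar with the reliability statistic used in the initial WCA validation, the sketch below computes Cronbach's alpha from an items-by-respondents matrix. The data are invented, and this is a minimal illustration, not the study's analysis code.

    # Minimal Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance
    # of the total score). Rows are respondents, columns are questionnaire items;
    # the data below are invented for illustration.
    import numpy as np

    def cronbach_alpha(scores):
        k = scores.shape[1]
        item_vars = scores.var(axis=0, ddof=1)       # per-item variance
        total_var = scores.sum(axis=1).var(ddof=1)   # variance of summed scores
        return k / (k - 1) * (1 - item_vars.sum() / total_var)

    data = np.array([[4, 5, 4],
                     [3, 4, 3],
                     [5, 5, 4],
                     [2, 3, 2],
                     [4, 4, 5]])
    print(f"alpha = {cronbach_alpha(data):.3f}")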
NOAA Unique CrIS/ATMS Processing System (NUCAPS) Environmental Data Record and Validation
NASA Astrophysics Data System (ADS)
Liu, Q.; Nalli, N. R.; Gambacorta, A.; Iturbide, F.; Tan, C.; Zhang, K.; Wilson, M.; Reale, A.; Sun, B.; Mollner, A.
2015-12-01
This presentation introduces the NOAA sounding products to the AGU community. The NOAA Unique CrIS/ATMS Processing System (NUCAPS) operationally generates vertical profiles of atmospheric temperature (AVTP) and moisture (AVMP), carbon products (CO, CO2, and CH4) and other trace gases, as well as outgoing long-wave radiation (OLR). These products have been publicly released through NOAA CLASS from April 8, 2014 to the present. This paper presents the validation of these products. The AVTP and AVMP are validated by comparison against ECMWF analysis data and dedicated radiosondes. The dedicated radiosondes achieve higher quality and reach higher altitudes than conventional radiosondes, and their launch times are generally within 1 hour of the Suomi NPP overpass times. We also use ground-based lidar data provided by collaborators (The Aerospace Corporation) to validate the retrieved temperature profiles above 100 hPa up to 1 hPa. Both the NOAA VALAR and NPROVS validation systems are applied. The Suomi NPP FM5-Ed1A OLR from CERES, available through the end of May 2012, allows us to validate real-time CrIS OLR environmental data records (EDRs) for NOAA/CPC operational precipitation verification. However, the quality of the CrIS sensor data records (SDRs) for this time frame on CLASS is suboptimal, and many granules (more than three-quarters) are invalid. Using the current offline ADL-reprocessed CrIS SDR data from NOAA/STAR AIT, which includes all CrIS SDR improvements to date, we have subsequently obtained a well-distributed OLR EDR. This paper will also discuss the validation of the CrIS infrared ozone profile.
Boysen, Guy A; VanBergen, Alexandra
2014-02-01
Dissociative Identity Disorder (DID) has long been surrounded by controversy due to disagreement about its etiology and the validity of its associated phenomena. Researchers have conducted studies comparing people diagnosed with DID and people simulating DID in order to better understand the disorder. The current research presents a systematic review of this DID simulation research. The literature consists of 20 studies and contains several replicated findings. Replicated differences between the groups include symptom presentation, identity presentation, and cognitive processing deficits. Replicated similarities between the groups include interidentity transfer of information as shown by measures of recall, recognition, and priming. Despite some consistent findings, this research literature is hindered by methodological flaws that reduce experimental validity. Copyright © 2013 Elsevier Ltd. All rights reserved.
NASA Astrophysics Data System (ADS)
Sosa, M.; Grundel, L.; Simini, F.
2016-04-01
Logical reasoning has been part of medical practice since its origins. Modern Medicine has included information-intensive tools to refine diagnostics and treatment protocols. We are introducing formal logic teaching in Medical School prior to Clinical Internship, to foster sound medical practice. Two simple examples (Acute Myocardial Infarction and Diabetes Mellitus) are given in terms of formal logic expressions and truth tables. Flowcharts of both diagnostic processes help in understanding the procedures and in validating them logically. A particularity of medical information is that it is often accompanied by "missing data", which suggests adapting formal logic to a "three-state" logic in the future. Medical Education must include formal logic so that students can understand complex protocols and best practices, which are prone to mutual interactions.
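The three-state logic the authors anticipate can be sketched with Kleene's strong connectives, where a third value stands for "missing". The diagnostic rule below is a hypothetical illustration, not a clinical protocol.

    # Kleene three-valued logic for diagnostic rules with missing data,
    # along the lines the abstract suggests. None stands for "missing";
    # the AMI rule is an invented example, not clinical guidance.
    def and3(a, b):
        if a is False or b is False:
            return False
        if a is None or b is None:
            return None
        return True

    def or3(a, b):
        if a is True or b is True:
            return True
        if a is None or b is None:
            return None
        return False

    def suspect_ami(chest_pain, st_elevation, troponin_high):
        return and3(chest_pain, or3(st_elevation, troponin_high))

    print(suspect_ami(True, None, True))   # True: troponin decides despite missing ECG
    print(suspect_ami(True, None, False))  # None: undetermined until the ECG is known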
The pros and cons of code validation
NASA Technical Reports Server (NTRS)
Bobbitt, Percy J.
1988-01-01
Computational and wind tunnel error sources are examined and quantified using specific calculations of experimental data, and a substantial comparison of theoretical and experimental results, or code validation, is discussed. Wind tunnel error sources considered include wall interference, sting effects, Reynolds number effects, flow quality and transition, and instrumentation such as strain gage balances, electronically scanned pressure systems, hot film gages, hot wire anemometers, and laser velocimeters. Computational error sources include math model equation sets, the solution algorithm, artificial viscosity/dissipation, boundary conditions, the uniqueness of solutions, grid resolution, turbulence modeling, and Reynolds number effects. It is concluded that, although improvements in theory are being made more quickly than in experiments, wind tunnel research retains the advantage that the transition process in a free-transition test is more realistic than that produced by any available turbulence model.
Godiva, a European Project for Ozone and Trace Gas Measurements from GOME
NASA Astrophysics Data System (ADS)
Goede, A. P. H.; Tanzi, C. P.; Aben, I.; Burrows, J. P.; Weber, M.; Perner, D.; Monks, P. S.; Llewellyn-Jones, D.; Corlett, G. K.; Arlander, D. W.; Platt, U.; Wagner, T.; Pfeilsticker, K.; Taalas, P.; Kelder, H.; Piters, A.
GODIVA (GOME Data Interpretation, Validation and Application) is a European Commission project aimed at the improvement of GOME (Global Ozone Monitoring Experiment) data products. Existing data products include global ozone, NO2 columns and (ir)radiances. Advanced data products include O3 profiles and BrO, HCHO and OClO columns. These data are validated by ground-based and balloon-borne instruments. Calibration issues are investigated by in-flight monitoring using several complementary calibration sources, as well as an on-ground replica of the GOME instrument. The results will lead to the specification of operational processing for the EUMETSAT ozone Satellite Application Facility, as well as the implementation of the improved and new GOME data products in the NILU database for use in the European THESEO (Third European Stratospheric Experiment on Ozone) campaign of 1999.
Kale, Prashant; Shukla, Manoj; Soni, Gunjan; Patel, Ronak; Gupta, Shailendra
2014-01-01
Prashant Kale has 22 years of extensive experience in the analytical and bioanalytical domain. He is Senior Vice President, Bioequivalence Operations, of Lambda Therapeutic Research, India, which includes the Bioanalytical, Clinics, Clinical Data Management, Pharmacokinetics and Biostatistics, Protocol Writing, Clinical Lab and Quality Assurance departments. He has been with Lambda for over 14 years. By qualification he holds an M.Sc. and an MBA. Mr. Kale is responsible for the management, technical and administrative functions of the BE units located at Ahmedabad and Mumbai, India. He is also responsible for leading the process of integration between bioanalytical laboratories and services offered by Lambda at global locations (India and Canada). Mr. Kale has faced several regulatory audits and inspections from leading regulatory bodies including, but not limited to, the DCGI, USFDA, ANVISA, Health Canada, UK MHRA, Turkey MoH and WHO. There are many challenges involved in applying a bioanalytical method to different populations. These include differences in equipment, materials and environment across laboratories, variations in matrix characteristics in different populations, differences in techniques between analysts, such as sample processing and handling, and others. Additionally, there is variability in the PK of a drug in different populations. This article shows the effect of different populations on a validated bioanalytical method and on the PK of a drug. Hence, a bioanalytical method developed and validated for a specific population may require modification when applied to another population. Critical consideration of all such aspects is the key to successful implementation of a validated method in different populations.
Creation and validation of web-based food allergy audiovisual educational materials for caregivers.
Rosen, Jamie; Albin, Stephanie; Sicherer, Scott H
2014-01-01
Studies reveal deficits in caregivers' ability to prevent and treat food-allergic reactions with epinephrine and a consumer preference for validated educational materials in audiovisual formats. This study was designed to create brief, validated educational videos on food allergen avoidance and emergency management of anaphylaxis for caregivers of children with food allergy. The study used a stepwise iterative process, beginning with a needs assessment survey consisting of 25 queries administered to caregivers and food allergy experts to identify curriculum content. Preliminary videos were drafted, reviewed, and revised based on knowledge and satisfaction surveys given to another cohort of caregivers and health care professionals. The final materials were tested for validation of their educational impact and user satisfaction using pre- and post-knowledge tests and satisfaction surveys administered to a convenience sample of 50 caretakers who had not participated in the development stages. The needs assessment identified topics of importance including treatment of allergic reactions and food allergen avoidance. Caregivers in the final validation included mothers (76%), fathers (22%), and other caregivers (2%). Race/ethnicity was white (66%), black (12%), Asian (12%), Hispanic (8%), and other (2%). Knowledge test scores (maximum score = 18) increased from a mean of 12.4 preprogram to 16.7 postprogram (p < 0.0001). On a 7-point Likert scale, all satisfaction categories remained above a favorable mean score of 6, indicating participants were overall very satisfied, learned a lot, and found the materials to be informative, straightforward, helpful, and interesting. This web-based audiovisual curriculum on food allergy improved knowledge scores and was well received.
Nutakki, Kavitha; Varni, James W; Steinbrenner, Sheila; Draucker, Claire B; Swigonski, Nancy L
2017-03-01
Health-related quality of life (HRQOL) is arguably one of the most important measures in evaluating the effectiveness of clinical treatments. At present, there is no disease-specific outcome measure to assess the HRQOL of children, adolescents and young adults with Neurofibromatosis Type 1 (NF1). This study aimed to develop the items and support the content validity for the Pediatric Quality of Life Inventory™ (PedsQL™) NF1 Module for children, adolescents and young adults. The iterative process included multiphase qualitative methods: a literature review, a survey of expert opinions, semi-structured interviews, cognitive interviews and pilot testing. Fifteen domains were derived from the qualitative methods, with content saturation achieved, resulting in 115 items. The domains include skin, pain, pain impact, pain management, cognitive functioning, speech, fine motor, balance, vision, perceived physical appearance, communication, worry, treatment, medicines and gastrointestinal symptoms. This study is limited in that all participants were recruited from a single site. Qualitative methods support the content validity of the PedsQL™ NF1 Module for children, adolescents and young adults. The PedsQL™ NF1 Module is now undergoing national multisite field testing for the psychometric validation of the instrument.
Development of a refractive error quality of life scale for Thai adults (the REQ-Thai).
Sukhawarn, Roongthip; Wiratchai, Nonglak; Tatsanavivat, Pyatat; Pitiyanuwat, Somwung; Kanato, Manop; Srivannaboon, Sabong; Guyatt, Gordon H
2011-08-01
To develop a scale for measuring refractive error quality of life (QOL) for Thai adults. The full survey comprised 424 respondents from 5 medical centers in Bangkok and from 3 medical centers in Chiangmai, Songkla and KhonKaen provinces. Participants were emmetropes and persons with refractive correction with visual acuity of 20/30 or better. An item reduction process was employed combining 3 methods: expert opinion, the impact method and item-total correlation. Classical reliability testing and validity testing, including convergent, discriminative and construct validity, were performed. The developed questionnaire comprised 87 items in 6 dimensions: 1) quality of vision, 2) visual function, 3) social function, 4) psychological function, 5) symptoms and 6) refractive correction problems. Items are rated on a 5-level Likert scale. The Cronbach's alpha coefficients of its dimensions ranged from 0.756 to 0.979. All validity tests showed the instrument to be valid, and the construct validity was confirmed by confirmatory factor analysis. A short-version questionnaire comprising 48 items with good reliability and validity was also developed. This is the first validated instrument for measuring refractive error quality of life for Thai adults developed with strong research methodology and a large sample size.
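One of the three item-reduction methods, the corrected item-total correlation, is easy to sketch. The data and the 0.30 retention cutoff below are assumptions for illustration, not values from the study.

    # Corrected item-total correlation: each item is correlated with the sum
    # of the remaining items, and low-correlating items become candidates for
    # removal. Data and the 0.30 cutoff are invented for illustration.
    import numpy as np

    def corrected_item_total(scores):
        totals = scores.sum(axis=1)
        return np.array([np.corrcoef(scores[:, j], totals - scores[:, j])[0, 1]
                         for j in range(scores.shape[1])])

    data = np.array([[4, 5, 1],
                     [3, 4, 5],
                     [5, 5, 2],
                     [2, 3, 4],
                     [4, 4, 1]])
    for j, r in enumerate(corrected_item_total(data), start=1):
        note = "" if r >= 0.30 else "  <- candidate for removal"
        print(f"item {j}: r = {r:+.2f}{note}")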
Richard, Gail J
2011-07-01
A summary of issues regarding auditory processing disorder (APD) is presented, including some of the remaining questions and challenges raised by the articles included in the clinical forum. Evolution of APD as a diagnostic entity within audiology and speech-language pathology is reviewed. A summary of treatment efficacy results and issues is provided, as well as the continuing dilemma for speech-language pathologists (SLPs) charged with providing treatment for referred APD clients. The role of the SLP in diagnosing and treating APD remains under discussion, despite lack of efficacy data supporting auditory intervention and questions regarding the clinical relevance and validity of APD.
Nimbus/TOMS Science Data Operations Support
NASA Technical Reports Server (NTRS)
1998-01-01
Projected goals include the following: (1) Participate in and provide analysis of laboratory and in-flight calibration of UV sensors used for space observations of backscattered UV radiation; (2) Provide support to the TOMS Science Operations Center, including generating instrument command lists and analysis of TOMS health and safety data; (3) Develop and maintain software and algorithms designed to capture and process raw spacecraft and instrument data, convert the instrument output into measured radiances and irradiances, and produce scientifically valid products; (4) Process the TOMS data into Level 1, Level 2, and Level 3 data products; (5) Provide analysis of the science data products in support of NASA GSFC Code 916's research.
ASRM process development in aqueous cleaning
NASA Technical Reports Server (NTRS)
Swisher, Bill
1992-01-01
Viewgraphs are included on process development in aqueous cleaning taking place at the Aerojet Advanced Solid Rocket Motor (ASRM) Division under a NASA Marshall Space Flight Center contract for design, development, test, and evaluation of the ASRM, including new production facilities. The ASRM will utilize aqueous cleaning in several manufacturing process steps to clean case segments, nozzle metal components, and igniter closures. ASRM manufacturing process development is underway, including agent selection, agent characterization, subscale process optimization, bonding verification, and scale-up validation. Process parameters are currently being tested for optimization using a Taguchi matrix, including agent concentration, cleaning solution temperature, agitation and immersion time, rinse water amount and temperature, and use/non-use of drying air. Based on the results of process development testing to date, several observations are offered: aqueous cleaning appears effective for steels and SermeTel-coated metals in ASRM processing; aqueous cleaning agents may stain and/or attack bare aluminum metals to various extents; aqueous cleaning appears unsuitable for thermal-sprayed aluminum-coated steel; aqueous cleaning appears to adequately remove a wide range of contaminants from flat metal surfaces, but supplementary assistance may be needed to remove clumps of tenacious contaminants embedded in holes, etc.; and hot rinse water appears to be beneficial in aiding the drying of bare steel and retarding the oxidation rate.
The Alliance Negotiation Scale: A psychometric investigation.
Doran, Jennifer M; Safran, Jeremy D; Muran, J Christopher
2016-08-01
This study investigates the utility and psychometric properties of a new measure of psychotherapy process, the Alliance Negotiation Scale (ANS; Doran, Safran, Waizmann, Bolger, & Muran, 2012). The ANS was designed to operationalize the theoretical construct of negotiation (Safran & Muran, 2000) and to extend our current understanding of the working alliance concept (Bordin, 1979). The ANS was also intended to improve upon existing measures such as the Working Alliance Inventory (WAI; Horvath & Greenberg, 1986, 1989) and its short form (WAI-S; Tracey & Kokotovic, 1989) by expanding the emphasis on negative therapy process. The present study investigates the psychometric validity of the ANS test scores and their interpretation, including confirming its original factor structure and evaluating its internal consistency and construct validity. Construct validity was examined through the ANS' convergence and divergence with several existing scales that measure theoretically related constructs. The results bolster and extend previous findings about the psychometric integrity of the ANS, and begin to illuminate the relationship between negotiation and other important variables in psychotherapy research. (PsycINFO Database Record (c) 2016 APA, all rights reserved).
NASA Technical Reports Server (NTRS)
Lindensmith, Chris A.; Briggs, H. Clark; Beregovski, Yuri; Feria, V. Alfonso; Goullioud, Renaud; Gursel, Yekta; Hahn, Inseob; Kinsella, Gary; Orzewalla, Matthew; Phillips, Charles
2006-01-01
SIM PlanetQuest (SIM) is a large optical interferometer for making microarcsecond measurements of the positions of stars and for detecting Earth-sized planets around nearby stars. To achieve this precision, SIM requires stability of optical components to tens of picometers per hour. The combination of SIM's large size (9-meter baseline) and the high stability requirement makes it difficult and costly to measure all aspects of system performance on the ground. To reduce risk and cost, and to allow a design with fewer intermediate testing stages, the SIM project is developing an integrated thermal, mechanical and optical modeling process that will allow predictions of system performance to be made at the required high precision. This modeling process uses commercial, off-the-shelf tools and has been validated against experimental results at the precision of the SIM performance requirements. This paper presents a description of the model development, some of the models, and their validation in the Thermo-Opto-Mechanical (TOM3) testbed, which includes full-scale brassboard optical components and the metrology to test them at the SIM performance requirement levels.
Permanent Scatterer InSAR Analysis and Validation in the Gulf of Corinth.
Elias, Panagiotis; Kontoes, Charalabos; Papoutsis, Ioannis; Kotsis, Ioannis; Marinou, Aggeliki; Paradissis, Dimitris; Sakellariou, Dimitris
2009-01-01
The Permanent Scatterers Interferometric SAR technique (PSInSAR) is a method that accurately estimates near-vertical terrain deformation rates, of the order of ∼1 mm year⁻¹, overcoming the physical and technical restrictions of classic InSAR. In this paper the method is strengthened by creating a robust processing chain, incorporating PSInSAR analysis together with algorithmic adaptations for Permanent Scatterer Candidates (PSCs) and Permanent Scatterers (PSs) selection. The processing chain, called PerSePHONE, was applied and validated in the geophysically active area of the Gulf of Corinth. The analysis indicated a clear subsidence trend in the north-eastern part of the gulf, with the maximum deformation of ∼2.5 mm year⁻¹ occurring in the region north of the Gulf of Alkyonides. The validity of the results was assessed against geophysical/geological and geodetic studies conducted in the area, which include continuous seismic profiling data and GPS height measurements. All these observations converge to the same deformation pattern as the one derived by the PSInSAR technique.
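For orientation, the millimetre-scale rates quoted are line-of-sight displacements recovered from interferometric phase. A standard conversion (the sign convention varies between processors, and C-band ERS/Envisat data are assumed here) is

    \Delta d_{\mathrm{LOS}} = -\frac{\lambda}{4\pi}\,\Delta\phi_{\mathrm{defo}},

so one full fringe (Δφ = 2π) corresponds to λ/2 ≈ 2.8 cm of line-of-sight motion at C-band (λ ≈ 5.6 cm); millimetre-per-year rates therefore rest on the small residual phases that remain after atmospheric, orbital and topographic terms are modeled out on the persistent scatterers.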
Uncertainties in ecosystem service maps: a comparison on the European scale.
Schulp, Catharina J E; Burkhard, Benjamin; Maes, Joachim; Van Vliet, Jasper; Verburg, Peter H
2014-01-01
Safeguarding the benefits that ecosystems provide to society is increasingly included as a target in international policies. To support such policies, ecosystem service maps are made. However, little attention is paid to the accuracy of these maps. We made a systematic review and quantitative comparison of ecosystem service maps on the European scale to generate insights into the uncertainty of ecosystem service maps and to discuss the possibilities for quantitative validation. Maps of climate regulation and recreation were reasonably similar, while large uncertainties among maps of erosion protection and flood regulation were observed. Pollination maps had a moderate similarity. Differences among the maps were caused by differences in indicator definition, level of process understanding, mapping aim, data sources and methodology. The absence of suitable observed data on ecosystem service provision hampers independent validation of the maps. Consequently, there are, so far, no accurate measures of ecosystem service map quality. Policy makers and other users need to be cautious when applying ecosystem service maps for decision-making. The results illustrate the need for better process understanding and data acquisition to advance ecosystem service mapping, modelling and validation.
Enhancement and Validation of an Arab Surname Database
Schwartz, Kendra; Beebani, Ganj; Sedki, Mai; Tahhan, Mamon; Ruterbusch, Julie J.
2015-01-01
Objectives Arab Americans constitute a large, heterogeneous, and quickly growing subpopulation in the United States. Health statistics for this group are difficult to find because US governmental offices do not recognize Arab as separate from white. The development and validation of an Arab- and Chaldean-American name database will enhance research efforts in this population subgroup. Methods A previously validated name database was supplemented with newly identified names gathered primarily from vital statistic records and then evaluated using a multistep process. This process included 1) review by 4 Arabic- and Chaldean-speaking reviewers, 2) ethnicity assessment by social media searches, and 3) self-report of ancestry obtained from a telephone survey. Results Our Arab- and Chaldean-American name algorithm has a positive predictive value of 91% and a negative predictive value of 100%. Conclusions This enhanced name database and algorithm can be used to identify Arab Americans in health statistics data, such as cancer and hospital registries, where they are often coded as white, to determine the extent of health disparities in this population. PMID:24625771
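The reported predictive values follow from a standard confusion-matrix calculation; the sketch below uses hypothetical counts chosen only to reproduce the paper's 91% PPV and 100% NPV.

    # Hypothetical counts for illustration only.
    tp, fp = 91, 9    # names flagged as Arab/Chaldean: correct vs. incorrect
    tn, fn = 100, 0   # names not flagged: correct vs. incorrect

    ppv = tp / (tp + fp)   # P(truly Arab/Chaldean | flagged)
    npv = tn / (tn + fn)   # P(truly not | not flagged)
    print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")   # -> PPV = 91%, NPV = 100%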
15 CFR 30.70 - Violation of the Clean Diamond Trade Act.
Code of Federal Regulations, 2013 CFR
2013-01-01
... 15 Commerce and Foreign Trade 1 2013-01-01 2013-01-01 false Violation of the Clean Diamond Trade... Clean Diamond Trade Act. Public Law 108-19, the Clean Diamond Trade Act (the Act), section 8(c... diamonds, including those with respect to the validation of the Kimberley Process Certificate by the...
15 CFR 30.70 - Violation of the Clean Diamond Trade Act.
Code of Federal Regulations, 2012 CFR
2012-01-01
... 15 Commerce and Foreign Trade 1 2012-01-01 2012-01-01 false Violation of the Clean Diamond Trade... Clean Diamond Trade Act. Public Law 108-19, the Clean Diamond Trade Act (the Act), section 8(c... diamonds, including those with respect to the validation of the Kimberley Process Certificate by the...
15 CFR 30.70 - Violation of the Clean Diamond Trade Act.
Code of Federal Regulations, 2010 CFR
2010-01-01
... 15 Commerce and Foreign Trade 1 2010-01-01 2010-01-01 false Violation of the Clean Diamond Trade... Clean Diamond Trade Act. Public Law 108-19, the Clean Diamond Trade Act (the Act), section 8(c... diamonds, including those with respect to the validation of the Kimberley Process Certificate by the...
15 CFR 30.70 - Violation of the Clean Diamond Trade Act.
Code of Federal Regulations, 2014 CFR
2014-01-01
... 15 Commerce and Foreign Trade 1 2014-01-01 2014-01-01 false Violation of the Clean Diamond Trade... Clean Diamond Trade Act. Public Law 108-19, the Clean Diamond Trade Act (the Act), section 8(c... diamonds, including those with respect to the validation of the Kimberley Process Certificate by the...
CMOS array design automation techniques. [metal oxide semiconductors
NASA Technical Reports Server (NTRS)
Ramondetta, P.; Feller, A.; Noto, R.; Lombardi, T.
1975-01-01
A low cost, quick turnaround technique for generating custom metal oxide semiconductor arrays using the standard cell approach was developed, implemented, tested and validated. Basic cell design topology and guidelines are defined based on an extensive analysis that includes circuit, layout, process, array topology and required performance considerations, particularly high circuit speed.
The Design and Validation of an Early Childhood STEM Classroom Observational Protocol
ERIC Educational Resources Information Center
Milford, Todd; Tippett, Christine
2015-01-01
Across K-12 education, there has been recent attention to the learning opportunities available to students in science, technology, engineering, and mathematics (STEM) learning. Early childhood education (ECE) has been excluded from this process. The scholarly literature contains good evidence for including science teaching and learning at the ECE…
A Preliminary Investigation of the Empirical Validity of Study Quality Appraisal
ERIC Educational Resources Information Center
Cook, Bryan G.; Dupuis, Danielle N.; Jitendra, Asha K.
2017-01-01
When classifying the evidence base of practices, special education scholars typically appraise study quality to identify and exclude from consideration in their reviews unacceptable-quality studies that are likely biased and might bias review findings if included. However, study quality appraisals used in the process of identifying evidence-based…
ERIC Educational Resources Information Center
Maynard, Jennifer Leigh
2012-01-01
Emphasis on regular mathematics skill assessment, intervention, and progress monitoring under the RTI model has created a need for the development of assessment instruments that are psychometrically sound, reliable, universal, and brief. Important factors to consider when developing or selecting assessments for the school environment include what…
The Use of Modeling-Based Text to Improve Students' Modeling Competencies
ERIC Educational Resources Information Center
Jong, Jing-Ping; Chiu, Mei-Hung; Chung, Shiao-Lan
2015-01-01
This study investigated the effects of a modeling-based text on 10th graders' modeling competencies. Fifteen 10th graders read a researcher-developed modeling-based science text on the ideal gas law that included explicit descriptions and representations of modeling processes (i.e., model selection, model construction, model validation, model…
USDA-ARS?s Scientific Manuscript database
Accurate and complete reporting of study methods, results, and interpretation are essential components of the scientific process, allowing end-users to evaluate the internal and external validity of a study. Several reporting guidelines are now publicly available for animal researchers including the...
Perceived Coach Attitudes and Behaviors Scale: Development and Validation Study
ERIC Educational Resources Information Center
Üzüm, Hanifi; Karli, Ünal; Yildiz, Nuh Osman
2018-01-01
The purpose of the study was to develop a scale that will serve to determine how the attitudes and behaviors of coaches are perceived by athletes. The scale, named the "Perceived Coach Attitudes and Behaviors Scale" (PCABS), was developed through various processes, including exploratory and confirmatory factor analysis. Following the…
ERIC Educational Resources Information Center
Garcia-Santillán, Arturo; Moreno-Garcia, Elena; Escalera-Chávez, Milka E.; Rojas-Kramer, Carlos A.; Pozos-Texon, Felipe
2016-01-01
Most mathematics students show a definite tendency toward an attitudinal deficiency, which can be primarily understood as intolerance toward the subject, adversely affecting their academic performance. In addition, information and communication technologies have been gradually included within the process of teaching mathematics. Such adoption of…
NASA Technical Reports Server (NTRS)
Belcastro, Christine M.; Newman, Richard L.; Crider, Dennis A.; Klyde, David H.; Foster, John V.; Groff, Loren
2016-01-01
Aircraft loss of control (LOC) is a leading cause of fatal accidents across all transport airplane and operational classes. LOC can result from a wide spectrum of precursors (or hazards), often occurring in combination. Technologies developed for LOC prevention and recovery must therefore be effective under a wide variety of conditions and uncertainties, including multiple hazards, and the validation process must provide a means of assessing system effectiveness and coverage of these hazards. This paper provides a detailed description of a methodology for analyzing LOC as a dynamics and control problem for the purpose of developing effective technology solutions. The paper includes a definition of LOC based on several recent publications, a detailed description of a refined LOC accident analysis process that is illustrated via selected example cases, and a description of planned follow-on activities for identifying future potential LOC risks and the development of LOC test scenarios. Some preliminary considerations for LOC of Unmanned Aircraft Systems (UAS) and for their safe integration into the National Airspace System (NAS) are also discussed.
Advanced High Temperature Polymer Matrix Composites for Gas Turbine Engines Program Expansion
NASA Technical Reports Server (NTRS)
Hanley, David; Carella, John
1999-01-01
This document, submitted by AlliedSignal Engines (AE), a division of AlliedSignal Aerospace Company, presents the program final report for the Advanced High Temperature Polymer Matrix Composites for Gas Turbine Engines Program Expansion in compliance with data requirements in the statement of work, Contract No. NAS3-97003. This document includes: 1) Technical Summary: a) Component Design, b) Manufacturing Process Selection, c) Vendor Selection, and d) Testing Validation; and 2) Program Conclusion and Perspective. Also, see the Appendix at the back of this report. This report covers the program accomplishments from December 1, 1996, to August 24, 1998. The Advanced High Temperature PMCs for Gas Turbine Engines Program Expansion was a one-year, five-task technical effort aimed at designing, fabricating and testing a turbine engine component using NASA's high temperature resin system AMB-21. The fiber material chosen was graphite T650-35, 3K, 8HS with UC-309 sizing. The first four tasks included component design and manufacturing, process selection, vendor selection, component fabrication and validation testing. The final task involved monthly financial and technical reports.
2004-01-09
KENNEDY SPACE CENTER, FLA. -- After Endeavour’s rollout from inside the Orbiter Processing Facility, the transporter (foreground) prepares to tow it to the Vehicle Assembly Building for temporary transfer. A protective cover surrounds the nose of Endeavour. The move to the VAB allows work to be performed in the OPF that can only be accomplished while the bay is empty. Work scheduled in the OPF includes annual validation of the bay’s cranes, work platforms, lifting mechanisms and jack stands. Endeavour will remain in the VAB for approximately 12 days, then return to the OPF.
Using Resin-Based 3D Printing to Build Geometrically Accurate Proxies of Porous Sedimentary Rocks.
Ishutov, Sergey; Hasiuk, Franciszek J; Jobe, Dawn; Agar, Susan
2018-05-01
Three-dimensional (3D) printing is capable of transforming intricate digital models into tangible objects, allowing geoscientists to replicate the geometry of 3D pore networks of sedimentary rocks. We provide a refined method for building scalable pore-network models ("proxies") using stereolithography 3D printing that can be used in repeated flow experiments (e.g., core flooding, permeametry, porosimetry). Typically, this workflow involves two steps, model design and 3D printing. In this study, we explore how the addition of post-processing and validation can reduce uncertainty in the 3D-printed proxy accuracy (difference of proxy geometry from the digital model). Post-processing is a multi-step cleaning of porous proxies involving pressurized ethanol flushing and oven drying. Proxies are validated by: (1) helium porosimetry and (2) digital measurements of porosity from thin-section images of 3D-printed proxies. 3D printer resolution was determined by measuring the smallest open channel in 3D-printed "gap test" wafers. This resolution (400 µm) was insufficient to reproduce the porosity of Fontainebleau sandstone (∼13%) from computed tomography data at the sample's natural scale, so proxies were printed at 15-, 23-, and 30-fold magnifications to validate the workflow. Helium porosities of the 3D-printed proxies differed from digital calculations by up to 7 percentage points. Results improved after pressurized flushing with ethanol (e.g., porosity difference reduced to ∼1 percentage point), though uncertainties remain regarding the nature of sub-micron "artifact" pores imparted by the 3D printing process. This study shows the benefits of including post-processing and validation in any workflow to produce porous rock proxies. © 2017, National Ground Water Association.
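The validation arithmetic is simple enough to show directly; the volumes below are hypothetical stand-ins for helium pycnometry measurements, used only to illustrate how a porosity difference in percentage points is obtained.

    # Hypothetical measurements for illustration.
    v_bulk = 50.0    # cm^3, bulk volume of the printed proxy
    v_grain = 41.0   # cm^3, grain (solid) volume from helium pycnometry

    phi_helium = 1.0 - v_grain / v_bulk   # measured porosity of the proxy
    phi_digital = 0.13                    # porosity computed from the model
    diff = (phi_helium - phi_digital) * 100.0
    print(f"porosity difference: {diff:.1f} percentage points")  # 5.0 here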
Bajwa, Nadia M; Yudkowsky, Rachel; Belli, Dominique; Vu, Nu Viet; Park, Yoon Soo
2017-03-01
The purpose of this study was to provide validity and feasibility evidence in measuring professionalism using the Professionalism Mini-Evaluation Exercise (P-MEX) scores as part of a residency admissions process. In 2012 and 2013, three standardized-patient-based P-MEX encounters were administered to applicants invited for an interview at the University of Geneva Pediatrics Residency Program. Validity evidence was gathered for P-MEX content (item analysis); response process (qualitative feedback); internal structure (inter-rater reliability with intraclass correlation and Generalizability); relations to other variables (correlations); and consequences (logistic regression to predict admission). To improve reliability, Kane's formula was used to create an applicant composite score using P-MEX, structured letter of recommendation (SLR), and structured interview (SI) scores. Applicant rank lists using composite scores versus faculty global ratings were compared using the Wilcoxon signed-rank test. Seventy applicants were assessed. Moderate associations were found between pairwise correlations of P-MEX scores and SLR (r = 0.25, P = .036), SI (r = 0.34, P = .004), and global ratings (r = 0.48, P < .001). Generalizability of the P-MEX using three cases was moderate (G-coefficient = 0.45). P-MEX scores had the greatest correlation with acceptance (r = 0.56, P < .001), were the strongest predictor of acceptance (OR 4.37, P < .001), and increased pseudo R-squared by 0.20 points. Including P-MEX scores increased composite score reliability from 0.51 to 0.74. Rank lists of applicants using composite score versus global rating differed significantly (z = 5.41, P < .001). Validity evidence supports the use of P-MEX scores to improve the reliability of the residency admissions process by improving applicant composite score reliability.
A System for Cost and Reimbursement Control in Hospitals
Fetter, Robert B.; Thompson, John D.; Mills, Ronald E.
1976-01-01
This paper approaches the design of a regional or statewide hospital rate-setting system as the underpinning of a larger system which permits a regulatory agency to satisfy the requirements of various public laws now on the books or in process. It aims to generate valid interinstitutional monitoring on the three parameters of cost, utilization, and quality review. Such an approach requires the extension of the usual departmental cost and budgeting system to include consideration of the mix of patients treated and the utilization of various resources, including patient days, in the treatment of these patients. A sampling framework for the application of process-based quality studies and the generation of selected performance measurements is also included. PMID:941461
Engineering the Future: Cell 6
NASA Technical Reports Server (NTRS)
Stahl, P. H.
2010-01-01
This slide presentation reviews the development of the James Webb Space Telescope (JWST), explaining the development using a systems engineering methodology. Included are slides showing the organizational chart, the JWST Science Goals, the size of the primary mirror, and full-scale mockups of the JWST. Also included is a review of the JWST Optical Telescope Requirements, a review of the preliminary design and analysis, the technology development required to create the JWST, with particular interest in the specific mirror technology that was required, and views of the mirror manufacturing process. Several slides review the process of verification and validation by testing and analysis, including a diagram of the Cryogenic Test Facility at Marshall, and views of the primary mirror while being tested in the cryogenic facility.
Automatic, semi-automatic and manual validation of urban drainage data.
Branisavljević, N; Prodanović, D; Pavlović, D
2010-01-01
Advances in sensor technology and the possibility of automated long distance data transmission have made continuous measurements the preferred way of monitoring urban drainage processes. Usually, the collected data have to be processed by an expert in order to detect and mark erroneous data, remove them and replace them with interpolated data. In general, this first step of detecting erroneous or anomalous data is called data quality assessment or data validation. Data validation consists of three parts: data preparation, validation score generation and score interpretation. This paper will present the overall framework for the data quality improvement system, suitable for automatic, semi-automatic or manual operation. The first two steps of the validation process are explained in more detail, using several validation methods on the same set of real-case data from the Belgrade sewer system. The final part of the validation process, which is the score interpretation, needs to be further investigated on the developed system.
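A minimal sketch of the first two steps (preparation and score generation) is given below; the two checks and all thresholds are generic assumptions, not the methods or values used on the Belgrade data.

    import numpy as np

    def validation_scores(x, lo=0.0, hi=500.0, max_step=50.0):
        """Score each sample in [0, 1] from two simple validity checks."""
        x = np.asarray(x, dtype=float)
        range_ok = (x >= lo) & (x <= hi)              # check 1: physical range
        step = np.abs(np.diff(x, prepend=x[0]))
        step_ok = step <= max_step                    # check 2: rate of change
        # Score interpretation (the third step) decides which low-scoring
        # samples are marked as anomalies and replaced by interpolation.
        return (range_ok.astype(int) + step_ok.astype(int)) / 2.0

    flow = [12.0, 14.0, 13.5, 640.0, 15.0]   # hypothetical sewer flow series
    print(validation_scores(flow))           # the 640.0 sample scores 0.0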
Richter, Tobias; Schroeder, Sascha; Wöhrmann, Britta
2009-03-01
In social cognition, knowledge-based validation of information is usually regarded as relying on strategic and resource-demanding processes. Research on language comprehension, in contrast, suggests that validation processes are involved in the construction of a referential representation of the communicated information. This view implies that individuals can use their knowledge to validate incoming information in a routine and efficient manner. Consistent with this idea, Experiments 1 and 2 demonstrated that individuals are able to reject false assertions efficiently when they have validity-relevant beliefs. Validation processes were carried out routinely even when individuals were put under additional cognitive load during comprehension. Experiment 3 demonstrated that the rejection of false information occurs automatically and interferes with affirmative responses in a nonsemantic task (epistemic Stroop effect). Experiment 4 also revealed complementary interference effects of true information with negative responses in a nonsemantic task. These results suggest the existence of fast and efficient validation processes that protect mental representations from being contaminated by false and inaccurate information.
NASA Technical Reports Server (NTRS)
Bosilovich, Michael G.; Schubert, Siegfried; Molod, Andrea; Houser, Paul R.
1999-01-01
Land-surface processes in a data assimilation system influence the lower troposphere and must be properly represented. With the recent incorporation of the Mosaic Land-surface Model (LSM) into the GEOS Data Assimilation System (DAS), the detailed land-surface processes require strict validation. While global data sources can identify large-scale systematic biases at the monthly timescale, the diurnal cycle is difficult to validate. Moreover, global data sets rarely include variables such as evaporation, sensible heat and soil water. Intensive field experiments, on the other hand, can provide high temporal resolution energy budget and vertical profile data for sufficiently long periods, without global coverage. Here, we evaluate the GEOS DAS against several intensive field experiments. The field experiments are First ISLSCP Field Experiment (FIFE, Kansas, summer 1987), Cabauw (as used in PILPS, Netherlands, summer 1987), Atmospheric Radiation Measurement (ARM, Southern Great Plains, winter and summer 1998) and the Surface Heat Budget of the Arctic Ocean (SHEBA, Arctic ice sheet, winter and summer 1998). The sites provide complete surface energy budget data for periods of at least one year, and some periods of vertical profiles. This comparison provides a detailed validation of the Mosaic LSM within the GEOS DAS for a variety of climatologic and geographic conditions.
Elvén, Maria; Hochwälder, Jacek; Dean, Elizabeth; Söderlund, Anne
2015-05-01
A biopsychosocial approach and behaviour change strategies have long been proposed to serve as a basis for addressing current multifaceted health problems. This emphasis has implications for clinical reasoning of health professionals. This study's aim was to develop and validate a conceptual model to guide physiotherapists' clinical reasoning focused on clients' behaviour change. Phase 1 consisted of the exploration of existing research and the research team's experiences and knowledge. Phases 2a and 2b consisted of validation and refinement of the model based on input from physiotherapy students in two focus groups (n = 5 per group) and from experts in behavioural medicine (n = 9). Phase 1 generated theoretical and evidence bases for the first version of a model. Phases 2a and 2b established the validity and value of the model. The final model described clinical reasoning focused on clients' behaviour change as a cognitive, reflective, collaborative and iterative process with multiple interrelated levels that included input from the client and physiotherapist, a functional behavioural analysis of the activity-related target behaviour and the selection of strategies for behaviour change. This unique model, theory- and evidence-informed, has been developed to help physiotherapists to apply clinical reasoning systematically in the process of behaviour change with their clients.
Copenhagen Psychosocial Questionnaire - A validation study using the Job Demand-Resources model.
Berthelsen, Hanne; Hakanen, Jari J; Westerlund, Hugo
2018-01-01
This study aims at investigating the nomological validity of the Copenhagen Psychosocial Questionnaire (COPSOQ II) by using an extension of the Job Demands-Resources (JD-R) model with aspects of work ability as outcome. The study design is cross-sectional. All staff working at public dental organizations in four regions of Sweden were invited to complete an electronic questionnaire (75% response rate, n = 1345). The questionnaire was based on COPSOQ II scales, the Utrecht Work Engagement scale, and the one-item Work Ability Score in combination with a proprietary item. The data were analysed by Structural Equation Modelling. This study contributed to the literature by showing that: A) The scale characteristics were satisfactory and the construct validity of the COPSOQ instrument could be integrated in the JD-R framework; B) Job resources arising from leadership may be a driver of the two processes included in the JD-R model; and C) Both the health impairment and motivational processes were associated with WA, and the results suggested that leadership may impact WA, in particular by securing task resources. In conclusion, the nomological validity of COPSOQ was supported, as the JD-R model can be operationalized by the instrument. This may be helpful for the transfer of complex survey results and work life theories to practitioners in the field.
NASA Technical Reports Server (NTRS)
Lange, R. Connor
2012-01-01
Ever since Explorer-1, the United States' first Earth satellite, was developed and launched in 1958, JPL has developed many more spacecraft, including landers and orbiters. While these spacecraft vary greatly in their missions, capabilities, and destinations, they all have something in common. All of the components of these spacecraft had to be comprehensively tested. While thorough testing is important to mitigate risk, it is also a very expensive and time-consuming process. Thankfully, since virtually all of the software testing procedures for SMAP are computer controlled, these procedures can be automated. Most people testing SMAP flight software (FSW) would only need to write tests that exercise specific requirements and then check the filtered results to verify everything occurred as planned. This gives developers the ability to automatically launch tests on the testbed, distill the resulting logs into only the important information, generate validation documentation, and then deliver the documentation to management. With many of the steps in FSW testing automated, developers can use their limited time more effectively and can validate SMAP FSW modules more quickly and test them more rigorously. As a result of the various benefits of automating much of the testing process, management is considering the use of these automated tools in future FSW validation efforts.
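The log-distillation step lends itself to a short sketch; the keyword pattern and log lines below are hypothetical illustrations, not the actual SMAP tooling.

    import re

    # Keep only the lines a reviewer needs: failures, warnings, and
    # requirement verdicts (the pattern is an assumption for illustration).
    KEEP = re.compile(r"(FAIL|ERROR|WARN|ASSERT|REQ-\d+)", re.IGNORECASE)

    def distill(log_text: str) -> str:
        """Drop routine log lines, keeping only validation-relevant ones."""
        return "\n".join(l for l in log_text.splitlines() if KEEP.search(l))

    raw = "INFO boot ok\nREQ-101 verified\nERROR timeout on bus\nINFO done"
    print(distill(raw))   # keeps the REQ-101 and ERROR lines only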
Nardi, Bernardo; Arimatea, Emidio; Giovagnoli, Sara; Blasi, Stefano; Bellantuono, Cesario; Rezzonico, Giorgio
2012-01-01
The Mini Questionnaire of Personal Organization (MQPO) has been constructed in order to comply with the inward/outward Personal Meaning Organization's (PMO) theory. According to Nardi's Adaptive Post-Rationalist approach, predictable and invariable caregivers' behaviours allow inward focus and a physical sight of reciprocity; non-predictable and variable caregivers' behaviours allow outward focus and a semantic sight of reciprocity. The 20 items of the MQPO have been selected from 29 intermediate (n = 160) and 40 initial items (n = 204). Psychometric validation has been conducted (n = 296), including Internal Validity (Item-Total Correlation; Factor Analysis), Internal Coherence by Factor Analysis, two analyses in Discriminant Validity (n = 132 and n = 80) and Reliability by Test-Retest Analysis (n = 49). All subjects gave their written informed consent before beginning the test. The validation of the MQPO shows that the ultimate version is consistent with its post-rationalist paradigm. Four different factors have been found, one for each PMO. Validity of the construct and the internal reliability index are satisfactory (Alpha = 0.73). Moreover, the results obtained are consistent over time (from r = 0.80 to r = 0.89). There is an adequate agreement between the MQPO scales and the clinical evaluations (72.5%), as well as an excellent agreement (80.0%) between the scores of the MQPO and those of the Personal Meaning Questionnaire. The MQPO is a tool able to study personality as a process by focusing on the relationships between personality and developmental process axes, which are the bases of the PMO's theory, according to the APR approach. Copyright © 2011 John Wiley & Sons, Ltd.
Ayón, Cecilia
2018-04-26
The study describes multiple steps taken to develop and test the Latino Immigrant Family Socialization (LIFS) scale. Scale items were developed based on qualitative interviews, and feedback on the items was solicited from content experts including an academic, a practitioner, and a group of promotoras (or lay health workers). The scale was completed by 300 Latino immigrant parents in the state of Arizona. Exploratory and confirmatory factor analysis confirmed a six-factor model. The six factors were cultural socialization, adapt, advocate, value diversity, promote mistrust, and educate about nativity and documentation. Follow-up studies are needed to continue the measurement validation process and assess how strategies are used in conjunction with each other, the application of the six strategies across different policy contexts, and how the ethnic-racial socialization process supports children's health and well-being.
Noncognitive constructs in graduate admissions: an integrative review of available instruments.
Megginson, Lucy
2009-01-01
In the graduate admission process, both cognitive and noncognitive instruments evaluate a candidate's potential success in a program of study. Traditional cognitive measures include the Graduate Record Examination or graduate grade point average, while noncognitive constructs such as personality, attitude, and motivation are generally measured through letters of recommendation, interviews, or personality inventories. Little consensus exists as to what criteria constitute valid and effective measurements of graduate student potential. This integrative review of available tools to measure noncognitive constructs will assist graduate faculty in identifying valid and reliable instruments that will enhance a more holistic assessment of nursing graduate candidates. Finally, as evidence-based practice begins to penetrate academic processes and as graduate faculty realize the predictive significance of noncognitive attributes, faculty can use the information in this integrative review to guide future research.
Design for validation: An approach to systems validation
NASA Technical Reports Server (NTRS)
Carter, William C.; Dunham, Janet R.; Laprie, Jean-Claude; Williams, Thomas; Howden, William; Smith, Brian; Lewis, Carl M. (Editor)
1989-01-01
Every complex system built is validated in some manner. Computer validation begins with review of the system design. As systems became too complicated for one person to review, validation began to rely on the application of ad hoc methods by many individuals. As the cost of the changes mounted and the expense of failure increased, more organized procedures became essential. Attempts at devising and carrying out those procedures showed that validation is indeed a difficult technical problem. The successful transformation of the validation process into a systematic series of formally sound, integrated steps is necessary if the liability inherent in the future digital-system-based avionic and space systems is to be minimized. A suggested framework and timetable for the transformation are presented. Basic working definitions of two pivotal ideas (validation and system life-cycle) are provided and show how the two concepts interact. Many examples are given of past and present validation activities by NASA and others. A conceptual framework is presented for the validation process. Finally, important areas are listed for ongoing development of the validation process at NASA Langley Research Center.
Land Ice Verification and Validation Kit
DOE Office of Scientific and Technical Information (OSTI.GOV)
2015-07-15
To address a pressing need to better understand the behavior and complex interaction of ice sheets within the global Earth system, significant development of continental-scale, dynamical ice-sheet models is underway. The associated verification and validation process of these models is being coordinated through a new, robust, python-based extensible software package, the Land Ice Verification and Validation toolkit (LIVV). This release provides robust and automated verification and a performance evaluation on LCF platforms. The performance V&V involves a comprehensive comparison of model performance relative to expected behavior on a given computing platform. LIVV operates on a set of benchmark and test data, and provides comparisons for a suite of community prioritized tests, including configuration and parameter variations, bit-4-bit evaluation, and plots of tests where differences occur.
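Bit-for-bit evaluation can be illustrated in a few lines; this is a generic sketch, not the LIVV API, with in-memory arrays standing in for model output files.

    import numpy as np

    def bit4bit(a: np.ndarray, b: np.ndarray) -> bool:
        """True only if two float64 outputs match at the bit level."""
        return a.shape == b.shape and bool(
            np.all(a.view(np.uint64) == b.view(np.uint64))
        )

    ref = np.array([1.0, 2.0, 3.0])
    test = ref.copy()
    test[2] += 1e-15             # a last-bit drift breaks bit-for-bit equality
    print(bit4bit(ref, ref))     # True
    print(bit4bit(ref, test))    # False -> fall back to tolerance-based tests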
Post-processing of seismic parameter data based on valid seismic event determination
McEvilly, Thomas V.
1985-01-01
An automated seismic processing system and method are disclosed, including an array of CMOS microprocessors for unattended battery-powered processing of a multi-station network. According to a characterizing feature of the invention, each channel of the network is independently operable to automatically detect, measure times and amplitudes, and compute and fit Fast Fourier transforms (FFTs) for both P- and S-waves on analog seismic data after it has been sampled at a given rate. The measured parameter data from each channel are then reviewed for event validity by a central controlling microprocessor and, if determined by preset criteria to constitute a valid event, the parameter data are passed to an analysis computer for calculation of hypocenter location, running b-values, source parameters, event count, P-wave polarities, moment-tensor inversion, and Vp/Vs ratios. The in-field real-time analysis of data maximizes the efficiency of microearthquake surveys allowing flexibility in experimental procedures, with a minimum of traditional labor-intensive postprocessing. A unique consequence of the system is that none of the original data (i.e., the sensor analog output signals) are necessarily saved after computation, but rather, the numerical parameters generated by the automatic analysis are the sole output of the automated seismic processor.
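One common form of preset validity criterion is a coincidence requirement across channels; the sketch below is a generic illustration of that idea, with a minimum channel count and window that are assumptions rather than the patent's values.

    # Accept an event only if enough channels triggered within a short
    # coincidence window (min_channels and window_s are assumed values).
    def is_valid_event(trigger_times_s, min_channels=4, window_s=2.0):
        times = sorted(trigger_times_s)
        for i, t0 in enumerate(times):
            # count triggers falling inside the window starting at t0
            n = sum(1 for t in times[i:] if t - t0 <= window_s)
            if n >= min_channels:
                return True
        return False

    print(is_valid_event([10.1, 10.3, 10.4, 10.9, 55.0]))  # True: 4 in 2 s
    print(is_valid_event([10.1, 31.0, 55.0]))              # False: no overlap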
Bueno, Justin; Sikirzhytski, Vitali; Lednev, Igor K
2013-08-06
The ability to link a suspect to a particular shooting incident is a principal task for many forensic investigators. Here, we attempt to achieve this goal by analysis of gunshot residue (GSR) through the use of attenuated total reflectance (ATR) Fourier transform infrared spectroscopy (FT-IR) combined with statistical analysis. The firearm discharge process is analogous to a complex chemical process. Therefore, the products of this process (GSR) will vary based upon numerous factors, including the specific combination of the firearm and ammunition which was discharged. Differentiation of FT-IR data, collected from GSR particles originating from three different firearm-ammunition combinations (0.38 in., 0.40 in., and 9 mm calibers), was achieved using projection to latent structures discriminant analysis (PLS-DA). The technique was validated both internally, by leave-one-out cross-validation, and externally. External validation was achieved via assignment (caliber identification) of unknown FT-IR spectra from unknown GSR particles. The results demonstrate great potential for ATR-FT-IR spectroscopic analysis of GSR for forensic purposes.
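A PLS-DA classifier with leave-one-out cross-validation can be sketched in a few lines with scikit-learn; the spectra below are random placeholders, so this shows only the mechanics, not the paper's data or settings.

    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut

    rng = np.random.default_rng(0)
    X = rng.random((30, 200))        # 30 "spectra" x 200 wavenumber points
    y = np.repeat([0, 1, 2], 10)     # three firearm-ammunition classes
    Y = np.eye(3)[y]                 # one-hot targets turn PLS into PLS-DA

    correct = 0
    for train, test in LeaveOneOut().split(X):
        model = PLSRegression(n_components=5).fit(X[train], Y[train])
        correct += int(model.predict(X[test]).argmax(axis=1)[0] == y[test][0])
    print(f"leave-one-out accuracy: {correct / len(y):.2f}")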
Spindler, A
2014-06-15
Although data reconciliation is intensively applied in process engineering, almost none of its powerful methods are employed for validation of operational data from wastewater treatment plants. This is partly due to some prerequisites that are difficult to meet, including steady state, known variances of process variables and absence of gross errors. However, an algorithm can be derived from the classical approaches to data reconciliation that allows finding a comprehensive set of equations describing redundancy in the data when measured and unmeasured variables (flows and concentrations) are defined. This is a precondition for methods of data validation based on individual mass balances such as CUSUM charts. The procedure can also be applied to verify the necessity of existing or additional measurements with respect to the improvement of the data's redundancy. Results are given for a large wastewater treatment plant. The introduction aims at establishing a link between methods known from data reconciliation in process engineering and their application in wastewater treatment. Copyright © 2014 Elsevier Ltd. All rights reserved.
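Because the abstract leans on CUSUM charts, a standard two-sided tabular CUSUM over a mass-balance residual is sketched below; the target mean, reference value k, and decision limit h are generic textbook choices, not values from the paper.

    import numpy as np

    def cusum_alarms(x, mu0=0.0, k=0.5, h=5.0):
        """Return indices where a two-sided tabular CUSUM signals."""
        c_pos = c_neg = 0.0
        alarms = []
        for i, xi in enumerate(x):
            c_pos = max(0.0, c_pos + (xi - mu0) - k)   # upward drift
            c_neg = max(0.0, c_neg + (mu0 - xi) - k)   # downward drift
            if c_pos > h or c_neg > h:
                alarms.append(i)
        return alarms

    # A balance residual that should be ~0 develops a bias at sample 20.
    residual = np.concatenate([np.zeros(20), np.full(10, 1.2)])
    print(cusum_alarms(residual))   # signals once the bias accumulates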
Mena, Marisa; Lloveras, Belen; Tous, Sara; Bogers, Johannes; Maffini, Fausto; Gangane, Nitin; Kumar, Rekha Vijay; Somanathan, Thara; Lucas, Eric; Anantharaman, Devasena; Gheit, Tarik; Castellsagué, Xavier; Pawlita, Michael; de Sanjosé, Silvia; Alemany, Laia; Tommasino, Massimo
2017-01-01
Worldwide use of formalin-fixed paraffin-embedded blocks (FFPE) is extensive in diagnosis and research. Yet, there is a lack of optimized/standardized protocols to process the blocks and verify the quality and presence of the targeted tissue. In the context of an international study on head and neck cancer (HNC)-HPV-AHEAD, a standardized protocol for optimizing the use of FFPEs in molecular epidemiology was developed and validated. First, a protocol for sectioning the FFPE was developed to prevent cross-contamination and distributed to the participating centers. Before processing blocks, all sectioning centers underwent a quality control to guarantee a satisfactory training process. The first and last sections of the FFPEs were used for histopathological assessment. A consensus histopathology evaluation form was developed by an international panel of pathologists and evaluated in a pilot analysis, in order to validate it, on the following indicators: 1) presence/type of tumor tissue, 2) identification of other tissue components that could affect the molecular diagnosis and 3) quality of the tissue. No HPV DNA was found in sections from empty FFPE generated in any of the histology laboratories of the HPV-AHEAD consortium, and all centers passed quality assurance for processing after quality control. The pilot analysis to validate the histopathology form included 355 HNC cases. The form was filled in by six pathologists and each case was randomly assigned to two of them. Most samples (86%) were considered satisfactory. Presence of >50% of invasive carcinoma was observed in all sections of 66% of cases. Substantial necrosis (>50%) was present in <2% of samples. The concordance for the indicators targeted to validate the histopathology form was very high (kappa > 0.85) between first and last sections and fair to high between pathologists (kappa/pabak 0.21-0.72). The protocol allowed all FFPE blocks of the study to be processed correctly, without signs of contamination. The histopathology evaluation of the cases assured the presence of the targeted tissue, identified the presence of other tissues that could disturb the molecular diagnosis and allowed the assessment of tissue quality.
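The between-pathologist agreement quoted above is a kappa-type statistic; a minimal Cohen's kappa calculation for two raters is sketched below with hypothetical 2x2 counts.

    # Hypothetical counts: two pathologists rating tumor presence per case.
    yes_yes, yes_no = 40, 5      # rater A "yes" vs. rater B "yes"/"no"
    no_yes, no_no = 3, 52        # rater A "no"  vs. rater B "yes"/"no"
    n = yes_yes + yes_no + no_yes + no_no

    po = (yes_yes + no_no) / n                  # observed agreement
    pa = (yes_yes + yes_no) / n                 # rater A "yes" marginal
    pb = (yes_yes + no_yes) / n                 # rater B "yes" marginal
    pe = pa * pb + (1 - pa) * (1 - pb)          # agreement expected by chance
    kappa = (po - pe) / (1 - pe)
    print(f"kappa = {kappa:.2f}")               # ~0.84 for these counts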
Validation, Edits, and Application Processing System Report: Phase I.
ERIC Educational Resources Information Center
Gray, Susan; And Others
Findings of phase 1 of a study of the 1979-1980 Basic Educational Opportunity Grants validation, edits, and application processing system are presented. The study was designed to: assess the impact of the validation effort and processing system edits on the correct award of Basic Grants; and assess the characteristics of students most likely to…
Framework for the quality assurance of 'omics technologies considering GLP requirements.
Kauffmann, Hans-Martin; Kamp, Hennicke; Fuchs, Regine; Chorley, Brian N; Deferme, Lize; Ebbels, Timothy; Hackermüller, Jörg; Perdichizzi, Stefania; Poole, Alan; Sauer, Ursula G; Tollefsen, Knut E; Tralau, Tewes; Yauk, Carole; van Ravenzwaay, Ben
2017-12-01
'Omics technologies are gaining importance to support regulatory toxicity studies. Prerequisites for performing 'omics studies considering GLP principles were discussed at the European Centre for Ecotoxicology and Toxicology of Chemicals (ECETOC) Workshop Applying 'omics technologies in Chemical Risk Assessment. A GLP environment comprises a standard operating procedure system, proper pre-planning and documentation, and inspections of independent quality assurance staff. To prevent uncontrolled data changes, the raw data obtained in the respective 'omics data recording systems have to be specifically defined. Further requirements include transparent and reproducible data processing steps, and safe data storage and archiving procedures. The software for data recording and processing should be validated, and data changes should be traceable or disabled. GLP-compliant quality assurance of 'omics technologies appears feasible for many GLP requirements. However, challenges include (i) defining, storing, and archiving the raw data; (ii) transparent descriptions of data processing steps; (iii) software validation; and (iv) ensuring complete reproducibility of final results with respect to raw data. Nevertheless, 'omics studies can be supported by quality measures (e.g., GLP principles) to ensure quality control, reproducibility and traceability of experiments. This enables regulators to use 'omics data in a fit-for-purpose context, which enhances their applicability for risk assessment. Copyright © 2017 Elsevier Inc. All rights reserved.
Tripathi, Vandana; Stanton, Cynthia; Strobino, Donna; Bartlett, Linda
2015-01-01
Background High quality care is crucial in ensuring that women and newborns receive interventions that may prevent and treat birth-related complications. As facility deliveries increase in developing countries, there are concerns about service quality. Observation is the gold standard for clinical quality assessment, but existing observation-based measures of obstetric quality of care are lengthy and difficult to administer. There is a lack of consensus on quality indicators for routine intrapartum and immediate postpartum care, including essential newborn care. This study identified key dimensions of the quality of the process of intrapartum and immediate postpartum care (QoPIIPC) in facility deliveries and developed a quality assessment measure representing these dimensions. Methods and Findings Global maternal and neonatal care experts identified key dimensions of QoPIIPC through a modified Delphi process. Experts also rated indicators of these dimensions from a comprehensive delivery observation checklist used in quality surveys in sub-Saharan African countries. Potential QoPIIPC indices were developed from combinations of highly-rated indicators. Face, content, and criterion validation of these indices was conducted using data from observations of 1,145 deliveries in Kenya, Madagascar, and Tanzania (including Zanzibar). A best-performing index was selected, composed of 20 indicators of intrapartum/immediate postpartum care, including essential newborn care. This index represented most dimensions of QoPIIPC and effectively discriminated between poorly and well-performed deliveries. Conclusions As facility deliveries increase and the global community pays greater attention to the role of care quality in achieving further maternal and newborn mortality reduction, the QoPIIPC index may be a valuable measure. This index complements and addresses gaps in currently used quality assessment tools. Further evaluation of index usability and reliability is needed. The availability of a streamlined, comprehensive, and validated index may enable ongoing and efficient observation-based assessment of care quality during labor and delivery in sub-Saharan Africa, facilitating targeted quality improvement. PMID:26107655
Optimization of the Switch Mechanism in a Circuit Breaker Using MBD Based Simulation
Jang, Jin-Seok; Yoon, Chang-Gyu; Ryu, Chi-Young; Kim, Hyun-Woo; Bae, Byung-Tae; Yoo, Wan-Suk
2015-01-01
A circuit breaker is widely used to protect an electric power system from fault currents or system errors; in particular, the opening mechanism in a circuit breaker is important for protecting against current overflow in the electric system. In this paper, a multibody dynamic model of a circuit breaker, including the switch mechanism and the electromagnetic actuator system, was developed. Since the opening mechanism operates sequentially, optimization of the switch mechanism was carried out to improve the current breaking time. In the optimization process, design parameters were selected from the length and shape of each latch, which change the pivot points of the bearings to shorten the breaking time. To validate the optimization results, computational results were compared with physical tests recorded by a high-speed camera. The opening time of the optimized mechanism was decreased by 2.3 ms, as confirmed by experiments. The switch mechanism design process, including the contact-latch system, can be improved by using this approach. PMID:25918740
The New Millenium Program: Serving Earth and Space Sciences
NASA Technical Reports Server (NTRS)
Li, Fuk K.
2000-01-01
NASA has exciting plans for space science and Earth observations during the next decade. A broad range of advanced spacecraft and measurement technologies will be needed to support these plans within the existing budget and schedule constraints. Many of these technology needs are common to both NASA's Office of Earth Science (OES) and Office of Space Sciences (OSS). Even though some breakthrough technologies have been identified to address these needs, project managers have traditionally been reluctant to incorporate them into flight programs because of their inherent development risk. To accelerate the infusion of new technologies into its OES and OSS missions, NASA established the New Millennium Program (NMP). This program analyzes the capability needs of these enterprises, identifies candidate technologies to address these needs, incorporates advanced technology suites into validation flights, validates them in the relevant space environment, and then proactively infuses the validated technologies into future missions to enhance their capabilities while reducing their life cycle cost. The NMP employs a cross-enterprise Science Working Group and the NASA Enterprise science and technology roadmaps to define the capabilities needed by future Earth and Space science missions. Additional input from the science community is gathered through open workshops and peer-reviewed NASA Research Announcements (NRAs) for advanced measurement concepts. Technology development inputs from the technology organizations within NASA, other government agencies, federally funded research and development centers (FFRDCs), U.S. industry, and academia are sought to identify breakthrough technologies that might address these needs. This approach significantly extends NASA's technology infrastructure. To complement other flight test programs that develop or validate individual components, the NMP places its highest priority on system-level validations of technology suites in the relevant space environment. This approach is not needed for all technologies, but it is usually essential to validate advanced system architectures or new measurement concepts. The NMP has recently revised its processes for defining candidate validation flights and selecting technologies for these flights. The NMP now employs integrated project formulation teams, which include scientists, technologists, and mission planners, to incorporate technology suites into candidate validation flights. These teams develop competing concepts, which can be rigorously evaluated prior to selection for flight. The technology providers for each concept are selected through an open, competitive process during the project formulation phase. If their concept is selected for flight, they are incorporated into the Project Implementation Team, which develops, integrates, tests, launches, and operates the technology validation flight. Throughout the project implementation phase, the Implementation Team will document and disseminate their validation results to facilitate the infusion of their validated technologies into future OSS and OES science missions. The NMP has successfully launched its first two Deep Space flights for the OSS, and is currently implementing its first two Earth Orbiting flights for the OES. The next OSS and OES flights are currently being defined.
Even though these flights are focused on specific Space Science and Earth Science themes, they are designed to validate a range of technologies that could benefit both enterprises, including advanced propulsion, communications, autonomous operations and navigation, multifunctional structures, microelectronics, and advanced instruments. Specific examples of these technologies will be provided in our presentation. The processes developed by the NMP also provide benefits across the Space and Earth Science enterprises. In particular, the extensive, nation-wide technology infrastructure developed by the NMP enhances the access to breakthrough technologies for both enterprises.
Corvi, Raffaella; Ahr, Hans-Jürgen; Albertini, Silvio; Blakey, David H.; Clerici, Libero; Coecke, Sandra; Douglas, George R.; Gribaldo, Laura; Groten, John P.; Haase, Bernd; Hamernik, Karen; Hartung, Thomas; Inoue, Tohru; Indans, Ian; Maurici, Daniela; Orphanides, George; Rembges, Diana; Sansone, Susanna-Assunta; Snape, Jason R.; Toda, Eisaku; Tong, Weida; van Delft, Joost H.; Weis, Brenda; Schechtman, Leonard M.
2006-01-01
This is the report of the first workshop “Validation of Toxicogenomics-Based Test Systems” held 11–12 December 2003 in Ispra, Italy. The workshop was hosted by the European Centre for the Validation of Alternative Methods (ECVAM) and organized jointly by ECVAM, the U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), and the National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM). The primary aim of the workshop was for participants to discuss and define principles applicable to the validation of toxicogenomics platforms as well as validation of specific toxicologic test methods that incorporate toxicogenomics technologies. The workshop was viewed as an opportunity for initiating a dialogue between technologic experts, regulators, and the principal validation bodies and for identifying those factors to which the validation process would be applicable. It was felt that to do so now, as the technology is evolving and associated challenges are identified, would be a basis for the future validation of the technology when it reaches the appropriate stage. Because of the complexity of the issue, different aspects of the validation of toxicogenomics-based test methods were covered. The three focus areas include a) biologic validation of toxicogenomics-based test methods for regulatory decision making, b) technical and bioinformatics aspects related to validation, and c) validation issues as they relate to regulatory acceptance and use of toxicogenomics-based test methods. In this report we summarize the discussions and describe in detail the recommendations for future direction and priorities. PMID:16507466
Validation of the F-18 high alpha research vehicle flight control and avionics systems modifications
NASA Technical Reports Server (NTRS)
Chacon, Vince; Pahle, Joseph W.; Regenie, Victoria A.
1990-01-01
The verification and validation process is a critical portion of the development of a flight system. Verification, the steps taken to assure the system meets the design specification, has become a reasonably understood and straightforward process. Validation is the method used to ensure that the system design meets the needs of the project. As systems become more integrated and more critical in their functions, the validation process becomes more complex and important. The tests, tools, and techniques which are being used for the validation of the high alpha research vehicle (HARV) turning vane control system (TVCS) are discussed, and the problems and their solutions are documented. The emphasis of this paper is on the validation of integrated systems.
Alonso-Torres, Beatriz; Hernández-Pérez, José Alfredo; Sierra-Espinoza, Fernando; Schenker, Stefan; Yeretzian, Chahan
2013-01-01
Heat and mass transfer in individual coffee beans during roasting were simulated using computational fluid dynamics (CFD). Numerical equations for heat and mass transfer inside the coffee bean were solved using the finite volume technique in the commercial CFD code Fluent; the software was complemented with specific user-defined functions (UDFs). To experimentally validate the numerical model, a single coffee bean was placed in a cylindrical glass tube and roasted by a hot air flow, using the identical geometrical 3D configuration and hot air flow conditions as the ones used for numerical simulations. Temperature and humidity calculations obtained with the model were compared with experimental data. The model predicts the actual process quite accurately and represents a useful approach to monitor the coffee roasting process in real time. It provides valuable information on time-resolved process variables that are otherwise difficult to obtain experimentally, but critical to a better understanding of the coffee roasting process at the individual bean level. This includes variables such as time-resolved 3D profiles of bean temperature and moisture content, and temperature profiles of the roasting air in the vicinity of the coffee bean.
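As a flavor of the finite-volume approach the study uses, a minimal 1D transient conduction sketch is shown below; the geometry, properties, and boundary handling are rough assumptions for illustration, not the authors' Fluent model or UDFs.

    import numpy as np

    alpha = 1.0e-7                  # assumed thermal diffusivity, m^2/s
    L, n = 0.004, 40                # 4 mm half-thickness, 40 control volumes
    dx = L / n
    dt = 0.4 * dx * dx / alpha      # explicit scheme: stability factor < 0.5

    T = np.full(n, 25.0)            # initial bean temperature, deg C
    T[-1] = 220.0                   # surface volume held at roasting air temp
    for _ in range(2000):
        flux = np.diff(T) / dx                       # gradients at cell faces
        T[1:-1] += alpha * dt / dx * np.diff(flux)   # interior volume balance
        T[0] = T[1]                                  # symmetry (zero-flux) side
    print(f"centre temperature after {2000 * dt:.0f} s: {T[0]:.1f} C")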
Validation study and routine control monitoring of moist heat sterilization procedures.
Shintani, Hideharu
2012-06-01
The proposed approach to validation of steam sterilization in autoclaves follows the basic life cycle concepts applicable to all validation programs: understand the function of the sterilization process, develop and understand the cycles that carry out the process, and define a suitable test or series of tests to confirm that the function of the process is suitably ensured by the structure provided. Sterilization of product and of components and parts that come in direct contact with sterilized product is the most critical of pharmaceutical processes. Consequently, this process requires a most rigorous and detailed approach to validation. An understanding of the process requires a basic understanding of microbial death, the parameters that facilitate that death, the accepted definition of sterility, and the relationship between the definition and sterilization parameters. Autoclaves and support systems need to be designed, installed, and qualified in a manner that ensures their continued reliability. Lastly, the test program must be complete and definitive. In this paper, in addition to the validation study, the documentation of IQ, OQ, and PQ is described in concrete terms.
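Performance qualification of a moist heat cycle is conventionally anchored in the accumulated lethality F0; the sketch below applies the textbook relation (reference temperature 121.1 °C, z = 10 °C) to a hypothetical probe record, and is not taken from the paper itself.

    import numpy as np

    def f0(temps_c, dt_min=1.0, z=10.0, t_ref=121.1):
        """Accumulated lethality in equivalent minutes at 121.1 deg C."""
        temps_c = np.asarray(temps_c, dtype=float)
        return float(np.sum(dt_min * 10.0 ** ((temps_c - t_ref) / z)))

    # Hypothetical probe readings at 1-minute intervals: ramp, plateau, ramp.
    profile = [100, 110, 118] + [121.5] * 15 + [115, 105]
    print(f"F0 = {f0(profile):.1f} min")   # the plateau dominates the total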
Essential elements of the nursing practice environment in nursing homes: Psychometric evaluation.
de Brouwer, Brigitte Johanna Maria; Kaljouw, Marian J; Schoonhoven, Lisette; van Achterberg, Theo
2017-06-01
To develop and psychometrically test the Essentials of Magnetism II in nursing homes. Increasing numbers and complex needs of older people in nursing homes strain the nursing workforce. Fewer adequately trained staff and increased care complexity raise concerns about declining quality. Nurses' practice environment has been reported to affect quality of care and productivity. The Essentials of Magnetism II© measures processes and relationships of practice environments that contribute to productivity and quality of care and can therefore be useful in identifying processes requiring change to pursue excellent practice environments. However, this instrument had not previously been evaluated explicitly for use in nursing home settings. In a preparatory phase, a cross-sectional survey study focused on face validity of the essentials of magnetism in nursing homes. A second cross-sectional survey design was then used to further test the instrument's validity and reliability. Psychometric testing included evaluation of content and construct validity, and reliability. Nurses (N = 456) working at 44 units of three nursing homes were included. Respondent acceptance, relevance and clarity were adequate. Five of the eight subscales and 54 of the 58 items met preset psychometric criteria. All essentials of magnetism are considered relevant for nursing homes. The subscales Adequacy of Staffing, Clinically Competent Peers, Patient Centered Culture, Autonomy and Nurse Manager Support can be used in nursing homes without problems. The other subscales cannot be directly applied to this setting. The valid subscales of the Essentials of Magnetism II instrument can be used to design excellent nursing practice environments that support nurses' delivery of care. Before using the entire instrument, however, the other subscales have to be improved. © 2016 John Wiley & Sons Ltd.
Tomoaia-Cotisel, Andrada; Scammon, Debra L.; Waitzman, Norman J.; Cronholm, Peter F.; Halladay, Jacqueline R.; Driscoll, David L.; Solberg, Leif I.; Hsu, Clarissa; Tai-Seale, Ming; Hiratsuka, Vanessa; Shih, Sarah C.; Fetters, Michael D.; Wise, Christopher G.; Alexander, Jeffrey A.; Hauser, Diane; McMullen, Carmit K.; Scholle, Sarah Hudson; Tirodkar, Manasi A.; Schmidt, Laura; Donahue, Katrina E.; Parchman, Michael L.; Stange, Kurt C.
2013-01-01
PURPOSE We aimed to advance the internal and external validity of research by sharing our empirical experience and recommendations for systematically reporting contextual factors. METHODS Fourteen teams conducting research on primary care practice transformation retrospectively considered contextual factors important to interpreting their findings (internal validity) and transporting or reinventing their findings in other settings/situations (external validity). Each team provided a table or list of important contextual factors and interpretive text included as appendices to the articles in this supplement. Team members identified the most important contextual factors for their studies. We grouped the findings thematically and developed recommendations for reporting context. RESULTS The most important contextual factors sorted into 5 domains: (1) the practice setting, (2) the larger organization, (3) the external environment, (4) implementation pathway, and (5) the motivation for implementation. To understand context, investigators recommend (1) engaging diverse perspectives and data sources, (2) considering multiple levels, (3) evaluating history and evolution over time, (4) looking at formal and informal systems and culture, and (5) assessing the (often nonlinear) interactions between contextual factors and both the process and outcome of studies. We include a template with tabular and interpretive elements to help study teams engage research participants in reporting relevant context. CONCLUSIONS These findings demonstrate the feasibility and potential utility of identifying and reporting contextual factors. Involving diverse stakeholders in assessing context at multiple stages of the research process, examining their association with outcomes, and consistently reporting critical contextual factors are important challenges for a field interested in improving the internal and external validity and impact of health care research. PMID:23690380
Models of protein-ligand crystal structures: trust, but verify.
Deller, Marc C; Rupp, Bernhard
2015-09-01
X-ray crystallography provides the most accurate models of protein-ligand structures. These models serve as the foundation of many computational methods including structure prediction, molecular modelling, and structure-based drug design. The success of these computational methods ultimately depends on the quality of the underlying protein-ligand models. X-ray crystallography offers the unparalleled advantage of a clear mathematical formalism relating the experimental data to the protein-ligand model. In the case of X-ray crystallography, the primary experimental evidence is the electron density of the molecules forming the crystal. The first step in the generation of an accurate and precise crystallographic model is the interpretation of the electron density of the crystal, typically carried out by construction of an atomic model. The atomic model must then be validated for fit to the experimental electron density and also for agreement with prior expectations of stereochemistry. Stringent validation of protein-ligand models has become possible as a result of the mandatory deposition of primary diffraction data, and many computational tools are now available to aid in the validation process. Validation of protein-ligand complexes has revealed some instances of overenthusiastic interpretation of ligand density. Fundamental concepts and metrics of protein-ligand quality validation are discussed and we highlight software tools to assist in this process. It is essential that end users select high quality protein-ligand models for their computational and biological studies, and we provide an overview of how this can be achieved.
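One of the standard fit metrics referred to above is the real-space correlation coefficient (RSCC) between observed and model-calculated electron density over the grid points covering a ligand. A minimal sketch follows; the density arrays are synthetic stand-ins, and the ~0.8 acceptance heuristic noted in the comment is a commonly quoted rule of thumb rather than a universal cutoff.

```python
import numpy as np

def rscc(rho_obs, rho_calc):
    """Real-space correlation coefficient: Pearson correlation between
    observed and model-calculated density over a masked grid region."""
    o = np.asarray(rho_obs, float) - np.mean(rho_obs)
    c = np.asarray(rho_calc, float) - np.mean(rho_calc)
    return float((o * c).sum() / np.sqrt((o * o).sum() * (c * c).sum()))

# Toy illustration: density values sampled on grid points around a ligand.
rng = np.random.default_rng(0)
rho_calc = rng.random(500)
rho_obs = rho_calc + 0.1 * rng.standard_normal(500)  # a well-fit ligand
print(f"RSCC = {rscc(rho_obs, rho_calc):.2f}")  # values near 1 support the fit;
                                                # ligands below ~0.8 deserve scrutiny
```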
Hervás, Gonzalo; Vázquez, Carmelo
2013-04-22
We introduce the Pemberton Happiness Index (PHI), a new integrative measure of well-being available in seven languages, detailing the validation process and presenting psychometric data. The scale includes eleven items related to different domains of remembered well-being (general, hedonic, eudaimonic, and social well-being) and ten items related to experienced well-being (i.e., positive and negative emotional events that may have happened the day before); the sum of these items produces a combined well-being index. A distinctive characteristic of this study is that, to construct the scale, an initial pool of items covering the remembered and experienced well-being domains was subjected to a complete selection and validation process. These items were based on widely used scales (e.g., the PANAS, the Satisfaction With Life Scale, the Subjective Happiness Scale, and the Psychological Well-Being Scales). Both the initial items and the reference scales were translated into seven languages and completed via the Internet by participants (N = 4,052) aged 16 to 60 years from nine countries (Germany, India, Japan, Mexico, Russia, Spain, Sweden, Turkey, and the USA). Results from this initial validation study provided very good support for the psychometric properties of the PHI (i.e., internal consistency, a single-factor structure, and convergent and incremental validity). Given the PHI's good psychometric properties, this simple and integrative index can be used as an instrument to monitor changes in well-being. We discuss the utility of this integrative index for exploring well-being in individuals and communities.
NASA Technical Reports Server (NTRS)
Atamanova, O. M.; Vodyakova, L. M.; Gvozdeva, N. I.; Davydova, S. A.; Ignasheva, L. P.; Rogozkin, V. D.; Sbitneva, M. F.; Ostroumova, L. M.; Tikhomirova, M. V.; Fedotenkov, A. G.
1974-01-01
Experimental clinical studies show that early pathogenetic treatment against the effects of prolonged radiation includes amitetravit as a means of increasing natural radioresistance, ATP as a protective therapeutic agent, and automyelotransplantation. The high effectiveness of the combined use of ATP and amitetravit in tests on dogs indicates an ability to prevent primary damage to genetic structures and to accelerate reparative processes in the early stages of radiation pathology.
Low Cost Manufacturing of Composite Cryotanks
NASA Technical Reports Server (NTRS)
Meredith, Brent; Palm, Tod; Deo, Ravi; Munafo, Paul M. (Technical Monitor)
2002-01-01
This viewgraph presentation reviews research and development of cryotank manufacturing conducted by Northrop Grumman. The objectives of the research and development included the development and validation of manufacturing processes and technology for fabrication of large-scale cryogenic tanks, the establishment of a scale-up and facilitization plan for full-scale cryotanks, the development of non-autoclave composite manufacturing processes, the fabrication of subscale tank joints for element tests, the performance of manufacturing risk-reduction trials for the subscale tank, and the development of full-scale tank manufacturing concepts.
PHM for Ground Support Systems Case Study: From Requirements to Integration
NASA Technical Reports Server (NTRS)
Teubert, Chris
2015-01-01
This session will detail the experience of members of the NASA Ames Prognostics Center of Excellence (PCoE) producing PHM tools for NASA Advanced Ground Support Systems, including the challenges in applying their research in a production environment. Specifically, we will 1) go over the systems engineering and review process used; 2) discuss the challenges and pitfalls in this process; 3) discuss software architecting, documentation, and verification and validation activities; and 4) discuss challenges in communicating the benefits and limitations of PHM technologies.
Guidance for updating clinical practice guidelines: a systematic review of methodological handbooks.
Vernooij, Robin W M; Sanabria, Andrea Juliana; Solà, Ivan; Alonso-Coello, Pablo; Martínez García, Laura
2014-01-02
Updating clinical practice guidelines (CPGs) is a crucial process for maintaining the validity of recommendations. Methodological handbooks should provide guidance on both developing and updating CPGs. However, little is known about the updating guidance provided by these handbooks. We conducted a systematic review to identify and describe the updating guidance provided by CPG methodological handbooks; handbooks that provide updating guidance for CPGs were eligible for inclusion. We searched the Guidelines International Network library, the US National Guideline Clearinghouse, and MEDLINE (PubMed) from 1966 to September 2013. Two authors independently selected the handbooks and extracted the data. We used descriptive statistics to analyze the extracted data and conducted a narrative synthesis. We included 35 handbooks. Most handbooks (97.1%) focus mainly on developing CPGs, including variable degrees of information about updating. Guidance on identifying new evidence and on the methodology of assessing the need for an update is described in 11 (31.4%) and eight handbooks (22.8%), respectively. The period of time between two updates is described in 25 handbooks (71.4%), two to three years being the most frequent (40.0%). The majority of handbooks do not provide guidance for the literature search, evidence selection, assessment, synthesis, and external review of the updating process. Guidance for updating CPGs is poorly described in methodological handbooks. This guidance should be more rigorous and explicit. This could lead to a more optimal updating process and, ultimately, to valid, trustworthy guidelines.
2012-06-27
The critical contributors to deviation include structural relaxation of the glass, thermal expansion of the molds, TRS, and viscoelastic behavior of the glass. In that article, glass was modeled as purely viscous and thermal expansion was accounted for with a constant coefficient of thermal expansion (CTE).
NASA Astrophysics Data System (ADS)
Maragos, Petros
The topics discussed at the conference include hierarchical image coding, motion analysis, feature extraction and image restoration, video coding, and morphological and related nonlinear filtering. Attention is also given to vector quantization, morphological image processing, fractals and wavelets, architectures for image and video processing, image segmentation, biomedical image processing, and model-based analysis. Papers are presented on affine models for motion and shape recovery, filters for directly detecting surface orientation in an image, tracking of unresolved targets in infrared imagery using a projection-based method, adaptive-neighborhood image processing, and regularized multichannel restoration of color images using cross-validation. (For individual items see A93-20945 to A93-20951)
GRRATS: A New Approach to Inland Altimetry Processing for Major World Rivers
NASA Astrophysics Data System (ADS)
Coss, S. P.
2016-12-01
Here we present work-in-progress results aimed at generating a new radar altimetry dataset, GRRATS (Global River Radar Altimetry Time Series), extracted over global ocean-draining rivers wider than 900 m. GRRATS was developed as a component of the NASA MEaSUREs project (PI: Dennis Lettenmaier, UCLA) to generate pre-SWOT data products for decadal or longer global river elevation changes from multi-mission satellite radar altimetry data. The dataset at present includes 909 time series from 39 rivers. A new method of filtering VS (virtual station) height time series is presented, in which DEM-based heights were used to establish limits for the ice1-retracked Jason-2 and Envisat heights. While GRRATS is following in the footsteps of several predecessors, it contributes to one of the critical climate data records by generating a validated and comprehensive set of hydrologic observations of river height. The current data product includes VSs in North and South America, Africa, and Eurasia, with the most comprehensive set of Jason-2 and Envisat RA time series available for North America and Eurasia. We present a semi-automated procedure to process returns from river locations, identified with Landsat images and an updated water mask extent. Consistent methodologies for flagging ice cover are presented. DEM heights used in height filtering were retained and can be used as river height profiles. All non-validated VSs have been assigned a letter grade A-D to aid end users in selection of data. Validated VSs are accompanied by a suite of fit statistics. Due to the inclusiveness of the dataset, not all VSs were able to undergo validation (415 of 909), but those that were demonstrate that confidence in the data product is warranted. Validation was accomplished using records from 45 in situ gauges from 12 rivers. Meta-analysis was performed to compare each gauge with each VS by relative height. Preliminary validation results are as follows: 89.3% of the data have positive Nash-Sutcliffe efficiency (NSE) values, and the median NSE value is 0.73. The median standard deviation of error (STDE) is 0.92 m. GRRATS will soon be publicly available in NetCDF format with CF-compliant metadata.
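The Nash-Sutcliffe efficiency used in the validation above is straightforward to compute; a sketch with invented heights follows. Because altimetric and gauge heights sit on different datums, both series are demeaned first, matching the abstract's comparison "by relative height".

```python
import numpy as np

def nse(observed, simulated):
    """Nash-Sutcliffe efficiency on demeaned series: 1 is perfect,
    0 is no better than the gauge mean, negative is worse."""
    obs = np.asarray(observed, float)
    sim = np.asarray(simulated, float)
    obs = obs - obs.mean()   # remove datum offset (relative heights)
    sim = sim - sim.mean()
    return 1.0 - np.sum((obs - sim) ** 2) / np.sum(obs ** 2)

# Invented gauge vs. virtual-station heights (metres)
gauge = [2.1, 2.8, 4.0, 5.2, 4.4, 3.0, 2.3]
vs = [2.3, 2.7, 3.8, 5.0, 4.6, 3.2, 2.1]
print(f"NSE = {nse(gauge, vs):.2f}")
```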
Jones, Kelly K; Zenk, Shannon N; Tarlov, Elizabeth; Powell, Lisa M; Matthews, Stephen A; Horoi, Irina
2017-01-07
Food environment characterization in health studies often requires data on the location of food stores and restaurants. While commercial business lists are commonly used as data sources for such studies, current literature provides little guidance on how to use validation study results to decide which commercial business list to use and how to maximize the accuracy of those lists. Using data from a retrospective cohort study [Weight And Veterans' Environments Study (WAVES)], we (a) explain how validity and bias information from existing validation studies (count accuracy, classification accuracy, locational accuracy, as well as potential bias by neighborhood racial/ethnic composition, economic characteristics, and urbanicity) was used to determine which commercial business listing to purchase for retail food outlet data and (b) describe the methods used to maximize the quality of the data, along with the results of this approach. We developed data improvement methods based on existing validation studies. These methods included purchasing records from commercial business lists (InfoUSA and Dun and Bradstreet) based on store/restaurant names as well as standard industrial classification (SIC) codes, reclassifying records by store type, improving the geographic accuracy of records, and deduplicating records. We examined the impact of these procedures on food outlet counts in US census tracts. After cleaning and deduplicating, our strategy resulted in a 17.5% reduction in the count of valid food stores relative to those purchased from InfoUSA and a 5.6% reduction in valid counts of restaurants purchased from Dun and Bradstreet. Locational accuracy was improved for 7.5% of records by applying street addresses of subsequent years to records with post-office (PO) box addresses. In total, up to 83% of US census tracts annually experienced a change (either positive or negative) in the count of retail food outlets between the initial purchase and the final dataset. Our study provides a step-by-step approach to purchasing and processing business list data obtained from commercial vendors. The approach can be followed by studies of any size, including those with datasets too large to process each record by hand, and will promote consistency in characterization of the retail food environment across studies.
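As a sketch of the record-cleaning steps described (name/address normalization, then deduplication), consider the following; the column names and records are hypothetical, not the WAVES data layout, and real pipelines add fuzzy matching and PO-box address replacement.

```python
import pandas as pd

# Hypothetical records; real InfoUSA / Dun and Bradstreet extracts differ.
records = pd.DataFrame({
    "name":    ["Joe's Grocery", "JOES GROCERY", "Taco Hut", "Taco Hut"],
    "address": ["12 Main St", "12 MAIN ST.", "40 Oak Ave", "PO Box 99"],
    "sic":     ["5411", "5411", "5812", "5812"],
})

def normalize(col):
    """Crude normalization: uppercase, strip punctuation, collapse spaces."""
    return (col.str.upper()
               .str.replace(r"[^\w\s]", "", regex=True)
               .str.replace(r"\s+", " ", regex=True)
               .str.strip())

records["name_key"] = normalize(records["name"])
records["addr_key"] = normalize(records["address"])

# Drop exact key duplicates; note the PO-box record survives and would be
# caught only by the street-address substitution step described above.
deduped = records.drop_duplicates(subset=["name_key", "addr_key"])
print(deduped[["name", "address", "sic"]])
```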
NASA Astrophysics Data System (ADS)
Hassinen, S.; Balis, D.; Bauer, H.; Begoin, M.; Delcloo, A.; Eleftheratos, K.; Gimeno Garcia, S.; Granville, J.; Grossi, M.; Hao, N.; Hedelt, P.; Hendrick, F.; Hess, M.; Heue, K.-P.; Hovila, J.; Jønch-Sørensen, H.; Kalakoski, N.; Kiemle, S.; Kins, L.; Koukouli, M. E.; Kujanpää, J.; Lambert, J.-C.; Lerot, C.; Loyola, D.; Määttä, A.; Pedergnana, M.; Pinardi, G.; Romahn, F.; van Roozendael, M.; Lutz, R.; De Smedt, I.; Stammes, P.; Steinbrecht, W.; Tamminen, J.; Theys, N.; Tilstra, L. G.; Tuinder, O. N. E.; Valks, P.; Zerefos, C.; Zimmer, W.; Zyrichidou, I.
2015-07-01
The three GOME-2 instruments will provide unique and long data sets for atmospheric research and applications. The complete time period will be 2007-2022, covering the period of ozone depletion as well as the beginning of ozone layer recovery. Besides ozone chemistry, the GOME-2 products are important, e.g., for air quality studies, climate modeling, policy monitoring, and hazard warnings. The heritage for GOME-2 lies in the ERS/GOME and Envisat/SCIAMACHY instruments. The current Level 2 (L2) data cover a wide range of products such as trace gas columns (NO2, BrO, H2CO, H2O, SO2), tropospheric columns of NO2, total ozone columns and vertical ozone profiles in high and low spatial resolution, absorbing aerosol indices from the main science channels as well as from the polarization channels (AAI, AAI-PMD), a Lambertian-equivalent reflectivity database, clear-sky and cloud-corrected UV indices and surface UV fields with different weightings, and photolysis rates. The Ozone Monitoring and Atmospheric Composition Satellite Application Facility (O3M SAF) processing and data dissemination is operational and running 24/7. Data quality is guaranteed by detailed review processes for the algorithms, validation of the products, and continuous quality monitoring of the products and processing. This is an overview paper providing the O3M SAF project background, current status, and future plans for utilization of the GOME-2 data. An important focus is the provision of summaries of the GOME-2 products, including product principles and validation examples together with sample product images. Furthermore, this paper collects the references to the detailed product algorithm and validation papers.
Kopit, Lauren M.; Kim, Eun Bae; Siezen, Roland J.; Harris, Linda J.
2014-01-01
Enterococcus faecium NRRL B-2354 is a surrogate microorganism used in place of pathogens for validation of thermal processing technologies and systems. We evaluated the safety of strain NRRL B-2354 based on its genomic and functional characteristics. The genome of E. faecium NRRL B-2354 was sequenced and found to comprise a 2,635,572-bp chromosome and a 214,319-bp megaplasmid. A total of 2,639 coding sequences were identified, including 45 genes unique to this strain. Hierarchical clustering of the NRRL B-2354 genome with 126 other E. faecium genomes as well as pbp5 locus comparisons and multilocus sequence typing (MLST) showed that the genotype of this strain is most similar to commensal, or community-associated, strains of this species. E. faecium NRRL B-2354 lacks antibiotic resistance genes, and both NRRL B-2354 and its clonal relative ATCC 8459 are sensitive to clinically relevant antibiotics. This organism also lacks, or contains nonfunctional copies of, enterococcal virulence genes including acm, cyl, the ebp operon, esp, gelE, hyl, IS16, and associated phenotypes. It does contain scm, sagA, efaA, and pilA, although either these genes were not expressed or their roles in enterococcal virulence are not well understood. Compared with the clinical strains TX0082 and 1,231,502, E. faecium NRRL B-2354 was more resistant to acidic conditions (pH 2.4) and high temperatures (60°C) and was able to grow in 8% ethanol. These findings support the continued use of E. faecium NRRL B-2354 in thermal process validation of food products. PMID:24413604
An audit of diabetes care at 3 centres in Alexandria.
Abou El-Enein, N Y; Abolfotouh, M A
2008-01-01
Selected indicators for the structure, process, and outcome of care were used to audit diabetes care in 3 centres in Alexandria. Structure was poor: the main problems included the absence of an appointment and recall system, deficiencies in laboratory resources, and a lack of educational material. Process of care was poor for 69.2% of patients: deficiencies included the absence of essential information in records and the omission of some essential clinical examinations. Degree of control was poor for 49.2% of patients, and only 30.6% had no complications. Compliance with appointments was good for about 80% of patients. Better outcome (fewer complications and higher compliance) was significantly associated with poor process of care; this cannot, however, be considered a valid predictor of outcome, as good care might be initiated by the presence of complications.
Jones, Natalie; Schneider, Gary; Kachroo, Sumesh; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W
2012-01-01
The Food and Drug Administration's (FDA) Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of acute respiratory failure (ARF). PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the ARF HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify ARF, including validation estimates of the coding algorithms. Our search revealed a deficiency of literature focusing on ARF algorithms and validation estimates. Only two studies provided codes for ARF, each using related yet different ICD-9 codes (i.e., ICD-9 codes 518.8, "other diseases of lung," and 518.81, "acute respiratory failure"). Neither study provided validation estimates. Research needs to be conducted on designing validation studies to test ARF algorithms and on estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.
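In claims data such an algorithm reduces to flagging records that carry the relevant ICD-9 codes. The sketch below (record layout hypothetical) shows how the two published code sets yield different case counts, which is exactly why validation estimates of predictive power are needed.

```python
# Minimal case-finding sketch; the claim record layout is hypothetical.
NARROW_CODES = {"518.81"}           # "acute respiratory failure"
BROAD_CODES = {"518.8", "518.81"}   # adds "other diseases of lung"

claims = [
    {"patient_id": 1, "dx_codes": ["428.0", "518.81"]},
    {"patient_id": 2, "dx_codes": ["518.8"]},
    {"patient_id": 3, "dx_codes": ["493.92"]},
]

def flag_cases(claims, code_set):
    """Return ids of patients with at least one claim bearing a code in code_set."""
    return {c["patient_id"] for c in claims
            if code_set.intersection(c["dx_codes"])}

print("narrow:", flag_cases(claims, NARROW_CODES))  # {1}
print("broad: ", flag_cases(claims, BROAD_CODES))   # {1, 2}
```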
Perception of competence in middle school physical education: instrument development and validation.
Scrabis-Fletcher, Kristin; Silverman, Stephen
2010-03-01
Perception of Competence (POC) has been studied extensively in physical activity (PA) research, with similar instruments adapted for physical education (PE) research. Such instruments do not account for the unique PE learning environment. Therefore, an instrument was developed and its scores validated to measure POC in middle school PE. A multiphase design was used, consisting of an intensive theoretical review, an elicitation study, a prepilot study, a pilot study, a content validation study, and a final validation study (N = 1281). Data analysis included a multistep iterative process to identify the best model fit. A three-factor model for POC was tested and resulted in root mean square error of approximation = .09, root mean square residual = .07, goodness-of-fit index = .90, and adjusted goodness-of-fit index = .86, values in the acceptable range (Hu & Bentler, 1999). A two-factor model was also tested and resulted in a good fit (two-factor fit index values = .05, .03, .98, and .97, respectively). The results of this study suggest that an instrument using a three- or two-factor model provides reliable and valid scores of POC measurement in middle school PE.
Mohammadsalehi, Narges; Mohammadbeigi, Abolfazl; Jadidi, Rahmatollah; Anbari, Zohreh; Ghaderi, Ebrahim; Akbari, Mojtaba
2015-01-01
Background: Reliability and validity are the key concepts in measurement processes. The Young internet addiction test (YIAT) is regarded as a valid and reliable questionnaire in English-speaking countries for the diagnosis of Internet-related behavior disorders. Objectives: This study aimed at validating the Persian version of the YIAT in Iranian society. Patients and Methods: A pilot and a cross-sectional study were conducted on 28 and 254 students of Qom University of Medical Sciences, respectively, in order to validate the Persian version of the YIAT. Forward and backward translations were conducted to develop a Persian version of the scale. Reliability was measured by test-retest, Cronbach's alpha, and the intraclass correlation coefficient (ICC). Face, content, and construct validity were assessed using the importance score index, content validity ratio (CVR), content validity index (CVI), correlation matrix, and factor analysis. The SPSS software was used for data analysis. Results: Cronbach's alpha was 0.917 (CI 95%: 0.901 - 0.931). The average scale-level CVI was calculated to be 0.74; the CVI for each item was higher than 0.83, and the average CVI was 0.89. Factor analysis extracted three factors, personal activities disorder (PAD), emotional and mood disorder (EMD), and social activities disorder (SAD), accounting for more than 55.8% of total variance. The ICCs for the factors of the Persian version were r = 0.884 (CI 95%: 0.861 - 0.904) for PAD, r = 0.766 (CI 95%: 0.718 - 0.808) for EMD, and r = 0.745 (CI 95%: 0.686 - 0.795) for SAD. Conclusions: Our study showed that the Persian version of the YIAT is suitable for use with Iranian populations. The reliability of the instrument was very good, and the validity of the translated scale was sufficient. In addition, the reliability and validity of the three extracted factors of the YIAT were evaluated and found acceptable. PMID:26495253
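Cronbach's alpha, the main internal-consistency statistic reported above, is simple to compute from an item-score matrix; a minimal sketch with invented 5-point responses:

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents x n_items) score matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    X = np.asarray(items, dtype=float)
    k = X.shape[1]
    item_vars = X.var(axis=0, ddof=1).sum()
    total_var = X.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1.0 - item_vars / total_var)

# Invented responses: rows = respondents, columns = items (1-5 scale)
X = [[4, 5, 4], [2, 2, 3], [5, 4, 5], [3, 3, 2], [4, 4, 4]]
print(f"alpha = {cronbach_alpha(X):.2f}")
```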
Mining Twitter Data to Augment NASA GPM Validation
NASA Technical Reports Server (NTRS)
Teng, Bill; Albayrak, Arif; Huffman, George; Vollmer, Bruce; Loeser, Carlee; Acker, Jim
2017-01-01
The Twitter data stream is an important new source of real-time and historical global information for potentially augmenting the validation program of NASA's Global Precipitation Measurement (GPM) mission. There have been other similar uses of Twitter, though mostly related to natural hazards monitoring and management. The validation of satellite precipitation estimates is challenging, because many regions lack data or access to data, especially outside of the U.S. and in remote and developing areas. The time-varying set of "precipitation" tweets can be thought of as an organic network of rain gauges, potentially providing a widespread view of precipitation occurrence. Twitter provides a large crowd for crowdsourcing. During a 24-hour period in the middle of the snow storm this past March in the U.S. Northeast, we collected more than 13,000 relevant precipitation tweets with exact geolocation. The overall objective of our project is to determine the extent to which processed tweets can provide additional information that improves the validation of GPM data. Though our current effort focuses on tweets and precipitation, our approach is general and applicable to other social media and other geophysical measurements. Specifically, we have developed an operational infrastructure for processing tweets, in a format suitable for analysis with GPM data; engaged with potential participants, both passive and active, to "enrich" the Twitter stream; and inter-compared "precipitation" tweet data, ground station data, and GPM retrievals. In this presentation, we detail the technical capabilities of our tweet processing infrastructure, including data abstraction, feature extraction, search engine, context-awareness, real-time processing, and high volume (big) data processing; various means for "enriching" the Twitter stream; and results of inter-comparisons. Our project should bring a new kind of visibility to Twitter and engender a new kind of appreciation of the value of Twitter by the science research communities.
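The first stage of such an infrastructure, selecting geolocated tweets whose text mentions precipitation, can be sketched as follows; the records and keyword list are illustrative only, and the production pipeline described above consumes the live stream and adds context-aware feature extraction.

```python
import re

# Illustrative, already-fetched tweet records.
tweets = [
    {"text": "Heavy rain flooding the street right now", "coords": (40.7, -74.0)},
    {"text": "I love this song, it's a banger", "coords": (34.1, -118.2)},
    {"text": "Snow just started falling hard", "coords": None},
]

PRECIP_TERMS = re.compile(
    r"\b(rain|raining|snow|snowing|sleet|hail|drizzle)\b", re.IGNORECASE)

def precip_tweets(tweets):
    """Keep geolocated tweets whose text mentions a precipitation term."""
    return [t for t in tweets
            if t["coords"] is not None and PRECIP_TERMS.search(t["text"])]

for t in precip_tweets(tweets):
    print(t["coords"], "-", t["text"])
# The third record mentions snow but lacks exact geolocation, so it is
# dropped -- mirroring the "tweets with exact geolocation" constraint above.
```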
Verification and Validation of Residual Stresses in Bi-Material Composite Rings
DOE Office of Scientific and Technical Information (OSTI.GOV)
Nelson, Stacy Michelle; Hanson, Alexander Anthony; Briggs, Timothy
Process-induced residual stresses commonly occur in composite structures composed of dissimilar materials. These residual stresses form due to differences in the composite materials' coefficients of thermal expansion and the shrinkage upon cure exhibited by polymer matrix materials. Depending upon the specific geometric details of the composite structure and the materials' curing parameters, it is possible that these residual stresses could result in interlaminar delamination or fracture within the composite. Therefore, the consideration of potential residual stresses is important when designing composite parts and their manufacturing processes. However, the experimental determination of residual stresses in prototype parts can be time- and cost-prohibitive. As an alternative to physical measurement, computational tools can be used to quantify potential residual stresses in composite prototype parts. Therefore, the objectives of the presented work are to demonstrate a simple method for simulating residual stresses in composite parts, as well as the potential value of sensitivity and uncertainty quantification techniques during analyses for which material property parameters are unknown. Specifically, a simplified residual stress modeling approach, which accounts for coefficient of thermal expansion mismatch and polymer shrinkage, is implemented within the SIERRA/SolidMechanics code developed by Sandia National Laboratories. Concurrent with the model development, two simple bi-material structures composed of a carbon fiber/epoxy composite and aluminum, a flat plate and a cylinder, are fabricated and the residual stresses are quantified through the measurement of deformation. Then, in the process of validating the developed modeling approach against the experimental residual stress data, manufacturing process simulations of the two simple structures are developed and undergo a formal verification and validation process, including a mesh convergence study, sensitivity analysis, and uncertainty quantification. The simulations' final results show adequate agreement with the experimental measurements, indicating the validity of a simple modeling approach, as well as the necessity of including material parameter uncertainty in the final residual stress predictions.
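Before running a full process simulation of the kind validated above, a first-order screen of the CTE-mismatch driver is useful. The sketch below applies the textbook equal-biaxial constraint estimate sigma = E * d_alpha * dT / (1 - nu); all property values are illustrative assumptions, not the paper's measured inputs, and cure shrinkage is ignored.

```python
# First-order screen of CTE-mismatch residual stress; illustrative values only.
E_composite = 60e9   # in-plane modulus of the carbon/epoxy laminate, Pa
nu = 0.3             # Poisson's ratio
alpha_comp = 3e-6    # laminate in-plane CTE, 1/K
alpha_al = 23e-6     # aluminium CTE, 1/K
dT = -150.0          # cure temperature down to room temperature, K

# Equal-biaxial, fully constrained estimate; sign and magnitude serve only
# as an order-of-magnitude check against the finite element prediction.
d_alpha = alpha_al - alpha_comp
sigma = E_composite * d_alpha * dT / (1.0 - nu)
print(f"estimated residual stress ~ {sigma / 1e6:.0f} MPa")
```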
Schnyer, Rosa N; Conboy, Lisa A; Jacobson, Eric; McKnight, Patrick; Goddard, Thomas; Moscatelli, Francesca; Legedza, Anna T R; Kerr, Catherine; Kaptchuk, Ted J; Wayne, Peter M
2005-12-01
The diagnostic framework and clinical reasoning process in Chinese medicine emphasizes the contextual and qualitative nature of a patient's illness. Chinese medicine assessment data may help interpret clinical outcomes. As part of a study aimed at assessing the validity and improving the inter-rater reliability of the Chinese diagnostic process, a structured assessment instrument was developed for use in clinical trials of acupuncture and other Chinese medical therapies. To foster collaboration and maximize resources and information, an interdisciplinary advisory team was assembled. Under the guidance of two group process facilitators, and in order to establish whether the assessment instrument was consistent with accepted Chinese medicine diagnostic categories (face validity) and included the full range of each concept's meaning (content validity), a panel of Traditional Chinese Medicine (TCM) expert clinicians was convened and their responses were organized using the Delphi process, an iterative, anonymous, idea-generating and consensus-building process. An aggregate rating measure was obtained by taking the mean of mean ratings for each question across all 10 experts. Over three rounds, the overall rating increased from 7.4 (SD = 1.3) in Round 1 to 9.1 (SD = 0.5) in Round 3. The level of agreement among clinicians was measured by a decrease in SD. The final instrument TEAMSI-TCM (Traditional East Asian Medicine Structured Interview, TCM version) uses the pattern differentiation model characteristic of TCM. This modular, dynamic version was specifically designed to assess women, with a focus on gynecologic conditions; with modifications it can be adapted for use with other populations and conditions. TEAMSI-TCM is a prescriptive instrument that guides clinicians to use the proper indicators, combine them in a systematic manner, and generate conclusions. In conjunction with treatment manualization and training it may serve to increase inter-rater reliability and inter-trial reproducibility in Chinese medicine clinical trials. Testing of the validity and reliability of this instrument currently is underway.
ATtRACT-a database of RNA-binding proteins and associated motifs.
Giudice, Girolamo; Sánchez-Cabo, Fátima; Torroja, Carlos; Lara-Pezzi, Enrique
2016-01-01
RNA-binding proteins (RBPs) play a crucial role in key cellular processes, including RNA transport, splicing, polyadenylation, and stability. Understanding the interaction between RBPs and RNA is key to improving our knowledge of RNA processing, localization, and regulation in a global manner. Despite advances in recent years, a unified non-redundant resource that includes information on experimentally validated motifs and RBPs, with integrated tools to exploit this information, has been lacking. Here, we developed a database named ATtRACT (available at http://attract.cnic.es) that compiles information on 370 RBPs and 1583 RBP consensus binding motifs, 192 of which are not present in any other database. To populate ATtRACT we (i) extracted and hand-curated experimentally validated data from the CISBP-RNA, SpliceAid-F, and RBPDB databases, (ii) integrated and updated the no-longer-available ASD database, and (iii) extracted information from protein-RNA complexes in the Protein Data Bank through computational analyses. ATtRACT also provides efficient algorithms to search for a specific motif and to scan one or more RNA sequences at a time. It also allows discovering de novo motifs enriched in a set of related sequences and comparing them with the motifs included in the database. Database URL: http://attract.cnic.es. © The Author(s) 2016. Published by Oxford University Press.
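The motif-scanning functionality mentioned above reduces, in its simplest form, to sliding an IUPAC consensus pattern along an RNA sequence. A minimal sketch follows; the motif and sequence are invented for illustration, and ATtRACT itself also supports position weight matrices and de novo discovery.

```python
import re

# IUPAC nucleotide codes mapped to regex character classes (RNA alphabet).
IUPAC = {"A": "A", "C": "C", "G": "G", "U": "U",
         "R": "[AG]", "Y": "[CU]", "S": "[CG]", "W": "[AU]",
         "K": "[GU]", "M": "[AC]", "N": "[ACGU]"}

def scan(sequence, motif):
    """Yield (0-based position, matched text) for each occurrence of an
    IUPAC consensus motif in an RNA sequence."""
    pattern = re.compile("".join(IUPAC[base] for base in motif))
    for m in pattern.finditer(sequence):
        yield m.start(), m.group()

rna = "ACGUUGUAUAUAUGCCUGUAAC"
for pos, hit in scan(rna, "UGUANAUA"):   # hypothetical consensus motif
    print(pos, hit)
```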
Analytical difficulties facing today's regulatory laboratories: issues in method validation.
MacNeil, James D
2012-08-01
The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as the development and maintenance of expertise, the maintenance and updating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in import and export testing of food require management of such changes in a context that includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation, or how to design a validation scheme for a complex multi-residue method, require a well-considered strategy based on current knowledge of international guidance documents and regulatory requirements, as well as the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process is proposed for the selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.
Errors in reporting on dissolution research: methodological and statistical implications.
Jasińska-Stroschein, Magdalena; Kurczewska, Urszula; Orszulak-Michalak, Daria
2017-02-01
In vitro dissolution testing provides useful information at the clinical and preclinical stages of the drug development process. This study covers pharmaceutical papers on dissolution research published in Polish journals between 2010 and 2015. They were analyzed with regard to the information provided by authors about the methods chosen, the validation performed, statistical reporting, and the assumptions required to properly compare release profiles, in light of current guideline documents on dissolution methodology and its validation. Of all the papers included in the study, 23.86% presented at least one set of validation parameters, 63.64% gave the results of the weight uniformity test, 55.68% reported content determination, 97.73% reported dissolution testing conditions, and 50% discussed a comparison of release profiles. The assumptions behind the methods used to compare dissolution profiles were discussed in 6.82% of papers. By means of example analyses, we demonstrate that the outcome can be influenced by the violation of several assumptions or by the selection of an improper method to compare dissolution profiles. A clearer description of the procedures would undoubtedly increase the quality of papers in this area.
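The most common approach to comparing release profiles, and one whose assumptions (identical sampling times, limited variability, at most one point beyond 85% dissolved) were often glossed over in the papers surveyed, is the f2 similarity factor. A minimal sketch with invented profiles:

```python
import numpy as np

def f2(reference, test):
    """FDA/EMA similarity factor f2 for two dissolution profiles sampled at
    the same time points (cumulative percent dissolved). f2 >= 50 is the
    conventional reading for 'similar' profiles."""
    r = np.asarray(reference, dtype=float)
    t = np.asarray(test, dtype=float)
    msd = np.mean((r - t) ** 2)
    return 50.0 * np.log10(100.0 / np.sqrt(1.0 + msd))

# Invented cumulative %-dissolved profiles at common time points
ref = [18, 40, 62, 79, 91]
tst = [15, 36, 60, 80, 93]
print(f"f2 = {f2(ref, tst):.1f}")   # about 77.7 here, i.e. similar
```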
[The Confusion Assessment Method: Transcultural adaptation of a French version].
Antoine, V; Belmin, J; Blain, H; Bonin-Guillaume, S; Goldsmith, L; Guerin, O; Kergoat, M-J; Landais, P; Mahmoudi, R; Morais, J A; Rataboul, P; Saber, A; Sirvain, S; Wolfklein, G; de Wazieres, B
2018-05-01
The Confusion Assessment Method (CAM) is a validated key tool in clinical practice and research programs to diagnose delirium and assess its severity. There is no validated French version of the CAM training manual and coding guide (Inouye SK). The aim of this study was to establish a consensual French version of the CAM and its manual. Cross-cultural adaptation was performed to achieve equivalence between the original version and a French adapted version of the CAM manual. A rigorous process was conducted, including control of the cultural adequacy of the tool's components, double forward and back translations, reconciliation, expert committee review (including bilingual translators of different nationalities, a linguist, highly qualified clinicians, and methodologists), and pretesting. A consensual French version of the CAM was achieved. Implementation of the French version of the CAM in daily clinical practice will enable optimal diagnosis of delirium and enhance communication between health professionals in French-speaking countries. Validity and psychometric properties are being tested in a French multicenter cohort, opening up new perspectives for improved quality of care and research programs in French-speaking countries. Copyright © 2018 Elsevier Masson SAS. All rights reserved.
A review of the solar array manufacturing industry costing standards
NASA Technical Reports Server (NTRS)
1977-01-01
The solar array manufacturing industry costing standards model is designed to compare the cost of producing solar arrays using alternative manufacturing processes. Constructive criticism of the methodology used is intended to enhance its implementation as a practical design tool. Three main elements of the procedure include workbook format and presentation, theoretical model validity and standard financial parameters.
Validating Alternative Modes of Scoring for Coloured Progressive Matrices.
ERIC Educational Resources Information Center
Razel, Micha; Eylon, Bat-Sheva
Conventional scoring of the Coloured Progressive Matrices (CPM) was compared with three methods of multiple weight scoring. The methods include: (1) theoretical weighting in which the weights were based on a theory of cognitive processing; (2) judged weighting in which the weights were given by a group of nine adult expert judges; and (3)…