Sample records for method validation included

  1. 78 FR 20695 - Walk-Through Metal Detectors and Hand-Held Metal Detectors Test Method Validation

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-04-05

    ... Detectors and Hand-Held Metal Detectors Test Method Validation AGENCY: National Institute of Justice, DOJ... ensure that the test methods in the standards are properly documented, NIJ is requesting proposals (including price quotes) for test method validation efforts from testing laboratories. NIJ is also seeking...

  2. Software validation applied to spreadsheets used in laboratories working under ISO/IEC 17025

    NASA Astrophysics Data System (ADS)

    Banegas, J. M.; Orué, M. W.

    2016-07-01

Several documents deal with software validation. Nevertheless, most are too complex to be applied to validate spreadsheets - surely the most used software in laboratories working under ISO/IEC 17025. The method proposed in this work is intended to be directly applied to validate spreadsheets. It includes a systematic way to document requirements, operational aspects of validation, and a simple method to keep records of validation results and modification history. This method is currently being used in an accredited calibration laboratory, where it has proved practical and efficient.
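
A minimal sketch of the kind of spreadsheet check the record above describes: recompute each spreadsheet result independently, compare within a stated tolerance, and keep the comparison as a validation record. The cell names, readings, and tolerances below are hypothetical illustrations, not the authors' procedure.

```python
import statistics

def validate_cell(name, spreadsheet_value, reference_value, tol):
    """Build a validation record comparing a spreadsheet result to a reference value."""
    passed = abs(spreadsheet_value - reference_value) <= tol
    return {"cell": name, "spreadsheet": spreadsheet_value,
            "reference": reference_value, "tolerance": tol, "pass": passed}

# Calibration readings that the spreadsheet under test also processes
readings = [9.98, 10.02, 10.01, 9.99, 10.00]

# Values as read from the spreadsheet under test (hypothetical figures)
sheet_mean = 10.000
sheet_sd = 0.0158

records = [
    validate_cell("mean", sheet_mean, statistics.mean(readings), 0.001),
    validate_cell("sd", sheet_sd, statistics.stdev(readings), 0.001),
]
for r in records:
    print(r["cell"], "PASS" if r["pass"] else "FAIL")
```

Keeping the returned dictionaries (rather than just a pass/fail flag) gives the modification-history record the abstract calls for.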

  3. Fault-tolerant clock synchronization validation methodology [in computer systems]

    NASA Technical Reports Server (NTRS)

    Butler, Ricky W.; Palumbo, Daniel L.; Johnson, Sally C.

    1987-01-01

    A validation method for the synchronization subsystem of a fault-tolerant computer system is presented. The high reliability requirement of flight-crucial systems precludes the use of most traditional validation methods. The method presented utilizes formal design proof to uncover design and coding errors and experimentation to validate the assumptions of the design proof. The experimental method is described and illustrated by validating the clock synchronization system of the Software Implemented Fault Tolerance computer. The design proof of the algorithm includes a theorem that defines the maximum skew between any two nonfaulty clocks in the system in terms of specific system parameters. Most of these parameters are deterministic. One crucial parameter is the upper bound on the clock read error, which is stochastic. The probability that this upper bound is exceeded is calculated from data obtained by the measurement of system parameters. This probability is then included in a detailed reliability analysis of the system.
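
The stochastic step described above reduces to estimating, from measured data, the probability that the clock-read error exceeds its assumed upper bound. A hedged sketch with invented sample values (the SIFT experiments used far larger data sets and a fitted distribution tail, not a raw empirical count):

```python
# Measured clock-read errors, in microseconds (synthetic illustration data)
read_errors_us = [1.2, 0.8, 1.5, 0.9, 2.1, 1.1, 0.7, 1.9, 1.3, 3.4]

# Assumed upper bound on the clock-read error used in the design proof
epsilon_bound_us = 3.0

# Empirical probability that the bound is exceeded; this probability then
# feeds the system reliability analysis mentioned in the abstract.
exceed = sum(1 for e in read_errors_us if e > epsilon_bound_us)
p_exceed = exceed / len(read_errors_us)
print(f"P(read error > bound) ~ {p_exceed:.2f}")
```

In practice a distributional model would be fitted to the tail rather than counting exceedances directly, since the events of interest are rare.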

  4. Experimental validation of structural optimization methods

    NASA Technical Reports Server (NTRS)

    Adelman, Howard M.

    1992-01-01

The topic of validating structural optimization methods by use of experimental results is addressed. The need for validating the methods as a way of effecting a greater and an accelerated acceptance of formal optimization methods by practicing engineering designers is described. The range of validation strategies is defined, which includes comparison of optimization results with more traditional design approaches, establishing the accuracy of analyses used, and finally experimental validation of the optimization results. Examples of the use of experimental results to validate optimization techniques are described. The examples include experimental validation of the following: optimum design of a trussed beam; combined control-structure design of a cable-supported beam simulating an actively controlled space structure; minimum weight design of a beam with frequency constraints; minimization of the vibration response of a helicopter rotor blade; minimum weight design of a turbine blade disk; aeroelastic optimization of an aircraft vertical fin; airfoil shape optimization for drag minimization; optimization of the shape of a hole in a plate for stress minimization; optimization to minimize beam dynamic response; and structural optimization of a low vibration helicopter rotor.

  5. A Critical Review of Methods to Evaluate the Impact of FDA Regulatory Actions

    PubMed Central

    Briesacher, Becky A.; Soumerai, Stephen B.; Zhang, Fang; Toh, Sengwee; Andrade, Susan E.; Wagner, Joann L.; Shoaibi, Azadeh; Gurwitz, Jerry H.

    2013-01-01

    Purpose To conduct a synthesis of the literature on methods to evaluate the impacts of FDA regulatory actions, and identify best practices for future evaluations. Methods We searched MEDLINE for manuscripts published between January 1948 and August 2011 that included terms related to FDA, regulatory actions, and empirical evaluation; the review additionally included FDA-identified literature. We used a modified Delphi method to identify preferred methodologies. We included studies with explicit methods to address threats to validity, and identified designs and analytic methods with strong internal validity that have been applied to other policy evaluations. Results We included 18 studies out of 243 abstracts and papers screened. Overall, analytic rigor in prior evaluations of FDA regulatory actions varied considerably; less than a quarter of studies (22%) included control groups. Only 56% assessed changes in the use of substitute products/services, and 11% examined patient health outcomes. Among studies meeting minimal criteria of rigor, 50% found no impact or weak/modest impacts of FDA actions and 33% detected unintended consequences. Among those studies finding significant intended effects of FDA actions, all cited the importance of intensive communication efforts. There are preferred methods with strong internal validity that have yet to be applied to evaluations of FDA regulatory actions. Conclusions Rigorous evaluations of the impact of FDA regulatory actions have been limited and infrequent. Several methods with strong internal validity are available to improve trustworthiness of future evaluations of FDA policies. PMID:23847020
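
One design with strong internal validity often recommended for such policy evaluations is the interrupted time series: fit the pre-action trend, project it forward as the counterfactual, and compare observed post-action values against it. The sketch below is a simplified stdlib version with synthetic quarterly data; the review's preferred methods (e.g. segmented regression with a control series) involve more terms and inference.

```python
from statistics import mean

pre = [100, 102, 101, 104, 103, 106]   # outcome per quarter before the FDA action
post = [95, 93, 94, 92]                # outcome per quarter after the action

# Closed-form simple linear regression on the pre period: y = a + b*t
t = list(range(len(pre)))
tm, ym = mean(t), mean(pre)
b = sum((ti - tm) * (yi - ym) for ti, yi in zip(t, pre)) / sum((ti - tm) ** 2 for ti in t)
a = ym - b * tm

# Counterfactual: project the pre-period trend across the post period
proj = [a + b * (len(pre) + i) for i in range(len(post))]
effect = mean(post) - mean(proj)
print(f"estimated level change vs. counterfactual: {effect:.1f}")
```

Without the counterfactual projection, a naive pre/post mean comparison would understate the effect here, which is exactly the kind of threat to validity the review flags in uncontrolled studies.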

  6. Specification and Preliminary Validation of IAT (Integrated Analysis Techniques) Methods: Executive Summary.

    DTIC Science & Technology

    1985-03-01

    ...conceptual framework, and preliminary validation of IAT concepts. Planned work for FY85, including more extensive validation, is also described. The approach: 1) identify needs and requirements for IAT; 2) develop the IAT conceptual framework; 3) validate IAT methods; 4) develop applications materials.

  7. Analyzing the Validity of the Adult-Adolescent Parenting Inventory for Low-Income Populations

    ERIC Educational Resources Information Center

    Lawson, Michael A.; Alameda-Lawson, Tania; Byrnes, Edward

    2017-01-01

    Objectives: The purpose of this study was to examine the construct and predictive validity of the Adult-Adolescent Parenting Inventory (AAPI-2). Methods: The validity of the AAPI-2 was evaluated using multiple statistical methods, including exploratory factor analysis, confirmatory factor analysis, and latent class analysis. These analyses were…

  8. Analytical difficulties facing today's regulatory laboratories: issues in method validation.

    PubMed

    MacNeil, James D

    2012-08-01

    The challenges facing analytical laboratories today are not unlike those faced in the past, although both the degree of complexity and the rate of change have increased. Challenges such as development and maintenance of expertise, maintenance and up-dating of equipment, and the introduction of new test methods have always been familiar themes for analytical laboratories, but international guidelines for laboratories involved in the import and export testing of food require management of such changes in a context which includes quality assurance, accreditation, and method validation considerations. Decisions as to when a change in a method requires re-validation of the method or on the design of a validation scheme for a complex multi-residue method require a well-considered strategy, based on a current knowledge of international guidance documents and regulatory requirements, as well the laboratory's quality system requirements. Validation demonstrates that a method is 'fit for purpose', so the requirement for validation should be assessed in terms of the intended use of a method and, in the case of change or modification of a method, whether that change or modification may affect a previously validated performance characteristic. In general, method validation involves method scope, calibration-related parameters, method precision, and recovery. Any method change which may affect method scope or any performance parameters will require re-validation. Some typical situations involving change in methods are discussed and a decision process proposed for selection of appropriate validation measures. © 2012 John Wiley & Sons, Ltd.

  9. Directed Design of Experiments for Validating Probability of Detection Capability of a Testing System

    NASA Technical Reports Server (NTRS)

    Generazio, Edward R. (Inventor)

    2012-01-01

    A method of validating a probability of detection (POD) testing system using directed design of experiments (DOE) includes recording an input data set of observed hit and miss or analog data for sample components as a function of size of a flaw in the components. The method also includes processing the input data set to generate an output data set having an optimal class width, assigning a case number to the output data set, and generating validation instructions based on the assigned case number. An apparatus includes a host machine for receiving the input data set from the testing system and an algorithm for executing DOE to validate the test system. The algorithm applies DOE to the input data set to determine a data set having an optimal class width, assigns a case number to that data set, and generates validation instructions based on the case number.
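
The first step described above, grouping hit/miss observations by flaw size and computing an empirical probability of detection per class, can be sketched as follows. A fixed class width stands in for the "optimal" width that the patented DOE procedure would determine, and all data are synthetic.

```python
# (flaw size, hit) pairs: 1 = flaw detected, 0 = missed (synthetic data)
hits = [(0.2, 0), (0.3, 0), (0.4, 1), (0.5, 1), (0.6, 1),
        (0.7, 1), (0.8, 1), (0.9, 1), (1.0, 1), (1.1, 1)]

class_width = 0.4  # assumed class width for illustration only
bins = {}
for size, hit in hits:
    key = int(size / class_width)          # index of the size class
    bins.setdefault(key, []).append(hit)

for key in sorted(bins):
    lo, hi = key * class_width, (key + 1) * class_width
    pod = sum(bins[key]) / len(bins[key])  # empirical POD in this class
    print(f"flaw size [{lo:.1f}, {hi:.1f}): POD = {pod:.2f}")
```

The class width matters: too narrow and each bin has too few observations for a stable POD estimate, too wide and the POD-versus-size trend is smeared out, which is why the method treats its selection as an optimization problem.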

  10. Method validation for chemical composition determination by electron microprobe with wavelength dispersive spectrometer

    NASA Astrophysics Data System (ADS)

    Herrera-Basurto, R.; Mercader-Trejo, F.; Muñoz-Madrigal, N.; Juárez-García, J. M.; Rodriguez-López, A.; Manzano-Ramírez, A.

    2016-07-01

The main goal of method validation is to demonstrate that the method is suitable for its intended purpose. One of the advantages of analytical method validation is the level of confidence it provides in the measurement results reported to satisfy a specific objective. Elemental composition determination by wavelength dispersive spectrometer (WDS) microanalysis has been used across extremely wide areas, mainly in the field of materials science and in impurity determinations in geological, biological and food samples. However, little information is reported about the validation of the applied methods. Herein, results of the in-house method validation for elemental composition determination by WDS are shown. SRM 482, a set of Cu-Au binary alloys of different compositions, was used during the validation protocol, following the recommendations for method validation proposed by Eurachem. This paper can be taken as a reference for the evaluation of the validation parameters most frequently requested for accreditation under the requirements of the ISO/IEC 17025 standard: selectivity, limit of detection, linear interval, sensitivity, precision, trueness and uncertainty. A model for uncertainty estimation was proposed, including systematic and random errors. In addition, parameters evaluated during the validation process were also considered as part of the uncertainty model.
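
The uncertainty model described above, combining systematic and random contributions, typically follows the GUM approach: combine standard uncertainties in quadrature and multiply by a coverage factor. A minimal sketch with hypothetical contributions for a WDS mass-fraction result (the paper's actual budget components are not listed here):

```python
import math

# Hypothetical standard-uncertainty contributions, in wt %
u_random = 0.15      # repeatability of repeated WDS readings
u_reference = 0.10   # certified uncertainty of the SRM 482 reference value
u_drift = 0.05       # instrument drift contribution

# Combined standard uncertainty: root sum of squares of the contributions
u_combined = math.sqrt(u_random**2 + u_reference**2 + u_drift**2)

# Expanded uncertainty with coverage factor k = 2 (~95 % coverage)
U_expanded = 2 * u_combined
print(f"u_c = {u_combined:.3f} wt %, U (k=2) = {U_expanded:.3f} wt %")
```

Listing each contribution separately, as above, is what lets validation parameters (precision, trueness) be folded into the same budget, as the abstract notes.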

  11. VALIDATION OF ANALYTICAL METHODS AND INSTRUMENTATION FOR BERYLLIUM MEASUREMENT: REVIEW AND SUMMARY OF AVAILABLE GUIDES, PROCEDURES, AND PROTOCOLS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekechukwu, A

Method validation is the process of evaluating whether an analytical method is acceptable for its intended purpose. For pharmaceutical methods, guidelines from the United States Pharmacopeia (USP), International Conference on Harmonisation (ICH), and the United States Food and Drug Administration (USFDA) provide a framework for performing such validations. In general, methods for regulatory compliance must include studies on specificity, linearity, accuracy, precision, range, detection limit, quantitation limit, and robustness. Elements of these guidelines are readily adapted to the issue of validation for beryllium sampling and analysis. This document provides a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers and books reviewed is given in the Appendix. Available validation documents and guides are listed therein; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of the validation process at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all referenced documents were published in English.

  12. The Method Effect in Communicative Testing.

    ERIC Educational Resources Information Center

    Canale, Michael

    1981-01-01

    A focus on test validity includes a consideration of the way a test measures that which it proposes to test; in other words, the validity of a test depends on method as well as content. This paper examines three areas of concern: (1) some features of communication that test method should reflect, (2) the main components of method, and (3) some…

  13. Determination of serum levels of imatinib mesylate in patients with chronic myeloid leukemia: validation and application of a new analytical method to monitor treatment compliance

    PubMed Central

    Rezende, Vinícius Marcondes; Rivellis, Ariane Julio; Gomes, Melissa Medrano; Dörr, Felipe Augusto; Novaes, Mafalda Megumi Yoshinaga; Nardinelli, Luciana; Costa, Ariel Lais de Lima; Chamone, Dalton de Alencar Fisher; Bendit, Israel

    2013-01-01

    Objective The goal of this study was to monitor imatinib mesylate therapeutically in the Tumor Biology Laboratory, Department of Hematology and Hemotherapy, Hospital das Clínicas, Faculdade de Medicina, Universidade de São Paulo (USP). A simple and sensitive method to quantify imatinib and its metabolite (CGP74588) in human serum was developed and fully validated in order to monitor treatment compliance. Methods The method used to quantify these compounds in serum included protein precipitation extraction followed by instrumental analysis using high performance liquid chromatography coupled with mass spectrometry. The method was validated for several parameters, including selectivity, precision, accuracy, recovery and linearity. Results The parameters evaluated during the validation stage exhibited satisfactory results based on the Food and Drug Administration and the Brazilian Health Surveillance Agency (ANVISA) guidelines for validating bioanalytical methods. These parameters also showed a linear correlation greater than 0.99 for the concentration range between 0.500 µg/mL and 10.0 µg/mL and a total analysis time of 13 minutes per sample. This study includes results (imatinib serum concentrations) for 308 samples from patients being treated with imatinib mesylate. Conclusion The method developed in this study was successfully validated and is being efficiently used to measure imatinib concentrations in samples from chronic myeloid leukemia patients to check treatment compliance. The imatinib serum levels of patients achieving a major molecular response were significantly higher than those of patients who did not achieve this result. These results are thus consistent with published reports concerning other populations. PMID:23741187
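
The linearity result reported above (correlation above 0.99 over 0.500-10.0 µg/mL) corresponds to a standard calibration-curve check. A sketch with invented detector responses; the concentrations follow the validated range, but the acceptance threshold and data are illustrative:

```python
from math import sqrt

conc = [0.5, 1.0, 2.0, 4.0, 8.0, 10.0]          # µg/mL
area = [12.1, 24.5, 48.9, 99.0, 197.5, 246.8]   # detector response (arbitrary units)

# Pearson correlation coefficient of the calibration points
n = len(conc)
mx, my = sum(conc) / n, sum(area) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(conc, area))
sxx = sum((x - mx) ** 2 for x in conc)
syy = sum((y - my) ** 2 for y in area)
r = sxy / sqrt(sxx * syy)
print(f"r = {r:.4f}  (acceptance: r > 0.99)")
```

A high r alone does not prove linearity at the extremes of the range, which is why guidelines also ask for back-calculated accuracy at each calibration level.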

  14. Validation of an Instrument to Measure High School Students' Attitudes toward Fitness Testing

    ERIC Educational Resources Information Center

    Mercier, Kevin; Silverman, Stephen

    2014-01-01

    Purpose: The purpose of this investigation was to develop an instrument that has scores that are valid and reliable for measuring students' attitudes toward fitness testing. Method: The method involved the following steps: (a) an elicitation study, (b) item development, (c) a pilot study, and (d) a validation study. The pilot study included 427…

  15. [Validation of measurement methods and estimation of uncertainty of measurement of chemical agents in the air at workstations].

    PubMed

    Dobecki, Marek

    2012-01-01

This paper reviews the requirements for measurement methods of chemical agents in the air at workstations. European standards, which have the status of Polish standards, comprise requirements and information on sampling strategy, measuring techniques, types of samplers, sampling pumps and methods of occupational exposure evaluation for a given technological process. Measurement methods, including air sampling and the analytical procedure in the laboratory, should be appropriately validated before intended use. In the validation process, selected methods are tested and an uncertainty budget is set up. The validation procedure to be implemented in the laboratory, together with suitable statistical tools and the major components of uncertainty to be taken into consideration, is presented in this paper. Methods of quality control, including sampling and laboratory analyses, are discussed. The relative expanded uncertainty of each measurement, expressed as a percentage, should not exceed the limit values set depending on the type of occupational exposure (short-term or long-term) and the magnitude of exposure to chemical agents in the work environment.
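
The final check described above, relative expanded uncertainty against an exposure-dependent limit, can be sketched as follows. The 30 % limit and all numeric values are illustrative assumptions; the applicable limit depends on the exposure scenario defined in the relevant standard.

```python
concentration = 2.4      # measured air concentration, mg/m3 (hypothetical)
u_combined = 0.26        # combined standard uncertainty, mg/m3 (hypothetical)
k = 2                    # coverage factor for the expanded uncertainty

# Relative expanded uncertainty as a percentage of the measured value
U_rel = 100 * k * u_combined / concentration

limit = 30.0             # assumed limit for this exposure scenario, %
verdict = "OK" if U_rel <= limit else "exceeds limit"
print(f"relative expanded uncertainty: {U_rel:.1f} % ({verdict})")
```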

  16. Demography of Principals' Work and School Improvement: Content Validity of Kentucky's Standards and Indicators for School Improvement (SISI)

    ERIC Educational Resources Information Center

    Lindle, Jane Clark; Stalion, Nancy; Young, Lu

    2005-01-01

    Kentucky's accountability system includes a school-processes audit known as Standards and Indicators for School Improvement (SISI), which is in a nascent stage of validation. Content validity methods include comparison to instruments measuring similar constructs as well as other techniques such as job analysis. This study used a two-phase process…

  17. Novel Automated Morphometric and Kinematic Handwriting Assessment: A Validity Study in Children with ASD and ADHD

    ERIC Educational Resources Information Center

    Dirlikov, Benjamin; Younes, Laurent; Nebel, Mary Beth; Martinelli, Mary Katherine; Tiedemann, Alyssa Nicole; Koch, Carolyn A.; Fiorilli, Diana; Bastian, Amy J.; Denckla, Martha Bridge; Miller, Michael I.; Mostofsky, Stewart H.

    2017-01-01

    This study presents construct validity for a novel automated morphometric and kinematic handwriting assessment, including (1) convergent validity, establishing reliability of automated measures with traditional manual-derived Minnesota Handwriting Assessment (MHA), and (2) discriminant validity, establishing that the automated methods distinguish…

  18. Comparative assessment of bioanalytical method validation guidelines for pharmaceutical industry.

    PubMed

    Kadian, Naveen; Raju, Kanumuri Siva Rama; Rashid, Mamunur; Malik, Mohd Yaseen; Taneja, Isha; Wahajuddin, Muhammad

    2016-07-15

The concepts, importance, and application of bioanalytical method validation have been discussed for a long time, and validation of bioanalytical methods is widely accepted as pivotal before they are taken into routine use. The United States Food and Drug Administration (USFDA) guidelines issued in 2001 have been referenced by every guideline released since, be it from the European Medicines Agency (EMA) in Europe, the National Health Surveillance Agency (ANVISA) in Brazil, the Ministry of Health, Labour and Welfare (MHLW) in Japan, or any other guideline concerning bioanalytical method validation. After 12 years, the USFDA released a new draft guideline for comment in 2013, which covers the latest parameters and topics encountered in bioanalytical method validation and moves toward harmonization of bioanalytical method validation across the globe. Even though the regulatory agencies are in general agreement, significant variations exist in acceptance criteria and methodology. The present review highlights the variations, similarities and comparisons between the bioanalytical method validation guidelines issued by major regulatory authorities worldwide. Additionally, other evaluation parameters such as matrix effect and incurred sample reanalysis, including other stability aspects, are discussed to provide ease of access for designing a bioanalytical method and its validation in compliance with the majority of drug authority guidelines. Copyright © 2016. Published by Elsevier B.V.
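
One acceptance rule on which the major guidelines broadly agree is QC accuracy: bias within +/-15 % of nominal, relaxed to +/-20 % at the lower limit of quantification (LLOQ). A sketch of that check with invented QC results; exact criteria vary by agency, as the review emphasizes.

```python
def accuracy_ok(nominal, measured, is_lloq=False):
    """Return (pass/fail, bias %) for a QC sample under the +/-15 % (20 % at LLOQ) rule."""
    bias_pct = 100 * (measured - nominal) / nominal
    limit = 20.0 if is_lloq else 15.0
    return abs(bias_pct) <= limit, bias_pct

qc_samples = [            # (nominal, measured, is_lloq) - illustrative values
    (0.5, 0.58, True),    # LLOQ QC
    (5.0, 5.4, False),    # mid QC
    (50.0, 42.0, False),  # high QC
]
for nominal, measured, lloq in qc_samples:
    ok, bias = accuracy_ok(nominal, measured, lloq)
    print(f"QC {nominal}: bias {bias:+.1f} % -> {'PASS' if ok else 'FAIL'}")
```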

  19. Simulation verification techniques study: Simulation performance validation techniques document. [for the space shuttle system

    NASA Technical Reports Server (NTRS)

    Duncan, L. M.; Reddell, J. P.; Schoonmaker, P. B.

    1975-01-01

    Techniques and support software for the efficient performance of simulation validation are discussed. Overall validation software structure, the performance of validation at various levels of simulation integration, guidelines for check case formulation, methods for real time acquisition and formatting of data from an all up operational simulator, and methods and criteria for comparison and evaluation of simulation data are included. Vehicle subsystems modules, module integration, special test requirements, and reference data formats are also described.

  20. USING CFD TO ANALYZE NUCLEAR SYSTEMS BEHAVIOR: DEFINING THE VALIDATION REQUIREMENTS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Richard Schultz

    2012-09-01

A recommended protocol to formulate numeric tool specifications and validation needs in concert with practices accepted by regulatory agencies for advanced reactors is described. The protocol is based on the plant type and perceived transient and accident envelopes that translate to boundary conditions for a process that gives: (a) the key phenomena and figures-of-merit which must be analyzed to ensure that the advanced plant can be licensed, (b) the specification of the numeric tool capabilities necessary to perform the required analyses, including bounding calculational uncertainties, and (c) the specification of the validation matrices and experiments, including the desired validation data. The result of applying the process enables a complete program to be defined, including costs, for creating and benchmarking transient and accident analysis methods for advanced reactors. By following a process that is in concert with regulatory agency licensing requirements from start to finish, based on historical acceptance of past licensing submittals, the methods derived and validated have a high probability of regulatory agency acceptance.

  1. Developing and validating a nutrition knowledge questionnaire: key methods and considerations.

    PubMed

    Trakman, Gina Louise; Forsyth, Adrienne; Hoye, Russell; Belski, Regina

    2017-10-01

To outline key statistical considerations and detailed methodologies for the development and evaluation of a valid and reliable nutrition knowledge questionnaire. Literature on questionnaire development in a range of fields was reviewed, and a set of evidence-based guidelines specific to the creation of a nutrition knowledge questionnaire was developed. The recommendations describe key qualitative methods and statistical considerations, and include relevant examples from previous papers and existing nutrition knowledge questionnaires. Where details have been omitted for the sake of brevity, the reader is directed to suitable references. We recommend an eight-step methodology for nutrition knowledge questionnaire development as follows: (i) definition of the construct and development of a test plan; (ii) generation of the item pool; (iii) choice of the scoring system and response format; (iv) assessment of content validity; (v) assessment of face validity; (vi) purification of the scale using item analysis, including item characteristics, difficulty and discrimination; (vii) evaluation of the scale including its factor structure and internal reliability, or Rasch analysis, including assessment of dimensionality and internal reliability; and (viii) gathering of data to re-examine the questionnaire's properties, assess temporal stability and confirm construct validity. Several of these methods have previously been overlooked. The measurement of nutrition knowledge is an important consideration for individuals working in the nutrition field. Improved methods in the development of nutrition knowledge questionnaires, such as the use of factor analysis or Rasch analysis, will enable more confidence in reported measures of nutrition knowledge.
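
Step (vi) above, classical item analysis, can be sketched as follows: item difficulty is the proportion of respondents answering correctly, and discrimination is the corrected item-total (point-biserial) correlation. Response data are invented for illustration.

```python
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / sqrt(sxx * syy)

# rows = respondents, columns = scored items (1 = correct, 0 = incorrect)
responses = [
    [1, 1, 1, 0],
    [1, 0, 1, 1],
    [1, 1, 0, 0],
    [0, 0, 1, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 0],
]

item = 0
scores = [row[item] for row in responses]
rest = [sum(row) - row[item] for row in responses]  # total score excluding the item

difficulty = sum(scores) / len(scores)              # proportion correct
discrimination = pearson(scores, rest)              # corrected item-total correlation
print(f"difficulty = {difficulty:.2f}, discrimination = {discrimination:.2f}")
```

Excluding the item from the total ("corrected" correlation) avoids inflating discrimination by correlating the item with itself, one of the details such guidelines typically spell out.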

  2. VALIDATION OF ANALYTICAL METHODS AND INSTRUMENTATION FOR BERYLLIUM MEASUREMENT: REVIEW AND SUMMARY OF AVAILABLE GUIDES, PROCEDURES, AND PROTOCOLS

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ekechukwu, A.

This document proposes to provide a listing of available sources which can be used to validate analytical methods and/or instrumentation for beryllium determination. A literature review was conducted of available standard methods and publications used for method validation and/or quality control. A comprehensive listing of the articles, papers, and books reviewed is given in Appendix 1. Available validation documents and guides are listed in the appendix; each has a brief description of application and use. In the referenced sources, there are varying approaches to validation and varying descriptions of validation at different stages in method development. This discussion focuses on validation and verification of fully developed methods and instrumentation that have been offered up for use or approval by other laboratories or official consensus bodies such as ASTM International, the International Standards Organization (ISO) and the Association of Official Analytical Chemists (AOAC). This review was conducted as part of a collaborative effort to investigate and improve the state of validation for measuring beryllium in the workplace and the environment. Documents and publications from the United States and Europe are included. Unless otherwise specified, all documents were published in English.

  3. Meeting Report: Validation of Toxicogenomics-Based Test Systems: ECVAM–ICCVAM/NICEATM Considerations for Regulatory Use

    PubMed Central

    Corvi, Raffaella; Ahr, Hans-Jürgen; Albertini, Silvio; Blakey, David H.; Clerici, Libero; Coecke, Sandra; Douglas, George R.; Gribaldo, Laura; Groten, John P.; Haase, Bernd; Hamernik, Karen; Hartung, Thomas; Inoue, Tohru; Indans, Ian; Maurici, Daniela; Orphanides, George; Rembges, Diana; Sansone, Susanna-Assunta; Snape, Jason R.; Toda, Eisaku; Tong, Weida; van Delft, Joost H.; Weis, Brenda; Schechtman, Leonard M.

    2006-01-01

    This is the report of the first workshop “Validation of Toxicogenomics-Based Test Systems” held 11–12 December 2003 in Ispra, Italy. The workshop was hosted by the European Centre for the Validation of Alternative Methods (ECVAM) and organized jointly by ECVAM, the U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), and the National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM). The primary aim of the workshop was for participants to discuss and define principles applicable to the validation of toxicogenomics platforms as well as validation of specific toxicologic test methods that incorporate toxicogenomics technologies. The workshop was viewed as an opportunity for initiating a dialogue between technologic experts, regulators, and the principal validation bodies and for identifying those factors to which the validation process would be applicable. It was felt that to do so now, as the technology is evolving and associated challenges are identified, would be a basis for the future validation of the technology when it reaches the appropriate stage. Because of the complexity of the issue, different aspects of the validation of toxicogenomics-based test methods were covered. The three focus areas include a) biologic validation of toxicogenomics-based test methods for regulatory decision making, b) technical and bioinformatics aspects related to validation, and c) validation issues as they relate to regulatory acceptance and use of toxicogenomics-based test methods. In this report we summarize the discussions and describe in detail the recommendations for future direction and priorities. PMID:16507466

  4. Meeting report: Validation of toxicogenomics-based test systems: ECVAM-ICCVAM/NICEATM considerations for regulatory use.

    PubMed

    Corvi, Raffaella; Ahr, Hans-Jürgen; Albertini, Silvio; Blakey, David H; Clerici, Libero; Coecke, Sandra; Douglas, George R; Gribaldo, Laura; Groten, John P; Haase, Bernd; Hamernik, Karen; Hartung, Thomas; Inoue, Tohru; Indans, Ian; Maurici, Daniela; Orphanides, George; Rembges, Diana; Sansone, Susanna-Assunta; Snape, Jason R; Toda, Eisaku; Tong, Weida; van Delft, Joost H; Weis, Brenda; Schechtman, Leonard M

    2006-03-01

    This is the report of the first workshop "Validation of Toxicogenomics-Based Test Systems" held 11-12 December 2003 in Ispra, Italy. The workshop was hosted by the European Centre for the Validation of Alternative Methods (ECVAM) and organized jointly by ECVAM, the U.S. Interagency Coordinating Committee on the Validation of Alternative Methods (ICCVAM), and the National Toxicology Program (NTP) Interagency Center for the Evaluation of Alternative Toxicological Methods (NICEATM). The primary aim of the workshop was for participants to discuss and define principles applicable to the validation of toxicogenomics platforms as well as validation of specific toxicologic test methods that incorporate toxicogenomics technologies. The workshop was viewed as an opportunity for initiating a dialogue between technologic experts, regulators, and the principal validation bodies and for identifying those factors to which the validation process would be applicable. It was felt that to do so now, as the technology is evolving and associated challenges are identified, would be a basis for the future validation of the technology when it reaches the appropriate stage. Because of the complexity of the issue, different aspects of the validation of toxicogenomics-based test methods were covered. The three focus areas include a) biologic validation of toxicogenomics-based test methods for regulatory decision making, b) technical and bioinformatics aspects related to validation, and c) validation issues as they relate to regulatory acceptance and use of toxicogenomics-based test methods. In this report we summarize the discussions and describe in detail the recommendations for future direction and priorities.

  5. Development and Validation of High Performance Liquid Chromatography Method for Determination Atorvastatin in Tablet

    NASA Astrophysics Data System (ADS)

    Yugatama, A.; Rohmani, S.; Dewangga, A.

    2018-03-01

    Atorvastatin is the primary choice for dyslipidemia treatment. Because the patent on atorvastatin has expired, the pharmaceutical industry produces generic copies of the drug; methods for tablet quality testing, including determination of atorvastatin content in tablets, therefore need to be developed. The purpose of this research was to develop and validate a simple HPLC method for analyzing atorvastatin tablets. The HPLC system used in this experiment consisted of a Cosmosil C18 column (150 × 4.6 mm, 5 µm) as the reversed-phase stationary phase, a mixture of methanol-water at pH 3 (80:20 v/v) as the mobile phase, a flow rate of 1 mL/min, and UV detection at a wavelength of 245 nm. The validation parameters included selectivity, linearity, accuracy, precision, limit of detection (LOD), and limit of quantitation (LOQ). The results of this study indicate that the developed method performed well on all of these parameters for analysis of atorvastatin tablet content. The LOD and LOQ were 0.2 and 0.7 ng/mL, respectively, and the linearity range was 20-120 ng/mL.
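
    The ICH-style relations LOD = 3.3·σ/S and LOQ = 10·σ/S (σ: residual standard deviation of the calibration line, S: its slope) are a common way to obtain limits like those reported above. The sketch below illustrates that calculation on invented calibration data, not the study's own measurements.

```python
# Minimal sketch of LOD/LOQ estimation from a calibration curve,
# using the common ICH-style formulas LOD = 3.3*sigma/S, LOQ = 10*sigma/S.
# The concentrations and peak areas below are hypothetical.

def linear_fit(x, y):
    """Ordinary least-squares fit y = slope * x + intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

def lod_loq(x, y):
    """sigma = residual standard deviation about the fitted line."""
    slope, intercept = linear_fit(x, y)
    residuals = [yi - (slope * xi + intercept) for xi, yi in zip(x, y)]
    sigma = (sum(r ** 2 for r in residuals) / (len(x) - 2)) ** 0.5
    return 3.3 * sigma / slope, 10 * sigma / slope

# Hypothetical calibration: concentration (ng/mL) vs. peak area
conc = [20, 40, 60, 80, 100, 120]
area = [10.1, 20.3, 29.8, 40.2, 50.0, 59.9]
lod, loq = lod_loq(conc, area)
```

    The same residual-based σ serves both limits, which is why LOQ/LOD is always 10/3.3 under these formulas.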

  6. Current issues involving screening and identification of chemical contaminants in foods by mass spectrometry

    USDA-ARS?s Scientific Manuscript database

    Although quantitative analytical methods must be empirically validated prior to their actual use in a variety of applications, including regulatory monitoring of chemical adulterants in foods, validation of qualitative method performance for the analytes and matrices of interest is frequently ignore...

  7. VALUE - A Framework to Validate Downscaling Approaches for Climate Change Studies

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Widmann, Martin; Gutiérrez, José M.; Kotlarski, Sven; Chandler, Richard E.; Hertig, Elke; Wibig, Joanna; Huth, Radan; Wilcke, Renate A. I.

    2015-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. VALUE aims to foster collaboration and knowledge exchange between climatologists, impact modellers, statisticians, and stakeholders to establish an interdisciplinary downscaling community. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. Here, we present the key ingredients of this framework. VALUE's main approach to validation is user-focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur: what is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Do methods fail in representing regional climate change? How good is the overall representation of regional climate, including errors inherited from global climate models? The framework will be the basis for a comprehensive community-open downscaling intercomparison study, but is intended also to provide general guidance for other validation studies.
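
    The validation indices and performance measures such a framework selects can be as simple as bias, mean absolute error, and correlation between observed and downscaled series. A minimal sketch on invented daily temperature data (not VALUE's actual index set):

```python
# Three elementary validation indices for comparing an observed series
# with a downscaled one. Data below are invented for illustration.

def bias(obs, mod):
    """Mean difference (model minus observation)."""
    return sum(m - o for o, m in zip(obs, mod)) / len(obs)

def mae(obs, mod):
    """Mean absolute error."""
    return sum(abs(m - o) for o, m in zip(obs, mod)) / len(obs)

def pearson_r(obs, mod):
    """Pearson correlation coefficient."""
    n = len(obs)
    mo, mm = sum(obs) / n, sum(mod) / n
    cov = sum((o - mo) * (m - mm) for o, m in zip(obs, mod))
    so = sum((o - mo) ** 2 for o in obs) ** 0.5
    sm = sum((m - mm) ** 2 for m in mod) ** 0.5
    return cov / (so * sm)

# Hypothetical observed vs. downscaled daily temperatures (degrees C)
obs = [3.1, 4.0, 5.2, 6.8, 8.0, 7.5, 6.1]
mod = [2.8, 4.4, 5.0, 7.1, 8.6, 7.2, 6.5]
```

    In a user-focused validation tree, the choice among such indices follows from the user problem (e.g., mean climate vs. day-to-day variability).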

  8. VALUE: A framework to validate downscaling approaches for climate change studies

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Widmann, Martin; Gutiérrez, José M.; Kotlarski, Sven; Chandler, Richard E.; Hertig, Elke; Wibig, Joanna; Huth, Radan; Wilcke, Renate A. I.

    2015-01-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. VALUE aims to foster collaboration and knowledge exchange between climatologists, impact modellers, statisticians, and stakeholders to establish an interdisciplinary downscaling community. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. In this paper, we present the key ingredients of this framework. VALUE's main approach to validation is user-focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur: what is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Do methods fail in representing regional climate change? How good is the overall representation of regional climate, including errors inherited from global climate models? The framework will be the basis for a comprehensive community-open downscaling intercomparison study, but is intended also to provide general guidance for other validation studies.

  9. The NIH analytical methods and reference materials program for dietary supplements.

    PubMed

    Betz, Joseph M; Fisher, Kenneth D; Saldanha, Leila G; Coates, Paul M

    2007-09-01

    Quality of botanical products is a great uncertainty that consumers, clinicians, regulators, and researchers face. Definitions of quality abound, and include specifications for sanitation, adventitious agents (pesticides, metals, weeds), and content of natural chemicals. Because dietary supplements (DS) are often complex mixtures, they pose analytical challenges and method validation may be difficult. In response to product quality concerns and the need for validated and publicly available methods for DS analysis, the US Congress directed the Office of Dietary Supplements (ODS) at the National Institutes of Health (NIH) to accelerate an ongoing methods validation process, and the Dietary Supplements Methods and Reference Materials Program was created. The program was constructed from stakeholder input and incorporates several federal procurement and granting mechanisms in a coordinated and interlocking framework. The framework facilitates validation of analytical methods, analytical standards, and reference materials.

  10. Interlaboratory validation of an improved U.S. Food and Drug Administration method for detection of Cyclospora cayetanensis in produce using TaqMan real-time PCR

    USDA-ARS?s Scientific Manuscript database

    A collaborative validation study was performed to evaluate the performance of a new U.S. Food and Drug Administration method developed for detection of the protozoan parasite, Cyclospora cayetanensis, on cilantro and raspberries. The method includes a sample preparation step in which oocysts are re...

  11. Content validity across methods of malnutrition assessment in patients with cancer is limited.

    PubMed

    Sealy, Martine J; Nijholt, Willemke; Stuiver, Martijn M; van der Berg, Marit M; Roodenburg, Jan L N; van der Schans, Cees P; Ottery, Faith D; Jager-Wittenaar, Harriët

    2016-08-01

    To identify malnutrition assessment methods in cancer patients and assess their content validity based on internationally accepted definitions for malnutrition. Systematic review of studies in cancer patients that operationalized malnutrition as a variable, published since 1998. Eleven key concepts, within the three domains reflected by the malnutrition definitions acknowledged by the European Society for Clinical Nutrition and Metabolism (ESPEN) and the American Society for Parenteral and Enteral Nutrition (ASPEN): A: nutrient balance; B: changes in body shape, body area and body composition; and C: function, were used to classify content validity of methods to assess malnutrition. Content validity indices (M-CVIA-C) were calculated per assessment method. Acceptable content validity was defined as M-CVIA-C ≥ 0.80. Thirty-seven assessment methods were identified in the 160 included articles. The Mini Nutritional Assessment (M-CVIA-C = 0.72), Scored Patient-Generated Subjective Global Assessment (M-CVIA-C = 0.61), and Subjective Global Assessment (M-CVIA-C = 0.53) had the highest M-CVIA-C scores. A large number of malnutrition assessment methods are used in cancer research, and the content validity of these methods varies widely. None of these assessment methods has acceptable content validity when compared against a construct based on the ESPEN and ASPEN definitions of malnutrition. Copyright © 2016 Elsevier Inc. All rights reserved.
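
    A content validity index is conventionally the proportion of raters (or, here, key concepts) judged relevant per item, averaged to a scale-level value. The sketch below illustrates that convention with invented 4-point relevance ratings and the 0.80 cutoff used in the review; the authors' exact M-CVIA-C computation may differ.

```python
# Illustrative content validity index (CVI): each rater scores each item
# on a 4-point relevance scale; an item counts as "relevant" when rated
# 3 or 4. Ratings below are invented.

def item_cvi(ratings):
    """Item-level CVI: proportion of raters scoring the item 3 or 4."""
    return sum(1 for r in ratings if r >= 3) / len(ratings)

def scale_cvi_average(items):
    """Scale-level CVI: mean of the item-level CVIs."""
    icvis = [item_cvi(r) for r in items]
    return sum(icvis) / len(icvis)

# Hypothetical ratings: 4 raters x 3 items
items = [
    [4, 3, 4, 3],   # all 4 raters rate item relevant -> I-CVI = 1.0
    [4, 2, 3, 3],   # 3 of 4 relevant -> I-CVI = 0.75
    [2, 2, 3, 4],   # 2 of 4 relevant -> I-CVI = 0.5
]
s_cvi = scale_cvi_average(items)
acceptable = s_cvi >= 0.80   # the review's acceptability threshold
```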

  12. Validation sampling can reduce bias in health care database studies: an illustration using influenza vaccination effectiveness.

    PubMed

    Nelson, Jennifer Clark; Marsh, Tracey; Lumley, Thomas; Larson, Eric B; Jackson, Lisa A; Jackson, Michael L

    2013-08-01

    Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased owing to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. We applied two such methods, namely imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method's ability to reduce bias using the control time period before influenza circulation. Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not use the validation sample confounders. Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from health care database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which the data can be imputed or reweighted using the additional validation sample information. Copyright © 2013 Elsevier Inc. All rights reserved.
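
    The reweighting idea can be illustrated with simple post-stratification: records in the validation sample (which carries the richer confounder data) are weighted by the ratio of full-sample to validation-sample stratum shares before estimating. A toy sketch with invented strata and outcomes, not the study's actual method:

```python
# Toy post-stratification reweighting: make the validation sample match
# the full sample's distribution over a shared covariate, then compute
# a weighted estimate using data measured only in the validation sample.
# All data below are invented.

from collections import Counter

def poststrat_weights(full_strata, val_strata):
    """Weight per validation record: full-sample share / validation share."""
    n_full, n_val = len(full_strata), len(val_strata)
    p_full = Counter(full_strata)
    p_val = Counter(val_strata)
    return [(p_full[s] / n_full) / (p_val[s] / n_val) for s in val_strata]

# Shared covariate (e.g., an age band) in the full and validation samples
full_strata = ["young"] * 700 + ["old"] * 300
val_strata = ["young"] * 50 + ["old"] * 50
outcome = [1.0] * 50 + [2.0] * 50   # richer measurement, validation only

w = poststrat_weights(full_strata, val_strata)
weighted_mean = sum(wi * yi for wi, yi in zip(w, outcome)) / sum(w)
```

    The unweighted validation-sample mean is 1.5; reweighting toward the full sample's 70/30 stratum mix pulls the estimate to 1.3, which is the sense in which the method's success depends on the validation sample representing the full sample.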

  13. Global Land Product Validation Protocols: An Initiative of the CEOS Working Group on Calibration and Validation to Evaluate Satellite-derived Essential Climate Variables

    NASA Astrophysics Data System (ADS)

    Guillevic, P. C.; Nickeson, J. E.; Roman, M. O.; camacho De Coca, F.; Wang, Z.; Schaepman-Strub, G.

    2016-12-01

    The Global Climate Observing System (GCOS) has specified the need to systematically produce and validate Essential Climate Variables (ECVs). The Committee on Earth Observation Satellites (CEOS) Working Group on Calibration and Validation (WGCV), and in particular its subgroup on Land Product Validation (LPV), is playing a key coordination role, leveraging the international expertise required to address actions related to the validation of global land ECVs. The primary objective of the LPV subgroup is to set standards for validation methods and reporting in order to provide traceable and reliable uncertainty estimates for scientists and stakeholders. The Subgroup comprises 9 focus areas that encompass 10 land surface variables. The activities of each focus area are coordinated by two international co-leads; the focus areas currently include leaf area index (LAI) and fraction of absorbed photosynthetically active radiation (FAPAR), vegetation phenology, surface albedo, fire disturbance, snow cover, land cover and land use change, soil moisture, and land surface temperature (LST) and emissivity. Recent additions to the focus areas include vegetation indices and biomass. The development of best practice validation protocols is a core activity of CEOS LPV, with the objective of standardizing the evaluation of land surface products. LPV has identified four validation levels corresponding to increasing spatial and temporal representativeness of reference samples used to perform validation. Best practice validation protocols (1) provide the definition of variables, ancillary information and uncertainty metrics, (2) describe available data sources and methods to establish reference validation datasets with SI traceability, and (3) describe evaluation methods and reporting. An overview of validation best practice components will be presented based on the LAI and LST protocol efforts to date.

  14. Use of Bayesian Networks to Probabilistically Model and Improve the Likelihood of Validation of Microarray Findings by RT-PCR

    PubMed Central

    English, Sangeeta B.; Shih, Shou-Ching; Ramoni, Marco F.; Smith, Lois E.; Butte, Atul J.

    2014-01-01

    Though genome-wide technologies, such as microarrays, are widely used, data from these methods are considered noisy; there is still varied success in downstream biological validation. We report a method that increases the likelihood of successfully validating microarray findings using real time RT-PCR, including genes at low expression levels and with small differences. We use a Bayesian network to identify the most relevant sources of noise based on the successes and failures in validation for an initial set of selected genes, and then improve our subsequent selection of genes for validation based on eliminating these sources of noise. The network displays the significant sources of noise in an experiment, and scores the likelihood of validation for every gene. We show how the method can significantly increase validation success rates. In conclusion, in this study, we have successfully added a new automated step to determine the contributory sources of noise that determine successful or unsuccessful downstream biological validation. PMID:18790084
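
    The study's Bayesian network is specific to its data; as a minimal stand-in, a naive-Bayes-style score conveys the flavor of estimating each gene's probability of validating from binary noise features. The features ("low expression", "small difference") echo the abstract, but the training outcomes below are invented.

```python
# Naive-Bayes-style scoring of validation likelihood from binary noise
# features, as a simplified stand-in for the paper's Bayesian network.
# Training records below are invented.

def train(records):
    """records: list of (features tuple of 0/1, validated bool)."""
    pos = [f for f, v in records if v]
    neg = [f for f, v in records if not v]
    prior = len(pos) / len(records)
    k = len(records[0][0])
    # Laplace-smoothed P(feature=1 | class)
    p1 = [(sum(f[i] for f in pos) + 1) / (len(pos) + 2) for i in range(k)]
    p0 = [(sum(f[i] for f in neg) + 1) / (len(neg) + 2) for i in range(k)]
    return prior, p1, p0

def score(model, feats):
    """Posterior probability of successful validation for one gene."""
    prior, p1, p0 = model
    lp, ln = prior, 1 - prior
    for x, a, b in zip(feats, p1, p0):
        lp *= a if x else 1 - a
        ln *= b if x else 1 - b
    return lp / (lp + ln)

# Invented history: (low_expression, small_difference) -> validated?
data = [((0, 0), True), ((0, 1), True), ((0, 0), True),
        ((1, 1), False), ((1, 0), False), ((1, 1), False)]
model = train(data)
```

    Genes whose feature profile resembles past failures score low, which is how such a model can steer the subsequent selection of genes toward likely validation successes.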

  15. Safer Conception Methods and Counseling: Psychometric Evaluation of New Measures of Attitudes and Beliefs Among HIV Clients and Providers.

    PubMed

    Woldetsadik, Mahlet Atakilt; Goggin, Kathy; Staggs, Vincent S; Wanyenze, Rhoda K; Beyeza-Kashesya, Jolly; Mindry, Deborah; Finocchario-Kessler, Sarah; Khanakwa, Sarah; Wagner, Glenn J

    2016-06-01

    With data from 400 HIV clients with fertility intentions and 57 HIV providers in Uganda, we evaluated the psychometrics of new client and provider scales measuring constructs related to safer conception methods (SCM) and safer conception counselling (SCC). Several forms of validity (i.e., content, face, and construct validity) were examined using standard methods including exploratory and confirmatory factor analysis. Internal consistency was established using Cronbach's alpha correlation coefficient. The final scales consisted of measures of attitudes towards use of SCM and delivery of SCC, including measures of self-efficacy and motivation to use SCM, and perceived community stigma towards childbearing. Most client and all provider measures had moderate to high internal consistency (alphas 0.60-0.94), most had convergent validity (associations with other SCM or SCC-related measures), and client measures had divergent validity (poor associations with depression). These findings establish preliminary psychometric properties of these scales and should facilitate future studies of SCM and SCC.
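
    Internal consistency figures like the alphas of 0.60-0.94 reported here come from Cronbach's formula α = k/(k−1) · (1 − Σ var(item)/var(total)). A self-contained sketch on invented Likert responses:

```python
# Cronbach's alpha from an item-by-respondent table.
# Item responses below are invented for illustration.

def cronbach_alpha(items):
    """items: list of item-score lists, one list per item, same respondents."""
    k = len(items)
    n = len(items[0])

    def var(xs):
        """Sample variance (n-1 denominator)."""
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[j] for item in items) for j in range(n)]
    item_var = sum(var(item) for item in items)
    return (k / (k - 1)) * (1 - item_var / var(totals))

# 3 items answered by 5 respondents (hypothetical Likert responses)
items = [
    [3, 4, 5, 2, 4],
    [3, 5, 5, 2, 3],
    [2, 4, 4, 1, 3],
]
alpha = cronbach_alpha(items)
```

    Highly correlated items make the total-score variance dominate the summed item variances, driving alpha toward 1; uncorrelated items drive it toward 0.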

  16. Remote Patron Validation: Posting a Proxy Server at the Digital Doorway.

    ERIC Educational Resources Information Center

    Webster, Peter

    2002-01-01

    Discussion of remote access to library services focuses on proxy servers as a method for remote access, based on experiences at Saint Mary's University (Halifax). Topics include Internet protocol user validation; browser-directed proxies; server software proxies; vendor alternatives for validating remote users; and Internet security issues. (LRW)

  17. Evaluation of PDA Technical Report No 33. Statistical Testing Recommendations for a Rapid Microbiological Method Case Study.

    PubMed

    Murphy, Thomas; Schwedock, Julie; Nguyen, Kham; Mills, Anna; Jones, David

    2015-01-01

    New recommendations for the validation of rapid microbiological methods have been included in the revised Technical Report 33 release from the PDA. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This case study applies those statistical methods to accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological methods system being evaluated for water bioburden testing. Results presented demonstrate that the statistical methods described in the PDA Technical Report 33 chapter can all be successfully applied to the rapid microbiological method data sets and gave the same interpretation for equivalence to the standard method. The rapid microbiological method was in general able to pass the requirements of PDA Technical Report 33, though the study shows that there can be occasional outlying results and that caution should be used when applying statistical methods to low average colony-forming unit values. Prior to use in a quality-controlled environment, any new method or technology has to be shown to work as designed by the manufacturer for the purpose required. For new rapid microbiological methods that detect and enumerate contaminating microorganisms, additional recommendations have been provided in the revised PDA Technical Report No. 33. The changes include a more comprehensive review of the statistical methods to be used to analyze data obtained during validation. This paper applies those statistical methods to analyze accuracy, precision, ruggedness, and equivalence data obtained using a rapid microbiological method system being validated for water bioburden testing. The case study demonstrates that the statistical methods described in the PDA Technical Report No. 33 chapter can be successfully applied to rapid microbiological method data sets and give the same comparability results for similarity or difference as the standard method. © PDA, Inc. 2015.
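
    One common way to formalize the equivalence comparison described above is to check whether a 90% confidence interval for the mean difference between methods lies within a preset margin. The sketch below uses a normal approximation and invented log10 CFU counts; it is not TR 33's exact procedure.

```python
# Equivalence check via a 90% confidence interval for the difference of
# means, using a normal approximation (z = 1.645). Counts are invented
# hypothetical log10 CFU values, not data from the case study.

import math

def equivalent(a, b, margin, z=1.645):
    """True if the 90% CI for mean(a) - mean(b) lies inside [-margin, margin]."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    va = sum((x - ma) ** 2 for x in a) / (len(a) - 1)
    vb = sum((x - mb) ** 2 for x in b) / (len(b) - 1)
    se = math.sqrt(va / len(a) + vb / len(b))
    d = ma - mb
    return (d - z * se > -margin) and (d + z * se < margin)

# Hypothetical log10 CFU counts, rapid method vs. standard plate method
rapid = [2.01, 1.95, 2.10, 2.05, 1.98, 2.03]
standard = [2.00, 1.97, 2.08, 2.02, 2.00, 2.04]
result = equivalent(rapid, standard, margin=0.5)
```

    The caution in the abstract about low average colony-forming unit values applies here too: with small counts the normal approximation degrades and a count-based model would be more appropriate.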

  18. CONTENT VALIDITY OF SYMPTOM-BASED MEASURES FOR DIABETIC, CHEMOTHERAPY, AND HIV PERIPHERAL NEUROPATHY

    PubMed Central

    GEWANDTER, JENNIFER S.; BURKE, LAURIE; CAVALETTI, GUIDO; DWORKIN, ROBERT H.; GIBBONS, CHRISTOPHER; GOVER, TONY D.; HERRMANN, DAVID N.; MCARTHUR, JUSTIN C.; MCDERMOTT, MICHAEL P.; RAPPAPORT, BOB A.; REEVE, BRYCE B.; RUSSELL, JAMES W.; SMITH, A. GORDON; SMITH, SHANNON M.; TURK, DENNIS C.; VINIK, AARON I.; FREEMAN, ROY

    2017-01-01

    Introduction No treatments for axonal peripheral neuropathy are approved by the United States Food and Drug Administration (FDA). Although patient- and clinician-reported outcomes are central to evaluating neuropathy symptoms, they can be difficult to assess accurately. The inability to identify efficacious treatments for peripheral neuropathies could be due to invalid or inadequate outcome measures. Methods This systematic review examined the content validity of symptom-based measures of diabetic peripheral neuropathy, HIV neuropathy, and chemotherapy-induced peripheral neuropathy. Results Use of all FDA-recommended methods to establish content validity was only reported for 2 of 18 measures. Multiple sensory and motor symptoms were included in measures for all 3 conditions; these included numbness, tingling, pain, allodynia, difficulty walking, and cramping. Autonomic symptoms were less frequently included. Conclusions Given significant overlap in symptoms between neuropathy etiologies, a measure with content validity for multiple neuropathies with supplemental disease-specific modules could be of great value in the development of disease-modifying treatments for peripheral neuropathies. PMID:27447116

  19. Validation sampling can reduce bias in healthcare database studies: an illustration using influenza vaccination effectiveness

    PubMed Central

    Nelson, Jennifer C.; Marsh, Tracey; Lumley, Thomas; Larson, Eric B.; Jackson, Lisa A.; Jackson, Michael

    2014-01-01

    Objective Estimates of treatment effectiveness in epidemiologic studies using large observational health care databases may be biased due to inaccurate or incomplete information on important confounders. Study methods that collect and incorporate more comprehensive confounder data on a validation cohort may reduce confounding bias. Study Design and Setting We applied two such methods, imputation and reweighting, to Group Health administrative data (full sample) supplemented by more detailed confounder data from the Adult Changes in Thought study (validation sample). We used influenza vaccination effectiveness (with an unexposed comparator group) as an example and evaluated each method’s ability to reduce bias using the control time period prior to influenza circulation. Results Both methods reduced, but did not completely eliminate, the bias compared with traditional effectiveness estimates that do not utilize the validation sample confounders. Conclusion Although these results support the use of validation sampling methods to improve the accuracy of comparative effectiveness findings from healthcare database studies, they also illustrate that the success of such methods depends on many factors, including the ability to measure important confounders in a representative and large enough validation sample, the comparability of the full sample and validation sample, and the accuracy with which data can be imputed or reweighted using the additional validation sample information. PMID:23849144

  20. Survey of Verification and Validation Techniques for Small Satellite Software Development

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen A.

    2015-01-01

    The purpose of this paper is to provide an overview of the current trends and practices in small-satellite software verification and validation. This document is not intended to promote a specific software assurance method. Rather, it seeks to present an unbiased survey of software assurance methods used to verify and validate small satellite software and to make mention of the benefits and value of each approach. These methods include simulation and testing, verification and validation with model-based design, formal methods, and fault-tolerant software design with run-time monitoring. Although the literature reveals that simulation and testing has by far the longest legacy, model-based design methods are proving to be useful for software verification and validation. Some work in formal methods, though not widely used for any satellites, may offer new ways to improve small satellite software verification and validation. These methods need to be further advanced to deal with the state explosion problem and to make them more usable by small-satellite software engineers to be regularly applied to software verification. Last, it is explained how run-time monitoring, combined with fault-tolerant software design methods, provides an important means to detect and correct software errors that escape the verification process or those errors that are produced after launch through the effects of ionizing radiation.

  1. Alternative method to validate the seasonal land cover regions of the conterminous United States

    Treesearch

    Zhiliang Zhu; Donald O. Ohlen; Raymond L. Czaplewski; Robert E. Burgan

    1996-01-01

    An accuracy assessment method involving double sampling and the multivariate composite estimator has been used to validate the prototype seasonal land cover characteristics database of the conterminous United States. The database consists of 159 land cover classes, classified using time series of 1990 1-km satellite data and augmented with ancillary data including...

  2. Chemometric and biological validation of a capillary electrophoresis metabolomic experiment of Schistosoma mansoni infection in mice.

    PubMed

    Garcia-Perez, Isabel; Angulo, Santiago; Utzinger, Jürg; Holmes, Elaine; Legido-Quigley, Cristina; Barbas, Coral

    2010-07-01

    Metabonomic and metabolomic studies are increasingly utilized for biomarker identification in different fields, including biology of infection. The confluence of improved analytical platforms and the availability of powerful multivariate analysis software have rendered the multiparameter profiles generated by these omics platforms a user-friendly alternative to the established analysis methods where the quality and practice of a procedure is well defined. However, unlike traditional assays, validation methods for these new multivariate profiling tools have yet to be established. We propose a validation for models obtained by CE fingerprinting of urine from mice infected with the blood fluke Schistosoma mansoni. We have analysed urine samples from two sets of mice infected in an inter-laboratory experiment where different infection methods and animal husbandry procedures were employed in order to establish the core biological response to a S. mansoni infection. CE data were analysed using principal component analysis. Validation of the scores consisted of permutation scrambling (100 repetitions) and a manual validation method, using a third of the samples (not included in the model) as a test or prediction set. The validation yielded 100% specificity and 100% sensitivity, demonstrating the robustness of these models with respect to deciphering metabolic perturbations in the mouse due to a S. mansoni infection. A total of 20 metabolites across the two experiments were identified that significantly discriminated between S. mansoni-infected and noninfected control samples. Only one of these metabolites, allantoin, was identified as manifesting different behaviour in the two experiments. This study shows the reproducibility of CE-based metabolic profiling methods for disease characterization and screening and highlights the importance of much needed validation strategies in the emerging field of metabolomics.
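
    The permutation ("scrambling") validation described above can be sketched as follows: an observed group-separation statistic is compared with its distribution under repeated label shuffling. The one-dimensional "metabolite" values below are invented, not the study's CE profiles.

```python
# Permutation validation of a group separation: shuffle class labels many
# times and ask how often the shuffled separation matches or exceeds the
# observed one. Values below are invented.

import random

def separation(values, labels):
    """Absolute difference between the two class means."""
    a = [v for v, l in zip(values, labels) if l == "infected"]
    b = [v for v, l in zip(values, labels) if l == "control"]
    return abs(sum(a) / len(a) - sum(b) / len(b))

def permutation_p(values, labels, n_perm=200, seed=0):
    """Permutation p-value with the standard +1 correction."""
    rng = random.Random(seed)
    observed = separation(values, labels)
    hits = 0
    for _ in range(n_perm):
        shuffled = labels[:]
        rng.shuffle(shuffled)
        if separation(values, shuffled) >= observed:
            hits += 1
    return (hits + 1) / (n_perm + 1)

# Hypothetical single-metabolite intensities, 4 infected vs. 4 control mice
values = [5.1, 5.3, 4.9, 5.2, 2.0, 2.2, 1.9, 2.1]
labels = ["infected"] * 4 + ["control"] * 4
p = permutation_p(values, labels)
```

    A small p-value indicates the observed separation is unlikely under scrambled labels, the same logic that underlies permutation validation of multivariate models such as the PCA scores here.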

  3. Reliability and validity of non-radiographic methods of thoracic kyphosis measurement: a systematic review.

    PubMed

    Barrett, Eva; McCreesh, Karen; Lewis, Jeremy

    2014-02-01

    A wide array of instruments is available for non-invasive thoracic kyphosis measurement. Guidelines for selecting outcome measures for use in clinical and research practice recommend that properties such as validity and reliability be considered. This systematic review reports on the reliability and validity of non-invasive methods for measuring thoracic kyphosis. A systematic search of 11 electronic databases located studies assessing the reliability and/or validity of non-invasive thoracic kyphosis measurement techniques. Two independent reviewers used a critical appraisal tool to assess the quality of retrieved studies. Data were extracted by the primary reviewer. The results were synthesized qualitatively using a level-of-evidence approach. 27 studies satisfied the eligibility criteria and were included in the review. Reliability, validity, and both reliability and validity were investigated by sixteen, two and nine studies, respectively. 17/27 studies were deemed to be of high quality. In total, 15 methods of thoracic kyphosis measurement were evaluated in the retrieved studies. All investigated methods showed high (ICC ≥ .7) to very high (ICC ≥ .9) levels of reliability. The validity of the methods ranged from low to very high. The strongest levels of evidence for reliability exist in support of the Debrunner kyphometer, Spinal Mouse and Flexicurve index, and for validity in support of the arcometer and Flexicurve index. Further reliability and validity studies are required to strengthen the level of evidence for the remaining methods of measurement; this should be addressed by future research. Copyright © 2013 Elsevier Ltd. All rights reserved.
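
    The ICC thresholds used for grading (≥ .7 high, ≥ .9 very high) can be illustrated with the simplest one-way form, ICC(1,1) = (MSB − MSW)/(MSB + (k−1)·MSW); published kyphosis studies often use two-way forms instead. The repeated angle measurements below are invented.

```python
# One-way intraclass correlation, ICC(1,1), from repeated measurements.
# MSB: between-subject mean square; MSW: within-subject mean square.
# The kyphosis-angle data below are invented.

def icc_oneway(ratings):
    """ratings: list of [measurement1, measurement2, ...] rows per subject."""
    n = len(ratings)
    k = len(ratings[0])
    grand = sum(sum(r) for r in ratings) / (n * k)
    row_means = [sum(r) / k for r in ratings]
    msb = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)
    msw = sum((x - m) ** 2
              for r, m in zip(ratings, row_means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# 5 subjects, each measured twice with the same instrument (degrees)
ratings = [[40, 42], [55, 54], [38, 39], [60, 63], [47, 46]]
icc = icc_oneway(ratings)
```

    With subjects far apart and repeat measurements close together, MSB dwarfs MSW and the ICC lands in the "very high" band.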

  4. Validation of microbiological testing in cardiovascular tissue banks: results of a quality round trial.

    PubMed

    de By, Theo M M H; McDonald, Carl; Süßner, Susanne; Davies, Jill; Heng, Wee Ling; Jashari, Ramadan; Bogers, Ad J J C; Petit, Pieter

    2017-11-01

    Surgeons needing human cardiovascular tissue for implantation in their patients are confronted with cardiovascular tissue banks that use different methods to identify and decontaminate micro-organisms. To elucidate these differences, and to validate the results used for accepting or rejecting tissue, we compared the quality of the processing methods in 20 tissue banks and 1 reference laboratory. We included the decontamination methods used and the influence of antibiotic cocktails and residues, with results and controls; the minor details of the processes were not included. To compare the outcomes of microbiological testing and the decontamination methods for heart valve allografts in cardiovascular tissue banks, an international quality round was organized, in which 20 cardiovascular tissue banks participated. The quality round method was validated first and consisted of sending purposely contaminated human heart valve tissue samples with known micro-organisms to the participants, who identified the micro-organisms using their local decontamination methods. Seventeen of the 20 participants correctly identified the micro-organisms; if these samples had been heart valves to be released for implantation, 3 of the 20 participants would have accepted their result for release. Decontamination was shown not to be effective in 13 tissue banks because of growth of the organisms after decontamination. Articles in the literature revealed that antibiotics are effective at 36°C and not, or less so, at 2-8°C. The decontamination procedure, if validated, will ensure that the tissue contains no known micro-organisms. This study demonstrates that the quality round method of sending contaminated tissues and assessing the results of the microbiological cultures is an effective way of validating the processes of tissue banks. Only when harmonization, based on validated methods, has been achieved will surgeons be able to fully rely on the methods used and have confidence in the consistent sterility of the tissue grafts. Tissue banks should validate their methods so that all stakeholders can trust the outcomes. © The Author 2017. Published by Oxford University Press on behalf of the European Association for Cardio-Thoracic Surgery. All rights reserved.

  5. Semiquantitative determination of mesophilic, aerobic microorganisms in cocoa products using the Soleris NF-TVC method.

    PubMed

    Montei, Carolyn; McDougal, Susan; Mozola, Mark; Rice, Jennifer

    2014-01-01

    The Soleris Non-fermenting Total Viable Count method was previously validated for a wide variety of food products, including cocoa powder. A matrix extension study was conducted to validate the method for use with cocoa butter and cocoa liquor. Test samples included naturally contaminated cocoa liquor and cocoa butter inoculated with natural microbial flora derived from cocoa liquor. A probability of detection statistical model was used to compare Soleris results at multiple test thresholds (dilutions) with aerobic plate counts determined using the AOAC Official Method 966.23 dilution plating method. Results of the two methods were not statistically different at any dilution level in any of the three trials conducted. The Soleris method offers the advantage of results within 24 h, compared to the 48 h required by standard dilution plating methods.

  6. Standardization, evaluation and early-phase method validation of an analytical scheme for batch-consistency N-glycosylation analysis of recombinant produced glycoproteins.

    PubMed

    Zietze, Stefan; Müller, Rainer H; Brecht, René

    2008-03-01

    To set up a batch-to-batch consistency analytical scheme for N-glycosylation analysis, several sample preparation steps, including enzyme digestions and fluorophore labelling, and two HPLC methods were established. The whole method scheme was standardized, evaluated and validated according to the requirements for analytical testing in early clinical drug development, using a recombinantly produced reference glycoprotein (RGP). Standardization of the methods was achieved through clearly defined standard operating procedures. During evaluation of the methods, the major interest was in determining the loss of oligosaccharides within the analytical scheme. Validation of the methods was performed with respect to specificity, linearity, repeatability, LOD and LOQ. Because reference N-glycan standards were not available, a statistical approach was chosen to derive accuracy from the linearity data. After completion of the validation procedure, defined limits for method variability could be calculated, and differences observed in consistency analysis could be separated into significant and incidental ones.

  7. 42 CFR 441.472 - Budget methodology.

    Code of Federal Regulations, 2011 CFR

    2011-10-01

    ... based utilizing valid, reliable cost data. (2) The State's method is applied consistently to participants. (3) The State's method is open for public inspection. (4) The State's method includes a...

  8. 42 CFR 441.472 - Budget methodology.

    Code of Federal Regulations, 2013 CFR

    2013-10-01

    ... based utilizing valid, reliable cost data. (2) The State's method is applied consistently to participants. (3) The State's method is open for public inspection. (4) The State's method includes a...

  9. Measuring Adolescent Social and Academic Self-Efficacy: Cross-Ethnic Validity of the SEQ-C

    ERIC Educational Resources Information Center

    Minter, Anthony; Pritzker, Suzanne

    2017-01-01

    Objective: This study examines the psychometric strength, including cross-ethnic validity, of two subscales of Muris' Self-Efficacy Questionnaire for Children: Academic Self-Efficacy (ASE) and Social Self-Efficacy (SSE). Methods: A large ethnically diverse sample of 3,358 early and late adolescents completed surveys including the ASE and SSE.…

  10. Accuracy of the visual estimation method as a predictor of food intake in Alzheimer's patients provided with different types of food.

    PubMed

    Amano, Nobuko; Nakamura, Tomiyo

    2018-02-01

    The visual estimation method is commonly used in hospitals and other care facilities to evaluate food intake through estimation of plate waste. In Japan, no previous studies have investigated the validity and reliability of this method under the routine conditions of a hospital setting. The present study aimed to evaluate the validity and reliability of the visual estimation method in long-term inpatients with different levels of eating disability caused by Alzheimer's disease. The patients were provided different therapeutic diets presented in various food types. This study was performed between February and April 2013, and 82 patients with Alzheimer's disease were included. Plate waste was evaluated for the 3 main daily meals over a total of 21 days, 7 consecutive days during each of the 3 months, yielding a total of 4851 meals, of which 3984 were included. Plate waste was measured by the nurses through the visual estimation method, and by the hospital's registered dietitians through the actual measurement method. The actual measurement method was first validated to serve as a reference, and the level of agreement between both methods was then determined. The month, time of day, type of food provided, and patients' physical characteristics were considered in the analysis. For the 3984 meals included in the analysis, the level of agreement between the measurement methods was 78.4%. Disagreement consisted of 3.8% underestimation and 17.8% overestimation. Cronbach's α (0.60, P < 0.001) indicated that the reliability of the visual estimation method was within the acceptable range. The visual estimation method was found to be a valid and reliable method for estimating food intake in patients with different levels of eating impairment. Successful implementation and use of the method depends upon adequate training and motivation of the nurses and care staff involved.
Copyright © 2017 European Society for Clinical Nutrition and Metabolism. Published by Elsevier Ltd. All rights reserved.
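The agreement and reliability statistics reported above can be illustrated with a small sketch; the intake data, and the use of only two "items" (the two methods) for Cronbach's α, are hypothetical simplifications, so the numbers will not match the study's:

```python
# Sketch (hypothetical data): percent exact agreement between the visual
# estimation method and the reference measurement, plus Cronbach's alpha
# as a reliability index.
import statistics

# Intake fraction recorded per meal by each method (illustrative values)
visual   = [1.0, 0.75, 0.5, 1.0, 0.25, 1.0, 0.75, 0.5]
measured = [1.0, 0.75, 0.75, 1.0, 0.25, 1.0, 0.5, 0.5]

agreement = sum(v == m for v, m in zip(visual, measured)) / len(visual)

def cronbach_alpha(items):
    """items: list of 'item' score lists (here, the two methods)."""
    k = len(items)
    item_vars = [statistics.pvariance(it) for it in items]
    totals = [sum(scores) for scores in zip(*items)]
    return k / (k - 1) * (1 - sum(item_vars) / statistics.pvariance(totals))

alpha = cronbach_alpha([visual, measured])
print(f"agreement={agreement:.1%}  alpha={alpha:.2f}")
```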

  11. Validation conform ISO-15189 of assays in the field of autoimmunity: Joint efforts in The Netherlands.

    PubMed

    Mulder, Leontine; van der Molen, Renate; Koelman, Carin; van Leeuwen, Ester; Roos, Anja; Damoiseaux, Jan

    2018-05-01

    ISO 15189:2012 requires validation of methods used in the medical laboratory and lists a series of performance parameters to consider for inclusion. Although these performance parameters are feasible for clinical chemistry analytes, applying them in the validation of autoimmunity tests is a challenge. The lack of gold standards or reference methods, combined with the scarcity of well-defined diagnostic samples from patients with rare diseases, makes validation of new assays difficult. The present manuscript describes the initiative of Dutch medical immunology laboratory specialists to combine efforts and perform multi-center validation studies of new assays in the field of autoimmunity. Validation data and reports are made available to interested Dutch laboratory specialists. Copyright © 2018 Elsevier B.V. All rights reserved.

  12. Simulation verification techniques study

    NASA Technical Reports Server (NTRS)

    Schoonmaker, P. B.; Wenglinski, T. H.

    1975-01-01

    Results are summarized of the simulation verification techniques study which consisted of two tasks: to develop techniques for simulator hardware checkout and to develop techniques for simulation performance verification (validation). The hardware verification task involved definition of simulation hardware (hardware units and integrated simulator configurations), survey of current hardware self-test techniques, and definition of hardware and software techniques for checkout of simulator subsystems. The performance verification task included definition of simulation performance parameters (and critical performance parameters), definition of methods for establishing standards of performance (sources of reference data or validation), and definition of methods for validating performance. Both major tasks included definition of verification software and assessment of verification data base impact. An annotated bibliography of all documents generated during this study is provided.

  13. Modification and Validation of Conceptual Design Aerodynamic Prediction Method HASC95 With VTXCHN

    NASA Technical Reports Server (NTRS)

    Albright, Alan E.; Dixon, Charles J.; Hegedus, Martin C.

    1996-01-01

    A conceptual/preliminary design level subsonic aerodynamic prediction code HASC (High Angle of Attack Stability and Control) has been improved in several areas, validated, and documented. The improved code includes refined methodologies for increased accuracy and robustness, and simplified input/output files. An engineering method called VTXCHN (Vortex Chine) for predicting nose vortex shedding from circular and non-circular forebodies with sharp chine edges has been improved and integrated into the HASC code. This report contains a summary of modifications, a description of the code, a user's guide, and validation of HASC. Appendices include discussion of a new HASC utility code, listings of sample input and output files, and a discussion of the application of HASC to buffet analysis.

  14. An intercomparison of a large ensemble of statistical downscaling methods for Europe: Overall results from the VALUE perfect predictor cross-validation experiment

    NASA Astrophysics Data System (ADS)

    Gutiérrez, Jose Manuel; Maraun, Douglas; Widmann, Martin; Huth, Radan; Hertig, Elke; Benestad, Rasmus; Roessler, Ole; Wibig, Joanna; Wilcke, Renate; Kotlarski, Sven

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research (http://www.value-cost.eu). A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. This framework is based on a user-focused validation tree, guiding the selection of relevant validation indices and performance measures for different aspects of the validation (marginal, temporal, spatial, multi-variable). Moreover, several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur (assessment of intrinsic performance, effect of errors inherited from the global models, effect of non-stationarity, etc.). The list of downscaling experiments includes 1) cross-validation with perfect predictors, 2) GCM predictors -aligned with the EURO-CORDEX experiment- and 3) pseudo-reality predictors (see Maraun et al. 2015, Earth's Future, 3, doi:10.1002/2014EF000259, for more details). The results of these experiments are gathered, validated and publicly distributed through the VALUE validation portal, allowing for a comprehensive community-open downscaling intercomparison study. In this contribution we describe the overall results from Experiment 1), consisting of a European-wide 5-fold cross-validation (with consecutive 6-year periods from 1979 to 2008) using predictors from ERA-Interim to downscale precipitation and temperatures (minimum and maximum) over a set of 86 ECA&D stations representative of the main geographical and climatic regions in Europe. As a result of the open call for contributions to this experiment (closed in Dec. 2015), over 40 methods representative of the main approaches (MOS and Perfect Prognosis, PP) and techniques (linear scaling, quantile mapping, analogs, weather typing, linear and generalized regression, weather generators, etc.) were submitted, including both data (downscaled values) and metadata (characterizing different aspects of the downscaling methods). This constitutes the largest and most comprehensive intercomparison of statistical downscaling methods to date. Here, we present an overall validation, analyzing marginal and temporal aspects to assess the intrinsic performance and added value of statistical downscaling methods at both annual and seasonal levels. This validation takes into account the different properties/limitations of different approaches and techniques (as reported in the provided metadata) in order to perform a fair comparison. It is pointed out that this experiment alone is not sufficient to evaluate the limitations of (MOS) bias correction techniques. Moreover, it also does not fully validate PP, since we do not learn whether we have the right predictors and whether the PP assumption is valid. These problems will be analyzed in the subsequent community-open VALUE experiments 2) and 3), which will be open for participation throughout the present year.
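The 5-fold design with consecutive 6-year validation periods described above can be sketched as follows; the fitting and validation steps are placeholders for whatever downscaling method is under test:

```python
# Blocked 5-fold cross-validation: five consecutive 6-year folds over
# 1979-2008, each held out once while the remaining 24 years are used
# for calibration.
years = list(range(1979, 2009))            # 30 years
folds = [years[i:i + 6] for i in range(0, 30, 6)]

for k, test_years in enumerate(folds, start=1):
    train_years = [y for y in years if y not in test_years]
    # here: fit the downscaling method on train_years,
    # then predict and validate on test_years
    print(f"fold {k}: test {test_years[0]}-{test_years[-1]}, "
          f"train on {len(train_years)} years")
```

Using consecutive blocks rather than random splits preserves the temporal structure of the climate record within each fold.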

  15. Development and Content Validation of the Transition Readiness Inventory Item Pool for Adolescent and Young Adult Survivors of Childhood Cancer.

    PubMed

    Schwartz, Lisa A; Hamilton, Jessica L; Brumley, Lauren D; Barakat, Lamia P; Deatrick, Janet A; Szalda, Dava E; Bevans, Katherine B; Tucker, Carole A; Daniel, Lauren C; Butler, Eliana; Kazak, Anne E; Hobbie, Wendy L; Ginsberg, Jill P; Psihogios, Alexandra M; Ver Hoeve, Elizabeth; Tuchman, Lisa K

    2017-10-01

    The development of the Transition Readiness Inventory (TRI) item pool for adolescent and young adult childhood cancer survivors is described, aiming both to advance transition research and to provide an example of the application of NIH Patient-Reported Outcomes Measurement Information System methods. Using rigorous measurement development methods, including mixed methods, patient and parent versions of the TRI item pool were created based on the Social-ecological Model of Adolescent and young adult Readiness for Transition (SMART). Each stage informed development and refinement of the item pool. Content validity ratings and cognitive interviews resulted in 81 content-valid items for the patient version and 85 items for the parent version. TRI represents the first multi-informant, rigorously developed transition readiness item pool that comprehensively measures the social-ecological components of transition readiness. Discussion includes clinical implications, the application of TRI and of the item pool development methods to other populations, and next steps for further validation and refinement. © The Author 2017. Published by Oxford University Press on behalf of the Society of Pediatric Psychology. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com

  16. Validation of a new screening, determinative, and confirmatory multi-residue method for nitroimidazoles and their hydroxy metabolites in turkey muscle tissue by liquid chromatography-tandem mass spectrometry.

    PubMed

    Boison, Joe O; Asea, Philip A; Matus, Johanna L

    2012-08-01

    A new and sensitive multi-residue method (MRM) with detection by LC-MS/MS was developed and validated for the screening, determination, and confirmation of residues of 7 nitroimidazoles and 3 of their metabolites in turkey muscle tissues at concentrations ≥ 0.05 ng/g. The compounds were extracted into a solvent with an alkali salt. Sample clean-up and concentration was then done by solid-phase extraction (SPE) and the compounds were quantified by liquid chromatography-tandem mass spectrometry (LC-MS/MS). The characteristic parameters including repeatability, selectivity, ruggedness, stability, level of quantification, and level of confirmation for the new method were determined. Method validation was achieved by independent verification of the parameters measured during method characterization. The seven nitroimidazoles included are metronidazole (MTZ), ronidazole (RNZ), dimetridazole (DMZ), tinidazole (TNZ), ornidazole (ONZ), ipronidazole (IPR), and carnidazole (CNZ). It was discovered during the single laboratory validation of the method that five of the seven nitroimidazoles (i.e. metronidazole, dimetridazole, tinidazole, ornidazole and ipronidazole) and the 3 metabolites (1-(2-hydroxyethyl)-2-hydroxymethyl-5-nitroimidazole (MTZ-OH), 2-hydroxymethyl-1-methyl-5-nitroimidazole (HMMNI, the common metabolite of ronidazole and dimetridazole), and 1-methyl-2-(2'-hydroxyisopropyl)-5-nitroimidazole (IPR-OH) included in this study could be detected, confirmed, and quantified accurately whereas RNZ and CNZ could only be detected and confirmed but not accurately quantified. © Her Majesty the Queen in Right of Canada as Represented by the Minister of Agriculture and Agri-food Canada 2012.

  17. The predictive validity of a situational judgement test, a clinical problem solving test and the core medical training selection methods for performance in specialty training.

    PubMed

    Patterson, Fiona; Lopes, Safiatu; Harding, Stephen; Vaux, Emma; Berkin, Liz; Black, David

    2017-02-01

    The aim of this study was to follow up a sample of physicians who began core medical training (CMT) in 2009. This paper examines the long-term validity of CMT and GP selection methods in predicting performance in the Membership of Royal College of Physicians (MRCP(UK)) examinations. We performed a longitudinal study, examining the extent to which the GP and CMT selection methods (T1) predict performance in the MRCP(UK) examinations (T2). A total of 2,569 applicants from 2008-09 who completed CMT and GP selection methods were included in the study. Looking at MRCP(UK) part 1, part 2 written and PACES scores, both CMT and GP selection methods show evidence of predictive validity for the outcome variables, and hierarchical regressions show the GP methods add significant value to the CMT selection process. CMT selection methods predict performance in important outcomes and have good evidence of validity; the GP methods may have an additional role alongside the CMT selection methods. © Royal College of Physicians 2017. All rights reserved.

  18. Standard Setting Methods for Pass/Fail Decisions on High-Stakes Objective Structured Clinical Examinations: A Validity Study.

    PubMed

    Yousuf, Naveed; Violato, Claudio; Zuberi, Rukhsana W

    2015-01-01

    CONSTRUCT: Authentic standard setting methods will demonstrate high convergent validity evidence of their outcomes, that is, cutoff scores and pass/fail decisions, with most other methods when compared with each other. The objective structured clinical examination (OSCE) was established for valid, reliable, and objective assessment of clinical skills in health professions education. Various standard setting methods have been proposed to identify objective, reliable, and valid cutoff scores on OSCEs. These methods may identify different cutoff scores for the same examinations. Identification of valid and reliable cutoff scores for OSCEs remains an important issue and a challenge. Thirty OSCE stations administered at least twice in the years 2010-2012 to 393 medical students in Years 2 and 3 at Aga Khan University are included. Psychometric properties of the scores are determined. Cutoff scores and pass/fail decisions of Wijnen, Cohen, Mean-1.5SD, Mean-1SD, Angoff, borderline group and borderline regression (BL-R) methods are compared with each other and with three variants of cluster analysis using repeated measures analysis of variance and Cohen's kappa. The mean psychometric indices on the 30 OSCE stations are reliability coefficient = 0.76 (SD = 0.12); standard error of measurement = 5.66 (SD = 1.38); coefficient of determination = 0.47 (SD = 0.19), and intergrade discrimination = 7.19 (SD = 1.89). BL-R and Wijnen methods show the highest convergent validity evidence among other methods on the defined criteria. Angoff and Mean-1.5SD demonstrated least convergent validity evidence. The three cluster variants showed substantial convergent validity with borderline methods. Although there was a high level of convergent validity of Wijnen method, it lacks the theoretical strength to be used for competency-based assessments. 
The BL-R method was found to show the highest convergent validity evidence for OSCEs with the other standard setting methods used in the present study. We also found that cluster analysis using the mean method can be used for quality assurance of borderline methods. These findings should be further confirmed by studies in other settings.
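As an illustration of two of the relative standard-setting methods compared above (Mean-1SD and Mean-1.5SD), and of Cohen's kappa as an agreement index for the resulting pass/fail decisions, consider this sketch with hypothetical station scores:

```python
# Hypothetical OSCE station scores; compute two relative cutoffs and the
# kappa agreement between the pass/fail decisions they produce.
import statistics

scores = [48, 55, 60, 62, 65, 67, 70, 72, 75, 80, 83, 90]
mean, sd = statistics.fmean(scores), statistics.stdev(scores)

cut_1sd = mean - sd          # Mean-1SD cutoff
cut_15sd = mean - 1.5 * sd   # Mean-1.5SD cutoff

pass_a = [s >= cut_1sd for s in scores]
pass_b = [s >= cut_15sd for s in scores]

def cohens_kappa(a, b):
    n = len(a)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    p_yes = (sum(a) / n) * (sum(b) / n)                 # chance both pass
    p_no = (1 - sum(a) / n) * (1 - sum(b) / n)          # chance both fail
    pe = p_yes + p_no
    return (po - pe) / (1 - pe)

kappa = cohens_kappa(pass_a, pass_b)
print(f"cutoffs: {cut_1sd:.1f} vs {cut_15sd:.1f}, kappa={kappa:.2f}")
```

Even with identical scores, the two cutoff rules classify borderline examinees differently, which is exactly the disagreement kappa quantifies.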

  19. Nonclinical dose formulation analysis method validation and sample analysis.

    PubMed

    Whitmire, Monica Lee; Bryan, Peter; Henry, Teresa R; Holbrook, John; Lehmann, Paul; Mollitor, Thomas; Ohorodnik, Susan; Reed, David; Wietgrefe, Holly D

    2010-12-01

    Nonclinical dose formulation analysis methods are used to confirm test article concentration and homogeneity in formulations and to determine formulation stability in support of regulated nonclinical studies. There is currently no regulatory guidance for nonclinical dose formulation analysis method validation or sample analysis. Regulatory guidance for the validation of analytical procedures has been developed for drug product/formulation testing; however, verification of the formulation concentrations falls under the framework of GLP regulations (not GMP). The only current related regulatory guidance is the bioanalytical guidance for method validation. The fundamental parameters that overlap between bioanalysis and formulation analysis validations include recovery, accuracy, precision, specificity, selectivity, carryover, sensitivity, and stability. Divergence between bioanalytical and drug product validations typically centers on the acceptance criteria used. Because dose formulation samples are not true "unknowns", quality control samples covering the entire range of the standard curve, which serve to indicate confidence in the data generated from the "unknown" study samples, may not always be necessary. Also, the standard bioanalytical acceptance criteria may not be directly applicable, especially when the determined concentration does not match the target concentration. This paper attempts to reconcile the different practices being performed in the community and to provide recommendations of best practices and proposed acceptance criteria for nonclinical dose formulation method validation and sample analysis.

  20. The ICR96 exon CNV validation series: a resource for orthogonal assessment of exon CNV calling in NGS data.

    PubMed

    Mahamdallie, Shazia; Ruark, Elise; Yost, Shawn; Ramsay, Emma; Uddin, Imran; Wylie, Harriett; Elliott, Anna; Strydom, Ann; Renwick, Anthony; Seal, Sheila; Rahman, Nazneen

    2017-01-01

    Detection of deletions and duplications of whole exons (exon CNVs) is a key requirement of genetic testing. Accurate detection of this variant type has proved very challenging in targeted next-generation sequencing (NGS) data, particularly if only a single exon is involved. Many different NGS exon CNV calling methods have been developed over the last five years. Such methods are usually evaluated using simulated and/or in-house data due to a lack of publicly available datasets with orthogonally generated results. This hinders tool comparisons, transparency and reproducibility. To provide a community resource for assessment of exon CNV calling methods in targeted NGS data, we here present the ICR96 exon CNV validation series. The dataset includes high-quality sequencing data from a targeted NGS assay (the TruSight Cancer Panel) together with Multiplex Ligation-dependent Probe Amplification (MLPA) results for 96 independent samples. 66 samples contain at least one validated exon CNV and 30 samples have validated negative results for exon CNVs in 26 genes. The dataset includes 46 exon CNVs in BRCA1, BRCA2, TP53, MLH1, MSH2, MSH6, PMS2, EPCAM or PTEN, giving excellent representation of the cancer predisposition genes most frequently tested in clinical practice. Moreover, the validated exon CNVs include 25 single exon CNVs, the most difficult type of exon CNV to detect. The FASTQ files for the ICR96 exon CNV validation series can be accessed through the European Genome-phenome Archive (EGA) under the accession number EGAS00001002428.

  1. Validation of asthma recording in electronic health records: a systematic review

    PubMed Central

    Nissen, Francis; Quint, Jennifer K; Wilkinson, Samantha; Mullerova, Hana; Smeeth, Liam; Douglas, Ian J

    2017-01-01

    Objective To describe the methods used to validate asthma diagnoses in electronic health records and summarize the results of the validation studies. Background Electronic health records are increasingly being used for research on asthma to inform health services and health policy. Validation of the recording of asthma diagnoses in electronic health records is essential to use these databases for credible epidemiological asthma research. Methods We searched EMBASE and MEDLINE databases for studies that validated asthma diagnoses detected in electronic health records up to October 2016. Two reviewers independently assessed the full text against the predetermined inclusion criteria. Key data including author, year, data source, case definitions, reference standard, and validation statistics (including sensitivity, specificity, positive predictive value [PPV], and negative predictive value [NPV]) were summarized in two tables. Results Thirteen studies met the inclusion criteria. Most studies demonstrated a high validity using at least one case definition (PPV >80%). Ten studies used a manual validation as the reference standard; each had at least one case definition with a PPV of at least 63%, up to 100%. We also found two studies using a second independent database to validate asthma diagnoses. The PPVs of the best performing case definitions ranged from 46% to 58%. We found one study which used a questionnaire as the reference standard to validate a database case definition; the PPV of the case definition algorithm in this study was 89%. Conclusion Attaining high PPVs (>80%) is possible using each of the discussed validation methods. Identifying asthma cases in electronic health records is possible with high sensitivity, specificity or PPV, by combining multiple data sources, or by focusing on specific test measures. 
Studies testing a range of case definitions show wide variation in the validity of each definition, suggesting that careful choice of case definition is important for obtaining optimal validity. PMID:29238227
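The validation statistics summarized above come from a 2x2 comparison of a database case definition against a reference standard. As a minimal illustration (the counts below are hypothetical, not from any of the reviewed studies):

```python
# Hypothetical confusion-matrix counts for a case definition vs. reference
tp, fp, fn, tn = 85, 15, 10, 190   # true/false positives, false/true negatives

sensitivity = tp / (tp + fn)   # fraction of true cases the definition finds
specificity = tn / (tn + fp)   # fraction of non-cases it correctly excludes
ppv = tp / (tp + fp)           # positive predictive value
npv = tn / (tn + fn)           # negative predictive value

print(f"sens={sensitivity:.2f} spec={specificity:.2f} "
      f"PPV={ppv:.2f} NPV={npv:.2f}")
```

Note that PPV and NPV, unlike sensitivity and specificity, depend on how common asthma is in the database population.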

  2. Validation of Proposed "DSM-5" Criteria for Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Frazier, Thomas W.; Youngstrom, Eric A.; Speer, Leslie; Embacher, Rebecca; Law, Paul; Constantino, John; Findling, Robert L.; Hardan, Antonio Y.; Eng, Charis

    2012-01-01

    Objective: The primary aim of the present study was to evaluate the validity of proposed "DSM-5" criteria for autism spectrum disorder (ASD). Method: We analyzed symptoms from 14,744 siblings (8,911 ASD and 5,863 non-ASD) included in a national registry, the Interactive Autism Network. Youth 2 through 18 years of age were included if at least one…

  3. Overview of Heat Addition and Efficiency Predictions for an Advanced Stirling Convertor

    NASA Technical Reports Server (NTRS)

    Wilson, Scott D.; Reid, Terry; Schifer, Nicholas; Briggs, Maxwell

    2011-01-01

    Past methods of predicting net heat input needed to be validated. The validation effort pursued several paths, including improving model inputs, using test hardware to provide validation data, and validating high-fidelity models. The validation test hardware provided a direct measurement of net heat input for comparison to predicted values. The predicted value of net heat input was 1.7 percent less than the measured value, and initial calculations of measurement uncertainty were 2.1 percent (under review). Lessons learned during the validation effort were incorporated into the convertor modeling approach, which improved predictions of convertor efficiency.

  4. Use of the Method of Triads in the Validation of Sodium and Potassium Intake in the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil)

    PubMed Central

    Pereira, Taísa Sabrina Silva; Cade, Nágela Valadão; Mill, José Geraldo; Sichieri, Rosely; Molina, Maria del Carmen Bisi

    2016-01-01

    Introduction Biomarkers are a good choice for use in the validation of food frequency questionnaires because of the independence of their random errors. Objective To assess the validity of the potassium and sodium intake estimated using the Food Frequency Questionnaire ELSA-Brasil. Subjects/Methods A subsample of participants in the ELSA-Brasil cohort was included in this study in 2009. Sodium and potassium intake were estimated using three methods: a semi-quantitative food frequency questionnaire, 12-hour nocturnal urinary excretion, and three 24-hour food records. Correlation coefficients were calculated between the methods, and the validity coefficient was calculated using the method of triads. The 95% confidence intervals for the validity coefficients were estimated using bootstrap sampling. Exact and adjacent agreement and disagreement of the estimated sodium and potassium intake quintiles were compared among the three methods. Results The sample consisted of 246 participants, aged 53±8 years, 52% women. Validity coefficients for sodium were considered weak for the food frequency questionnaire and the biomarker (ρ(FFQ, actual intake) = 0.37 and ρ(biomarker, actual intake) = 0.21) and moderate for the food records (ρ(food records, actual intake) = 0.56). The validity coefficients were higher for potassium (ρ(FFQ, actual intake) = 0.60; ρ(biomarker, actual intake) = 0.42; ρ(food records, actual intake) = 0.79). Conclusions The Food Frequency Questionnaire ELSA-Brasil showed good validity in estimating potassium intake in epidemiological studies. For sodium, validity was weak, likely due to the non-quantification of salt added to prepared food. PMID:28030625
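The method of triads used above derives each instrument's validity coefficient (its correlation with true, unobserved intake) from the three pairwise correlations among questionnaire (Q), food records (R) and biomarker (B), assuming independent errors. A minimal sketch with hypothetical correlations:

```python
# Method of triads: validity coefficients from pairwise correlations.
# The correlation values below are hypothetical, not the study's.
from math import sqrt

r_qr, r_qb, r_rb = 0.45, 0.25, 0.33   # pairwise correlations Q-R, Q-B, R-B

rho_q = sqrt(r_qr * r_qb / r_rb)   # questionnaire vs. true intake
rho_r = sqrt(r_qr * r_rb / r_qb)   # food records vs. true intake
rho_b = sqrt(r_qb * r_rb / r_qr)   # biomarker vs. true intake

print(f"rho_Q={rho_q:.2f}  rho_R={rho_r:.2f}  rho_B={rho_b:.2f}")
```

In practice the bootstrap confidence intervals mentioned above are obtained by resampling participants and recomputing these coefficients on each resample.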

  5. Validation of an analytical method for nitrous oxide (N2O) laughing gas by headspace gas chromatography coupled to mass spectrometry (HS-GC-MS): forensic application to a lethal intoxication.

    PubMed

    Giuliani, N; Beyer, J; Augsburger, M; Varlet, V

    2015-03-01

    Drug abuse is a widespread problem affecting both teenagers and adults. Nitrous oxide is becoming increasingly popular as an inhalation drug, causing harmful neurological and hematological effects. Some gas chromatography-mass spectrometry (GC-MS) methods for nitrous oxide measurement have been previously described. The main drawbacks of these methods include a lack of sensitivity for forensic applications, including an inability to quantitatively determine the concentration of gas present. The following study provides a validated HS-GC-MS method which incorporates hydrogen sulfide as a suitable internal standard, allowing the quantification of nitrous oxide. Upon analysis, sample and internal standard have similar retention times and are eluted quickly from the molecular sieve 5Å PLOT capillary column and the Porabond Q column, providing rapid data collection while preserving well-defined peaks. After validation, the method was applied to a real case of N2O intoxication, establishing the concentrations found in a mono-intoxication. Copyright © 2015 Elsevier B.V. All rights reserved.
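The abstract does not give the calibration details; a generic internal-standard quantification sketch, with hypothetical calibration levels and a hypothetical H2S-normalized sample response, might look like this:

```python
# Internal-standard quantification sketch: the analyte (N2O) peak area is
# normalized to the internal standard (H2S) area, a calibration line is
# fitted to the response ratios, and the sample concentration is read off
# the inverted line. All values are hypothetical.
cal_conc  = [5.0, 10.0, 20.0, 40.0]        # N2O calibration levels (ug/mL)
cal_ratio = [0.26, 0.49, 1.02, 1.98]       # area(N2O) / area(H2S)

n = len(cal_conc)
mx = sum(cal_conc) / n
my = sum(cal_ratio) / n
slope = sum((x - mx) * (y - my) for x, y in zip(cal_conc, cal_ratio)) \
        / sum((x - mx) ** 2 for x in cal_conc)
intercept = my - slope * mx

sample_ratio = 1.40                        # ratio measured in the case sample
sample_conc = (sample_ratio - intercept) / slope
print(f"estimated N2O concentration: {sample_conc:.1f} ug/mL")
```

Normalizing to an internal standard compensates for run-to-run variation in headspace injection volume and detector response.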

  6. Simulation-based training for prostate surgery.

    PubMed

    Khan, Raheej; Aydin, Abdullatif; Khan, Muhammad Shamim; Dasgupta, Prokar; Ahmed, Kamran

    2015-10-01

    To identify and review the currently available simulators for prostate surgery and to explore the evidence supporting their validity for training purposes. A review of the literature between 1999 and 2014 was performed. The search terms included a combination of urology, prostate surgery, robotic prostatectomy, laparoscopic prostatectomy, transurethral resection of the prostate (TURP), simulation, virtual reality, animal model, human cadavers, training, assessment, technical skills, validation and learning curves. Furthermore, relevant abstracts from the American Urological Association, European Association of Urology, British Association of Urological Surgeons and World Congress of Endourology meetings, between 1999 and 2013, were included. Only studies related to prostate surgery simulators were included; studies regarding other urological simulators were excluded. A total of 22 studies that carried out a validation study were identified. Five validated models and/or simulators were identified for TURP, one for photoselective vaporisation of the prostate, two for holmium enucleation of the prostate, three for laparoscopic radical prostatectomy (LRP) and four for robot-assisted surgery. Of the TURP simulators, all five have demonstrated content validity, three face validity and four construct validity. The GreenLight laser simulator has demonstrated face, content and construct validities. The Kansai HoLEP Simulator has demonstrated face and content validity whilst the UroSim HoLEP Simulator has demonstrated face, content and construct validity. All three animal models for LRP have been shown to have construct validity whilst the chicken skin model was also content valid. Only two robotic simulators were identified with relevance to robot-assisted laparoscopic prostatectomy, both of which demonstrated construct validity. 
A wide range of different simulators are available for prostate surgery, including synthetic bench models, virtual-reality platforms, animal models, human cadavers, distributed simulation and advanced training programmes and modules. The currently validated simulators can be used by healthcare organisations to provide supplementary training sessions for trainee surgeons. Further research should be conducted to validate simulated environments, to determine which simulators have greater efficacy than others and to assess the cost-effectiveness of the simulators and the transferability of skills learnt. With surgeons investigating new possibilities for easily reproducible and valid methods of training, simulation offers great scope for implementation alongside traditional methods of training. © 2014 The Authors BJU International © 2014 BJU International Published by John Wiley & Sons Ltd.

  7. Development of the pediatric quality of life inventory neurofibromatosis type 1 module items for children, adolescents and young adults: qualitative methods.

    PubMed

    Nutakki, Kavitha; Varni, James W; Steinbrenner, Sheila; Draucker, Claire B; Swigonski, Nancy L

    2017-03-01

    Health-related quality of life (HRQOL) is arguably one of the most important measures in evaluating the effectiveness of clinical treatments. At present, there is no disease-specific outcome measure to assess the HRQOL of children, adolescents and young adults with Neurofibromatosis Type 1 (NF1). This study aimed to develop the items and support the content validity of the Pediatric Quality of Life Inventory™ (PedsQL™) NF1 Module for children, adolescents and young adults. The iterative process included multiphase qualitative methods comprising a literature review, a survey of expert opinions, semi-structured interviews, cognitive interviews and pilot testing. Fifteen domains were derived from the qualitative methods, with content saturation achieved, resulting in 115 items. The domains include skin, pain, pain impact, pain management, cognitive functioning, speech, fine motor, balance, vision, perceived physical appearance, communication, worry, treatment, medicines and gastrointestinal symptoms. This study is limited in that all participants were recruited from a single site. Qualitative methods support the content validity of the PedsQL™ NF1 Module for children, adolescents and young adults. The PedsQL™ NF1 Module is now undergoing national multisite field testing for psychometric validation of the instrument.

  8. Increased efficacy for in-house validation of real-time PCR GMO detection methods.

    PubMed

    Scholtens, I M J; Kok, E J; Hougs, L; Molenaar, B; Thissen, J T N M; van der Voet, H

    2010-03-01

    To improve the efficacy of the in-house validation of GMO detection methods (DNA isolation and real-time PCR, polymerase chain reaction), a study was performed to gain insight into the contribution of the different steps of the GMO detection method to the repeatability and in-house reproducibility. In the present study, 19 methods for (GM) soy, maize, canola and potato were validated in-house, of which 14 on the basis of an 8-day validation scheme using eight different samples and five on the basis of a more concise validation protocol. In this way, data were obtained with respect to the detection limit, accuracy and precision. Also, decision limits were calculated for declaring non-conformance (>0.9%) with 95% reliability. In order to estimate the contribution of the different steps in the GMO analysis to the total variation, variance components were estimated using REML (residual maximum likelihood method). From these components, relative standard deviations for repeatability and reproducibility (RSD(r) and RSD(R)) were calculated. The results showed that not only the PCR reaction but also the factors 'DNA isolation' and 'PCR day' are important contributors to the total variance and should therefore be included in the in-house validation. It is proposed to use a statistical model to estimate these factors from a large dataset of initial validations so that for similar GMO methods in the future, only the PCR step needs to be validated. The resulting data are discussed in the light of agreed European criteria for qualified GMO detection methods.
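
    As a sketch of how variance components translate into repeatability and reproducibility figures, the following combines hypothetical REML estimates; the component names and numerical values are illustrative, not the study's data, and the REML fit itself is assumed to have been done elsewhere:

    ```python
    import math

    def relative_sds(var_components, mean, repeat_keys):
        """Combine variance components into relative standard deviations:
        RSD(r) uses only the within-run (repeatability) components,
        RSD(R) uses all components (in-house reproducibility)."""
        var_r = sum(v for k, v in var_components.items() if k in repeat_keys)
        var_total = sum(var_components.values())
        rsd_r = 100 * math.sqrt(var_r) / mean
        rsd_total = 100 * math.sqrt(var_total) / mean
        return rsd_r, rsd_total

    # Hypothetical variance components for a sample with mean 0.9 % GM content:
    components = {"dna_isolation": 0.004, "pcr_day": 0.003, "pcr_replicate": 0.009}
    rsd_r, rsd_R = relative_sds(components, mean=0.9, repeat_keys={"pcr_replicate"})
    # rsd_R exceeds rsd_r because 'DNA isolation' and 'PCR day' add variance
    ```

    In this illustration RSD(R) is larger than RSD(r), reflecting the paper's finding that the DNA isolation and PCR day factors contribute materially to the total variance.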

  9. Model Validation | Center for Cancer Research

    Cancer.gov

    Research Investigation and Animal Model Validation This activity is also under development and thus far has included increasing pathology resources, delivering pathology services, as well as using imaging and surgical methods to develop and refine animal models in collaboration with other CCR investigators.

  10. Psychometric and cognitive validation of a social capital measurement tool in Peru and Vietnam.

    PubMed

    De Silva, Mary J; Harpham, Trudy; Tuan, Tran; Bartolini, Rosario; Penny, Mary E; Huttly, Sharon R

    2006-02-01

    Social capital is a relatively new concept that has attracted significant attention in recent years. No consensus has yet been reached on how to measure social capital, resulting in a large number of different tools available. While psychometric validation methods such as factor analysis have been used by a few studies to assess the internal validity of some tools, these techniques rely on data already collected by the tool and are therefore not capable of eliciting what the questions are actually measuring. The Young Lives (YL) study includes quantitative measures of caregivers' social capital in four countries (Vietnam, Peru, Ethiopia, and India) using a short version of the Adapted Social Capital Assessment Tool (SASCAT). A range of different psychometric methods including factor analysis were used to evaluate the construct validity of SASCAT in Peru and Vietnam. In addition, qualitative cognitive interviews with 20 respondents from Peru and 24 respondents from Vietnam were conducted to explore what each question is actually measuring. We argue that psychometric validation techniques alone are not sufficient to adequately validate multi-faceted social capital tools for use in different cultural settings. Psychometric techniques show SASCAT to be a valid tool reflecting known constructs and displaying postulated links with other variables. However, results from the cognitive interviews present a more mixed picture, with some questions being appropriately interpreted by respondents, and others displaying significant differences between what the researchers intended them to measure and what they actually do. Using evidence from a range of methods of assessing validity has enabled the modification of an existing instrument into a valid and low-cost tool designed to measure social capital within larger surveys in Peru and Vietnam, with the potential for use in other developing countries following local piloting and cultural adaptation of the tool.

  11. Reliability and validity assessment of gastrointestinal dystemperaments questionnaire: a novel scale in Persian traditional medicine

    PubMed Central

    Hoseinzadeh, Hamidreza; Taghipour, Ali; Yousefi, Mahdi

    2018-01-01

    Background Development of a questionnaire based on the resources of Persian traditional medicine seems necessary. One of the problems faced by practitioners of traditional medicine is the divergence of opinion regarding the diagnosis of general temperament or the temperament of individual organs. One of the reasons is the lack of validated tools, and this has led to difficulties in training students of traditional medicine and in the treatment of patients. The differences in detection methods have given rise to several treatment methods. Objective The present study aimed to develop a questionnaire and standard software for diagnosis of gastrointestinal dystemperaments. Methods The present research is a tool-development study that included 8 stages: developing the items, determining the statements based on the items, assessing the face validity, assessing the content validity, assessing the reliability, rating the items, developing software (named GDS v.1.1) for calculation of the total score of the questionnaire, and evaluating the concurrent validity using statistical tests including Cronbach’s alpha and Cohen’s kappa coefficients. Results Based on the results, 112 notes including 62 symptoms were extracted from the resources, and 58 items were obtained from in-person interview sessions with a panel of experts. A statement was selected for each item and, after merging a number of statements, a total of 49 statements was finally obtained. Based on the statement impact scores and the content validity assessment, 6 and 10 further statements, respectively, were removed from the list. The standardized Cronbach’s alpha for this questionnaire was 0.795 and its concurrent validity was 0.8. Conclusion A quantitative tool was developed for diagnosis and examination of gastrointestinal dystemperaments. The developed questionnaire is adequately reliable and valid for this purpose. In addition, the software can be used for clinical diagnosis. PMID:29629060
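
    Cronbach's alpha, the reliability statistic reported above, can be computed from raw item scores as in the following sketch; the scores used here are invented for illustration and are not the study's data:

    ```python
    def cronbach_alpha(item_scores):
        """item_scores: one list of respondent scores per questionnaire item."""
        k = len(item_scores)              # number of items
        n = len(item_scores[0])           # number of respondents

        def variance(xs):                 # population variance
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / len(xs)

        sum_item_vars = sum(variance(item) for item in item_scores)
        totals = [sum(item[j] for item in item_scores) for j in range(n)]
        return (k / (k - 1)) * (1 - sum_item_vars / variance(totals))

    # Three items answered by four respondents (hypothetical Likert scores):
    alpha = cronbach_alpha([[1, 2, 3, 4], [2, 2, 3, 5], [1, 3, 3, 4]])
    ```

    Values around 0.8, as reported for the GDS questionnaire, are conventionally taken to indicate acceptable internal consistency.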

  12. Best Practices in Stability Indicating Method Development and Validation for Non-clinical Dose Formulations.

    PubMed

    Henry, Teresa R; Penn, Lara D; Conerty, Jason R; Wright, Francesca E; Gorman, Gregory; Pack, Brian W

    2016-11-01

    Non-clinical dose formulations (also known as pre-clinical or GLP formulations) play a key role in early drug development. These formulations are used to introduce active pharmaceutical ingredients (APIs) into test organisms for both pharmacokinetic and toxicological studies. Since these studies are ultimately used to support dose and safety ranges in human studies, it is important not only to understand the concentration and PK/PD of the active ingredient but also to generate safety data for likely process impurities and degradation products of the active ingredient. As such, many in the industry have chosen to develop and validate methods which can accurately detect and quantify the active ingredient along with impurities and degradation products. Such methods often provide trendable results which are predictive of stability, thus leading to the name: stability-indicating methods. This document provides an overview of best practices for those choosing to include development and validation of such methods as part of their non-clinical drug development program. This document is intended to support teams who are either new to stability-indicating method development and validation or who are less familiar with the requirements of validation due to their position within the product development life cycle.

  13. Hybrid Particle-Element Simulation of Impact on Composite Orbital Debris Shields

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.

    2004-01-01

    This report describes the development of new numerical methods and new constitutive models for the simulation of hypervelocity impact effects on spacecraft. The research has included parallel implementation of the numerical methods and material models developed under the project. Validation work has included both one dimensional simulations, for comparison with exact solutions, and three dimensional simulations of published hypervelocity impact experiments. The validated formulations have been applied to simulate impact effects in a velocity and kinetic energy regime outside the capabilities of current experimental methods. The research results presented here allow for the expanded use of numerical simulation, as a complement to experimental work, in future design of spacecraft for hypervelocity impact effects.

  14. Validity of worksheet-based guided inquiry and mind mapping for training students’ creative thinking skills

    NASA Astrophysics Data System (ADS)

    Susanti, L. B.; Poedjiastoeti, S.; Taufikurohmah, T.

    2018-04-01

    The purpose of this study is to explain the validity of the guided inquiry- and mind mapping-based worksheet that was developed in this study. The worksheet implemented the phases of the guided inquiry teaching model in order to train students’ creative thinking skills. The creative thinking skills trained in this study included fluency, flexibility, originality and elaboration. The types of validity used in this study included content and construct validity. This study is development research using the Research and Development (R & D) method. The data were collected using review and validation sheets; the sources of the data were a chemistry lecturer and a chemistry teacher, and the data were then analyzed descriptively. The results showed that the worksheet is very valid and could be used as a learning medium, with validity percentages ranging from 82.5% to 92.5%.

  15. Evaluating the dynamic response of in-flight thrust calculation techniques during throttle transients

    NASA Technical Reports Server (NTRS)

    Ray, Ronald J.

    1994-01-01

    New flight test maneuvers and analysis techniques for evaluating the dynamic response of in-flight thrust models during throttle transients have been developed and validated. The approach is based on the aircraft and engine performance relationship between thrust and drag. Two flight test maneuvers, a throttle step and a throttle frequency sweep, were developed and used in the study. Graphical analysis techniques, including a frequency domain analysis method, were also developed and evaluated. They provide quantitative and qualitative results. Four thrust calculation methods were used to demonstrate and validate the test technique. Flight test applications on two high-performance aircraft confirmed the test methods as valid and accurate. These maneuvers and analysis techniques were easy to implement and use. Flight test results indicate the analysis techniques can identify the combined effects of model error and instrumentation response limitations on the calculated thrust value. The methods developed in this report provide an accurate approach for evaluating, validating, or comparing thrust calculation methods for dynamic flight applications.

  16. Validation of Alternative In Vitro Methods to Animal Testing: Concepts, Challenges, Processes and Tools.

    PubMed

    Griesinger, Claudius; Desprez, Bertrand; Coecke, Sandra; Casey, Warren; Zuang, Valérie

    This chapter explores the concepts, processes, tools and challenges relating to the validation of alternative methods for toxicity and safety testing. In general terms, validation is the process of assessing the appropriateness and usefulness of a tool for its intended purpose. Validation is routinely used in various contexts in science, technology, the manufacturing and services sectors. It serves to assess the fitness-for-purpose of devices, systems and software, up to entire methodologies. In the area of toxicity testing, validation plays an indispensable role: "alternative approaches" are increasingly replacing animal models as predictive tools and it needs to be demonstrated that these novel methods are fit for purpose. Alternative approaches include in vitro test methods and non-testing approaches such as predictive computer models, up to entire testing and assessment strategies composed of method suites, data sources and decision-aiding tools. Data generated with alternative approaches are ultimately used for decision-making on public health and the protection of the environment. It is therefore essential that the underlying methods and methodologies are thoroughly characterised, assessed and transparently documented through validation studies involving impartial actors. Importantly, validation serves as a filter to ensure that only test methods able to produce data that help to address legislative requirements (e.g. EU's REACH legislation) are accepted as official testing tools and, owing to the globalisation of markets, recognised at the international level (e.g. through inclusion in OECD test guidelines). Since validation creates a credible and transparent evidence base on test methods, it provides a quality stamp, supporting companies developing and marketing alternative methods and creating considerable business opportunities.
Validation of alternative methods is conducted through scientific studies assessing two key hypotheses, reliability and relevance of the test method for a given purpose. Relevance encapsulates the scientific basis of the test method, its capacity to predict adverse effects in the "target system" (i.e. human health or the environment) as well as its applicability for the intended purpose. In this chapter we focus on the validation of non-animal in vitro alternative testing methods and review the concepts, challenges, processes and tools fundamental to the validation of in vitro methods intended for hazard testing of chemicals. We explore major challenges and peculiarities of validation in this area. Based on the notion that validation per se is a scientific endeavour that needs to adhere to key scientific principles, namely objectivity and appropriate choice of methodology, we examine basic aspects of study design and management, and provide illustrations of statistical approaches to describe predictive performance of validated test methods as well as their reliability.

  17. Measurement of patient safety: a systematic review of the reliability and validity of adverse event detection with record review

    PubMed Central

    Hanskamp-Sebregts, Mirelle; Zegers, Marieke; Vincent, Charles; van Gurp, Petra J; de Vet, Henrica C W; Wollersheim, Hub

    2016-01-01

    Objectives Record review is the most commonly used method to quantify patient safety. We systematically reviewed the reliability and validity of adverse event detection with record review. Design A systematic review of the literature. Methods We searched PubMed, EMBASE, CINAHL, PsycINFO and the Cochrane Library from their inception through February 2015. We included all studies that aimed to describe the reliability and/or validity of record review. Two reviewers conducted data extraction. We pooled kappa (κ) values and analysed the differences in subgroups according to number of reviewers, reviewer experience and training level, adjusted for the prevalence of adverse events. Results In 25 studies, the psychometric data of the Global Trigger Tool (GTT) and the Harvard Medical Practice Study (HMPS) method were reported, and 24 studies were included for statistical pooling. The inter-rater reliability of the GTT and HMPS showed a pooled κ of 0.65 and 0.55, respectively. The inter-rater agreement was statistically significantly higher when the group of reviewers within a study consisted of a maximum of five reviewers. We found no studies reporting on the validity of the GTT and HMPS. Conclusions The reliability of record review is moderate to substantial and improved when a small group of reviewers carried out the record review. The validity of the record review method has never been evaluated, while clinical data registries, autopsy or direct observations of patient care are potential reference methods that can be used to test concurrent validity. PMID:27550650
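
    The κ statistic pooled in this review measures chance-corrected agreement between two reviewers; for a single study it can be computed from the cross-classification table, as in this sketch with hypothetical counts:

    ```python
    def cohens_kappa(table):
        """Cohen's kappa from a square agreement table, where table[i][j] is the
        number of records rated category i by reviewer A and j by reviewer B."""
        total = sum(sum(row) for row in table)
        observed = sum(table[i][i] for i in range(len(table))) / total
        expected = sum(
            (sum(table[i]) / total) * (sum(row[i] for row in table) / total)
            for i in range(len(table))
        )
        return (observed - expected) / (1 - expected)

    # 100 records screened for adverse events by two reviewers (hypothetical):
    #                 B: event   B: no event
    # A: event            40          10
    # A: no event          5          45
    kappa = cohens_kappa([[40, 10], [5, 45]])
    ```

    A κ of about 0.7, as in this invented table, would fall in the "substantial agreement" band alongside the pooled GTT estimate of 0.65.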

  18. Validation of GC and HPLC systems for residue studies

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williams, M.

    1995-12-01

    For residue studies, GC and HPLC system performance must be validated prior to and during use. One excellent measure of system performance is the standard curve and the associated chromatograms used to construct that curve. The standard curve is a model of system response to an analyte over a specific time period, and is prima facie evidence of system performance beginning at the autosampler and proceeding through the injector, column, detector, electronics, data-capture device, and printer/plotter. This tool measures the performance of the entire chromatographic system; its power negates most of the benefits associated with costly and time-consuming validation of individual system components. Other measures of instrument and method validation will be discussed, including quality control charts and experimental designs for method validation.

  19. Use of the Method of Triads in the Validation of Sodium and Potassium Intake in the Brazilian Longitudinal Study of Adult Health (ELSA-Brasil).

    PubMed

    Pereira, Taísa Sabrina Silva; Cade, Nágela Valadão; Mill, José Geraldo; Sichieri, Rosely; Molina, Maria Del Carmen Bisi

    2016-01-01

    Biomarkers are a good choice for use in the validation of a food frequency questionnaire because of the independence of their random errors. The aim was to assess the validity of the potassium and sodium intake estimated using the ELSA-Brasil Food Frequency Questionnaire. A subsample of participants in the ELSA-Brasil cohort was included in this study in 2009. Sodium and potassium intake were estimated using three methods: a semi-quantitative food frequency questionnaire, 12-hour nocturnal urinary excretion and three 24-hour food records. Correlation coefficients were calculated between the methods, and the validity coefficients were calculated using the method of triads. The 95% confidence intervals for the validity coefficients were estimated using bootstrap sampling. Exact and adjacent agreement and disagreement of the estimated sodium and potassium intake quintiles were compared among the three methods. The sample consisted of 246 participants, aged 53±8 years, 52% women. The validity coefficients for sodium were weak for the food frequency questionnaire (ρ = 0.37 versus actual intake) and the biomarker (ρ = 0.21), and moderate for the food records (ρ = 0.56). The validity coefficients were higher for potassium (food frequency questionnaire, ρ = 0.60; biomarker, ρ = 0.42; food records, ρ = 0.79). Conclusions: The ELSA-Brasil Food Frequency Questionnaire showed good validity in estimating potassium intake in epidemiological studies. For sodium, validity was weak, likely due to the non-quantification of salt added to prepared food.
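
    The method of triads estimates each instrument's correlation with unobserved true intake from the three pairwise correlations between instruments. A minimal sketch follows; the input correlations are invented, not the ELSA-Brasil values:

    ```python
    import math

    def triad_validity(r_qf, r_qb, r_fb):
        """Validity coefficients of questionnaire (Q), food records (F) and
        biomarker (B) against true intake T, from pairwise correlations."""
        rho_q = math.sqrt(r_qf * r_qb / r_fb)
        rho_f = math.sqrt(r_qf * r_fb / r_qb)
        rho_b = math.sqrt(r_qb * r_fb / r_qf)
        return rho_q, rho_f, rho_b

    # Hypothetical pairwise correlations between the three methods:
    rho_q, rho_f, rho_b = triad_validity(r_qf=0.35, r_qb=0.20, r_fb=0.45)
    ```

    Coefficients greater than 1 (Heywood cases) can occur when the error-independence assumption fails, which is one reason bootstrap confidence intervals are usually reported alongside the point estimates.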

  20. Development of models for classification of action between heat-clearing herbs and blood-activating stasis-resolving herbs based on theory of traditional Chinese medicine.

    PubMed

    Chen, Zhao; Cao, Yanfeng; He, Shuaibing; Qiao, Yanjiang

    2018-01-01

    Action ("gongxiao" in Chinese) of traditional Chinese medicine (TCM) is the high-level recapitulation of therapeutic and health-preserving effects under the guidance of TCM theory. TCM-defined herbal properties ("yaoxing" in Chinese) were used in this research. A TCM herbal property (TCM-HP) is a high-level generalization and summary of actions, both of which come from long-term effective clinical practice over two thousand years in China. However, the specific relationship between TCM-HPs and the actions of TCM is complex and unclear from a scientific perspective. Research in this area helps to elucidate the connotation of TCM-HP theory and is of great significance for the development of the theory. One hundred and thirty-three herbs, including 88 heat-clearing herbs (HCHs) and 45 blood-activating stasis-resolving herbs (BASRHs), were collected from authoritative TCM literature, and their corresponding TCM-HP/action information was collected from the Chinese pharmacopoeia (2015 edition). The Kennard-Stone (K-S) algorithm was used to split the 133 herbs into 100 calibration samples and 33 validation samples. Then, machine learning methods including support vector machine (SVM) and k-nearest neighbor (kNN), and deep learning methods including deep belief network (DBN) and convolutional neural network (CNN), were adopted to develop action classification models based on TCM-HP theory. In order to ensure robustness, these four classification methods were evaluated using tenfold cross validation and 20 external validation samples for prediction. As a result, 72.7-100% of the 33 validation samples, comprising 17 HCHs and 16 BASRHs, were correctly predicted by these four types of methods. The DBN and CNN methods gave the best results; their sensitivity, specificity, precision and accuracy were all 100.00%.
    In particular, the predicted results on the external validation set showed that the deep learning methods (DBN, CNN) outperformed the traditional machine learning methods (kNN, SVM) in terms of sensitivity, specificity, precision and accuracy. Moreover, the distribution patterns of the TCM-HPs of HCHs and BASRHs were analyzed to detect the characteristic TCM-HPs of these two types of herbs. The results showed that the characteristic TCM-HPs of HCHs were cold, bitter, and liver and stomach meridians entered, while those of BASRHs were warm, bitter and pungent, and liver meridian entered. The deep learning classification methods thus showed better generalization ability and accuracy than the machine learning models when predicting the heat-clearing and blood-activating stasis-resolving actions based on TCM-HP theory. Besides, deep learning methods can help improve our understanding of the relationship between herbal properties and actions, and enrich and develop the theory of TCM-HPs scientifically.
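
    The evaluation protocol described (a calibration/validation split plus tenfold cross validation) can be illustrated with a from-scratch k-nearest-neighbour classifier on synthetic, well-separated two-class data. This is a sketch of the protocol only; the feature vectors and class labels are invented, not the study's herb data or models:

    ```python
    import random

    def knn_predict(train, query, k=3):
        """k-NN by squared Euclidean distance; train is [(features, label), ...]."""
        nearest = sorted(
            train, key=lambda p: sum((a - b) ** 2 for a, b in zip(p[0], query))
        )[:k]
        labels = [label for _, label in nearest]
        return max(set(labels), key=labels.count)

    def tenfold_accuracy(data, k=3, folds=10, seed=0):
        """Mean accuracy over `folds` non-overlapping test folds."""
        data = data[:]
        random.Random(seed).shuffle(data)
        fold_size = len(data) // folds
        accuracies = []
        for f in range(folds):
            test = data[f * fold_size:(f + 1) * fold_size]
            train = data[:f * fold_size] + data[(f + 1) * fold_size:]
            hits = sum(knn_predict(train, x, k) == y for x, y in test)
            accuracies.append(hits / len(test))
        return sum(accuracies) / folds

    # Two well-separated synthetic "herb property" clusters (hypothetical data):
    data = [((float(i % 5), 0.0), "HCH") for i in range(50)] + \
           [((float(i % 5) + 10.0, 1.0), "BASRH") for i in range(50)]
    acc = tenfold_accuracy(data)
    ```

    On cleanly separated classes the cross-validated accuracy approaches 1.0; on real herb property vectors the 72.7-100% range quoted above is more typical.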

  1. Semi-automating the manual literature search for systematic reviews increases efficiency.

    PubMed

    Chapman, Andrea L; Morgan, Laura C; Gartlehner, Gerald

    2010-03-01

    To minimise retrieval bias, manual literature searches are a key part of the search process of any systematic review. Considering the need for accurate information, valid results of the manual literature search are essential to ensure scientific standards; likewise, efficient approaches that minimise the amount of personnel time required to conduct a manual literature search are of great interest. The objective of this project was to determine the validity and efficiency of a new manual search method that utilises the Scopus database. We used the traditional manual search approach as the gold standard to determine the validity and efficiency of the proposed Scopus method. Outcome measures included completeness of article detection and personnel time involved. Using both methods independently, we compared the results based on the accuracy of the results (validity) and the time spent conducting the search (efficiency). Regarding accuracy, the Scopus method identified the same studies as the traditional approach, indicating its validity. In terms of efficiency, using Scopus led to a time saving of 62.5% compared with the traditional approach (3 h versus 8 h). The Scopus method can significantly improve the efficiency of manual searches and thus of systematic reviews.

  2. Rotorcraft noise

    NASA Technical Reports Server (NTRS)

    Huston, R. J. (Compiler)

    1982-01-01

    The establishment of a realistic plan for NASA and the U.S. helicopter industry to develop a design-for-noise methodology, including plans for the identification and development of promising noise reduction technology, was discussed. Topics included: noise reduction techniques, scaling laws, empirical noise prediction, psychoacoustics, and methods of developing and validating noise prediction methods.

  3. A Survey of Model Evaluation Approaches with a Tutorial on Hierarchical Bayesian Methods

    ERIC Educational Resources Information Center

    Shiffrin, Richard M.; Lee, Michael D.; Kim, Woojae; Wagenmakers, Eric-Jan

    2008-01-01

    This article reviews current methods for evaluating models in the cognitive sciences, including theoretically based approaches, such as Bayes factors and minimum description length measures; simulation approaches, including model mimicry evaluations; and practical approaches, such as validation and generalization measures. This article argues…

  4. A pan-European ring trial to validate an International Standard for detection of Vibrio cholerae, Vibrio parahaemolyticus and Vibrio vulnificus in seafoods.

    PubMed

    Hartnell, R E; Stockley, L; Keay, W; Rosec, J-P; Hervio-Heath, D; Van den Berg, H; Leoni, F; Ottaviani, D; Henigman, U; Denayer, S; Serbruyns, B; Georgsson, F; Krumova-Valcheva, G; Gyurova, E; Blanco, C; Copin, S; Strauch, E; Wieczorek, K; Lopatek, M; Britova, A; Hardouin, G; Lombard, B; In't Veld, P; Leclercq, A; Baker-Austin, C

    2018-02-10

    Globally, vibrios represent an important and well-established group of bacterial foodborne pathogens. The European Commission (EC) mandated the Comité Européen de Normalisation (CEN) to undertake work to provide validation data for 15 methods in microbiology to support EC legislation. As part of this mandated work programme, merging of ISO/TS 21872-1:2007, which specifies a horizontal method for the detection of V. parahaemolyticus and V. cholerae, and ISO/TS 21872-2:2007, a similar horizontal method for the detection of potentially pathogenic vibrios other than V. cholerae and V. parahaemolyticus, was proposed. Both parts of ISO/TS 21872 utilized classical culture-based isolation techniques coupled with biochemical confirmation steps. The work also considered simplification of the biochemical confirmation steps. In addition, because of advances in molecular methods for identification of human-pathogenic Vibrio spp., classical and real-time PCR options were also included within the scope of the validation. These considerations formed the basis of a multi-laboratory validation study with the aim of improving the precision of this ISO technical specification and providing a single ISO standard method to enable detection of these important foodborne Vibrio spp. To achieve this aim, an international validation study involving 13 laboratories from 9 countries in Europe was conducted in 2013. The results of this validation have enabled integration of the two existing technical specifications targeting the detection of the major foodborne Vibrio spp., simplification of the suite of recommended biochemical identification tests and the introduction of molecular procedures that provide both species-level identification and discrimination of putatively pathogenic strains of V. parahaemolyticus by determination of the presence of the thermostable direct and thermostable direct-related haemolysins.
    The method performance characteristics generated in this study have been included in the revised international standard, ISO 21872:2017, published in July 2017. Copyright © 2018. Published by Elsevier B.V.

  5. A systematic review of validated methods to capture acute bronchospasm using administrative or claims data.

    PubMed

    Sharifi, Mona; Krishanswami, Shanthi; McPheeters, Melissa L

    2013-12-30

    To identify and assess billing, procedural or diagnosis code, or pharmacy claim-based algorithms used to identify acute bronchospasm in administrative and claims databases. We searched the MEDLINE database from 1991 to September 2012 using controlled vocabulary and key terms related to bronchospasm, wheeze and acute asthma. We also searched the reference lists of included studies. Two investigators independently assessed the full text of studies against pre-determined inclusion criteria. Two reviewers independently extracted data regarding participant and algorithm characteristics. Our searches identified 677 citations, of which 38 met our inclusion criteria. In these 38 studies, the most commonly used ICD-9 code was 493.x. Only 3 studies reported any validation methods for the identification of bronchospasm, wheeze or acute asthma in administrative and claims databases; all were among pediatric populations and only 2 offered any validation statistics. Some of the outcome definitions utilized were heterogeneous and included other disease-based diagnoses, such as bronchiolitis and pneumonia, which are typically of an infectious etiology. One study offered validation of algorithms utilizing Emergency Department triage chief complaint codes to diagnose acute asthma exacerbations, with ICD-9 786.07 (wheezing) revealing the highest sensitivity (56%), specificity (97%), PPV (93.5%) and NPV (76%). There is a paucity of studies reporting rigorous methods to validate algorithms for the identification of bronchospasm in administrative data. The scant validated data available are limited in their generalizability to broad-based populations. Copyright © 2013 Elsevier Ltd. All rights reserved.
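
    The four validation statistics quoted (sensitivity, specificity, PPV, NPV) come from cross-tabulating algorithm hits against a reference standard such as chart review. A minimal sketch with hypothetical confusion-table counts, chosen only to illustrate the calculation:

    ```python
    def diagnostic_stats(tp, fp, fn, tn):
        """Validation statistics for a case-finding algorithm versus a
        reference standard; tp/fp/fn/tn are the 2x2 confusion-table counts."""
        return {
            "sensitivity": tp / (tp + fn),
            "specificity": tn / (tn + fp),
            "ppv": tp / (tp + fp),   # positive predictive value
            "npv": tn / (tn + fn),   # negative predictive value
        }

    # Hypothetical counts for an ICD-9-based algorithm against chart review:
    stats = diagnostic_stats(tp=56, fp=4, fn=44, tn=96)
    ```

    Note how a highly specific code can still miss many true cases: in this invented example specificity is high while sensitivity stays near 0.56, the pattern reported for ICD-9 786.07.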

  6. Contemporary Test Validity in Theory and Practice: A Primer for Discipline-Based Education Researchers

    PubMed Central

    Reeves, Todd D.; Marbach-Ad, Gili

    2016-01-01

    Most discipline-based education researchers (DBERs) were formally trained in the methods of scientific disciplines such as biology, chemistry, and physics, rather than social science disciplines such as psychology and education. As a result, DBERs may have never taken specific courses in the social science research methodology—either quantitative or qualitative—on which their scholarship often relies so heavily. One particular aspect of (quantitative) social science research that differs markedly from disciplines such as biology and chemistry is the instrumentation used to quantify phenomena. In response, this Research Methods essay offers a contemporary social science perspective on test validity and the validation process. The instructional piece explores the concepts of test validity, the validation process, validity evidence, and key threats to validity. The essay also includes an in-depth example of a validity argument and validation approach for a test of student argument analysis. In addition to DBERs, this essay should benefit practitioners (e.g., lab directors, faculty members) in the development, evaluation, and/or selection of instruments for their work assessing students or evaluating pedagogical innovations. PMID:26903498

  7. Measuring Resource Utilization: A Systematic Review of Validated Self-Reported Questionnaires.

    PubMed

    Leggett, Laura E; Khadaroo, Rachel G; Holroyd-Leduc, Jayna; Lorenzetti, Diane L; Hanson, Heather; Wagg, Adrian; Padwal, Raj; Clement, Fiona

    2016-03-01

    A variety of methods may be used to obtain costing data. Although administrative data are most commonly used, the data available in these datasets are often limited. An alternative method of obtaining costing data is through self-reported questionnaires. Currently, there are no systematic reviews that summarize self-reported resource utilization instruments from the published literature. The aim of the study was to identify validated self-report healthcare resource use instruments and to map their attributes. A systematic review was conducted. The search identified articles using terms like "healthcare utilization" and "questionnaire." All abstracts and full texts were considered in duplicate. For inclusion, studies had to assess the validity of a self-reported resource use questionnaire, report original data, include adult populations, and the questionnaire had to be publicly available. Data such as the type of resource utilization assessed by each questionnaire and validation findings were extracted from each study. In all, 2343 unique citations were retrieved; 2297 were excluded during abstract review. Forty-six studies were reviewed in full text, and 15 studies were included in this systematic review. Six assessed resource utilization of patients with chronic conditions; 5 assessed mental health service utilization; 3 assessed resource utilization by a general population; and 1 assessed utilization in older populations. The most frequently measured resources included visits to general practitioners and inpatient stays; nonmedical resources were least frequently measured. Self-reported questionnaires on resource utilization had good agreement with administrative data, although visits to general practitioners, outpatient days and nurse visits had poorer agreement. Self-reported questionnaires are a valid method of collecting data on healthcare resource utilization.

  8. A simple method for measurement of maximal downstroke power on friction-loaded cycle ergometer.

    PubMed

    Morin, Jean-Benoît; Belli, Alain

    2004-01-01

    The aim of this study was to propose and validate a post-hoc correction method to obtain maximal power values taking into account the inertia of the flywheel during sprints on friction-loaded cycle ergometers. This correction method was derived from a basic postulate of linear deceleration-time evolution during the initial phase (until maximal power) of a sprint and included simple parameters such as flywheel inertia, maximal velocity, time to reach maximal velocity and friction force. The validity of this model was tested by comparing measured and calculated maximal power values for 19 sprint bouts performed by five subjects against 0.6-1 N kg(-1) friction loads. Non-significant differences between measured and calculated maximal power (1151+/-169 vs. 1148+/-170 W) and a mean error index of 1.31+/-1.20% (ranging from 0.09% to 4.20%) showed the validity of this method. Furthermore, the differences between measured maximal power and power neglecting inertia (20.4+/-7.6%, ranging from 9.5% to 33.2%) emphasized the usefulness of correcting power in studies of anaerobic power that do not account for inertia, as well as the interest of this simple post-hoc method.
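A hedged sketch of this kind of inertia correction: the inertial torque is referred to the flywheel rim as an equivalent force, so corrected instantaneous power is (friction force + inertial force) times rim velocity. The function and parameter names are illustrative, not taken from the paper:

```python
def power_with_inertia(friction_force, v, accel, inertia, radius):
    """Instantaneous power including the flywheel inertia term.

    friction_force: friction load at the flywheel rim (N)
    v:              rim velocity (m/s)
    accel:          rim acceleration dv/dt (m/s^2)
    inertia:        flywheel moment of inertia (kg m^2)
    radius:         flywheel radius (m)
    """
    # Inertial torque I * (a / r), expressed as an equivalent force at the rim
    inertial_force = inertia * accel / radius ** 2
    return (friction_force + inertial_force) * v

# Illustrative values only: neglecting inertia would give 20 N * 10 m/s = 200 W,
# while the corrected figure also includes the power spent accelerating the wheel.
print(power_with_inertia(20.0, 10.0, 5.0, 0.5, 0.25))
```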

  9. Evaluating the statistical performance of less applied algorithms in classification of worldview-3 imagery data in an urbanized landscape

    NASA Astrophysics Data System (ADS)

    Ranaie, Mehrdad; Soffianian, Alireza; Pourmanafi, Saeid; Mirghaffari, Noorollah; Tarkesh, Mostafa

    2018-03-01

    In the recent decade, analyzing remotely sensed imagery has become one of the most common and widely used procedures in environmental studies, in which supervised image classification techniques play a central role. Hence, using a high-resolution WorldView-3 image over a mixed urbanized landscape in Iran, three less commonly applied image classification methods, bagged CART, stochastic gradient boosting, and neural network with feature extraction, were tested and compared with two prevalent methods: random forest and support vector machine with a linear kernel. To do so, each method was run ten times, and three validation techniques were used to estimate the accuracy statistics: cross-validation, independent validation, and validation with the full training data. Moreover, using ANOVA and Tukey's test, the statistical significance of differences between the classification methods was assessed. In general, the results showed that random forest, by a marginal difference over bagged CART and stochastic gradient boosting, was the best-performing method, although based on independent validation there was no significant difference between the performances of the classification methods. It should finally be noted that neural network with feature extraction and the linear support vector machine had better processing speed than the others.
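The validation protocol (each classifier scored by cross-validation) can be illustrated with a minimal pure-Python sketch. The toy one-dimensional data and nearest-class-mean classifier below are stand-ins for the imagery and the five classifiers compared in the study:

```python
# Minimal k-fold cross-validation harness: any classifier exposed as a
# (fit, predict) pair of functions can be scored the same way.
def kfold_accuracy(xs, ys, k, fit, predict):
    n = len(xs)
    folds = [list(range(i, n, k)) for i in range(k)]  # interleaved folds
    accs = []
    for test_idx in folds:
        train_idx = [i for i in range(n) if i not in test_idx]
        model = fit([xs[i] for i in train_idx], [ys[i] for i in train_idx])
        correct = sum(predict(model, xs[i]) == ys[i] for i in test_idx)
        accs.append(correct / len(test_idx))
    return sum(accs) / k

# Toy stand-in classifier: assign each point to the nearest class mean
def fit_means(xs, ys):
    means = {}
    for label in set(ys):
        vals = [x for x, y in zip(xs, ys) if y == label]
        means[label] = sum(vals) / len(vals)
    return means

def predict_mean(means, x):
    return min(means, key=lambda label: abs(x - means[label]))

# Invented, well-separated 1-D "pixels" with two classes
xs = [0.1, 0.2, 0.3, 0.9, 1.0, 1.1, 0.15, 0.95]
ys = ['a', 'a', 'a', 'b', 'b', 'b', 'a', 'b']
acc = kfold_accuracy(xs, ys, 4, fit_means, predict_mean)
print(acc)
```

Repeating this for several classifiers over the same folds yields the per-method accuracy samples that an ANOVA/Tukey comparison, as in the study, would then test for significant differences.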

  10. Development of a refractive error quality of life scale for Thai adults (the REQ-Thai).

    PubMed

    Sukhawarn, Roongthip; Wiratchai, Nonglak; Tatsanavivat, Pyatat; Pitiyanuwat, Somwung; Kanato, Manop; Srivannaboon, Sabong; Guyatt, Gordon H

    2011-08-01

    To develop a scale for measuring refractive error quality of life (QOL) for Thai adults. The full survey comprised 424 respondents from 5 medical centers in Bangkok and from 3 medical centers in Chiangmai, Songkla and KhonKaen provinces. Participants were emmetropes and persons with refractive correction with visual acuity of 20/30 or better. An item reduction process was employed combining 3 methods: expert opinion, the impact method and item-total correlation. Classical reliability testing and validity testing, including convergent, discriminative and construct validity, were performed. The developed questionnaire comprised 87 items in 6 dimensions: 1) quality of vision, 2) visual function, 3) social function, 4) psychological function, 5) symptoms and 6) refractive correction problems. Items use a 5-level Likert scale. The Cronbach's alpha coefficients of its dimensions ranged from 0.756 to 0.979. All validity tests supported the instrument's validity; construct validity was confirmed by confirmatory factor analysis. A short-version questionnaire comprising 48 items with good reliability and validity was also developed. This is the first validated instrument for measuring refractive error quality of life for Thai adults developed with strong research methodology and a large sample size.
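Two of the statistics named above, item-total correlation (via Pearson's r) and Cronbach's alpha, can be sketched as follows. The Likert response matrix in the usage example is invented for illustration:

```python
def mean(xs):
    return sum(xs) / len(xs)

def variance(xs):  # population variance
    m = mean(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def pearson(xs, ys):
    """Pearson correlation, e.g. between one item and the scale total."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    return cov / (variance(xs) * len(xs) * variance(ys) * len(ys)) ** 0.5

def cronbach_alpha(items):
    """items: list of per-item response lists, all of equal length."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]
    item_var = sum(variance(it) for it in items)
    return k / (k - 1) * (1 - item_var / variance(totals))

# Invented 5-point Likert responses: 3 items x 4 respondents.
# Perfectly consistent items give alpha = 1; real scales fall below that.
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
print(round(alpha, 3))
```

In an item-reduction pass, items whose corrected item-total correlation falls below a chosen cutoff would be candidates for removal.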

  11. DDOT MXD+ method development report.

    DOT National Transportation Integrated Search

    2015-09-01

    Mixed-use development has become increasingly common across the country, including Washington, D.C. : However, a straightforward and empirically validated method for evaluating the traffic impacts of such : projects is still needed. The data presente...

  12. 11th GCC Closed Forum: cumulative stability; matrix stability; immunogenicity assays; laboratory manuals; biosimilars; chiral methods; hybrid LBA/LCMS assays; fit-for-purpose validation; China Food and Drug Administration bioanalytical method validation.

    PubMed

    Islam, Rafiq; Briscoe, Chad; Bower, Joseph; Cape, Stephanie; Arnold, Mark; Hayes, Roger; Warren, Mark; Karnik, Shane; Stouffer, Bruce; Xiao, Yi Qun; van der Strate, Barry; Sikkema, Daniel; Fang, Xinping; Tudoroniu, Ariana; Tayyem, Rabab; Brant, Ashley; Spriggs, Franklin; Barry, Colin; Khan, Masood; Keyhani, Anahita; Zimmer, Jennifer; Caturla, Maria Cruz; Couerbe, Philippe; Khadang, Ardeshir; Bourdage, James; Datin, Jim; Zemo, Jennifer; Hughes, Nicola; Fatmi, Saadya; Sheldon, Curtis; Fountain, Scott; Satterwhite, Christina; Colletti, Kelly; Vija, Jenifer; Yu, Mathilde; Stamatopoulos, John; Lin, Jenny; Wilfahrt, Jim; Dinan, Andrew; Ohorodnik, Susan; Hulse, James; Patel, Vimal; Garofolo, Wei; Savoie, Natasha; Brown, Michael; Papac, Damon; Buonarati, Mike; Hristopoulos, George; Beaver, Chris; Boudreau, Nadine; Williard, Clark; Liu, Yansheng; Ray, Gene; Warrino, Dominic; Xu, Allan; Green, Rachel; Hayward-Sewell, Joanne; Marcelletti, John; Sanchez, Christina; Kennedy, Michael; Charles, Jessica St; Bouhajib, Mohammed; Nehls, Corey; Tabler, Edward; Tu, Jing; Joyce, Philip; Iordachescu, Adriana; DuBey, Ira; Lindsay, John; Yamashita, Jim; Wells, Edward

    2018-04-01

    The 11th Global CRO Council Closed Forum was held in Universal City, CA, USA on 3 April 2017. Representatives from international CRO members offering bioanalytical services were in attendance in order to discuss scientific and regulatory issues specific to bioanalysis. The second CRO-Pharma Scientific Interchange Meeting was held on 7 April 2017, which included Pharma representatives' sharing perspectives on the topics discussed earlier in the week with the CRO members. The issues discussed at the meetings included cumulative stability evaluations, matrix stability evaluations, the 2016 US FDA Immunogenicity Guidance and recent and unexpected FDA Form 483s on immunogenicity assays, the bioanalytical laboratory's role in writing PK sample collection instructions, biosimilars, CRO perspectives on the use of chiral versus achiral methods, hybrid LBA/LCMS assays, applications of fit-for-purpose validation and, at the Global CRO Council Closed Forum only, the status and trend of current regulated bioanalytical practice in China under CFDA's new BMV policy. Conclusions from discussions of these topics at both meetings are included in this report.

  13. Development and Validation of an Instrument for Assessing Mathematics Classroom Environment in Tertiary Institutions

    ERIC Educational Resources Information Center

    Yin, Hongbiao; Lu, Genshu

    2014-01-01

    This report describes the development and validation of an instrument, the University Mathematics Classroom Environment Questionnaire (UMCEQ), for assessing the mathematics classroom environment in tertiary institutions in China. Through the use of multiple methods, including exploratory and confirmatory factor analyses, on two independent samples…

  14. A hydrostatic weighing method using total lung capacity and a small tank.

    PubMed Central

    Warner, J G; Yeater, R; Sherwood, L; Weber, K

    1986-01-01

    The purpose of this study was to establish the validity and reliability of a hydrostatic weighing method using total lung capacity (measuring vital capacity with a respirometer at the time of weighing), the prone position, and a small oblong tank. The validity of the method was established by comparing the TLC prone (tank) method against three hydrostatic weighing methods administered in a pool. The three methods included residual volume seated, TLC seated and TLC prone. Eighty male and female subjects were underwater weighed using each of the four methods. Validity coefficients for per cent body fat between the TLC prone (tank) method and the RV seated (pool), TLC seated (pool) and TLC prone (pool) methods were .98, .99 and .99, respectively. A randomised complete block ANOVA found significant differences between the RV seated (pool) method and each of the three TLC methods with respect to both body density and per cent body fat. The differences were negligible with respect to hydrostatic weighing error. Reliability of the TLC prone (tank) method was established by weighing twenty subjects three different times with ten-minute intervals between testing. Multiple correlations yielded reliability coefficients for body density and per cent body fat values of .99 and .99, respectively. It was concluded that the TLC prone (tank) method is valid, reliable and a favourable method of hydrostatic weighing. PMID:3697596

  15. A hydrostatic weighing method using total lung capacity and a small tank.

    PubMed

    Warner, J G; Yeater, R; Sherwood, L; Weber, K

    1986-03-01

    The purpose of this study was to establish the validity and reliability of a hydrostatic weighing method using total lung capacity (measuring vital capacity with a respirometer at the time of weighing), the prone position, and a small oblong tank. The validity of the method was established by comparing the TLC prone (tank) method against three hydrostatic weighing methods administered in a pool. The three methods included residual volume seated, TLC seated and TLC prone. Eighty male and female subjects were underwater weighed using each of the four methods. Validity coefficients for per cent body fat between the TLC prone (tank) method and the RV seated (pool), TLC seated (pool) and TLC prone (pool) methods were .98, .99 and .99, respectively. A randomised complete block ANOVA found significant differences between the RV seated (pool) method and each of the three TLC methods with respect to both body density and per cent body fat. The differences were negligible with respect to hydrostatic weighing error. Reliability of the TLC prone (tank) method was established by weighing twenty subjects three different times with ten-minute intervals between testing. Multiple correlations yielded reliability coefficients for body density and per cent body fat values of .99 and .99, respectively. It was concluded that the TLC prone (tank) method is valid, reliable and a favourable method of hydrostatic weighing.

  16. The bottom-up approach to integrative validity: a new perspective for program evaluation.

    PubMed

    Chen, Huey T

    2010-08-01

    The Campbellian validity model and the traditional top-down approach to validity have had a profound influence on research and evaluation. That model includes the concepts of internal and external validity and, within that model, the preeminence of internal validity as demonstrated in the top-down approach. Evaluators and researchers have, however, increasingly recognized that over-emphasis on internal validity reduces an evaluation's usefulness and contributes to the gulf between academic and practical communities regarding interventions. This article examines the limitations of the Campbellian validity model and the top-down approach and provides a comprehensive alternative, known as the integrative validity model for program evaluation. The integrative validity model includes the concept of viable validity, which is predicated on a bottom-up approach to validity. This approach better reflects stakeholders' evaluation views and concerns, makes external validity workable, and is therefore a preferable alternative for the evaluation of health promotion/social betterment programs. The integrative validity model and the bottom-up approach enable evaluators to meet scientific and practical requirements, help advance external validity, and gain a new perspective on methods. The new perspective also furnishes a balanced view of credible evidence and offers an alternative perspective on funding. Copyright (c) 2009 Elsevier Ltd. All rights reserved.

  17. Method of validating measurement data of a process parameter from a plurality of individual sensor inputs

    DOEpatents

    Scarola, Kenneth; Jamison, David S.; Manazir, Richard M.; Rescorl, Robert L.; Harmon, Daryl L.

    1998-01-01

    A method for generating a validated measurement of a process parameter at a point in time by using a plurality of individual sensor inputs from a scan of said sensors at said point in time. The sensor inputs from said scan are stored, and a first validation pass is initiated by computing an initial average of all stored sensor inputs. Each sensor input is deviation checked by comparing each input, including a preset tolerance, against the initial average input. If the first deviation check is unsatisfactory, the sensor which produced the unsatisfactory input is flagged as suspect. It is then determined whether at least two of the inputs have not been flagged as suspect and are therefore considered good inputs. If two or more inputs are good, a second validation pass is initiated by computing a second average of all the good sensor inputs, and deviation checking the good inputs by comparing each good input, including a preset tolerance, against the second average. If the second deviation check is satisfactory, the second average is displayed as the validated measurement and the suspect sensor is flagged as bad. A validation fault occurs if at least two inputs are not considered good, or if the second deviation check is not satisfactory. In the latter situation the inputs from all the sensors are compared against the last validated measurement, and the value from the sensor input that deviates the least from the last valid measurement is displayed.
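The two-pass scheme above can be sketched in Python. This is a simplified reading of the patent abstract, not the claimed implementation; the tolerance handling, return shape, and all numeric values are illustrative:

```python
def validate_scan(inputs, tolerance, last_valid):
    """Return (validated_value, suspect_inputs) per the two-pass scheme."""
    # First pass: deviation-check every input against the initial average
    initial_avg = sum(inputs) / len(inputs)
    good = [x for x in inputs if abs(x - initial_avg) <= tolerance]
    suspect = [x for x in inputs if abs(x - initial_avg) > tolerance]
    if len(good) < 2:
        # Validation fault: fall back to the input closest to the
        # last validated measurement
        return min(inputs, key=lambda x: abs(x - last_valid)), suspect
    # Second pass: average the good inputs and deviation-check them again
    second_avg = sum(good) / len(good)
    if all(abs(x - second_avg) <= tolerance for x in good):
        return second_avg, suspect
    return min(inputs, key=lambda x: abs(x - last_valid)), suspect

# Four redundant sensors, one clearly failed (14.0)
value, bad = validate_scan([10.1, 10.2, 9.9, 14.0], tolerance=1.5, last_valid=10.0)
print(round(value, 3), bad)
```

The failed sensor is excluded from the second average, so the validated value is computed only from the three consistent inputs.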

  18. Evaluation of biologic occupational risk control practices: quality indicators development and validation.

    PubMed

    Takahashi, Renata Ferreira; Gryschek, Anna Luíza F P L; Izumi Nichiata, Lúcia Yasuko; Lacerda, Rúbia Aparecida; Ciosak, Suely Itsuko; Gir, Elucir; Padoveze, Maria Clara

    2010-05-01

    There is growing demand for the adoption of qualification systems for health care practices. This study is aimed at describing the development and validation of indicators for evaluation of biologic occupational risk control programs. The study involved 3 stages: (1) setting up a research team, (2) development of indicators, and (3) validation of the indicators by a team of specialists recruited to validate each attribute of the developed indicators. The content validation method was used for the validation, and a psychometric scale was developed for the specialists' assessment. A consensus technique was used, and every attribute that obtained a Content Validity Index of at least 0.75 was approved. Eight indicators were developed for the evaluation of the biologic occupational risk prevention program, with emphasis on accidents caused by sharp instruments and occupational tuberculosis prevention. The indicators included evaluation of the structure, process, and results at the prevention and biologic risk control levels. The majority of indicators achieved a favorable consensus regarding all validated attributes. The developed indicators were considered validated, and the method used for construction and validation proved to be effective. Copyright (c) 2010 Association for Professionals in Infection Control and Epidemiology, Inc. Published by Mosby, Inc. All rights reserved.
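The consensus rule described (approve an attribute when its Content Validity Index reaches 0.75) can be sketched as follows. The ratings and the 4-point relevance scale are illustrative assumptions, since the abstract does not state the scale the specialists used:

```python
def content_validity_index(ratings, relevant_threshold=3):
    """Fraction of specialists rating an attribute as relevant.

    ratings: one rating per specialist (assumed 4-point scale here);
    a rating >= relevant_threshold counts as 'relevant'.
    """
    relevant = sum(1 for r in ratings if r >= relevant_threshold)
    return relevant / len(ratings)

# Invented ratings from eight specialists for one indicator attribute
ratings = [4, 3, 4, 2, 3, 4, 3, 1]
cvi = content_validity_index(ratings)
print(cvi, cvi >= 0.75)  # approved exactly at the 0.75 cutoff
```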

  19. Multiple Imputation based Clustering Validation (MIV) for Big Longitudinal Trial Data with Missing Values in eHealth.

    PubMed

    Zhang, Zhaoyang; Fang, Hua; Wang, Honggang

    2016-06-01

    Web-delivered trials are an important component in eHealth services. These trials, mostly behavior-based, generate big heterogeneous data that are longitudinal, high dimensional with missing values. Unsupervised learning methods have been widely applied in this area; however, validating the optimal number of clusters has been challenging. Built upon our multiple imputation (MI) based fuzzy clustering, MIfuzzy, we proposed a new multiple imputation based validation (MIV) framework and corresponding MIV algorithms for clustering big longitudinal eHealth data with missing values, more generally for fuzzy-logic based clustering methods. Specifically, we detect the optimal number of clusters by auto-searching and -synthesizing a suite of MI-based validation methods and indices, including conventional (bootstrap or cross-validation based) and emerging (modularity-based) validation indices for general clustering methods as well as the specific one (Xie and Beni) for fuzzy clustering. The MIV performance was demonstrated on a big longitudinal dataset from a real web-delivered trial and using simulation. The results indicate that the MI-based Xie and Beni index for fuzzy clustering is more appropriate for detecting the optimal number of clusters for such complex data. The MIV concept and algorithms could be easily adapted to different types of clustering that could process big incomplete longitudinal trial data in eHealth services.
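A minimal sketch of the Xie and Beni index mentioned above, written for one-dimensional data: lower values indicate a more compact, better-separated fuzzy partition, which is why it can be scanned over candidate cluster counts. The membership matrix and centers in the usage example are invented, not outputs of MIfuzzy:

```python
def xie_beni(data, centers, memberships, m=2.0):
    """Xie-Beni index for a fuzzy partition of 1-D data.

    data:        list of points
    centers:     list of cluster centers
    memberships: memberships[i][k] = degree of point k in cluster i
    m:           fuzzifier exponent (commonly 2)
    """
    n = len(data)
    # Weighted within-cluster scatter (compactness)
    compactness = sum(
        memberships[i][k] ** m * (data[k] - centers[i]) ** 2
        for i in range(len(centers)) for k in range(n)
    )
    # Squared distance between the two closest centers (separation)
    separation = min(
        (centers[i] - centers[j]) ** 2
        for i in range(len(centers)) for j in range(len(centers)) if i != j
    )
    return compactness / (n * separation)

data = [0.0, 0.1, 1.0, 1.1]
centers = [0.05, 1.05]
u = [[0.9, 0.9, 0.1, 0.1],   # memberships in cluster 0
     [0.1, 0.1, 0.9, 0.9]]   # memberships in cluster 1
print(xie_beni(data, centers, u))
```

To choose the number of clusters, one would compute this index for each candidate partition and prefer the smallest value; in an MI setting, the index would be computed per imputed dataset and then synthesized.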

  20. Multiple Imputation based Clustering Validation (MIV) for Big Longitudinal Trial Data with Missing Values in eHealth

    PubMed Central

    Zhang, Zhaoyang; Wang, Honggang

    2016-01-01

    Web-delivered trials are an important component in eHealth services. These trials, mostly behavior-based, generate big heterogeneous data that are longitudinal, high dimensional with missing values. Unsupervised learning methods have been widely applied in this area; however, validating the optimal number of clusters has been challenging. Built upon our multiple imputation (MI) based fuzzy clustering, MIfuzzy, we proposed a new multiple imputation based validation (MIV) framework and corresponding MIV algorithms for clustering big longitudinal eHealth data with missing values, more generally for fuzzy-logic based clustering methods. Specifically, we detect the optimal number of clusters by auto-searching and -synthesizing a suite of MI-based validation methods and indices, including conventional (bootstrap or cross-validation based) and emerging (modularity-based) validation indices for general clustering methods as well as the specific one (Xie and Beni) for fuzzy clustering. The MIV performance was demonstrated on a big longitudinal dataset from a real web-delivered trial and using simulation. The results indicate that the MI-based Xie and Beni index for fuzzy clustering is more appropriate for detecting the optimal number of clusters for such complex data. The MIV concept and algorithms could be easily adapted to different types of clustering that could process big incomplete longitudinal trial data in eHealth services. PMID:27126063

  1. Measuring Adverse Events in Helicopter Emergency Medical Services: Establishing Content Validity

    PubMed Central

    Patterson, P. Daniel; Lave, Judith R.; Martin-Gill, Christian; Weaver, Matthew D.; Wadas, Richard J.; Arnold, Robert M.; Roth, Ronald N.; Mosesso, Vincent N.; Guyette, Francis X.; Rittenberger, Jon C.; Yealy, Donald M.

    2015-01-01

    Introduction We sought to create a valid framework for detecting Adverse Events (AEs) in the high-risk setting of Helicopter Emergency Medical Services (HEMS). Methods We assembled a panel of 10 expert clinicians (n=6 emergency medicine physicians and n=4 prehospital nurses and flight paramedics) affiliated with a large multi-state HEMS organization in the Northeast U.S. We used a modified Delphi technique to develop a framework for detecting AEs associated with the treatment of critically ill or injured patients. We used a widely applied measure, the Content Validity Index (CVI), to quantify the validity of the framework’s content. Results The expert panel of 10 clinicians reached consensus on a common AE definition and four-step protocol/process for AE detection in HEMS. The consensus-based framework is composed of three main components: 1) a trigger tool, 2) a method for rating proximal cause, and 3) a method for rating AE severity. The CVI findings isolate components of the framework considered content valid. Conclusions We demonstrate a standardized process for the development of a content valid framework for AE detection. The framework is a model for the development of a method for AE identification in other settings, including ground-based EMS. PMID:24003951

  2. Hopes and Cautions for Instrument-Based Evaluation of Consent Capacity: Results of a Construct Validity Study of Three Instruments

    PubMed Central

    Moye, Jennifer; Azar, Annin R.; Karel, Michele J.; Gurrera, Ronald J.

    2016-01-01

    Does instrument-based evaluation of consent capacity increase the precision and validity of competency assessment, or does ostensible precision provide a false sense of confidence without in fact improving validity? In this paper we critically examine the evidence for construct validity of three instruments for measuring four functional abilities important in consent capacity: understanding, appreciation, reasoning, and expressing a choice. Instrument-based assessment of these abilities is compared through investigation of a multitrait-multimethod matrix in 88 older adults with mild to moderate dementia. Results find variable support for validity. There appears to be strong evidence of good hetero-method validity for the measurement of understanding, mixed evidence for validity in the measurement of reasoning, and strong evidence of poor hetero-method validity for the concepts of appreciation and expressing a choice, although the latter is likely due to extreme range restrictions. The development of empirically based tools for use in capacity evaluation should ultimately enhance the reliability and validity of assessment, yet clearly more research is needed to define and measure the constructs of decisional capacity. We would also emphasize that instrument-based assessment of capacity is only one part of a comprehensive evaluation of competency, which includes consideration of diagnosis, psychiatric and/or cognitive symptomatology, risk involved in the situation, and individual and cultural differences. PMID:27330455

  3. Estimating Rooftop Suitability for PV: A Review of Methods, Patents, and Validation Techniques

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Melius, J.; Margolis, R.; Ong, S.

    2013-12-01

    A number of methods have been developed using remote sensing data to estimate rooftop area suitable for the installation of photovoltaics (PV) at various geospatial resolutions. This report reviews the literature and patents on methods for estimating rooftop area appropriate for PV, including constant-value methods, manual selection methods, and GIS-based methods. This report also presents NREL's proposed method for estimating suitable rooftop area for PV using Light Detection and Ranging (LiDAR) data in conjunction with a GIS model to predict areas with appropriate slope, orientation, and sunlight. NREL's method is validated against solar installation data from New Jersey, Colorado, and California to compare modeled results to actual on-the-ground measurements.

  4. Physiotherapy for patients with soft tissue shoulder disorders: a systematic review of randomised clinical trials.

    PubMed Central

    van der Heijden, G. J.; van der Windt, D. A.; de Winter, A. F.

    1997-01-01

    OBJECTIVE: To assess the effectiveness of physiotherapy for patients with soft tissue shoulder disorders. DESIGN: A systematic computerised literature search of Medline and Embase, supplemented with citation tracking, for relevant trials with random allocation published before 1996. SUBJECTS: Patients treated with physiotherapy for disorders of soft tissue of the shoulder. MAIN OUTCOME MEASURES: Success rates, mobility, pain, functional status. RESULTS: Six of the 20 assessed trials satisfied at least five of eight validity criteria. Assessment of methods was often hampered by insufficient information on various validity criteria, and trials were often flawed by lack of blinding, high proportions of withdrawals from treatment, and high proportions of missing values. Trial sizes were small: only six trials included intervention groups of more than 25 patients. Ultrasound therapy, evaluated in six trials, was not shown to be effective. Four other trials favoured physiotherapy (laser therapy or manipulation), but the validity of their methods was unsatisfactory. CONCLUSIONS: There is evidence that ultrasound therapy is ineffective in the treatment of soft tissue shoulder disorders. Due to small trial sizes and unsatisfactory methods, evidence for the effectiveness of other methods of physiotherapy is inconclusive. For all methods of treatment, trials were too heterogeneous with respect to included patients, index and reference treatments, and follow up to merit valid statistical pooling. Future studies should show whether physiotherapy is superior to treatment with drugs, steroid injections, or a wait and see policy. PMID:9233322

  5. Extension and Validation of a Hybrid Particle-Finite Element Method for Hypervelocity Impact Simulation. Chapter 2

    NASA Technical Reports Server (NTRS)

    Fahrenthold, Eric P.; Shivarama, Ravishankar

    2004-01-01

    The hybrid particle-finite element method of Fahrenthold and Horban, developed for the simulation of hypervelocity impact problems, has been extended to include new formulations of the particle-element kinematics, additional constitutive models, and an improved numerical implementation. The extended formulation has been validated in three dimensional simulations of published impact experiments. The test cases demonstrate good agreement with experiment, good parallel speedup, and numerical convergence of the simulation results.

  6. Integrating cell biology and proteomic approaches in plants.

    PubMed

    Takáč, Tomáš; Šamajová, Olga; Šamaj, Jozef

    2017-10-03

    Significant improvements in protein extraction, separation, mass spectrometry and bioinformatics have nurtured advances in proteomics over the past years. The usefulness of proteomics in the investigation of biological problems can be enhanced by integration with other experimental methods from cell biology, genetics, biochemistry, pharmacology, molecular biology and other omics approaches, including transcriptomics and metabolomics. This review aims to summarize current trends integrating cell biology and proteomics in plant science. Cell biology approaches are most frequently used in proteomic studies investigating subcellular and developmental proteomes; however, they have also been employed in proteomic studies exploring abiotic and biotic stress responses, vesicular transport, the cytoskeleton and protein posttranslational modifications. They are used either for detailed cellular or ultrastructural characterization of the object of a proteomic study, for validation of proteomic results, or to expand proteomic data. In this respect, a broad spectrum of methods is employed to support proteomic studies, including ultrastructural electron microscopy, histochemical staining, immunochemical localization, in vivo imaging of fluorescently tagged proteins and visualization of protein-protein interactions. Thus, cell biological observations on fixed or living cell compartments, cells, tissues and organs are feasible, and in some cases fundamental, for the validation and complementation of proteomic data. Validation of proteomic data by independent experimental methods requires the development of new complementary approaches. The benefits of cell biology methods and techniques are not sufficiently highlighted in current proteomic studies. This encouraged us to review the most popular cell biology methods used in proteomic studies and to evaluate their relevance and potential for proteomic data validation and for enriching purely proteomic analyses. We also provide examples of representative studies combining proteomic and cell biology methods for various purposes. Integrating cell biology approaches with proteomic ones allows validation and better interpretation of proteomic data. Moreover, cell biology methods remarkably extend the knowledge provided by proteomic studies and may be fundamental for the functional complementation of proteomic data. This review article summarizes the current literature linking proteomics with cell biology. Copyright © 2017 Elsevier B.V. All rights reserved.

  7. What are validated self-report adherence scales really measuring?: a systematic review

    PubMed Central

    Nguyen, Thi-My-Uyen; Caze, Adam La; Cottrell, Neil

    2014-01-01

    Aims Medication non-adherence is a significant health problem. There are numerous methods for measuring adherence, but no single method performs well on all criteria. The purpose of this systematic review is to (i) identify self-report medication adherence scales that have been correlated with comparison measures of medication-taking behaviour, (ii) assess how these scales measure adherence and (iii) explore how these adherence scales have been validated. Methods Cinahl and PubMed databases were used to search articles written in English on the development or validation of medication adherence scales dating to August 2012. The search terms used were medication adherence, medication non-adherence, medication compliance and names of each scale. Data such as barriers identified and validation comparison measures were extracted and compared. Results Sixty articles were included in the review, which consisted of 43 adherence scales. Adherence scales include items that either elicit information regarding the patient's medication-taking behaviour and/or attempts to identify barriers to good medication-taking behaviour or beliefs associated with adherence. The validation strategies employed depended on whether the focus of the scale was to measure medication-taking behaviour or identify barriers or beliefs. Conclusions Supporting patients to be adherent requires information on their medication-taking behaviour, barriers to adherence and beliefs about medicines. Adherence scales have the potential to explore these aspects of adherence, but currently there has been a greater focus on measuring medication-taking behaviour. Selecting the ‘right’ adherence scale(s) requires consideration of what needs to be measured and how (and in whom) the scale has been validated. PMID:23803249

  8. Validating Remotely Sensed Land Surface Evapotranspiration Based on Multi-scale Field Measurements

    NASA Astrophysics Data System (ADS)

    Jia, Z.; Liu, S.; Ziwei, X.; Liang, S.

    2012-12-01

    The land surface evapotranspiration plays an important role in the surface energy balance and the water cycle. There have been significant technical and theoretical advances in our knowledge of evapotranspiration over the past two decades. Acquisition of the temporally and spatially continuous distribution of evapotranspiration using remote sensing technology has attracted the widespread attention of researchers and managers. However, remote sensing technology still has many uncertainties coming from model mechanism, model inputs, parameterization schemes, and scaling issue in the regional estimation. Achieving remotely sensed evapotranspiration (RS_ET) with confident certainty is required but difficult. As a result, it is indispensable to develop the validation methods to quantitatively assess the accuracy and error sources of the regional RS_ET estimations. This study proposes an innovative validation method based on multi-scale evapotranspiration acquired from field measurements, with the validation results including the accuracy assessment, error source analysis, and uncertainty analysis of the validation process. It is a potentially useful approach to evaluate the accuracy and analyze the spatio-temporal properties of RS_ET at both the basin and local scales, and is appropriate to validate RS_ET in diverse resolutions at different time-scales. An independent RS_ET validation using this method was presented over the Hai River Basin, China in 2002-2009 as a case study. Validation at the basin scale showed good agreements between the 1 km annual RS_ET and the validation data such as the water balanced evapotranspiration, MODIS evapotranspiration products, precipitation, and landuse types. Validation at the local scale also had good results for monthly, daily RS_ET at 30 m and 1 km resolutions, comparing to the multi-scale evapotranspiration measurements from the EC and LAS, respectively, with the footprint model over three typical landscapes. 
Although some validation experiments demonstrated that the models yield accurate estimates at flux measurement sites, the question remains whether they perform well over the broader landscape. Moreover, a large number of RS_ET products have been released in recent years. Thus, we also pay attention to the cross-validation of RS_ET derived from multi-source models. "The Multi-scale Observation Experiment on Evapotranspiration over Heterogeneous Land Surfaces: Flux Observation Matrix" campaign was carried out in the middle reaches of the Heihe River Basin, China, in 2012. Flux measurements from an observation matrix composed of 22 EC systems and 4 LASs were acquired to investigate the cross-validation of multi-source models over different landscapes. In this case, six remote sensing models, including an empirical statistical model, one-source and two-source models, a Penman-Monteith equation based model, a Priestley-Taylor equation based model, and a complementary relationship based model, were used to perform an intercomparison. All the results from the two cases of RS_ET validation showed that the proposed validation methods are reasonable and feasible.
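    The basin-scale water-balance evapotranspiration used as validation data above follows the standard hydrological closure ET = P - R - ΔS. The sketch below illustrates that closure only; the function name and the annual totals are made up for illustration, not values from the study.

```python
def water_balance_et(precip_mm, runoff_mm, storage_change_mm):
    """Annual basin-scale ET (mm) from the water balance: ET = P - R - dS."""
    return precip_mm - runoff_mm - storage_change_mm

# Hypothetical annual totals for a basin (mm)
et = water_balance_et(precip_mm=550.0, runoff_mm=120.0, storage_change_mm=-15.0)
print(et)  # 445.0
```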

  9. [Validation and verification of microbiology methods].

    PubMed

    Camaró-Sala, María Luisa; Martínez-García, Rosana; Olmos-Martínez, Piedad; Catalá-Cuenca, Vicente; Ocete-Mochón, María Dolores; Gimeno-Cardona, Concepción

    2015-01-01

    Clinical microbiologists should ensure, to the maximum level allowed by scientific and technical development, the reliability of their results. This implies that, in addition to meeting the technical criteria that ensure validity, tests must be performed under conditions that allow comparable results to be obtained regardless of the laboratory performing them. In this sense, the use of recognized and accepted reference methods is the most effective tool for providing these guarantees. Activities related to the verification and validation of analytical methods have become very important, given the continuous development and updating of techniques, increasingly complex analytical equipment, and the interest of professionals in ensuring quality processes and results. The definitions of validation and verification are described, along with the different types of validation/verification, the types of methods, and the level of validation necessary depending on the degree of standardization. The situations in which validation/verification is mandatory and/or recommended are discussed, including those particularly related to validation in microbiology. The importance of promoting the use of reference strains as controls in microbiology and the use of standard controls is stressed, as well as the importance of participation in External Quality Assessment programs to demonstrate technical competence. Emphasis is placed on how to calculate some of the parameters required for validation/verification, such as accuracy and precision. The development of these concepts can be found in SEIMC microbiological procedure number 48: «Validation and verification of microbiological methods» (www.seimc.org/protocols/microbiology).

  10. Update of Standard Practices for New Method Validation in Forensic Toxicology.

    PubMed

    Wille, Sarah M R; Coucke, Wim; De Baere, Thierry; Peters, Frank T

    2017-01-01

    International agreement concerning validation guidelines is important for quality forensic bioanalytical research and routine applications, as everything starts with the reporting of reliable analytical data. Standards for fundamental validation parameters are provided in guidelines such as those from the US Food and Drug Administration (FDA), the European Medicines Agency (EMA), the German-speaking Gesellschaft für Toxikologie und Forensische Chemie (GTFCh), and the Scientific Working Group for Forensic Toxicology (SWGTOX). These validation parameters include selectivity, matrix effects, method limits, calibration, accuracy, and stability, as well as other parameters such as carryover, dilution integrity, and incurred sample reanalysis. It is, however, not easy for laboratories to implement these guidelines in practice, as the international guidelines remain nonbinding protocols that depend on the analytical technique applied and that need to be adapted to the analyst's method requirements and the application type. In this manuscript, a review of the current guidelines and literature concerning bioanalytical validation parameters in a forensic context is given and discussed. In addition, suggestions for the experimental set-up, the pros and cons of statistical approaches, and adequate acceptance criteria for the validation of bioanalytical applications are given.

  11. Measuring adverse events in helicopter emergency medical services: establishing content validity.

    PubMed

    Patterson, P Daniel; Lave, Judith R; Martin-Gill, Christian; Weaver, Matthew D; Wadas, Richard J; Arnold, Robert M; Roth, Ronald N; Mosesso, Vincent N; Guyette, Francis X; Rittenberger, Jon C; Yealy, Donald M

    2014-01-01

    We sought to create a valid framework for detecting adverse events (AEs) in the high-risk setting of helicopter emergency medical services (HEMS). We assembled a panel of 10 expert clinicians (n = 6 emergency medicine physicians and n = 4 prehospital nurses and flight paramedics) affiliated with a large multistate HEMS organization in the Northeast US. We used a modified Delphi technique to develop a framework for detecting AEs associated with the treatment of critically ill or injured patients. We used a widely applied measure, the content validity index (CVI), to quantify the validity of the framework's content. The expert panel of 10 clinicians reached consensus on a common AE definition and four-step protocol/process for AE detection in HEMS. The consensus-based framework is composed of three main components: (1) a trigger tool, (2) a method for rating proximal cause, and (3) a method for rating AE severity. The CVI findings isolate components of the framework considered content valid. We demonstrate a standardized process for the development of a content-valid framework for AE detection. The framework is a model for the development of a method for AE identification in other settings, including ground-based EMS.
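    As an illustration of the content validity index used above: the item-level CVI (I-CVI) is conventionally the share of experts rating an item relevant (3 or 4 on a 4-point scale), and a scale-level CVI can be taken as the average of the I-CVIs. The ratings below are hypothetical, not the HEMS panel's data.

```python
def item_cvi(ratings, relevant={3, 4}):
    """Item-level content validity index: share of experts rating the item relevant."""
    return sum(r in relevant for r in ratings) / len(ratings)

def scale_cvi_ave(all_ratings):
    """Scale-level CVI, averaging method: mean of the item-level CVIs."""
    items = [item_cvi(r) for r in all_ratings]
    return sum(items) / len(items)

# Hypothetical ratings from 10 experts (4-point relevance scale) for 3 items
ratings = [
    [4, 4, 3, 4, 3, 4, 4, 3, 4, 4],  # all 10 experts rate it relevant
    [4, 3, 2, 4, 3, 4, 1, 3, 4, 4],  # 8 of 10
    [2, 2, 3, 4, 1, 4, 2, 3, 4, 2],  # 5 of 10
]
print([item_cvi(r) for r in ratings])  # [1.0, 0.8, 0.5]
```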

  12. Validation of Self-Report on Smoking among University Students in Korea

    ERIC Educational Resources Information Center

    Lee, Chung Yul; Shin, Sunmi; Lee, Hyeon Kyeong; Hong, Yoon Mi

    2009-01-01

    Objective: To validate the self-reported smoking status of Korean university students. Methods: Subjects included 322 university students in Korea, who participated in an annual health screening. Data on smoking were collected through a self-reported questionnaire and a urine test. The data were analyzed by the McNemar test. Results: In the…

  13. A dual validation approach to detect anthelmintic residues in bovine liver over an extended concentration range

    USDA-ARS?s Scientific Manuscript database

    This paper describes a method for the detection and quantification of 38 of the most widely used anthelmintics (including benzimidazoles, macrocyclic lactones and flukicides) in bovine liver at MRL and non-MRL level. A dual validation approach was adapted to reliably detect anthelmintic residues ov...

  14. Initial Development and Validation of the BullyHARM: The Bullying, Harassment, and Aggression Receipt Measure

    ERIC Educational Resources Information Center

    Hall, William J.

    2016-01-01

    This article describes the development and preliminary validation of the Bullying, Harassment, and Aggression Receipt Measure (BullyHARM). The development of the BullyHARM involved a number of steps and methods, including a literature review, expert review, cognitive testing, readability testing, data collection from a large sample, reliability…

  15. Development and Validation of a Measure of Interpersonal Strengths: The Inventory of Interpersonal Strengths

    ERIC Educational Resources Information Center

    Hatcher, Robert L.; Rogers, Daniel T.

    2009-01-01

    An Inventory of Interpersonal Strengths (IIS) was developed and validated in a series of large college student samples. Based on interpersonal theory and associated methods, the IIS was designed to assess positive characteristics representing the full range of interpersonal domains, including those generally thought to have negative qualities…

  16. Validated reversed phase LC method for quantitative analysis of polymethoxyflavones in citrus peel extracts.

    PubMed

    Wang, Zhenyu; Li, Shiming; Ferguson, Stephen; Goodnow, Robert; Ho, Chi-Tang

    2008-01-01

    Polymethoxyflavones (PMFs), which exist exclusively in the citrus genus, have biological activities including anti-inflammatory, anticarcinogenic, and antiatherogenic properties. A validated RPLC method was developed for quantitative analysis of six major PMFs, namely nobiletin, tangeretin, sinensetin, 5,6,7,4'-tetramethoxyflavone, 3,5,6,7,3',4'-hexamethoxyflavone, and 3,5,6,7,8,3',4'-heptamethoxyflavone. The polar embedded LC stationary phase was able to fully resolve the six analogues. The developed method was fully validated in terms of linearity, accuracy, precision, sensitivity, and system suitability. The LOD of the method was calculated as 0.15 microg/mL, and the recovery rate was between 97.0 and 105.1%. This analytical method was successfully applied to quantify the individual PMFs in four commercially available citrus peel extracts (CPEs). The extracts showed significant differences in PMF composition and concentration. This method may provide a simple, rapid, and reliable tool to help reveal the correlation between the bioactivity of PMF extracts and the individual PMF content.
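    The LOD and recovery figures reported above follow standard analytical conventions; a hedged sketch of the usual calculations (an ICH-style LOD = 3.3 sigma / slope, and recovery = measured / spiked x 100) is given below. The numbers are invented for illustration, not the paper's data.

```python
def lod(blank_sd, slope):
    """ICH-style limit of detection: LOD = 3.3 * sigma / S,
    with sigma the response SD of blanks and S the calibration slope."""
    return 3.3 * blank_sd / slope

def recovery_pct(measured, spiked):
    """Percent recovery of a spiked sample."""
    return 100.0 * measured / spiked

# Hypothetical calibration slope (peak area per ug/mL) and blank response SD
lod_val = lod(blank_sd=0.9, slope=20.0)
rec = recovery_pct(measured=97.0, spiked=100.0)
print(lod_val, rec)
```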

  17. R package PRIMsrc: Bump Hunting by Patient Rule Induction Method for Survival, Regression and Classification

    PubMed Central

    Dazard, Jean-Eudes; Choe, Michael; LeBlanc, Michael; Rao, J. Sunil

    2015-01-01

    PRIMsrc is a novel implementation of a non-parametric bump hunting procedure, based on the Patient Rule Induction Method (PRIM), offering a unified treatment of outcome variables, including censored time-to-event (Survival), continuous (Regression), and discrete (Classification) responses. To fit the model, it uses a recursive peeling procedure with specific peeling criteria and stopping rules depending on the response. To validate the model, it provides an objective function based on prediction error or another specific statistic, as well as two alternative cross-validation techniques adapted to the task of decision-rule making and estimation in the three types of settings. PRIMsrc comes as an open source R package, including at this point: (i) a main function for fitting a Survival Bump Hunting model, with various options allowing cross-validated model selection to control model size (#covariates) and model complexity (#peeling steps) and generation of cross-validated end-point estimates; (ii) parallel computing; and (iii) various S3-generic and specific plotting functions for data visualization, diagnostics, prediction, summary, and display of results. It is available on CRAN and GitHub. PMID:26798326
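    PRIMsrc itself is an R package; purely to illustrate the top-down peeling idea at the heart of PRIM, here is a minimal numpy sketch for a continuous response: at each step, remove the alpha-fraction edge (lowest or highest quantile of one covariate) that maximizes the mean of the response inside the remaining box. All names and the toy data are ours, not the package's API.

```python
import numpy as np

def prim_peel(X, y, alpha=0.1, min_support=0.1):
    """Greedy top-down peeling: repeatedly trim an alpha-fraction edge of one
    covariate, choosing the trim that maximizes the in-box mean of y, until
    the box support falls to min_support."""
    inside = np.ones(len(y), dtype=bool)
    steps = []
    while inside.mean() > min_support:
        best = None
        for j in range(X.shape[1]):
            for side in ("low", "high"):
                q = np.quantile(X[inside, j], alpha if side == "low" else 1 - alpha)
                keep = inside & ((X[:, j] >= q) if side == "low" else (X[:, j] <= q))
                if keep.sum() == 0 or keep.sum() == inside.sum():
                    continue  # trivial peel, skip
                score = y[keep].mean()
                if best is None or score > best[0]:
                    best = (score, keep, (j, side, q))
        if best is None:
            break
        _, inside, step = best
        steps.append(step)
    return inside, steps

rng = np.random.default_rng(0)
X = rng.uniform(size=(500, 2))
y = (X[:, 0] > 0.7).astype(float) + rng.normal(scale=0.1, size=500)  # bump at X0 > 0.7
box, steps = prim_peel(X, y)
print(y[box].mean() > y.mean())  # the peeled box concentrates high responses
```

In PRIMsrc proper, the peeling criterion and stopping rules depend on the response type (e.g. survival), and model size and complexity are chosen by cross-validation rather than fixed as here.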

  18. A critical analysis of test-retest reliability in instrument validation studies of cancer patients under palliative care: a systematic review

    PubMed Central

    2014-01-01

    Background Patient-reported outcome validation needs to achieve validity and reliability standards. Among reliability analysis parameters, test-retest reliability is an important psychometric property. Retested patients must be in a clinically stable condition. This is particularly problematic in palliative care (PC) settings because advanced cancer patients are prone to a faster rate of clinical deterioration. The aim of this study was to evaluate the methods by which multi-symptom and health-related quality of life (HRQoL) patient-reported outcomes (PROs) have been validated in oncological PC settings with regard to test-retest reliability. Methods A systematic search of PubMed (1966 to June 2013), EMBASE (1980 to June 2013), PsychInfo (1806 to June 2013), CINAHL (1980 to June 2013), SCIELO (1998 to June 2013), and specific PRO databases was performed. Studies were included if they described a set of validation studies for an instrument developed to measure multi-symptom or multidimensional HRQoL in advanced cancer patients under PC. The COSMIN checklist was used to rate the methodological quality of the study designs. Results We identified 89 validation studies from 746 potentially relevant articles. Of those 89 articles, 31 measured test-retest reliability and were included in this review. Upon critical analysis of the overall quality of the criteria used to determine test-retest reliability, 6 (19.4%), 17 (54.8%), and 8 (25.8%) of these articles were rated as good, fair, or poor, respectively, and no article was classified as excellent. Multi-symptom instruments were retested over a shorter interval than the HRQoL instruments (median values 24 hours and 168 hours, respectively; p = 0.001).
Validation studies that included objective confirmation of clinical stability in their design yielded better results for the test-retest analysis with regard to both pain and global HRQoL scores (p < 0.05). The quality of the statistical analysis and its description were of great concern. Conclusion Test-retest reliability has been infrequently and poorly evaluated. The confirmation of clinical stability was an important factor in our analysis, and we suggest that special attention be focused on clinical stability when designing a PRO validation study that includes advanced cancer patients under PC. PMID:24447633

  19. Optimization and validation of Folin-Ciocalteu method for the determination of total polyphenol content of Pu-erh tea.

    PubMed

    Musci, Marilena; Yao, Shicong

    2017-12-01

    Pu-erh tea is a post-fermented tea that has recently gained popularity worldwide, due to potential health benefits related to the antioxidant activity resulting from its high polyphenolic content. The Folin-Ciocalteu method is a simple, rapid, and inexpensive assay widely applied for the determination of total polyphenol content. Over the past years, it has been subjected to many modifications, often without any systematic optimization or validation. In our study, we sought to optimize the Folin-Ciocalteu method, evaluate quality parameters including linearity, precision, and stability, and then apply the optimized model to determine the total polyphenol content of 57 Chinese teas, including green tea and aged and ripened Pu-erh tea. Our optimized Folin-Ciocalteu method reduced analysis time and allowed us to analyze a large number of samples, to discriminate among the different teas, and to assess the effect of the post-fermentation process on polyphenol content.

  20. Quantitative validation of carbon-fiber laminate low velocity impact simulations

    DOE PAGES

    English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.

    2015-09-26

    Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks, and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed in conjunction with the simulations and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.
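    The ensemble-based comparison described above can be sketched in miniature: sample the uncertain model parameters, propagate them through the model to build a response distribution, and ask how extreme the experimental observation is within that distribution. The one-parameter toy model and all numbers below are invented; the paper's actual statistical methods are not specified here.

```python
import numpy as np

def ensemble_response(param_samples, model):
    """Propagate parameter uncertainty through the model to get a response distribution."""
    return np.array([model(p) for p in param_samples])

def empirical_tail_prob(ensemble, observed):
    """Two-sided empirical p-value: how extreme is the observation within the ensemble?"""
    cdf = (ensemble <= observed).mean()
    return 2 * min(cdf, 1 - cdf)

# Toy 'impact energy absorption' model with one uncertain stiffness-like parameter
rng = np.random.default_rng(1)
k = rng.normal(10.0, 1.0, size=2000)           # sampled uncertain parameter
sim = ensemble_response(k, lambda p: 0.5 * p)  # toy model response ensemble
p = empirical_tail_prob(sim, observed=5.2)     # hypothetical experimental value
print(0.0 < p < 1.0)
```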

  1. PSI-Center Simulations of Validation Platform Experiments

    NASA Astrophysics Data System (ADS)

    Nelson, B. A.; Akcay, C.; Glasser, A. H.; Hansen, C. J.; Jarboe, T. R.; Marklin, G. J.; Milroy, R. D.; Morgan, K. D.; Norgaard, P. C.; Shumlak, U.; Victor, B. S.; Sovinec, C. R.; O'Bryan, J. B.; Held, E. D.; Ji, J.-Y.; Lukin, V. S.

    2013-10-01

    The Plasma Science and Innovation Center (PSI-Center - http://www.psicenter.org) supports collaborating validation platform experiments with extended MHD simulations. Collaborators include the Bellan Plasma Group (Caltech), CTH (Auburn U), FRX-L (Los Alamos National Laboratory), HIT-SI (U Wash - UW), LTX (PPPL), MAST (Culham), Pegasus (U Wisc-Madison), PHD/ELF (UW/MSNW), SSX (Swarthmore College), TCSU (UW), and ZaP/ZaP-HD (UW). Modifications have been made to the NIMROD, HiFi, and PSI-Tet codes to specifically model these experiments, including mesh generation/refinement, non-local closures, appropriate boundary conditions (external fields, insulating BCs, etc.), and kinetic and neutral particle interactions. The PSI-Center is exploring application of validation metrics between experimental data and simulations results. Biorthogonal decomposition is proving to be a powerful method to compare global temporal and spatial structures for validation. Results from these simulation and validation studies, as well as an overview of the PSI-Center status will be presented.

  2. Validating Savings Claims of Cold Climate Zero Energy Ready Homes

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Williamson, J.; Puttagunta, S.

    This report details the validation methods used to analyze consumption at each of these homes. It includes a detailed end-use examination of consumption in the following categories: 1) Heating; 2) Cooling; 3) Lights, Appliances, and Miscellaneous Electric Loads (LAMELs), along with Domestic Hot Water Use; 4) Ventilation; and 5) PV generation. A utility bill disaggregation method, which allows a crude estimation of space conditioning loads based on outdoor air temperature, was also performed, and the results were compared to the actual measured data.

  3. A study in the founding of applied behavior analysis through its publications.

    PubMed

    Morris, Edward K; Altus, Deborah E; Smith, Nathaniel G

    2013-01-01

    This article reports a study of the founding of applied behavior analysis through its publications. Our methods included hand searches of sources (e.g., journals, reference lists), search terms (i.e., early, applied, behavioral, research, literature), inclusion criteria (e.g., the field's applied dimension), and challenges to their face and content validity. Our results were 36 articles published between 1959 and 1967 that we organized into 4 groups: 12 in 3 programs of research and 24 others. Our discussion addresses (a) limitations in our method (e.g., the completeness of our search), (b) challenges to the validity of our methods and results (e.g., convergent validity), and (c) priority claims about the field's founding. We conclude that the claims are irresolvable because identification of the founding publications depends significantly on methods and because the field's founding was an evolutionary process. We close with suggestions for future research.

  4. Preliminary Structural Sensitivity Study of Hypersonic Inflatable Aerodynamic Decelerator Using Probabilistic Methods

    NASA Technical Reports Server (NTRS)

    Lyle, Karen H.

    2014-01-01

    Acceptance of new spacecraft structural architectures and concepts requires validated design methods to minimize the expense involved with technology validation via flight testing. This paper explores the implementation of probabilistic methods in the sensitivity analysis of the structural response of a Hypersonic Inflatable Aerodynamic Decelerator (HIAD). HIAD architectures are attractive for spacecraft deceleration because they are lightweight, store compactly, and utilize the atmosphere to decelerate a spacecraft during re-entry. However, designers are hesitant to include these inflatable approaches for large payloads or spacecraft because of the lack of flight validation. In the example presented here, the structural parameters of an existing HIAD model have been varied to illustrate the design approach utilizing uncertainty-based methods. Surrogate models have been used to reduce computational expense by several orders of magnitude. The suitability of the design is based on assessing variation in the resulting cone angle. The acceptable cone angle variation would rely on the aerodynamic requirements.

  5. A Study in the Founding of Applied Behavior Analysis Through Its Publications

    PubMed Central

    Morris, Edward K.; Altus, Deborah E.; Smith, Nathaniel G.

    2013-01-01

    This article reports a study of the founding of applied behavior analysis through its publications. Our methods included hand searches of sources (e.g., journals, reference lists), search terms (i.e., early, applied, behavioral, research, literature), inclusion criteria (e.g., the field's applied dimension), and challenges to their face and content validity. Our results were 36 articles published between 1959 and 1967 that we organized into 4 groups: 12 in 3 programs of research and 24 others. Our discussion addresses (a) limitations in our method (e.g., the completeness of our search), (b) challenges to the validity of our methods and results (e.g., convergent validity), and (c) priority claims about the field's founding. We conclude that the claims are irresolvable because identification of the founding publications depends significantly on methods and because the field's founding was an evolutionary process. We close with suggestions for future research. PMID:25729133

  6. Total Arsenic, Cadmium, and Lead Determination in Brazilian Rice Samples Using ICP-MS

    PubMed Central

    Buzzo, Márcia Liane; de Arauz, Luciana Juncioni; Carvalho, Maria de Fátima Henriques; Arakaki, Edna Emy Kumagai; Matsuzaki, Richard; Tiglea, Paulo

    2016-01-01

    This study is aimed at investigating a suitable method for rice sample preparation, as well as validating and applying the method for monitoring the concentrations of total arsenic, cadmium, and lead in rice by Inductively Coupled Plasma Mass Spectrometry (ICP-MS). Various rice sample preparation procedures were evaluated. The analytical method was validated by measuring several parameters, including limit of detection (LOD), limit of quantification (LOQ), linearity, relative bias, and repeatability. Regarding sample preparation, recoveries of spiked samples were within the acceptable range: 89.3 to 98.2% for muffle furnace, 94.2 to 103.3% for heating block, 81.0 to 115.0% for hot plate, and 92.8 to 108.2% for microwave. Validation parameters showed that the method is fit for purpose, with total arsenic, cadmium, and lead within the limits set by Brazilian legislation. The method was applied to 37 rice samples (including polished, brown, and parboiled) consumed by the Brazilian population. The total arsenic, cadmium, and lead contents were lower than the established legislative values, except for total arsenic in one brown rice sample. This study indicated the need to establish monitoring programs for this type of cereal, with the aim of promoting public health. PMID:27766178

  7. Validation of Field Methods to Assess Body Fat Percentage in Elite Youth Soccer Players.

    PubMed

    Munguia-Izquierdo, Diego; Suarez-Arrones, Luis; Di Salvo, Valter; Paredes-Hernandez, Victor; Alcazar, Julian; Ara, Ignacio; Kreider, Richard; Mendez-Villanueva, Alberto

    2018-05-01

    This study determined the most effective field method for quantifying body fat percentage in male elite youth soccer players and developed prediction equations based on anthropometric variables. Forty-four male elite-standard youth soccer players aged 16.3-18.0 years underwent body fat percentage assessments, including bioelectrical impedance analysis and the calculation of various skinfold-based prediction equations. Dual X-ray absorptiometry provided the criterion measure of body fat percentage. Correlation coefficients, bias, limits of agreement, and differences were used as validity measures, and regression analyses were used to develop soccer-specific prediction equations. The equations from Sarria et al. (1998) and Durnin & Rahaman (1967) reached very large correlations with the lowest biases, and exceeded neither the practically worthwhile difference nor the substantial difference between methods. The new youth soccer-specific skinfold equation included a combination of triceps and supraspinale skinfolds. None of the practical methods compared in this study is adequate for estimating body fat percentage in male elite youth soccer players, except the equations from Sarria et al. (1998) and Durnin & Rahaman (1967). The new youth soccer-specific equation calculated in this investigation is the only field method specifically developed and validated in elite male players, and it shows potentially good predictive power.

  8. Errors in reporting on dissolution research: methodological and statistical implications.

    PubMed

    Jasińska-Stroschein, Magdalena; Kurczewska, Urszula; Orszulak-Michalak, Daria

    2017-02-01

    In vitro dissolution testing provides useful information at the clinical and preclinical stages of the drug development process. The study includes pharmaceutical papers on dissolution research published in Polish journals between 2010 and 2015. They were analyzed with regard to the information provided by authors about the chosen methods, the validation performed, statistical reporting, and the assumptions used to properly compare release profiles, in light of the current guideline documents on dissolution methodology and its validation. Of all the papers included in the study, 23.86% presented at least one set of validation parameters, 63.64% gave the results of the weight uniformity test, 55.68% of content determination, and 97.73% of dissolution testing conditions, and 50% discussed a comparison of release profiles. The assumptions for the methods used to compare dissolution profiles were discussed in 6.82% of papers. By means of example analyses, we demonstrate that the outcome can be influenced by the violation of several assumptions or the selection of an improper method to compare dissolution profiles. A clearer description of the procedures would undoubtedly increase the quality of papers in this area.
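    One standard way to compare release profiles, referred to in guideline documents on dissolution testing, is the f2 similarity factor. A minimal sketch follows; the formula is the conventional FDA/EMA one (profiles with f2 >= 50 are taken as similar), while the dissolution percentages are invented for illustration.

```python
import math

def f2_similarity(ref, test):
    """f2 similarity factor for two dissolution profiles at the same time points:
    f2 = 50 * log10(100 / sqrt(1 + mean squared difference))."""
    n = len(ref)
    msd = sum((r - t) ** 2 for r, t in zip(ref, test)) / n
    return 50.0 * math.log10(100.0 / math.sqrt(1.0 + msd))

# Hypothetical % dissolved at common time points
ref  = [20, 45, 70, 85, 92]
test = [18, 42, 68, 84, 93]
print(f2_similarity(ref, test) >= 50)  # True: profiles would be judged similar
```

Note that the guideline use of f2 comes with assumptions (e.g. on the number of time points and profile variability), which is exactly the kind of detail the surveyed papers often left undiscussed.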

  9. Performance validity testing in neuropsychology: a clinical guide, critical review, and update on a rapidly evolving literature.

    PubMed

    Lippa, Sara M

    2018-04-01

    Over the past two decades, there has been much research on measures of response bias and myriad measures have been validated in a variety of clinical and research samples. This critical review aims to guide clinicians through the use of performance validity tests (PVTs) from test selection and administration through test interpretation and feedback. Recommended cutoffs and relevant test operating characteristics are presented. Other important issues to consider during test selection, administration, interpretation, and feedback are discussed including order effects, coaching, impact on test data, and methods to combine measures and improve predictive power. When interpreting performance validity measures, neuropsychologists must use particular caution in cases of dementia, low intelligence, English as a second language/minority cultures, or low education. PVTs provide valuable information regarding response bias and, under the right circumstances, can provide excellent evidence of response bias. Only after consideration of the entire clinical picture, including validity test performance, can concrete determinations regarding the validity of test data be made.

  10. A trace map comparison algorithm for the discrete fracture network models of rock masses

    NASA Astrophysics Data System (ADS)

    Han, Shuai; Wang, Gang; Li, Mingchao

    2018-06-01

    Discrete fracture networks (DFN) are widely used to build refined geological models. However, validating whether a refined model matches reality is a crucial problem that determines whether the model can be used for analysis. Current validation methods include numerical validation and graphical validation; the latter, which estimates the similarity between a simulated trace map and the real trace map by visual observation, is subjective. In this paper, an algorithm for the graphical validation of DFN models is developed. Four main indicators, including total gray, the gray grade curve, the characteristic direction, and the gray density distribution curve, are presented to assess the similarity between two trace maps. A modified Radon transform and a loop cosine similarity are presented, based on the Radon transform and cosine similarity, respectively. In addition, the use of Bézier curves to reduce the edge effect is described. Finally, a case study shows that the new algorithm can effectively distinguish which simulated trace map is more similar to the real trace map.
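    The building block behind the paper's loop cosine similarity is plain cosine similarity between feature curves extracted from the two trace maps. A minimal sketch with invented "gray grade" curves (the real indicators and the modified Radon transform are not reproduced here):

```python
import numpy as np

def cosine_similarity(a, b):
    """Cosine similarity between two feature curves extracted from trace maps."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical gray-grade curves for a real map and two simulated maps
real = [0.9, 0.7, 0.4, 0.2, 0.1]
sim1 = [0.85, 0.72, 0.38, 0.22, 0.12]   # structurally close to the real map
sim2 = [0.1, 0.3, 0.6, 0.8, 0.9]        # very different structure
print(cosine_similarity(real, sim1) > cosine_similarity(real, sim2))  # True
```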

  11. Prospective, Multicenter Validation Study of Magnetic Resonance Volumetry for Response Assessment After Preoperative Chemoradiation in Rectal Cancer: Can the Results in the Literature be Reproduced?

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Martens, Milou H., E-mail: mh.martens@hotmail.com; Department of Surgery, Maastricht University Medical Center, Maastricht; GROW School for Oncology and Developmental Biology, Maastricht University Medical Center, Maastricht

    2015-12-01

    Purpose: To review the available literature on tumor size/volume measurements on magnetic resonance imaging for response assessment after chemoradiotherapy, and to validate these cut-offs in an independent multicenter patient cohort. Methods and Materials: The study included 2 parts. (1) Review of the literature: articles were included that assessed the accuracy of tumor size/volume measurements on magnetic resonance imaging for tumor response assessment; size/volume cut-offs were extracted. (2) Multicenter validation: extracted cut-offs from the literature were tested in a multicenter cohort (n=146). Accuracies were calculated and compared with reported results from the literature. Results: The review included 14 articles, in which 3 different measurement methods were assessed: (1) tumor length; (2) 3-dimensional tumor size; and (3) whole volume. Study outcomes consisted of (1) complete response (ypT0) versus residual tumor; (2) tumor regression grade 1 to 2 versus 3 to 5; and (3) T-downstaging (ypT…

  12. Validation of a multi-residue method for the determination of several antibiotic groups in honey by LC-MS/MS.

    PubMed

    Bohm, Detlef A; Stachel, Carolin S; Gowik, Petra

    2012-07-01

    The presented multi-method was developed for the confirmation of 37 antibiotic substances from six antibiotic groups: macrolides, lincosamides, quinolones, tetracyclines, pleuromutilines, and diamino-pyrimidine derivatives. All substances were analysed simultaneously in a single analytical run with the same procedure, including extraction with buffer, clean-up by solid-phase extraction, and measurement by liquid chromatography tandem mass spectrometry in ESI+ mode. The method was validated on the basis of an in-house validation concept with factorial design, combining seven factors to check robustness in a concentration range of 5-50 μg kg(-1). The honeys used were of different types with regard to colour and origin. The values calculated for the validation parameters were acceptable and in agreement with the criteria of Commission Decision 2002/657/EC: decision limit CCα (range 7.5-12.9 μg kg(-1)), detection capability CCβ (range 9.4-19.9 μg kg(-1)), within-laboratory reproducibility RSD(wR) (<20%, except for tulathromycin at 23.5% and tylvalosin at 21.4%), repeatability RSD(r) (<20%, except for tylvalosin at 21.1%), and recovery (range 92-106%). The validation results showed that the method is applicable for residue analysis of antibiotics in honey, for substances with and without recommended concentrations, with several changes tested during validation to determine the robustness of the method.
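    For substances with a permitted or recommended limit, Commission Decision 2002/657/EC conventionally derives the decision limit as CCα = limit + 1.64 x s(wR) and the detection capability as CCβ = CCα + 1.64 x s(wR). A sketch with hypothetical numbers (not the honey method's data) follows; the exact experimental design for estimating s(wR) is set out in the Decision itself.

```python
def cc_alpha(limit, sd_wr, k=1.64):
    """Decision limit for a substance with a permitted limit (2002/657/EC):
    CCalpha = limit + 1.64 * within-laboratory reproducibility SD."""
    return limit + k * sd_wr

def cc_beta(limit, sd_wr, k=1.64):
    """Detection capability: CCbeta = CCalpha + 1.64 * SD."""
    return cc_alpha(limit, sd_wr, k) + k * sd_wr

# Hypothetical recommended concentration 10 ug/kg with s_wR = 1.5 ug/kg
a = cc_alpha(10.0, 1.5)
b = cc_beta(10.0, 1.5)
print(a, b)
```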

  13. System and method for forward error correction

    NASA Technical Reports Server (NTRS)

    Cole, Robert M. (Inventor); Bishop, James E. (Inventor)

    2006-01-01

    A system and method are provided for transferring a packet across a data link. The packet may include a stream of data symbols which is delimited by one or more framing symbols. Corruptions of the framing symbol which result in valid data symbols may be mapped to invalid symbols. If it is desired to transfer one of the valid data symbols that has been mapped to an invalid symbol, the data symbol may be replaced with an unused symbol. At the receiving end, these unused symbols are replaced with the corresponding valid data symbols. The data stream of the packet may be encoded with forward error correction information to detect and correct errors in the data stream.
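
    The core remapping idea in the abstract can be sketched in a few lines. The symbol values here are hypothetical, chosen only to illustrate the scheme, and are not taken from the patent:

    ```python
    # Toy sketch: data symbols one bit-flip away from the framing symbol are
    # swapped for "unused" code points before transmission and restored at
    # the receiver. Assumes 0x01-0x03 never occur as real data (hypothetical).
    FRAME = 0xF0                                   # framing symbol
    REMAP = {0xF1: 0x01, 0xF2: 0x02, 0xF4: 0x03}   # ambiguous symbol -> spare

    def encode(symbols):
        """Swap ambiguous data symbols for unused ones before transmission."""
        return [REMAP.get(s, s) for s in symbols]

    def decode(symbols):
        """Restore the original data symbols at the receiver."""
        inverse = {v: k for k, v in REMAP.items()}
        return [inverse.get(s, s) for s in symbols]

    packet = [FRAME] + encode([0x10, 0xF1, 0x20]) + [FRAME]
    ```

    A corrupted frame delimiter can now never be confused with a legitimate data symbol, because the symbols it could decay into are guaranteed absent from the stream.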

  14. System and method for transferring data on a data link

    NASA Technical Reports Server (NTRS)

    Cole, Robert M. (Inventor); Bishop, James E. (Inventor)

    2007-01-01

    A system and method are provided for transferring a packet across a data link. The packet may include a stream of data symbols which is delimited by one or more framing symbols. Corruptions of the framing symbol which result in valid data symbols may be mapped to invalid symbols. If it is desired to transfer one of the valid data symbols that has been mapped to an invalid symbol, the data symbol may be replaced with an unused symbol. At the receiving end, these unused symbols are replaced with the corresponding valid data symbols. The data stream of the packet may be encoded with forward error correction information to detect and correct errors in the data stream.

  15. Testing the feasibility of eliciting preferences for health states from adolescents using direct methods.

    PubMed

    Crump, R Trafford; Lau, Ryan; Cox, Elizabeth; Currie, Gillian; Panepinto, Julie

    2018-06-22

    Measuring adolescents' preferences for health states can play an important role in evaluating the delivery of pediatric healthcare. However, formal evaluation of the common direct preference elicitation methods for health states has not been done with adolescents. Therefore, the purpose of this study is to test how these methods perform in terms of their feasibility, reliability, and validity for measuring health state preferences in adolescents. This study used a web-based survey of adolescents, 18 years of age or younger, living in the United States. The survey included four health states, each comprising six attributes. Preferences for these health states were elicited using the visual analogue scale, time trade-off, and standard gamble. The feasibility, test-retest reliability, and construct validity of each of these preference elicitation methods were tested and compared. A total of 144 participants were included in this study. Using a web-based survey format to elicit preferences for health states from adolescents was feasible. A majority of participants completed all three elicitation methods and ranked those methods as being easy, with very few requiring assistance from someone else. However, all three elicitation methods demonstrated weak test-retest reliability, with Kendall's tau-a values ranging from 0.204 to 0.402. Similarly, all three methods demonstrated poor construct validity, with 9-50% of all rankings aligning with our expectations. There were no significant differences across age groups. Using a web-based survey format to elicit preferences for health states from adolescents is feasible. However, the reliability and construct validity of the methods used to elicit these preferences when using this survey format are poor. Further research into the effects of a web-based survey approach to eliciting preferences for health states from adolescents is needed before health services researchers or pediatric clinicians widely employ these methods.
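
    The test-retest statistic used in this study, Kendall's tau-a, compares concordant and discordant pairs between two rankings. A minimal sketch with hypothetical health-state rankings (not the study's data):

    ```python
    # Kendall's tau-a = (concordant - discordant) / (n choose 2),
    # computed over all pairs of items ranked at two time points.
    from itertools import combinations

    def kendall_tau_a(x, y):
        n = len(x)
        s = 0
        for i, j in combinations(range(n), 2):
            prod = (x[i] - x[j]) * (y[i] - y[j])
            s += prod > 0          # concordant pair
            s -= prod < 0          # discordant pair
        return s / (n * (n - 1) / 2)

    test_visit   = [1, 2, 3, 4]    # hypothetical ranks of four health states
    retest_visit = [1, 3, 2, 4]    # one adjacent swap at the second visit
    tau = kendall_tau_a(test_visit, retest_visit)   # 5 conc., 1 disc. -> 2/3
    ```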

  16. Uncertain sightings and the extinction of the Ivory-billed Woodpecker.

    PubMed

    Solow, Andrew; Smith, Woollcott; Burgman, Mark; Rout, Tracy; Wintle, Brendan; Roberts, David

    2012-02-01

    The extinction of a species can be inferred from a record of its sightings. Existing methods for doing so assume that all sightings in the record are valid. Often, however, there are sightings of uncertain validity. To date, uncertain sightings have been treated in an ad hoc way, either excluded from the record or included as if they were certain. We developed a Bayesian method that formally accounts for such uncertain sightings. The method assumes that valid and invalid sightings follow independent Poisson processes and uses noninformative prior distributions for the rate of valid sightings and for a measure of the quality of uncertain sightings. We applied the method to a recently published record of sightings of the Ivory-billed Woodpecker (Campephilus principalis). This record covers the period 1897-2010 and contains 39 sightings classified as certain and 29 classified as uncertain. The Bayes factor in favor of extinction was 4.03, which constitutes substantial support for extinction. The posterior distribution of the time of extinction has 3 main modes in 1944, 1952, and 1988. The method can be applied to sighting records of other purportedly extinct species. ©2011 Society for Conservation Biology.
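
    A much-reduced sketch of the underlying logic (the paper's full model additionally handles uncertain sightings and integrates over unknown rates, which this toy does not): with a known sighting rate, extinction at the last sighting explains the subsequent silence perfectly, while persistence must explain it away, giving a simple Bayes factor. The dates and rate below are hypothetical.

    ```python
    # Toy Bayes factor for "extinct at t_last" vs. "still extant", assuming
    # sightings of an extant species follow a Poisson process with known rate.
    import math

    def bayes_factor_extinct(t_last, t_now, rate):
        """P(no sightings in (t_last, t_now] | extinct) = 1, while
        P(no sightings | extant) = exp(-rate * gap); their ratio is the BF."""
        p_gap_if_extant = math.exp(-rate * (t_now - t_last))
        return 1.0 / p_gap_if_extant

    # Hypothetical: last certain sighting 1944, evaluated in 2010,
    # 0.02 certain sightings per year while extant.
    bf = bayes_factor_extinct(1944, 2010, 0.02)   # exp(1.32), roughly 3.7
    ```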

  17. 75 FR 13515 - Office of Innovation and Improvement (OII); Overview Information; Ready-to-Learn Television...

    Federal Register 2010, 2011, 2012, 2013, 2014

    2010-03-22

    ... on rigorous scientifically based research methods to assess the effectiveness of a particular... activities and programs; and (B) Includes research that-- (i) Employs systematic, empirical methods that draw... or observational methods that provide reliable and valid data across evaluators and observers, across...

  18. Verification and Validation of Monte Carlo N-Particle 6 for Computing Gamma Protection Factors

    DTIC Science & Technology

    2015-03-26

    methods for evaluating RPFs, which it used for the subsequent 30 years. These approaches included computational modeling, radioisotopes, and a high... [remainder of record is table-of-contents residue: 1.2.1 Past Methods of Experimental Evaluation; 1.2.2 Modeling Efforts; Other Considerations; 2.4 Monte Carlo Methods]

  19. Measurement properties of existing clinical assessment methods evaluating scapular positioning and function. A systematic review.

    PubMed

    Larsen, Camilla Marie; Juul-Kristensen, Birgit; Lund, Hans; Søgaard, Karen

    2014-10-01

    The aims were to compile a schematic overview of clinical scapular assessment methods and critically appraise the methodological quality of the involved studies. A systematic, computer-assisted literature search using Medline, CINAHL, SportDiscus and EMBASE was performed from inception to October 2013. Reference lists in articles were also screened for publications. From 50 articles, 54 method names were identified and categorized into three groups: (1) Static positioning assessment (n = 19); (2) Semi-dynamic (n = 13); and (3) Dynamic functional assessment (n = 22). Fifteen studies were excluded from evaluation because they reported no or few clinimetric results, leaving 35 studies for evaluation. Graded according to the COnsensus-based Standards for the selection of health Measurement INstruments (COSMIN checklist), the methodological quality in the reliability and validity domains was "fair" (57%) to "poor" (43%), with only one study rated as "good". The reliability domain was most often investigated. Few of the assessment methods in the included studies that had "fair" or "good" measurement property ratings demonstrated acceptable results for both reliability and validity. We found a substantially larger number of clinical scapular assessment methods than previously reported. Using the COSMIN checklist, the methodological quality of the included measurement properties in the reliability and validity domains was in general "fair" to "poor". None were examined for all three domains: (1) reliability; (2) validity; and (3) responsiveness. Observational evaluation systems and assessment of scapular upward rotation seem suitably evidence-based for clinical use. Future studies should test and improve the clinimetric properties, and especially diagnostic accuracy and responsiveness, to increase utility for clinical practice.

  20. Creating wavelet-based models for real-time synthesis of perceptually convincing environmental sounds

    NASA Astrophysics Data System (ADS)

    Miner, Nadine Elizabeth

    1998-09-01

    This dissertation presents a new wavelet-based method for synthesizing perceptually convincing, dynamic sounds using parameterized sound models. The sound synthesis method is applicable to a variety of applications including Virtual Reality (VR), multi-media, entertainment, and the World Wide Web (WWW). A unique contribution of this research is the modeling of the stochastic, or non-pitched, sound components. This stochastic-based modeling approach leads to perceptually compelling sound synthesis. Two preliminary studies conducted provide data on multi-sensory interaction and audio-visual synchronization timing. These results contributed to the design of the new sound synthesis method. The method uses a four-phase development process, including analysis, parameterization, synthesis and validation, to create the wavelet-based sound models. A patent is pending for this dynamic sound synthesis method, which provides perceptually-realistic, real-time sound generation. This dissertation also presents a battery of perceptual experiments developed to verify the sound synthesis results. These experiments are applicable for validation of any sound synthesis technique.

  1. Evidence-based dentistry: analysis of dental anxiety scales for children.

    PubMed

    Al-Namankany, A; de Souza, M; Ashley, P

    2012-03-09

    To review paediatric dental anxiety measures (DAMs) and assess the statistical methods used for validation and their clinical implications. Four computerised databases were searched for publications between 1960 and January 2011 associated with DAMs, using pre-specified search terms, to assess the method of validation, including reliability (intra-observer agreement, 'repeatability or stability', and inter-observer agreement, 'reproducibility') and all types of validity. Fourteen paediatric DAMs were predominantly validated in schools rather than in the clinical setting, while five of the DAMs were not validated at all. The DAMs that were validated were assessed against other paediatric DAMs, which may not themselves have been validated previously. Reliability was not assessed in four of the DAMs; where it was assessed, it was usually 'good' or 'acceptable'. None of the current DAMs used a formal sample size technique. Diversity was seen between the studies, ranging from a few simple pictograms to lists of questions reported by either the individual or an observer. To date there is no scale that can be considered a gold standard, and there is a need to further develop an anxiety scale with a cognitive component for children and adolescents.

  2. A Chinese version of the revised Nurses Professional Values Scale: reliability and validity assessment.

    PubMed

    Lin, Yu-Hua; Wang, Liching Sung

    2010-08-01

    The purpose of this study was to assess the reliability and validity of a Chinese version of the revised Nurses Professional Values Scale (NPVS-R). A convenience sampling method, including senior undergraduate nursing students (n=110) and clinical nurses (n=223), was applied to recruit appropriate samples from southern Taiwan. The NPVS-R was used in this study, and content validity, construct validity, internal consistency, and reliability were assessed. The final sample consisted of 286 subjects. Three factors were detected in the results, accounting for 60.12% of the explained variance. The first factor was titled professionalism and included 13 items; the second factor was named caring and consisted of seven items; activism was the third factor, which included six items. The overall Cronbach's alpha coefficient was 0.90, with values for each of the three factors of 0.88, 0.90, and 0.81, respectively. The Chinese version of the NPVS-R can be considered a reliable and valid scale for measuring professional values in Taiwanese nurses. Copyright 2009 Elsevier Ltd. All rights reserved.
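
    The internal-consistency statistic reported here, Cronbach's alpha, is a standard computation. A minimal sketch with hypothetical Likert-scale scores (not the study's data):

    ```python
    # Cronbach's alpha = k/(k-1) * (1 - sum of item variances / variance of
    # the total score), over k items scored by the same subjects.

    def cronbach_alpha(items):
        """items: list of per-item score lists, aligned by subject."""
        k = len(items)
        n = len(items[0])

        def var(xs):
            m = sum(xs) / len(xs)
            return sum((x - m) ** 2 for x in xs) / len(xs)

        totals = [sum(item[i] for item in items) for i in range(n)]
        return k / (k - 1) * (1 - sum(var(it) for it in items) / var(totals))

    # Three items rated by four subjects (hypothetical 1-5 scores):
    items = [[4, 5, 3, 5], [4, 4, 3, 5], [5, 5, 2, 4]]
    alpha = cronbach_alpha(items)   # about 0.85 for these toy scores
    ```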

  3. Testing the Construct Validity of Proposed Criteria for "DSM-5" Autism Spectrum Disorder

    ERIC Educational Resources Information Center

    Mandy, William P. L.; Charman, Tony; Skuse, David H.

    2012-01-01

    Objective: To use confirmatory factor analysis to test the construct validity of the proposed "DSM-5" symptom model of autism spectrum disorder (ASD), in comparison to alternative models, including that described in "DSM-IV-TR." Method: Participants were 708 verbal children and young persons (mean age, 9.5 years) with mild to severe autistic…

  4. Validation of a Milk Consumption Stage of Change Algorithm among Adolescent Survivors of Childhood Cancer

    ERIC Educational Resources Information Center

    Mays, Darren; Gerfen, Elissa; Mosher, Revonda B.; Shad, Aziza T.; Tercyak, Kenneth P.

    2012-01-01

    Objective: To assess the construct validity of a milk consumption Stages of Change (SOC) algorithm among adolescent survivors of childhood cancer ages 11 to 21 years (n = 75). Methods: Baseline data from a randomized controlled trial designed to evaluate a health behavior intervention were analyzed. Assessments included a milk consumption SOC…

  5. Spanish Adaptation and Validation of the Family Quality of Life Survey

    ERIC Educational Resources Information Center

    Verdugo, M. A.; Cordoba, L.; Gomez, J.

    2005-01-01

    Background: Assessing the quality of life (QOL) of families that include a person with a disability has recently become a major emphasis in cross-cultural QOL studies. The present study examined the reliability and validity of the Family Quality of Life Survey (FQOL) in a Spanish sample. Method and Results: The sample comprised 385 families who…

  6. Validation of annual growth rings in freshwater mussel shells using cross dating

    Treesearch

    Andrew L. Rypel; Wendell R. Haag; Robert H. Findlay

    2009-01-01

    We examined the usefulness of dendrochronological cross-dating methods for studying long-term, interannual growth patterns in freshwater mussels, including validation of annual shell ring formation. Using 13 species from three rivers, we measured increment widths between putative annual rings on shell thin sections and then removed age-related variation by...

  7. A Proposed Methodology for the Conceptualization, Operationalization, and Empirical Validation of the Concept of Information Need

    ERIC Educational Resources Information Center

    Afzal, Waseem

    2017-01-01

    Introduction: The purpose of this paper is to propose a methodology to conceptualize, operationalize, and empirically validate the concept of information need. Method: The proposed methodology makes use of both qualitative and quantitative perspectives, and includes a broad array of approaches such as literature reviews, expert opinions, focus…

  8. Multimethod Investigation of Interpersonal Functioning in Borderline Personality Disorder

    PubMed Central

    Stepp, Stephanie D.; Hallquist, Michael N.; Morse, Jennifer Q.; Pilkonis, Paul A.

    2011-01-01

    Even though interpersonal functioning is of great clinical importance for patients with borderline personality disorder (BPD), the comparative validity of different assessment methods for interpersonal dysfunction has not yet been tested. This study examined multiple methods of assessing interpersonal functioning, including self- and other-reports, clinical ratings, electronic diaries, and social cognitions in three groups of psychiatric patients (N=138): patients with (1) BPD, (2) another personality disorder, and (3) Axis I psychopathology only. Using dominance analysis, we examined the predictive validity of each method in detecting changes in symptom distress and social functioning six months later. Across multiple methods, the BPD group often reported higher interpersonal dysfunction scores compared to other groups. Predictive validity results demonstrated that self-report and electronic diary ratings were the most important predictors of distress and social functioning. Our findings suggest that self-report scores and electronic diary ratings have high clinical utility, as these methods appear most sensitive to change. PMID:21808661

  9. Investigating the technical adequacy of curriculum-based measurement in written expression for students who are deaf or hard of hearing.

    PubMed

    Cheng, Shu-Fen; Rose, Susan

    2009-01-01

    This study investigated the technical adequacy of curriculum-based measures of written expression (CBM-W) in terms of writing prompts and scoring methods for deaf and hard-of-hearing students. Twenty-two students at the secondary school-level completed 3-min essays within two weeks, which were scored for nine existing and alternative curriculum-based measurement (CBM) scoring methods. The technical features of the nine scoring methods were examined for interrater reliability, alternate-form reliability, and criterion-related validity. The existing CBM scoring method--number of correct minus incorrect word sequences--yielded the highest reliability and validity coefficients. The findings from this study support the use of the CBM-W as a reliable and valid tool for assessing general writing proficiency with secondary students who are deaf or hard of hearing. The CBM alternative scoring methods that may serve as additional indicators of written expression include correct subject-verb agreements, correct clauses, and correct morphemes.

  10. Validity of Dietary Assessment in Athletes: A Systematic Review

    PubMed Central

    Beck, Kathryn L.; Gifford, Janelle A.; Slater, Gary; Flood, Victoria M.; O’Connor, Helen

    2017-01-01

    Dietary assessment methods that are recognized as appropriate for the general population are usually applied in a similar manner to athletes, despite the knowledge that sport-specific factors can complicate assessment and impact accuracy in unique ways. As dietary assessment methods are used extensively within the field of sports nutrition, there is concern that these methodologies have not undergone rigorous validity evaluation in this unique population sub-group. The purpose of this systematic review was to compare two or more methods of dietary assessment, including dietary intake measured against biomarkers or reference measures of energy expenditure, in athletes. Six electronic databases were searched for English-language, full-text articles published from January 1980 until June 2016. The search strategy combined the following keywords: diet, nutrition assessment, athlete, and validity; reported outcomes included, but were not limited to, energy intake, macro and/or micronutrient intake, food intake, nutritional adequacy, diet quality, and nutritional status. Meta-analysis was performed on studies with sufficient methodological similarity, with between-group standardized mean differences (or effect size) and 95% confidence intervals (CI) being calculated. Of the 1624 studies identified, 18 were eligible for inclusion. Studies comparing self-reported energy intake (EI) to energy expenditure assessed via doubly labelled water were grouped for comparison (n = 11) and demonstrated that mean EI was under-estimated by 19% (−2793 ± 1134 kJ/day). Meta-analysis revealed a large pooled effect size of −1.006 (95% CI: −1.3 to −0.7; p < 0.001). The remaining studies (n = 7) compared a new dietary tool or instrument to a reference method(s) (e.g., food record, 24-h dietary recall, biomarker) as part of a validation study. This systematic review revealed that there are few robust studies evaluating dietary assessment methods in athletes.
Existing literature demonstrates the substantial variability between methods, with under- and misreporting of intake being frequently observed. There is a clear need for careful validation of dietary assessment methods, including emerging technical innovations, among athlete populations. PMID:29207495
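
    A pooled effect size such as the review's −1.006 typically comes from inverse-variance weighting of per-study standardized mean differences. A fixed-effect sketch with hypothetical study inputs (not the review's data):

    ```python
    # Fixed-effect inverse-variance pooling: each study is weighted by the
    # reciprocal of its sampling variance; the pooled SE is 1/sqrt(sum of weights).
    import math

    def pooled_effect(effects, variances):
        """Return the fixed-effect pooled estimate and its 95% CI."""
        weights = [1 / v for v in variances]
        est = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
        se = math.sqrt(1 / sum(weights))
        return est, (est - 1.96 * se, est + 1.96 * se)

    effects = [-1.2, -0.8, -1.0]     # hypothetical standardized mean differences
    variances = [0.04, 0.05, 0.02]   # hypothetical sampling variances
    est, ci = pooled_effect(effects, variances)   # est is about -1.01
    ```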

  11. Local Validation of Global Estimates of Biosphere Properties: Synthesis of Scaling Methods and Results Across Several Major Biomes

    NASA Technical Reports Server (NTRS)

    Cohen, Warren B.; Wessman, Carol A.; Aber, John D.; VanderCaslte, John R.; Running, Steven W.

    1998-01-01

    To assist in validating future MODIS land cover, LAI, IPAR, and NPP products, this project conducted a series of prototyping exercises that resulted in enhanced understanding of the issues regarding such validation. As a result, we have several papers to appear as a special issue of Remote Sensing of Environment in 1999. Also, we have been successful at obtaining a follow-on grant to pursue actual validation of these products over the next several years. This document consists of a delivery letter, including a listing of published papers.

  12. MRPrimer: a MapReduce-based method for the thorough design of valid and ranked primers for PCR

    PubMed Central

    Kim, Hyerin; Kang, NaNa; Chon, Kang-Wook; Kim, Seonho; Lee, NaHye; Koo, JaeHyung; Kim, Min-Soo

    2015-01-01

    Primer design is a fundamental technique that is widely used for polymerase chain reaction (PCR). Although many methods have been proposed for primer design, they require a great deal of manual effort to generate feasible and valid primers, including homology tests on off-target sequences using BLAST-like tools. That approach is especially inconvenient for the many target sequences of quantitative PCR (qPCR), which must all satisfy the same stringent and allele-invariant constraints. To address this issue, we propose an entirely new method called MRPrimer that can design all feasible and valid primer pairs existing in a DNA database at once, while simultaneously checking a multitude of filtering constraints and validating primer specificity. Furthermore, MRPrimer suggests the best primer pair for each target sequence, based on a ranking method. Through qPCR analysis using 343 primer pairs and the corresponding sequencing and comparative analyses, we showed that the primer pairs designed by MRPrimer are very stable and effective for qPCR. In addition, MRPrimer is computationally efficient and scalable and therefore useful for quickly constructing an entire collection of feasible and valid primers for frequently updated databases like RefSeq. Furthermore, we suggest that MRPrimer can be utilized conveniently for experiments requiring primer design, especially real-time qPCR. PMID:26109350

  13. Method of Analysis by the U.S. Geological Survey California District Sacramento Laboratory-- Determination of Dissolved Organic Carbon in Water by High Temperature Catalytic Oxidation, Method Validation, and Quality-Control Practices

    USGS Publications Warehouse

    Bird, Susan M.; Fram, Miranda S.; Crepeau, Kathryn L.

    2003-01-01

    An analytical method has been developed for the determination of dissolved organic carbon concentration in water samples. This report includes the results of the tests used to validate the method and describes the quality-control practices used for dissolved organic carbon analysis. Prior to analysis, water samples are filtered to remove suspended particulate matter. A Shimadzu TOC-5000A Total Organic Carbon Analyzer in the nonpurgeable organic carbon mode is used to analyze the samples by high temperature catalytic oxidation. The analysis usually is completed within 48 hours of sample collection. The laboratory reporting level is 0.22 milligrams per liter.

  14. A new method for assessing content validity in model-based creation and iteration of eHealth interventions.

    PubMed

    Kassam-Adams, Nancy; Marsac, Meghan L; Kohser, Kristen L; Kenardy, Justin A; March, Sonja; Winston, Flaura K

    2015-04-15

    The advent of eHealth interventions to address psychological concerns and health behaviors has created new opportunities, including the ability to optimize the effectiveness of intervention activities and then deliver these activities consistently to a large number of individuals in need. Given that eHealth interventions grounded in a well-delineated theoretical model for change are more likely to be effective and that eHealth interventions can be costly to develop, assuring the match of final intervention content and activities to the underlying model is a key step. We propose to apply the concept of "content validity" as a crucial checkpoint to evaluate the extent to which proposed intervention activities in an eHealth intervention program are valid (eg, relevant and likely to be effective) for the specific mechanism of change that each is intended to target and the intended target population for the intervention. The aims of this paper are to define content validity as it applies to model-based eHealth intervention development, to present a feasible method for assessing content validity in this context, and to describe the implementation of this new method during the development of a Web-based intervention for children. We designed a practical 5-step method for assessing content validity in eHealth interventions: (1) defining key intervention targets; (2) delineating intervention activity-target pairings; (3) identifying experts; (4) using a survey tool to gather expert ratings of the relevance of each activity to its intended target, its likely effectiveness in achieving the intended target, and its appropriateness with a specific intended audience; and (5) using quantitative and qualitative results to identify intervention activities that may need modification. We applied this method during our development of the Coping Coach Web-based intervention for school-age children. 
In the evaluation of Coping Coach content validity, 15 experts from five countries rated each of 15 intervention activity-target pairings. Based on quantitative indices, content validity was excellent for relevance and good for likely effectiveness and age-appropriateness. Two intervention activities had item-level indicators that suggested the need for further review and potential revision by the development team. This project demonstrated that assessment of content validity can be straightforward and feasible to implement and that results of this assessment provide useful information for ongoing development and iterations of new eHealth interventions, complementing other sources of information (eg, user feedback, effectiveness evaluations). This approach can be utilized at one or more points during the development process to guide ongoing optimization of eHealth interventions.
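
    One common way to turn expert ratings like these into a quantitative index is an item-level content validity index (CVI): the proportion of experts rating an item as relevant. This sketch is illustrative only; the ratings are hypothetical and the paper may aggregate differently:

    ```python
    # Item-level content validity index: share of experts rating an item
    # 3 or 4 on a 4-point relevance scale.

    def item_cvi(ratings, relevant_threshold=3):
        return sum(r >= relevant_threshold for r in ratings) / len(ratings)

    # 15 experts rating one hypothetical activity-target pairing:
    ratings = [4, 4, 3, 4, 3, 4, 2, 4, 3, 4, 4, 3, 4, 4, 3]
    cvi = item_cvi(ratings)   # 14 of 15 experts rated it relevant
    ```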

  15. Cross-validation pitfalls when selecting and assessing regression and classification models.

    PubMed

    Krstajic, Damjan; Buturovic, Ljubomir J; Leahy, David E; Thomas, Simon

    2014-03-29

    We address the problem of selecting and assessing classification and regression models using cross-validation. Current state-of-the-art methods can yield models with high variance, rendering them unsuitable for a number of practical applications including QSAR. In this paper we describe and evaluate best practices which improve reliability and increase confidence in selected models. A key operational component of the proposed methods is cloud computing which enables routine use of previously infeasible approaches. We describe in detail an algorithm for repeated grid-search V-fold cross-validation for parameter tuning in classification and regression, and we define a repeated nested cross-validation algorithm for model assessment. As regards variable selection and parameter tuning we define two algorithms (repeated grid-search cross-validation and double cross-validation), and provide arguments for using the repeated grid-search in the general case. We show results of our algorithms on seven QSAR datasets. The variation of the prediction performance, which is the result of choosing different splits of the dataset in V-fold cross-validation, needs to be taken into account when selecting and assessing classification and regression models. We demonstrate the importance of repeating cross-validation when selecting an optimal model, as well as the importance of repeating nested cross-validation when assessing a prediction error.
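
    The central point, that a single V-fold split is an arbitrary choice whose variance should be averaged out by repetition, can be sketched in pure Python. This is a toy illustration of repeated cross-validation, not the authors' implementation; the model family and data are hypothetical:

    ```python
    # Repeated V-fold cross-validation: average held-out squared error over
    # several random partitions so model selection is not driven by one split.
    import random

    def v_fold_indices(n, v, rng):
        """One random partition of range(n) into v folds."""
        idx = list(range(n))
        rng.shuffle(idx)
        return [idx[i::v] for i in range(v)]

    def repeated_cv_error(xs, ys, fit, v=5, repeats=10, seed=0):
        """Mean held-out squared error over `repeats` random V-fold splits.
        fit(train_x, train_y) -> prediction function."""
        rng = random.Random(seed)
        n = len(xs)
        errs = []
        for _ in range(repeats):
            for fold in v_fold_indices(n, v, rng):
                held = set(fold)
                train = [i for i in range(n) if i not in held]
                f = fit([xs[i] for i in train], [ys[i] for i in train])
                errs += [(f(xs[i]) - ys[i]) ** 2 for i in fold]
        return sum(errs) / len(errs)

    # Toy candidate model: predict the training mean. In a grid search this
    # call would be repeated once per hyperparameter value, keeping the best.
    mean_model = lambda tx, ty: (lambda x: sum(ty) / len(ty))
    xs = list(range(20))
    ys = [x + 0.1 for x in xs]
    err = repeated_cv_error(xs, ys, mean_model)
    ```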

  16. Content validity of symptom-based measures for diabetic, chemotherapy, and HIV peripheral neuropathy.

    PubMed

    Gewandter, Jennifer S; Burke, Laurie; Cavaletti, Guido; Dworkin, Robert H; Gibbons, Christopher; Gover, Tony D; Herrmann, David N; Mcarthur, Justin C; McDermott, Michael P; Rappaport, Bob A; Reeve, Bryce B; Russell, James W; Smith, A Gordon; Smith, Shannon M; Turk, Dennis C; Vinik, Aaron I; Freeman, Roy

    2017-03-01

    No treatments for axonal peripheral neuropathy are approved by the United States Food and Drug Administration (FDA). Although patient- and clinician-reported outcomes are central to evaluating neuropathy symptoms, they can be difficult to assess accurately. The inability to identify efficacious treatments for peripheral neuropathies could be due to invalid or inadequate outcome measures. This systematic review examined the content validity of symptom-based measures of diabetic peripheral neuropathy, HIV neuropathy, and chemotherapy-induced peripheral neuropathy. Use of all FDA-recommended methods to establish content validity was only reported for 2 of 18 measures. Multiple sensory and motor symptoms were included in measures for all 3 conditions; these included numbness, tingling, pain, allodynia, difficulty walking, and cramping. Autonomic symptoms were less frequently included. Given significant overlap in symptoms between neuropathy etiologies, a measure with content validity for multiple neuropathies with supplemental disease-specific modules could be of great value in the development of disease-modifying treatments for peripheral neuropathies. Muscle Nerve 55: 366-372, 2017. © 2016 Wiley Periodicals, Inc.

  17. Validation of a Russian Language Oswestry Disability Index Questionnaire.

    PubMed

    Yu, Elizabeth M; Nosova, Emily V; Falkenstein, Yuri; Prasad, Priya; Leasure, Jeremi M; Kondrashov, Dimitriy G

    2016-11-01

    Study Design: Retrospective reliability and validity study. Objective: To validate a recently translated Russian language version of the Oswestry Disability Index (R-ODI) using standardized methods detailed in previous validations in other languages. Methods: We included all subjects who were seen in our spine surgery clinic, were over the age of 18, and were fluent in Russian. The R-ODI was translated by six bilingual people and combined into a consensus version. R-ODI and visual analog scale (VAS) questionnaires for leg and back pain were distributed to subjects during both their initial and follow-up visits. Test validity, stability, and internal consistency were measured using standardized psychometric methods. Results: Ninety-seven subjects participated in the study. No change in the meaning of the questions on the R-ODI was noted with translation from English to Russian. There was a significant positive correlation between R-ODI and VAS scores for both the leg and back during both the initial and follow-up visits (p < 0.01 for all). The instrument was shown to have high internal consistency (Cronbach α = 0.82) and moderate test-retest stability (intraclass correlation coefficient = 0.70). Conclusions: The R-ODI is both valid and reliable for use among the Russian-speaking population in the United States.

  18. Validation of newly developed and redesigned key indicator methods for assessment of different working conditions with physical workloads based on mixed-methods design: a study protocol

    PubMed Central

    Liebers, Falk; Brandstädt, Felix; Schust, Marianne; Serafin, Patrick; Schäfer, Andreas; Gebhardt, Hansjürgen; Hartmann, Bernd; Steinberg, Ulf

    2017-01-01

    Introduction The impact of work-related musculoskeletal disorders is considerable. The assessment of work tasks with physical workloads is crucial to estimate the work-related health risks of exposed employees. Three key indicator methods (KIMs) are available for risk assessment regarding manual lifting, holding and carrying of loads; manual pulling and pushing of loads; and manual handling operations. Three further KIMs, for risk assessment regarding whole-body forces, awkward body postures and body movement, have been developed de novo. In addition, the development of a newly drafted combined method for mixed exposures is planned. All methods will be validated regarding face validity, reliability, convergent validity, criterion validity and further aspects of utility under practical conditions. Methods and analysis As part of the joint project MEGAPHYS (multilevel risk assessment of physical workloads), a mixed-methods study is being designed for the validation of KIMs and conducted in companies of different sizes and branches in Germany. Workplaces are documented and analysed by observations, applying KIMs, interviews and assessment of environmental conditions. Furthermore, a survey among the employees at the respective workplaces takes place with standardised questionnaires, interviews and physical examinations. It is intended to include 1200 employees at 120 different workplaces. For analysis of the quality criteria, recommendations of the COSMIN checklist (COnsensus-based Standards for the selection of health Measurement INstruments) will be taken into account. Ethics and dissemination The study was planned and conducted in accordance with the German Medical Professional Code and the Declaration of Helsinki as well as the German Federal Data Protection Act. The design of the study was approved by ethics committees. We intend to publish the validated KIMs in 2018.
Results will be published in peer-reviewed journals, presented at international meetings and disseminated to actual users for practical application. PMID:28827239

  19. Pressure ulcer prevention algorithm content validation: a mixed-methods, quantitative study.

    PubMed

    van Rijswijk, Lia; Beitz, Janice M

    2015-04-01

    Translating pressure ulcer prevention (PUP) evidence-based recommendations into practice remains challenging for a variety of reasons, including the perceived quality, validity, and usability of the research or the guideline itself. Following the development and face validation testing of an evidence-based PUP algorithm, additional stakeholder input and testing were needed. Using convenience sampling methods, wound care experts attending a national wound care conference and a regional wound ostomy continence nursing (WOCN) conference and/or graduates of a WOCN program were invited to participate in an Institutional Review Board-approved, mixed-methods quantitative survey with qualitative components to examine algorithm content validity. After participants provided written informed consent, demographic variables were collected and participants were asked to comment on and rate the relevance and appropriateness of each of the 26 algorithm decision points/steps using standard content validation study procedures. All responses were anonymous. Descriptive summary statistics, mean relevance/appropriateness scores, and the content validity index (CVI) were calculated. Qualitative comments were transcribed and thematically analyzed. Of the 553 wound care experts invited, 79 (average age 52.9 years, SD 10.1; range 23-73) consented to participate and completed the study (a response rate of 14%). Most (67, 85%) were female, registered (49, 62%) or advanced practice (12, 15%) nurses, and had > 10 years of health care experience (88, 92%). Other health disciplines included medical doctors, physical therapists, nurse practitioners, and certified nurse specialists. Almost all had received formal wound care education (75, 95%). On a Likert-type scale of 1 (not relevant/appropriate) to 4 (very relevant and appropriate), the average score for the entire algorithm/all decision points (N = 1,912) was 3.72 with an overall CVI of 0.94 (out of 1).
The only decision point/step recommendation with a CVI of ≤ 0.70 was the recommendation to provide medical-grade sheepskin for patients at high risk for friction/shear. Many positive and substantive suggestions for minor modifications including color, flow, and algorithm orientation were received. The high overall and individual item rating scores and CVI further support the validity and appropriateness of the PUP algorithm with the addition of the minor modifications. The generic recommendations facilitate individualization, and future research should focus on construct validation testing.
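
    The content validity index (CVI) used above is, per standard content validation procedures, the proportion of experts rating an item 3 or 4 on the 4-point relevance scale; a minimal sketch with invented ratings:

```python
import numpy as np

def item_cvi(ratings):
    """Item-level CVI: share of experts rating the item 3 or 4
    on a 1 (not relevant) to 4 (very relevant) scale."""
    r = np.asarray(ratings)
    return (r >= 3).mean()

# Hypothetical ratings for one decision point from 10 experts
ratings = [4, 4, 3, 4, 2, 4, 3, 4, 4, 3]
print(item_cvi(ratings))  # → 0.9
```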

  20. Derivation and validation of simple anthropometric equations to predict adipose tissue mass and total fat mass with MRI as the reference method

    PubMed Central

    Al-Gindan, Yasmin Y.; Hankey, Catherine R.; Govan, Lindsay; Gallagher, Dympna; Heymsfield, Steven B.; Lean, Michael E. J.

    2017-01-01

    The reference organ-level body composition measurement method is MRI. Practical estimations of total adipose tissue mass (TATM), total adipose tissue fat mass (TATFM) and total body fat are valuable for epidemiology, but validated prediction equations based on MRI are not currently available. We aimed to derive and validate new anthropometric equations to estimate MRI-measured TATM/TATFM/total body fat and compare them with existing prediction equations using older methods. The derivation sample included 416 participants (222 women), aged between 18 and 88 years with BMI between 15·9 and 40·8 (kg/m2). The validation sample included 204 participants (110 women), aged between 18 and 86 years with BMI between 15·7 and 36·4 (kg/m2). Both samples included mixed ethnic/racial groups. All the participants underwent whole-body MRI to quantify TATM (dependent variable) and anthropometry (independent variables). Prediction equations developed using stepwise multiple regression were further investigated for agreement and bias before validation in separate data sets. Simplest equations with optimal R2 and Bland–Altman plots demonstrated good agreement without bias in the validation analyses: men: TATM (kg) = 0·198 weight (kg) + 0·478 waist (cm) − 0·147 height (cm) − 12·8 (validation: R2 0·79, CV = 20 %, standard error of the estimate (SEE) = 3·8 kg) and women: TATM (kg) = 0·789 weight (kg) + 0·0786 age (years) − 0·342 height (cm) + 24·5 (validation: R2 0·84, CV = 13 %, SEE = 3·0 kg). Published anthropometric prediction equations, based on MRI and computed tomographic scans, correlated strongly with MRI-measured TATM (R2 0·70–0·82). Estimated TATFM correlated well with published prediction equations for total body fat based on underwater weighing (R2 0·70–0·80), with mean bias of 2·5–4·9 kg, correctable with log-transformation in most equations.
In conclusion, new equations, using simple anthropometric measurements, estimated MRI-measured TATM with correlations and agreements suitable for use in groups and populations across a wide range of fatness. PMID:26435103
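
    The two prediction equations quoted in this abstract can be applied directly; the sketch below transcribes them as stated (the example inputs are hypothetical, not from the study):

```python
def tatm_kg(sex, weight_kg, height_cm, waist_cm=None, age_yr=None):
    """Total adipose tissue mass (kg) from the equations in the abstract:
    men use weight, waist and height; women use weight, age and height."""
    if sex == "M":
        return 0.198 * weight_kg + 0.478 * waist_cm - 0.147 * height_cm - 12.8
    return 0.789 * weight_kg + 0.0786 * age_yr - 0.342 * height_cm + 24.5

# Hypothetical example: 80 kg, 175 cm man with a 90 cm waist
print(round(tatm_kg("M", 80, 175, waist_cm=90), 1))  # → 20.3
```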

  1. Development and Validation of HPLC-DAD and UHPLC-DAD Methods for the Simultaneous Determination of Guanylhydrazone Derivatives Employing a Factorial Design.

    PubMed

    Azevedo de Brito, Wanessa; Gomes Dantas, Monique; Andrade Nogueira, Fernando Henrique; Ferreira da Silva-Júnior, Edeildo; Xavier de Araújo-Júnior, João; Aquino, Thiago Mendonça de; Adélia Nogueira Ribeiro, Êurica; da Silva Solon, Lilian Grace; Soares Aragão, Cícero Flávio; Barreto Gomes, Ana Paula

    2017-08-30

    Guanylhydrazones are molecules with great pharmacological potential in various therapeutic areas, including antitumoral activity. Factorial design is an excellent tool for the optimization of a chromatographic method, because it allows factors such as temperature, mobile phase composition, mobile phase pH, and column length, among others, to be changed quickly in order to establish the optimal conditions of analysis. The aim of the present work was to develop and validate HPLC and UHPLC methods for the simultaneous determination of guanylhydrazones with anticancer activity, employing experimental design. Precise, accurate, linear and robust HPLC and UHPLC methods were developed and validated for the simultaneous quantification of the guanylhydrazones LQM10, LQM14, and LQM17. The UHPLC method was more economical, with four times lower solvent consumption and a 20 times smaller injection volume, which allowed better column performance. Comparing the empirical approach employed in the HPLC method development to the DoE approach employed in the UHPLC method development, we can conclude that the factorial design made method development faster, more practical and more rational. This resulted in methods that can be employed in the analysis, evaluation and quality control of these new synthetic guanylhydrazones.

  2. Challenges in application of bioanalytical method on different populations and effect of population on PK.

    PubMed

    Kale, Prashant; Shukla, Manoj; Soni, Gunjan; Patel, Ronak; Gupta, Shailendra

    2014-01-01

    Prashant Kale has 22 years of immense experience in the analytical and bioanalytical domain. He is Senior Vice President, Bioequivalence Operations of Lambda Therapeutic Research, India, which includes the Bioanalytical, Clinics, Clinical data management, Pharmacokinetics and Biostatistics, Protocol writing, Clinical lab and Quality Assurance departments. He has been with Lambda for over 14 years. By qualification he is a M.Sc. and an MBA. Mr. Kale is responsible for the management, technical and administrative functions of the BE unit located at Ahmedabad and Mumbai, India. He is also responsible for leading the process of integration between bioanalytical laboratories and services offered by Lambda at global locations (India and Canada). Mr. Kale has faced several regulatory audits and inspections from leading regulatory bodies including but not limited to DCGI, USFDA, ANVISA, Health Canada, UK MHRA, Turkey MoH, and WHO. There are many challenges involved in the application of a bioanalytical method to different populations. These include differences in equipment, materials and environment across laboratories, variations in matrix characteristics between populations, and differences in technique between analysts, such as in sample processing and handling. Additionally, there is variability in the PK of a drug in different populations. This article shows the effect of different populations on a validated bioanalytical method and on the PK of a drug. Hence, a bioanalytical method developed and validated for a specific population may require modification when applied to another population. Critical consideration of all such aspects is the key to successful implementation of a validated method across different populations.

  3. A simple validated multi-analyte method for detecting drugs in oral fluid by ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS).

    PubMed

    Zheng, Yufang; Sparve, Erik; Bergström, Mats

    2018-06-01

    A UPLC-MS/MS method was developed to identify and quantitate 37 commonly abused drugs in oral fluid. Drugs of interest included amphetamines, benzodiazepines, cocaine, opiates, opioids, phencyclidine and tetrahydrocannabinol. Sample preparation and extraction are simple, and analysis times are short. Validation showed satisfactory performance at relevant concentrations. The possibility of contaminated samples, as well as the interpretation of results in relation to well-known matrices such as urine, will demand further study. Copyright © 2017 John Wiley & Sons, Ltd.

  4. Dimension from covariance matrices.

    PubMed

    Carroll, T L; Byers, J M

    2017-02-01

    We describe a method to estimate embedding dimension from a time series. This method includes an estimate of the probability that the dimension estimate is valid. Such validity estimates are not common in algorithms for calculating the properties of dynamical systems. The algorithm described here compares the eigenvalues of covariance matrices created from an embedded signal to the eigenvalues for a covariance matrix of a Gaussian random process with the same dimension and number of points. A statistical test gives the probability that the eigenvalues for the embedded signal did not come from the Gaussian random process.
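
    The core construction described here, a covariance matrix built from a delay-embedded signal and its eigenvalue spectrum, can be sketched as follows. The statistical comparison against the Gaussian reference process is not reproduced, and the embedding parameters below are illustrative:

```python
import numpy as np

def embedding_cov_eigvals(x, dim, delay=1):
    """Eigenvalues (descending) of the covariance matrix of a
    delay-embedded time series."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay: i * delay + n] for i in range(dim)])
    cov = np.cov(emb, rowvar=False)
    return np.sort(np.linalg.eigvalsh(cov))[::-1]

# A noisy sine wave effectively occupies two embedding dimensions
rng = np.random.default_rng(0)
t = np.linspace(0, 20 * np.pi, 2000)
signal = np.sin(t) + 0.01 * rng.standard_normal(t.size)
eig = embedding_cov_eigvals(signal, dim=5, delay=10)
print(eig[:2].sum() / eig.sum())  # almost all variance in two components
```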

  5. Use of the "Intervention Selection Profile-Social Skills" to Identify Social Skill Acquisition Deficits: A Preliminary Validation Study

    ERIC Educational Resources Information Center

    Kilgus, Stephen P.; von der Embse, Nathaniel P.; Scott, Katherine; Paxton, Sara

    2015-01-01

    The purpose of this investigation was to develop and initially validate the "Intervention Selection Profile-Social Skills" (ISP-SS), a novel brief social skills assessment method intended for use at Tier 2. Participants included 54 elementary school teachers and their 243 randomly selected students. Teachers rated students on two rating…

  6. Validity, Reliability, and Equity Issues in an Observational Talent Assessment Process in the Performing Arts

    ERIC Educational Resources Information Center

    Oreck, Barry A.; Owen, Steven V.; Baum, Susan M.

    2003-01-01

    The lack of valid, research-based methods to identify potential artistic talent hampers the inclusion of the arts in programs for the gifted and talented. The Talent Assessment Process in Dance, Music, and Theater (D/M/T TAP) was designed to identify potential performing arts talent in diverse populations, including bilingual and special education…

  7. NIOSH analytical methods for Set G

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Not Available

    1976-12-01

    Industrial Hygiene sampling and analytical monitoring methods validated under the joint NIOSH/OSHA Standards Completion Program for Set G are contained herein. Monitoring methods for the following compounds are included: butadiene, heptane, ketene, methyl cyclohexane, octachloronaphthalene, pentachloronaphthalene, petroleum distillates, propylene dichloride, turpentine, dioxane, hexane, LPG, naphtha(coal tar), octane, pentane, propane, and stoddard solvent.

  8. Technical Notes on the Multifactor Method of Elementary School Closing.

    ERIC Educational Resources Information Center

    Puleo, Vincent T.

    This report provides preliminary technical information on a method for analyzing the factors involved in the closing of elementary schools. Included is a presentation of data and a brief discussion bearing on descriptive statistics, reliability, and validity. An intercorrelation matrix is also examined. The method employs 9 factors that have a…

  9. Validity of Bioelectrical Impedance Analysis to Estimate Fat-Free Mass in Army Cadets.

    PubMed

    Langer, Raquel D; Borges, Juliano H; Pascoa, Mauro A; Cirolini, Vagner X; Guerra-Júnior, Gil; Gonçalves, Ezequiel M

    2016-03-11

    Bioelectrical Impedance Analysis (BIA) is a fast, practical, non-invasive, and frequently used method for fat-free mass (FFM) estimation. The aims of this study were to validate published BIA predictive equations for FFM estimation in Army cadets and to develop and validate a specific BIA equation for this population. A total of 396 male Brazilian Army cadets, aged 17-24 years, were included. The study used eight published predictive BIA equations and a newly developed specific equation for FFM estimation, with dual-energy X-ray absorptiometry (DXA) as the reference method. Student's t-test (for paired samples), linear regression analysis, and the Bland-Altman method were used to test the validity of the BIA equations. The published predictive BIA equations showed significant differences in FFM compared to DXA (p < 0.05) and large limits of agreement by Bland-Altman, and explained 68% to 88% of FFM variance; they therefore showed poor accuracy in this sample. The specific BIA equation showed no significant difference in FFM compared to DXA values and demonstrated validity for this sample, although it should be used with caution in samples with a large range of FFM.
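
    The Bland-Altman analysis used in this study reduces to the mean difference (bias) between methods and its 95% limits of agreement; a minimal sketch with invented DXA and BIA values:

```python
import numpy as np

def bland_altman_limits(ref, pred):
    """Mean bias and 95% limits of agreement between two methods."""
    diff = np.asarray(pred, float) - np.asarray(ref, float)
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical FFM values (kg): DXA reference vs. one BIA equation
dxa = [55.2, 60.1, 58.4, 62.0, 57.3]
bia = [56.0, 61.5, 57.9, 63.2, 58.1]
bias, lo, hi = bland_altman_limits(dxa, bia)
print(round(bias, 2))  # → 0.74
```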

  10. Development and validation of a multiresidue method for the analysis of polybrominated diphenyl ethers, new brominated and organophosphorus flame retardants in sediment, sludge and dust.

    PubMed

    Cristale, Joyce; Lacorte, Silvia

    2013-08-30

    This study presents a multiresidue method for simultaneous extraction, clean-up and analysis of priority and emerging flame retardants in sediment, sewage sludge and dust. Studied compounds included eight polybrominated diphenyl ether congeners, nine new brominated flame retardants and ten organophosphorus flame retardants. The analytical method was based on ultrasound-assisted extraction with ethyl acetate/cyclohexane (5:2, v/v), clean-up with Florisil cartridges and analysis by gas chromatography coupled to tandem mass spectrometry (GC-EI-MS/MS). The method development and validation protocol included spiked samples, certified reference material (for dust), and participation in an interlaboratory calibration. The method proved to be efficient and robust for the extraction and determination of the three families of flame retardants in the studied solid matrices. The method was applied to river sediment, sewage sludge and dust samples, and allowed detection of 24 of the 27 studied flame retardants. Organophosphate esters, BDE-209 and decabromodiphenyl ethane were the most ubiquitous contaminants detected. Copyright © 2013 Elsevier B.V. All rights reserved.

  11. A Process Analytical Technology (PAT) approach to control a new API manufacturing process: development, validation and implementation.

    PubMed

    Schaefer, Cédric; Clicq, David; Lecomte, Clémence; Merschaert, Alain; Norrant, Edith; Fotiadu, Frédéric

    2014-03-01

    Pharmaceutical companies are progressively adopting and introducing Process Analytical Technology (PAT) and Quality-by-Design (QbD) concepts promoted by the regulatory agencies, with the aim of building quality directly into the product by combining thorough scientific understanding and quality risk management. An analytical method based on near infrared (NIR) spectroscopy was developed as a PAT tool to control on-line an API (active pharmaceutical ingredient) manufacturing crystallization step during which the API and residual solvent contents need to be precisely determined to reach the predefined seeding point. An original methodology based on the QbD principles was designed to conduct the development and validation of the NIR method and to ensure that it is fit for its intended use. On this basis, partial least squares (PLS) models were developed and optimized using chemometric methods. The method was fully validated according to the ICH Q2(R1) guideline and using the accuracy profile approach. The validated ranges were 9.0-12.0% w/w for the API and 0.18-1.50% w/w for the residual methanol. As the variability of the sampling method and the reference method are by nature included in the variability obtained for the NIR method during the validation phase, a real-time process monitoring exercise was performed to prove its fitness for purpose. The implementation of this in-process control (IPC) method on the industrial plant from the launch of the new API synthesis process will enable automatic control of the final crystallization step in order to ensure a predefined quality level of the API. In addition, several valuable benefits are expected, including reduction of the process time and suppression of a rather difficult sampling step and tedious off-line analyses. © 2013 Published by Elsevier B.V.

  12. Assessment of the Validity of the Research Diagnostic Criteria for Temporomandibular Disorders: Overview and Methodology

    PubMed Central

    Schiffman, Eric L.; Truelove, Edmond L.; Ohrbach, Richard; Anderson, Gary C.; John, Mike T.; List, Thomas; Look, John O.

    2011-01-01

    AIMS The purpose of the Research Diagnostic Criteria for Temporomandibular Disorders (RDC/TMD) Validation Project was to assess the diagnostic validity of this examination protocol. An overview is presented, including Axis I and II methodology and descriptive statistics for the study participant sample. This paper details the development of reliable methods to establish the reference standards for assessing criterion validity of the Axis I RDC/TMD diagnoses. Validity testing for the Axis II biobehavioral instruments was based on previously validated reference standards. METHODS The Axis I reference standards were based on the consensus of 2 criterion examiners independently performing a comprehensive history, clinical examination, and evaluation of imaging. Intersite reliability was assessed annually for criterion examiners and radiologists. Criterion exam reliability was also assessed within study sites. RESULTS Study participant demographics were comparable to those of participants in previous studies using the RDC/TMD. Diagnostic agreement of the criterion examiners with each other and with the consensus-based reference standards was excellent with all kappas ≥ 0.81, except for osteoarthrosis (moderate agreement, k = 0.53). Intrasite criterion exam agreement with reference standards was excellent (k ≥ 0.95). Intersite reliability of the radiologists for detecting computed tomography-disclosed osteoarthrosis and magnetic resonance imaging-disclosed disc displacement was good to excellent (k = 0.71 and 0.84, respectively). CONCLUSION The Validation Project study population was appropriate for assessing the reliability and validity of the RDC/TMD Axis I and II. The reference standards used to assess the validity of Axis I TMD were based on reliable and clinically credible methods. PMID:20213028

  13. Development and Validation of an Inductively Coupled Plasma Mass Spectrometry (ICP-MS) Method for Quantitative Analysis of Platinum in Plasma, Urine, and Tissues.

    PubMed

    Zhang, Ti; Cai, Shuang; Forrest, Wai Chee; Mohr, Eva; Yang, Qiuhong; Forrest, M Laird

    2016-09-01

    Cisplatin, a platinum chemotherapeutic, is one of the most commonly used chemotherapeutic agents for many solid tumors. In this work, we developed and validated an inductively coupled plasma mass spectrometry (ICP-MS) method for quantitative determination of platinum levels in rat urine, plasma, and tissue matrices including liver, brain, lungs, kidney, muscle, heart, spleen, bladder, and lymph nodes. The tissues were processed using a microwave accelerated reaction system (MARS) system prior to analysis on an Agilent 7500 ICP-MS. According to the Food and Drug Administration guidance for industry, bioanalytical validation parameters of the method, such as selectivity, accuracy, precision, recovery, and stability were evaluated in rat biological samples. Our data suggested that the method was selective for platinum without interferences caused by other presenting elements, and the lower limit of quantification was 0.5 ppb. The accuracy and precision of the method were within 15% variation and the recoveries of platinum for all tissue matrices examined were determined to be 85-115% of the theoretical values. The stability of the platinum-containing solutions, including calibration standards, stock solutions, and processed samples in rat biological matrices was investigated. Results indicated that the samples were stable after three cycles of freeze-thaw and for up to three months. © The Author(s) 2016.

  15. Developing and validating risk prediction models in an individual participant data meta-analysis

    PubMed Central

    2014-01-01

    Background Risk prediction models estimate the risk of developing future outcomes for individuals based on one or more underlying characteristics (predictors). We review how researchers develop and validate risk prediction models within an individual participant data (IPD) meta-analysis, in order to assess the feasibility and conduct of the approach. Methods A qualitative review of the aims, methodology, and reporting in 15 articles that developed a risk prediction model using IPD from multiple studies. Results The IPD approach offers many opportunities but methodological challenges exist, including: unavailability of requested IPD, missing patient data and predictors, and between-study heterogeneity in methods of measurement, outcome definitions and predictor effects. Most articles develop their model using IPD from all available studies and perform only an internal validation (on the same set of data). Ten of the 15 articles did not allow for any study differences in baseline risk (intercepts), potentially limiting their model’s applicability and performance in some populations. Only two articles used external validation (on different data), including a novel method which develops the model on all but one of the IPD studies, tests performance in the excluded study, and repeats by rotating the omitted study. Conclusions An IPD meta-analysis offers unique opportunities for risk prediction research. Researchers can make more of this by allowing separate model intercept terms for each study (population) to improve generalisability, and by using ‘internal-external cross-validation’ to simultaneously develop and validate their model. Methodological challenges can be reduced by prospectively planned collaborations that share IPD for risk prediction. PMID:24397587
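
    The "internal-external cross-validation" recommended in the conclusions can be sketched generically: develop the model on all studies but one, evaluate it in the omitted study, and rotate. The least-squares demo below uses synthetic data and is not drawn from any of the reviewed articles:

```python
import numpy as np

def internal_external_cv(studies, fit, evaluate):
    """Fit on all but one study, test in the omitted study, and rotate;
    returns one out-of-study score per study."""
    results = []
    for i in range(len(studies)):
        train = [s for j, s in enumerate(studies) if j != i]
        X_tr = np.vstack([X for X, _ in train])
        y_tr = np.concatenate([y for _, y in train])
        model = fit(X_tr, y_tr)
        X_te, y_te = studies[i]
        results.append(evaluate(model, X_te, y_te))
    return results

# Synthetic "studies" sharing the same underlying linear model
rng = np.random.default_rng(1)
studies = []
for _ in range(3):
    X = rng.normal(size=(50, 2))
    y = X @ np.array([1.5, -0.5]) + 0.1 * rng.normal(size=50)
    studies.append((X, y))

fit = lambda X, y: np.linalg.lstsq(X, y, rcond=None)[0]
evaluate = lambda beta, X, y: float(np.mean((X @ beta - y) ** 2))
errors = internal_external_cv(studies, fit, evaluate)
print(errors)  # one out-of-study MSE per omitted study
```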

  16. Assessment of dietary sodium intake using a food frequency questionnaire and 24-hour urinary sodium excretion: a systematic literature review.

    PubMed

    McLean, Rachael M; Farmer, Victoria L; Nettleton, Alice; Cameron, Claire M; Cook, Nancy R; Campbell, Norman R C

    2017-12-01

    Food frequency questionnaires (FFQs) are often used to assess dietary sodium intake, although 24-hour urinary excretion is the most accurate measure of intake. The authors conducted a systematic review to investigate whether FFQs are a reliable and valid way of measuring usual dietary sodium intake. Results from 18 studies are described in this review, including 16 validation studies. The methods of study design and analysis varied widely with respect to FFQ instrument, number of 24-hour urine collections collected per participant, methods used to assess completeness of urine collections, and statistical analysis. Overall, there was poor agreement between estimates from FFQ and 24-hour urine. The authors suggest a framework for validation and reporting based on a consensus statement (2004), and recommend that all FFQs used to estimate dietary sodium intake undergo validation against multiple 24-hour urine collections. ©2017 Wiley Periodicals, Inc.

  17. Individualizing Pharmacotherapy in Patients with Renal Impairment: The Validity of the Modification of Diet in Renal Disease Formula in Specific Patient Populations with a Glomerular Filtration Rate below 60 Ml/Min. A Systematic Review

    PubMed Central

    Kramers, Cornelis; Derijks, Hieronymus J.; Wensing, Michel; Wetzels, Jack F. M.

    2015-01-01

    Background The Modification of Diet in Renal Disease (MDRD) formula is widely used in clinical practice to assess the correct drug dose. The formula is based on serum creatinine levels, which may be influenced by a chronic disease itself or by its effects. We conducted a systematic review to determine the validity of the MDRD formula in specific patient populations with renal impairment: elderly, hospitalized and obese patients, and patients with cardiovascular disease, cancer, chronic respiratory diseases, diabetes mellitus, liver cirrhosis and human immunodeficiency virus. Methods and Findings We searched for articles in Pubmed published from January 1999 through January 2014. Selection criteria were (1) patients with a glomerular filtration rate (GFR) < 60 ml/min (/1.73m2), (2) MDRD formula compared with a gold standard and (3) statistical analysis focused on bias, precision and/or accuracy. Data extraction was done by the first author and checked by a second author. A bias of 20% or less, a precision of 30% or less and an accuracy expressed as P30% of 80% or higher were indicators of the validity of the MDRD formula. In total we included 27 studies. The number of patients included ranged from 8 to 1831. The gold standard and measurement method used varied across the studies. For none of the specific patient populations did the studies provide sufficient evidence of the validity of the MDRD formula on all three parameters. For patients with diabetes mellitus and liver cirrhosis, hospitalized patients, and elderly patients with moderate to severe renal impairment, we concluded that the MDRD formula is not valid. Limitations of the review are the lack of consideration of the method of measuring serum creatinine levels and the type of gold standard used. Conclusion In several specific patient populations with renal impairment the use of the MDRD formula is not valid or has uncertain validity. PMID:25741695
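
    For orientation, the 4-variable (IDMS-traceable) form of the MDRD equation under review is conventionally written as below; this sketch shows the formula's structure only and takes no position on the validity questions the review raises:

```python
def mdrd_egfr(scr_mg_dl, age_yr, female=False, black=False):
    """Estimated GFR (mL/min/1.73 m^2) from the 4-variable MDRD formula."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_yr ** -0.203
    if female:
        egfr *= 0.742
    if black:
        egfr *= 1.212
    return egfr

# Example: 70-year-old woman with serum creatinine 1.8 mg/dL
print(round(mdrd_egfr(1.8, 70, female=True), 1))
```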

  18. Physiologic measures of sexual function in women: a review.

    PubMed

    Woodard, Terri L; Diamond, Michael P

    2009-07-01

    To review and describe physiologic measures for assessing sexual function in women. Literature review. Studies that use instruments designed to measure female sexual function. Women participating in studies of female sexual function. Various instruments that measure physiologic features of female sexual function. Appraisal of the various instruments, including their advantages and disadvantages. Many unique physiologic methods of evaluating female sexual function have been developed during the past four decades. Each method has its benefits and limitations. Many physiologic methods exist, but most are not well validated. In addition, there has been an inability to correlate most physiologic measures with subjective measures of sexual arousal. Furthermore, given the complex nature of the sexual response in women, physiologic measures should be considered in the context of other data, including the history, physical examination, and validated questionnaires. Nonetheless, the existence of appropriate physiologic measures is vital to our understanding of female sexual function and dysfunction.

  19. Methods for Geometric Data Validation of 3d City Models

    NASA Astrophysics Data System (ADS)

    Wagner, D.; Alam, N.; Wewetzer, M.; Pries, M.; Coors, V.

    2015-12-01

    Geometric quality of 3D city models is crucial for data analysis and simulation tasks, which are part of modern applications of the data (e.g. potential heating energy consumption of city quarters, solar potential, etc.). Geometric quality in these contexts is, however, a different concept than for 2D maps. In the latter case, aspects such as positional or temporal accuracy and correctness represent typical quality metrics of the data. They are defined in ISO 19157 and should be mentioned as part of the metadata. 3D data have a far wider range of aspects which influence their quality, and the idea of quality itself is application dependent. Thus, concepts for the definition of quality are needed, including methods to validate these definitions. Quality in this sense means internal validation and detection of inconsistent or wrong geometry according to a predefined set of rules. A useful starting point is correct geometry in accordance with ISO 19107. A valid solid should consist of planar faces which touch their neighbours exclusively in defined corner points and edges. No gaps between them are allowed, and the whole feature must be 2-manifold. In this paper, we present methods to validate common geometric requirements for building geometry. Different checks based on several algorithms have been implemented to validate a set of rules derived from the solid definition mentioned above (e.g. water tightness of the solid or planarity of its polygons), as they were developed for the software tool CityDoctor. The method of each check is specified, with a special focus on the discussion of tolerance values where they are necessary. The checks include polygon-level checks to validate the correctness of each polygon, i.e. closedness of the bounding linear ring and planarity.
On the solid level, which is only validated if the polygons have passed validation, correct polygon orientation is checked, after self-intersections outside of defined corner points and edges are detected, among additional criteria. Self-intersection might lead to different results, e.g. intersection points, lines or areas. Depending on the geometric constellation, these might represent gaps between bounding polygons of the solids, overlaps, or violations of 2-manifoldness. Not least due to the limitations of floating-point arithmetic, tolerances must be considered in some algorithms, e.g. planarity and solid self-intersection. The effects of different tolerance values and their handling are discussed, and recommendations for suitable values are given. The goal of the paper is to give a clear understanding of geometric validation in the context of 3D city models. This should also enable the data holder to get a better comprehension of the validation results and their consequences for the deployment fields of the validated data set.
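A tolerance-based planarity check of the kind described can be sketched in a few lines. The Newell-normal approach and the default tolerance below are assumptions for illustration, not the CityDoctor implementation.

```python
def newell_normal(pts):
    """Unit normal of a 3D polygon via Newell's method (robust for near-planar rings)."""
    nx = ny = nz = 0.0
    n = len(pts)
    for i in range(n):
        (x1, y1, z1), (x2, y2, z2) = pts[i], pts[(i + 1) % n]
        nx += (y1 - y2) * (z1 + z2)
        ny += (z1 - z2) * (x1 + x2)
        nz += (x1 - x2) * (y1 + y2)
    length = (nx * nx + ny * ny + nz * nz) ** 0.5
    return nx / length, ny / length, nz / length

def is_planar(pts, tol=1e-3):
    """True if every vertex lies within tol of the plane through the centroid
    with the Newell normal."""
    nx, ny, nz = newell_normal(pts)
    cx = sum(p[0] for p in pts) / len(pts)
    cy = sum(p[1] for p in pts) / len(pts)
    cz = sum(p[2] for p in pts) / len(pts)
    return all(abs((p[0] - cx) * nx + (p[1] - cy) * ny + (p[2] - cz) * nz) <= tol
               for p in pts)
```

As the paper discusses, the choice of `tol` decides whether slightly warped polygons pass; the same pattern (exact predicate plus explicit tolerance) recurs in the solid-level checks.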

  20. A bibliography on formal methods for system specification, design and validation

    NASA Technical Reports Server (NTRS)

    Meyer, J. F.; Furchtgott, D. G.; Movaghar, A.

    1982-01-01

    Literature on the specification, design, verification, testing, and evaluation of avionics systems was surveyed, providing 655 citations. Journal papers, conference papers, and technical reports are included. Manual and computer-based methods were employed. Keywords used in the online search are listed.

  1. Validity and reliability of the session-RPE method for quantifying training load in karate athletes.

    PubMed

    Tabben, M; Tourny, C; Haddad, M; Chaabane, H; Chamari, K; Coquart, J B

    2015-04-24

    To test the construct validity and reliability of the session rating of perceived exertion (sRPE) method by examining the relationship between RPE and physiological parameters (heart rate, HR, and blood lactate concentration, [La-]) and the correlations between sRPE and two HR-based methods for quantifying internal training load (Banister's method and Edwards's method) during a karate training camp. Eighteen elite karate athletes: ten men (age: 24.2 ± 2.3 y, body mass: 71.2 ± 9.0 kg, body fat: 8.2 ± 1.3% and height: 178 ± 7 cm) and eight women (age: 22.6 ± 1.2 y, body mass: 59.8 ± 8.4 kg, body fat: 20.2 ± 4.4%, height: 169 ± 4 cm) were included in the study. During the training camp, subjects participated in eight karate training sessions covering three training modes (4 tactical-technical, 2 technical-development, and 2 randori training), during which RPE, HR, and [La-] were recorded. Significant correlations were found between RPE and physiological parameters (percentage of maximal HR: r = 0.75, 95% CI = 0.64-0.86; [La-]: r = 0.62, 95% CI = 0.49-0.75; P < 0.001). Moreover, individual sRPE was significantly correlated with both HR-based methods for quantifying internal training load (r = 0.65-0.95; P < 0.001). The sRPE method also showed high reliability at the same intensity across training sessions (Cronbach's α = 0.81, 95% CI = 0.61-0.92). This study demonstrates that the sRPE method is valid for quantifying internal training load and intensity in karate.
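Two of the load measures compared here can be stated in a few lines. This is a minimal sketch using the standard definitions (sRPE as CR-10 rating times duration; Edwards's method with the usual 1-5 zone weights), not code from the study, and Banister's TRIMP is omitted.

```python
def session_rpe_load(rpe_cr10, duration_min):
    """sRPE internal training load: CR-10 session RPE x duration in minutes (a.u.)."""
    return rpe_cr10 * duration_min

def edwards_load(minutes_in_zone):
    """Edwards's HR-zone load: minutes spent in five HRmax zones
    (50-60% ... 90-100%), weighted 1 to 5 respectively."""
    return sum(weight * minutes
               for weight, minutes in zip(range(1, 6), minutes_in_zone))
```

Validating sRPE then amounts to correlating `session_rpe_load` values against the HR-based loads across sessions, as the study does.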

  2. Expansion of the Scope of AOAC First Action Method 2012.25--Single-Laboratory Validation of Triphenylmethane Dye and Leuco Metabolite Analysis in Shrimp, Tilapia, Catfish, and Salmon by LC-MS/MS.

    PubMed

    Andersen, Wendy C; Casey, Christine R; Schneider, Marilyn J; Turnipseed, Sherri B

    2015-01-01

    Prior to conducting a collaborative study of the AOAC First Action 2012.25 LC-MS/MS analytical method for the determination of residues of three triphenylmethane dyes (malachite green, crystal violet, and brilliant green) and their metabolites (leucomalachite green and leucocrystal violet) in seafood, a single-laboratory validation of Method 2012.25 was performed to expand the scope of the method to other seafood matrixes including salmon, catfish, tilapia, and shrimp. The validation included the analysis of fortified and incurred residues over multiple weeks to assess analyte stability in matrix at -80°C, a comparison of calibration methods over the range 0.25 to 4 μg/kg, a study of matrix effects on analyte quantification, and qualitative identification of targeted analytes. Method accuracy ranged from 88 to 112% with 13% RSD or less for samples fortified at 0.5, 1.0, and 2.0 μg/kg. Analyte identification and determination limits were determined by procedures recommended both by the U.S. Food and Drug Administration and the European Commission. Method detection limits and decision limits ranged from 0.05 to 0.24 μg/kg and 0.08 to 0.54 μg/kg, respectively. AOAC First Action Method 2012.25 with an extracted matrix calibration curve and internal standard correction is suitable for the determination of triphenylmethane dyes and leuco metabolites in salmon, catfish, tilapia, and shrimp by LC-MS/MS at a residue determination level of 0.5 μg/kg or below.
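The accuracy and precision figures reported (88-112% recovery, 13% RSD or less) come from replicate analyses of fortified samples. A generic sketch of the two calculations, not the method's own software:

```python
import statistics

def recovery_pct(measured_replicates, fortified_level):
    """Accuracy: mean measured concentration as a percentage of the fortified level."""
    return 100.0 * statistics.mean(measured_replicates) / fortified_level

def rsd_pct(measured_replicates):
    """Precision: relative standard deviation (sample SD / mean, as a percentage)."""
    return 100.0 * statistics.stdev(measured_replicates) / statistics.mean(measured_replicates)
```

For example, replicates of a sample fortified at 0.5 μg/kg would be judged against the recovery and RSD acceptance ranges above.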

  3. The internal and external validity of the Major Depression Inventory in measuring severity of depressive states.

    PubMed

    Olsen, L R; Jensen, D V; Noerholm, V; Martiny, K; Bech, P

    2003-02-01

    We have developed the Major Depression Inventory (MDI), consisting of 10 items covering the DSM-IV as well as the ICD-10 symptoms of depressive illness. We aimed to evaluate this as a scale measuring the severity of depressive states with reference to both internal and external validity. Patients representing the score range from no depression to marked depression on the Hamilton Depression Scale (HAM-D) completed the MDI. Both classical and modern psychometric methods, including Rasch analysis, were applied for the evaluation of validity. In total, 91 patients were included. The results showed that the MDI had adequate internal validity in being a unidimensional scale (the total score being an appropriate or sufficient statistic). The external validity of the MDI was also confirmed, as the total score of the MDI correlated significantly with the HAM-D (Pearson's coefficient 0.86, P ≤ 0.01; Spearman 0.80, P ≤ 0.01). When used in a sample of patients with different states of depression, the MDI has adequate internal and external validity.
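The external-validity correlations reported (Pearson 0.86, Spearman 0.80) are standard paired-score statistics. A minimal sketch of both, with Spearman computed as the Pearson correlation of ranks (tie handling omitted for brevity):

```python
def pearson(x, y):
    """Pearson product-moment correlation of two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return sxy / (sx * sy)

def spearman(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks (no ties assumed)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    return pearson(ranks(x), ranks(y))
```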

  4. Large scale study of multiple-molecule queries

    PubMed Central

    2009-01-01

    Background In ligand-based screening, as well as in other chemoinformatics applications, one seeks to effectively search large repositories of molecules in order to retrieve molecules that are similar, typically, to a single lead molecule. However, in some cases, multiple molecules from the same family are available to seed the query and search for other members of the same family. Multiple-molecule query methods have been less studied than single-molecule query methods. Furthermore, previous studies have relied on proprietary data and sometimes have not used proper cross-validation methods to assess the results. In contrast, here we develop and compare multiple-molecule query methods using several large publicly available data sets and background data sets. We also create a framework based on a strict cross-validation protocol to allow unbiased benchmarking for direct comparison in future studies across several performance metrics. Results Fourteen different multiple-molecule query methods were defined and benchmarked using: (1) 41 publicly available data sets of related molecules with similar biological activity; and (2) publicly available background data sets consisting of up to 175,000 molecules randomly extracted from the ChemDB database and other sources. Eight of the fourteen methods were parameter free, and six of them fit one or two free parameters to the data using a careful cross-validation protocol. All the methods were assessed and compared for their ability to retrieve members of the same family against the background data set using several performance metrics, including the Area Under the Accumulation Curve (AUAC), Area Under the Curve (AUC), F1-measure, and BEDROC metrics. Consistent with the previous literature, the best parameter-free methods are the MAX-SIM and MIN-RANK methods, which score a molecule against a family by the maximum similarity, or minimum ranking, obtained across the family.
One new parameterized method introduced in this study and two previously defined methods, namely the Exponential Tanimoto Discriminant (ETD), the Tanimoto Power Discriminant (TPD), and the Binary Kernel Discriminant (BKD), outperform most other methods but are more complex, requiring one or two parameters to be fit to the data. Conclusion Fourteen methods for multiple-molecule querying of chemical databases, including the novel ETD and TPD methods, are validated using publicly available data sets, standard cross-validation protocols, and established metrics. The best results are obtained with ETD, TPD, BKD, MAX-SIM, and MIN-RANK. These results can be replicated and compared with the results of future studies using data freely downloadable from http://cdb.ics.uci.edu/. PMID:20298525
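The best parameter-free scorer, MAX-SIM, is easy to state: a candidate is scored against a family by its maximum similarity to any family member. A sketch using plain feature sets as fingerprints (an illustrative simplification of the binary fingerprints used in chemoinformatics):

```python
def tanimoto(a, b):
    """Tanimoto (Jaccard) similarity of two feature sets."""
    a, b = set(a), set(b)
    inter = len(a & b)
    union = len(a) + len(b) - inter
    return inter / union if union else 0.0

def max_sim(family, candidate):
    """MAX-SIM: score a candidate by its best similarity to any family member."""
    return max(tanimoto(member, candidate) for member in family)
```

MIN-RANK is analogous but works on retrieval ranks rather than raw similarities: each family member ranks the whole database, and a candidate keeps its best (minimum) rank.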

  5. The predictive validity of selection for entry into postgraduate training in general practice: evidence from three longitudinal studies

    PubMed Central

    Patterson, Fiona; Lievens, Filip; Kerrin, Máire; Munro, Neil; Irish, Bill

    2013-01-01

    Background The selection methodology for UK general practice is designed to accommodate several thousand applicants per year and targets six core attributes identified in a multi-method job-analysis study. Aim To evaluate the predictive validity of selection methods for entry into postgraduate training, comprising a clinical problem-solving test, a situational judgement test, and a selection centre. Design and setting A three-part longitudinal predictive validity study of selection into training for UK general practice. Method In sample 1, participants were junior doctors applying for training in general practice (n = 6824). In sample 2, participants were GP registrars 1 year into training (n = 196). In sample 3, participants were GP registrars sitting the licensing examination after 3 years, at the end of training (n = 2292). The outcome measures include: assessor ratings of performance in a selection centre comprising job simulation exercises (sample 1); supervisor ratings of trainee job performance 1 year into training (sample 2); and licensing examination results, including an applied knowledge examination and a 12-station clinical skills objective structured clinical examination (OSCE; sample 3). Results Performance ratings at selection predicted subsequent supervisor ratings of job performance 1 year later. Selection results also significantly predicted performance on both the clinical skills OSCE and the applied knowledge examination for licensing at the end of training. Conclusion In combination, these longitudinal findings provide good evidence of the predictive validity of the selection methods, and are the first reported for entry into postgraduate training. Results show that the best predictor of work performance and training outcomes is a combination of a clinical problem-solving test, a situational judgement test, and a selection centre. Implications for selection methods for all postgraduate specialties are considered. PMID:24267856

  6. Validated method for the analysis of goji berry, a rich source of zeaxanthin dipalmitate.

    PubMed

    Karioti, Anastasia; Bergonzi, Maria Camilla; Vincieri, Franco F; Bilia, Anna Rita

    2014-12-31

    In the present study, an HPLC-DAD method was developed for the determination of the main carotenoid, zeaxanthin dipalmitate, in the fruits of Lycium barbarum. The aim was to develop and optimize an extraction protocol allowing fast, exhaustive, and repeatable extraction, suitable for the labile carotenoid content. Use of liquid N2 allowed the grinding of the fruit. A step of ultrasonication with water efficiently removed the polysaccharides and enabled the exhaustive extraction of carotenoids with hexane/acetone 50:50. The assay was fast and simple and permitted the quality control of a large number of commercial samples, including fruits, juices, and a jam. The HPLC method was validated according to ICH guidelines and satisfied the requirements. Finally, the overall method was validated for precision (% RSD ranging between 3.81 and 4.13) and accuracy at three concentration levels. The recovery was between 94 and 107% with RSD values <2%, within the acceptable limits, especially if the difficulty of the matrix is taken into consideration.

  7. Predicting implementation from organizational readiness for change: a study protocol

    PubMed Central

    2011-01-01

    Background There is widespread interest in measuring organizational readiness to implement evidence-based practices in clinical care. However, there are a number of challenges to validating organizational measures, including inferential bias arising from the halo effect and method bias - two threats to validity that, while well documented by organizational scholars, are often ignored in health services research. We describe a protocol to comprehensively assess the psychometric properties of a previously developed survey, the Organizational Readiness to Change Assessment. Objectives Our objective is to conduct a comprehensive assessment of the psychometric properties of the Organizational Readiness to Change Assessment, incorporating methods specifically designed to address threats from the halo effect and method bias. Methods and Design We will conduct three sets of analyses using longitudinal, secondary data from four partner projects, each testing interventions to improve the implementation of an evidence-based clinical practice. Partner projects field the Organizational Readiness to Change Assessment at baseline (n = 208 respondents; 53 facilities), and prospectively assess the degree to which the evidence-based practice is implemented. We will assess predictive and concurrent validity using hierarchical linear modeling and multivariate regression, respectively. For predictive validity, the outcome is the change from baseline to follow-up in the use of the evidence-based practice. We will use intra-class correlations derived from hierarchical linear models to assess inter-rater reliability. Two partner projects will also field measures of job satisfaction for convergent and discriminant validity analyses, and will field Organizational Readiness to Change Assessment measures at follow-up for concurrent validity (n = 158 respondents; 33 facilities).
Convergent and discriminant validities will test associations between organizational readiness and different aspects of job satisfaction: satisfaction with leadership, which should be highly correlated with readiness, versus satisfaction with salary, which should be less correlated with readiness. Content validity will be assessed using an expert panel and modified Delphi technique. Discussion We propose a comprehensive protocol for validating a survey instrument for assessing organizational readiness to change that specifically addresses key threats of bias related to halo effect, method bias and questions of construct validity that often go unexplored in research using measures of organizational constructs. PMID:21777479

  8. Methods for detecting, quantifying, and adjusting for dissemination bias in meta-analysis are described.

    PubMed

    Mueller, Katharina Felicitas; Meerpohl, Joerg J; Briel, Matthias; Antes, Gerd; von Elm, Erik; Lang, Britta; Motschall, Edith; Schwarzer, Guido; Bassler, Dirk

    2016-12-01

    To systematically review methodological articles which focus on nonpublication of studies and to describe methods of detecting and/or quantifying and/or adjusting for dissemination bias in meta-analyses. To evaluate whether the methods have been applied to an empirical data set for which one can be reasonably confident that all studies conducted have been included. We systematically searched Medline, the Cochrane Library, and Web of Science for methodological articles that describe at least one method of detecting and/or quantifying and/or adjusting for dissemination bias in meta-analyses. The literature search retrieved 2,224 records, of which we finally included 150 full-text articles. A great variety of methods to detect, quantify, or adjust for dissemination bias were described. Methods included graphical methods mainly based on funnel plot approaches, statistical methods such as regression tests, selection models, sensitivity analyses, and a great number of more recent statistical approaches. Only a few methods have been validated in empirical evaluations using unpublished studies obtained from regulators (Food and Drug Administration, European Medicines Agency). We present an overview of existing methods to detect, quantify, or adjust for dissemination bias. It remains difficult to advise which method should be used, as they are all limited and their validity has rarely been assessed. Therefore, a thorough literature search remains crucial in systematic reviews, and further steps to increase the availability of all research results need to be taken. Copyright © 2016 Elsevier Inc. All rights reserved.
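One of the most common regression tests in the funnel-plot family surveyed here is Egger's test: regress standardized effects on precision, and an intercept far from zero signals small-study (dissemination) effects. A minimal unweighted sketch, not from any particular package:

```python
def egger_intercept(effects, standard_errors):
    """Intercept of the Egger regression: (effect/SE) ~ (1/SE).
    An intercept near 0 is consistent with a symmetric funnel plot."""
    y = [e / s for e, s in zip(effects, standard_errors)]  # standardized effects
    x = [1.0 / s for s in standard_errors]                 # precisions
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    return my - slope * mx
```

In practice the intercept is tested against zero with a t test; as the review notes, a nonzero intercept indicates asymmetry, not its cause.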

  9. METAPHOR: Probability density estimation for machine learning based photometric redshifts

    NASA Astrophysics Data System (ADS)

    Amaro, V.; Cavuoti, S.; Brescia, M.; Vellucci, C.; Tortora, C.; Longo, G.

    2017-06-01

    We present METAPHOR (Machine-learning Estimation Tool for Accurate PHOtometric Redshifts), a method able to provide a reliable PDF for photometric galaxy redshifts estimated through empirical techniques. METAPHOR is a modular workflow, mainly based on the MLPQNA neural network as the internal engine to derive photometric galaxy redshifts, but giving the possibility to easily replace MLPQNA with any other method to predict photo-z's and their PDFs. We present here the results of a validation test of the workflow on the galaxies from SDSS-DR9, showing also the universality of the method by replacing MLPQNA with KNN and Random Forest models. The validation test also includes a comparison with the PDFs derived from a traditional SED template fitting method (Le Phare).

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    English, Shawn A.; Briggs, Timothy M.; Nelson, Stacy M.

    Simulations of low velocity impact with a flat cylindrical indenter upon a carbon fiber fabric reinforced polymer laminate are rigorously validated. Comparison of the impact energy absorption between the model and experiment is used as the validation metric. Additionally, non-destructive evaluation, including ultrasonic scans and three-dimensional computed tomography, provides qualitative validation of the models. The simulations include delamination, matrix cracks and fiber breaks. An orthotropic damage and failure constitutive model, capable of predicting progressive damage and failure, is developed in conjunction and described. An ensemble of simulations incorporating model parameter uncertainties is used to predict a response distribution, which is then compared to experimental output using appropriate statistical methods. Lastly, the model form errors are exposed and corrected for use in an additional blind validation analysis. The result is a quantifiable confidence in material characterization and model physics when simulating low velocity impact in structures of interest.

  11. Toward a CFD nose-to-tail capability - Hypersonic unsteady Navier-Stokes code validation

    NASA Technical Reports Server (NTRS)

    Edwards, Thomas A.; Flores, Jolen

    1989-01-01

    Computational fluid dynamics (CFD) research for hypersonic flows presents new problems in code validation because of the added complexity of the physical models. This paper surveys code validation procedures applicable to hypersonic flow models that include real gas effects. The current status of hypersonic CFD flow analysis is assessed with the Compressible Navier-Stokes (CNS) code as a case study. The methods of code validation discussed go beyond comparison with experimental data to include comparisons with other codes and formulations, component analyses, and estimation of numerical errors. Current results indicate that predicting hypersonic flows of perfect gases and equilibrium air is well in hand. Pressure, shock location, and integrated quantities are relatively easy to predict accurately, while surface quantities such as heat transfer are more sensitive to the solution procedure. Modeling transition to turbulence needs refinement, though preliminary results are promising.
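A standard way to estimate numerical error of the kind referenced is grid-convergence analysis: compute the observed order of accuracy from solutions on three systematically refined grids and Richardson-extrapolate to a grid-independent value. A generic sketch, not taken from the CNS code:

```python
import math

def observed_order(f_coarse, f_medium, f_fine, refinement_ratio):
    """Observed order of accuracy p from solutions on three grids with a
    constant refinement ratio r between them."""
    return (math.log((f_coarse - f_medium) / (f_medium - f_fine))
            / math.log(refinement_ratio))

def richardson_extrapolate(f_fine, f_medium, order, refinement_ratio):
    """Richardson estimate of the grid-independent value from the two finest grids."""
    return f_fine + (f_fine - f_medium) / (refinement_ratio ** order - 1.0)
```

For a nominally second-order scheme, an observed order near 2 supports the error estimate; a sensitive quantity like surface heat transfer would typically show a lower observed order than an integrated quantity.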

  12. Validation of newly developed and redesigned key indicator methods for assessment of different working conditions with physical workloads based on mixed-methods design: a study protocol.

    PubMed

    Klussmann, Andre; Liebers, Falk; Brandstädt, Felix; Schust, Marianne; Serafin, Patrick; Schäfer, Andreas; Gebhardt, Hansjürgen; Hartmann, Bernd; Steinberg, Ulf

    2017-08-21

    The impact of work-related musculoskeletal disorders is considerable. The assessment of work tasks with physical workloads is crucial to estimate the work-related health risks of exposed employees. Three key indicator methods (KIMs) are available for risk assessment regarding manual lifting, holding and carrying of loads; manual pulling and pushing of loads; and manual handling operations. Three further KIMs for risk assessment regarding whole-body forces, awkward body postures and body movement have been developed de novo. In addition, the development of a newly drafted combined method for mixed exposures is planned. All methods will be validated regarding face validity, reliability, convergent validity, criterion validity and further aspects of utility under practical conditions. As part of the joint project MEGAPHYS (multilevel risk assessment of physical workloads), a mixed-methods study is being designed for the validation of KIMs and conducted in companies of different sizes and branches in Germany. Workplaces are documented and analysed by observations, applying KIMs, interviews and assessment of environmental conditions. Furthermore, a survey among the employees at the respective workplaces takes place with standardised questionnaires, interviews and physical examinations. It is intended to include 1200 employees at 120 different workplaces. For analysis of the quality criteria, recommendations of the COSMIN checklist (COnsensus-based Standards for the selection of health Measurement INstruments) will be taken into account. The study was planned and conducted in accordance with the German Medical Professional Code and the Declaration of Helsinki as well as the German Federal Data Protection Act. The design of the study was approved by ethics committees. We intend to publish the validated KIMs in 2018. Results will be published in peer-reviewed journals, presented at international meetings and disseminated to actual users for practical application.
© Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2017. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  13. Validation of the SCEC broadband platform V14.3 simulation methods using pseudo spectral acceleration data

    USGS Publications Warehouse

    Dreger, Douglas S.; Beroza, Gregory C.; Day, Steven M.; Goulet, Christine A.; Jordan, Thomas H; Spudich, Paul A.; Stewart, Jonathan P.

    2015-01-01

    This paper summarizes the evaluation of ground motion simulation methods implemented on the SCEC Broadband Platform (BBP), version 14.3 (as of March 2014). A seven-member panel, the authorship of this article, was formed to evaluate those methods for the prediction of pseudo-spectral accelerations (PSAs) of ground motion. The panel's mandate was to evaluate the methods using tools developed through the validation exercise (Goulet et al., 2014), and to define validation metrics for the assessment of the methods' performance. This paper summarizes the evaluation process and conclusions from the panel. The five broadband, finite-source simulation methods on the BBP include two deterministic approaches herein referred to as CSM (Anderson, 2014) and UCSB (Crempien and Archuleta, 2014); a band-limited stochastic white noise method called EXSIM (Atkinson and Assatourians, 2014); and two hybrid approaches, referred to as G&P (Graves and Pitarka, 2014) and SDSU (Olsen and Takedatsu, 2014), which utilize a deterministic Green's function approach for periods longer than 1 second and stochastic methods for periods shorter than 1 second. Two acceptance tests were defined to validate the broadband finite-source ground motion methods (Goulet et al., 2014). Part A compared observed and simulated PSAs for periods from 0.01 to 10 seconds for 12 moderate to large earthquakes located in California, Japan, and the eastern US. Part B compared the median simulated PSAs to published NGA-West1 (Abrahamson and Silva, 2008; Boore and Atkinson, 2008; Campbell and Bozorgnia, 2008; and Chiou and Youngs, 2008) ground motion prediction equations (GMPEs) for specific magnitude and distance cases using pass-fail criteria based on a defined acceptable range around the spectral shape of the GMPEs. For the initial Part A and Part B validation exercises during the summer of 2013, the software for the five methods was locked in at version 13.6 (see Maechling et al., 2014).
In the spring of 2014, additional moderate events were considered for the Part A validation, and additional magnitude and distance cases were considered for the Part B validation, with the software locked in at version 14.3. Several of the simulation procedures, specifically UCSB and SDSU, changed significantly between versions 13.6 and 14.3. The CSM code was not submitted in time for the v14.3 evaluation and its detailed performance is not addressed in this paper. As described in Goulet et al. (2014) and Maechling et al. (2014), the BBP generates a variety of products, including three-component acceleration time series. A series of post-processing codes were developed to provide individual component PSAs and the average median horizontal-component PSA (referred to as RotD50; Boore, 2010) for oscillator periods ranging from 0.01 to 10 seconds, as well as median PSA values computed using the NGA-West1 GMPEs. The BBP was also configured to provide statistical analysis of simulation results relative to recordings (Part A) and GMPEs (Part B) as described further in the sections below. As part of our evaluation, we reviewed documentation provided by each of the developers, which included the technical basis behind the methods and the developers' self-assessments regarding the extrapolation capabilities (in terms of magnitude and distance ranges) of their methods. Two workshops were held in which methods and results were presented, and the panel was given the opportunity to question the developers and to have detailed technical discussions. A SCEC report (Dreger et al., 2013) describes the results of this review for BBP version 13.6. This paper summarizes that work and presents results for the more recent BBP 14.3 validation.

  14. Development of a time-dependent incompressible Navier-Stokes solver based on a fractional-step method

    NASA Technical Reports Server (NTRS)

    Rosenfeld, Moshe

    1990-01-01

    The main goals are the development, validation, and application of a fractional step solution method of the time-dependent incompressible Navier-Stokes equations in generalized coordinate systems. A solution method that combines a finite volume discretization with a novel choice of the dependent variables and a fractional step splitting to obtain accurate solutions in arbitrary geometries is extended to include more general situations, including cases with moving grids. The numerical techniques are enhanced to gain efficiency and generality.
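The fractional-step idea splits each time step into a provisional velocity update followed by a pressure projection that restores the divergence-free constraint. A minimal pure-Python sketch of the projection stage on a small periodic grid (Gauss-Seidel Poisson solve; all discretization choices are illustrative assumptions, and nothing here is taken from the solver described):

```python
def project(u, v, dt, h, iters=2000):
    """Pressure projection on an n-by-n periodic grid: solve lap(p) = div(u*)/dt
    with Gauss-Seidel, then correct the velocity with the pressure gradient.
    Backward-difference divergence is paired with a forward-difference gradient,
    so the compound operator is the standard 5-point Laplacian."""
    n = len(u)
    div = [[(u[i][j] - u[i - 1][j] + v[i][j] - v[i][j - 1]) / h
            for j in range(n)] for i in range(n)]
    p = [[0.0] * n for _ in range(n)]
    for _ in range(iters):  # Gauss-Seidel sweeps; negative indices wrap (periodic)
        for i in range(n):
            for j in range(n):
                p[i][j] = 0.25 * (p[(i + 1) % n][j] + p[i - 1][j]
                                  + p[i][(j + 1) % n] + p[i][j - 1]
                                  - h * h * div[i][j] / dt)
    for i in range(n):
        for j in range(n):
            u[i][j] -= dt * (p[(i + 1) % n][j] - p[i][j]) / h
            v[i][j] -= dt * (p[i][(j + 1) % n] - p[i][j]) / h
    return u, v
```

After projection, the discrete divergence of (u, v) vanishes to solver tolerance, which is exactly the property the fractional-step splitting relies on at the end of each time step.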

  15. Development and Validation of an Agency for Healthcare Research and Quality Indicator for Mortality After Congenital Heart Surgery Harmonized With Risk Adjustment for Congenital Heart Surgery (RACHS-1) Methodology.

    PubMed

    Jenkins, Kathy J; Koch Kupiec, Jennifer; Owens, Pamela L; Romano, Patrick S; Geppert, Jeffrey J; Gauvreau, Kimberlee

    2016-05-20

    The National Quality Forum previously approved a quality indicator for mortality after congenital heart surgery developed by the Agency for Healthcare Research and Quality (AHRQ). Several parameters of the validated Risk Adjustment for Congenital Heart Surgery (RACHS-1) method were included, but others differed. As part of the National Quality Forum endorsement maintenance process, developers were asked to harmonize the 2 methodologies. Parameters that were identical between the 2 methods were retained. AHRQ's Healthcare Cost and Utilization Project State Inpatient Databases (SID) 2008 were used to select optimal parameters where differences existed, with a goal to maximize model performance and face validity. Inclusion criteria were not changed and included all discharges for patients <18 years with International Classification of Diseases, Ninth Revision, Clinical Modification procedure codes for congenital heart surgery or nonspecific heart surgery combined with congenital heart disease diagnosis codes. The final model includes procedure risk group, age (0-28 days, 29-90 days, 91-364 days, 1-17 years), low birth weight (500-2499 g), other congenital anomalies (Clinical Classifications Software 217, except for 758.xx), multiple procedures, and transfer-in status. Among 17,945 eligible cases in the SID 2008, the c statistic for model performance was 0.82. In the SID 2013 validation data set, the c statistic was 0.82. Risk-adjusted mortality rates by center ranged from 0.9% to 4.1% (5th-95th percentile). Congenital heart surgery programs can now obtain national benchmarking reports by applying AHRQ Quality Indicator software to hospital administrative data, based on the harmonized RACHS-1 method, with high discrimination and face validity. © 2016 The Authors. Published on behalf of the American Heart Association, Inc., by Wiley Blackwell.
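The c statistic reported (0.82) is the probability that a randomly chosen case with the outcome receives a higher predicted risk than a randomly chosen case without it; for modest data sizes it can be computed directly by pairwise comparison. A generic sketch, not the AHRQ Quality Indicator software:

```python
def c_statistic(predicted_risk, outcomes):
    """Concordance (c) statistic for binary outcomes; tied risks count 0.5."""
    pos = [r for r, y in zip(predicted_risk, outcomes) if y == 1]
    neg = [r for r, y in zip(predicted_risk, outcomes) if y == 0]
    pairs = len(pos) * len(neg)
    concordant = sum(1.0 if p > q else 0.5 if p == q else 0.0
                     for p in pos for q in neg)
    return concordant / pairs
```

The pairwise form is O(n^2); for large administrative data sets the same quantity is usually computed from ranks (it equals the area under the ROC curve).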

  16. The art and science of knowledge synthesis.

    PubMed

    Tricco, Andrea C; Tetzlaff, Jennifer; Moher, David

    2011-01-01

    To review methods for completing knowledge synthesis. We discuss how to complete a broad range of knowledge syntheses. Our article is intended as an introductory guide. Many groups worldwide conduct knowledge syntheses, and some methods are applicable to most reviews. However, variations of these methods are apparent for different types of reviews, such as realist reviews and mixed-model reviews. Review validity is dependent on the validity of the included primary studies and the review process itself. Steps should be taken to avoid bias in the conduct of knowledge synthesis. Transparency in reporting will help readers assess review validity and applicability, increasing its utility. Given the magnitude of the literature, the increasing demands on knowledge syntheses teams, and the diversity of approaches, continuing efforts will be important to increase the efficiency, validity, and applicability of systematic reviews. Future research should focus on increasing the uptake of knowledge synthesis, how best to update reviews, the comparability between different types of reviews (eg, rapid vs. comprehensive reviews), and how to prioritize knowledge synthesis topics. Copyright © 2011 Elsevier Inc. All rights reserved.

  17. Worldwide Protein Data Bank validation information: usage and trends.

    PubMed

    Smart, Oliver S; Horský, Vladimír; Gore, Swanand; Svobodová Vařeková, Radka; Bendová, Veronika; Kleywegt, Gerard J; Velankar, Sameer

    2018-03-01

    Realising the importance of assessing the quality of the biomolecular structures deposited in the Protein Data Bank (PDB), the Worldwide Protein Data Bank (wwPDB) partners established Validation Task Forces to obtain advice on the methods and standards to be used to validate structures determined by X-ray crystallography, nuclear magnetic resonance spectroscopy and three-dimensional electron cryo-microscopy. The resulting wwPDB validation pipeline is an integral part of the wwPDB OneDep deposition, biocuration and validation system. The wwPDB Validation Service webserver (https://validate.wwpdb.org) can be used to perform checks prior to deposition. Here, it is shown how validation metrics can be combined to produce an overall score that allows the ranking of macromolecular structures and domains in search results. The ValTrendsDB database provides users with a convenient way to access and analyse validation information and other properties of X-ray crystal structures in the PDB, including investigating trends in and correlations between different structure properties and validation metrics.

  18. Worldwide Protein Data Bank validation information: usage and trends

    PubMed Central

    Horský, Vladimír; Gore, Swanand; Svobodová Vařeková, Radka; Bendová, Veronika

    2018-01-01

    Realising the importance of assessing the quality of the biomolecular structures deposited in the Protein Data Bank (PDB), the Worldwide Protein Data Bank (wwPDB) partners established Validation Task Forces to obtain advice on the methods and standards to be used to validate structures determined by X-ray crystallography, nuclear magnetic resonance spectroscopy and three-dimensional electron cryo-microscopy. The resulting wwPDB validation pipeline is an integral part of the wwPDB OneDep deposition, biocuration and validation system. The wwPDB Validation Service webserver (https://validate.wwpdb.org) can be used to perform checks prior to deposition. Here, it is shown how validation metrics can be combined to produce an overall score that allows the ranking of macromolecular structures and domains in search results. The ValTrendsDB database provides users with a convenient way to access and analyse validation information and other properties of X-ray crystal structures in the PDB, including investigating trends in and correlations between different structure properties and validation metrics. PMID:29533231

  19. Comparative study between derivative spectrophotometry and multivariate calibration as analytical tools applied for the simultaneous quantitation of Amlodipine, Valsartan and Hydrochlorothiazide

    NASA Astrophysics Data System (ADS)

    Darwish, Hany W.; Hassan, Said A.; Salem, Maissa Y.; El-Zeany, Badr A.

    2013-09-01

    Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include the Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods were applied successfully in the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods was investigated in the ranges of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively.

  20. Does a Multi-Media Program Enhance Job Matching for a Population with Intellectual Disabilities? A Social Validity Study

    ERIC Educational Resources Information Center

    Michaud, Kim M.

    2017-01-01

    This dissertation describes a mixed method design study on the social validity of a multi-media job search tool, the YES tool, at a four-year Comprehensive Transition Program at an East Coast University. The participants included twelve students, randomly selected from those who, with their parents' assent, agreed to volunteer for this study…

  1. Reliability and Validity of the Computerized Revised Token Test: Comparison of Reading and Listening Versions in Persons with and without Aphasia

    ERIC Educational Resources Information Center

    McNeil, Malcolm R.; Pratt, Sheila R.; Szuminsky, Neil; Sung, Jee Eun; Fossett, Tepanta R. D.; Fassbinder, Wiltrud; Lim, Kyoung Yuel

    2015-01-01

    Purpose: This study assessed the reliability and validity of intermodality associations and differences in persons with aphasia (PWA) and healthy controls (HC) on a computerized listening and 3 reading versions of the Revised Token Test (RTT; McNeil & Prescott, 1978). Method: Thirty PWA and 30 HC completed the test versions, including a…

  2. Humanities, Religion, and the Arts Tomorrow.

    ERIC Educational Resources Information Center

    Hunter, Howard, Ed.

    Intended as a basic resource in new primary sources for interdisciplinary studies, this book consists of twelve essays on contemporary culture, religion, and the arts. The authors, specialists in the humanities, are concerned with interdisciplinary investigation, including such issues as determining methods of study, methods of validating claims…

  3. Educational Research and the Sight of Inquiry: Visual Methodologies before Visual Methods

    ERIC Educational Resources Information Center

    Metcalfe, Amy Scott

    2016-01-01

    As visual methods are increasingly validated in the social sciences, including educational research, we must interrogate the sight of inquiry as we employ visual methods to examine our research sites. In this essay, three layers of engagement with the visual are examined in relation to educational research: looking, seeing, and envisioning. This…

  4. Fourth NASA Langley Formal Methods Workshop

    NASA Technical Reports Server (NTRS)

    Holloway, C. Michael (Compiler); Hayhurst, Kelly J. (Compiler)

    1997-01-01

    This publication consists of papers presented at NASA Langley Research Center's fourth workshop on the application of formal methods to the design and verification of life-critical systems. Topics considered include: proving properties of accidents; modeling and validating SAFER in VDM-SL; requirements analysis of real-time control systems using PVS; a tabular language for system design; and automated deductive verification of parallel systems. Also included is a fundamental hardware design in PVS.

  5. MRPrimer: a MapReduce-based method for the thorough design of valid and ranked primers for PCR.

    PubMed

    Kim, Hyerin; Kang, NaNa; Chon, Kang-Wook; Kim, Seonho; Lee, NaHye; Koo, JaeHyung; Kim, Min-Soo

    2015-11-16

    Primer design is a fundamental technique that is widely used for polymerase chain reaction (PCR). Although many methods have been proposed for primer design, they require a great deal of manual effort to generate feasible and valid primers, including homology tests on off-target sequences using BLAST-like tools. This approach is especially inconvenient for the many target sequences of quantitative PCR (qPCR), which must all satisfy the same stringent and allele-invariant constraints. To address this issue, we propose an entirely new method called MRPrimer that can design all feasible and valid primer pairs existing in a DNA database at once, while simultaneously checking a multitude of filtering constraints and validating primer specificity. Furthermore, MRPrimer suggests the best primer pair for each target sequence, based on a ranking method. Through qPCR analysis using 343 primer pairs and the corresponding sequencing and comparative analyses, we showed that the primer pairs designed by MRPrimer are very stable and effective for qPCR. In addition, MRPrimer is computationally efficient and scalable and therefore useful for quickly constructing an entire collection of feasible and valid primers for frequently updated databases like RefSeq. Furthermore, we suggest that MRPrimer can be utilized conveniently for experiments requiring primer design, especially real-time qPCR. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.
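    MRPrimer's actual pipeline applies many such constraints at scale via MapReduce, plus homology and ranking steps. Purely as a sketch of the kind of single-primer filter involved (the thresholds and the helper name `passes_filters` are illustrative assumptions, not MRPrimer's actual rules):

```python
def passes_filters(primer, gc_range=(0.4, 0.6), tm_range=(50, 65)):
    """Illustrative single-primer filter: GC fraction and a Wallace-rule
    melting temperature, Tm = 2*(A+T) + 4*(G+C), a rough estimate
    suitable only for short primers."""
    p = primer.upper()
    gc = (p.count("G") + p.count("C")) / len(p)
    tm = 2 * (p.count("A") + p.count("T")) + 4 * (p.count("G") + p.count("C"))
    return gc_range[0] <= gc <= gc_range[1] and tm_range[0] <= tm <= tm_range[1]
```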

  6. Method validation using weighted linear regression models for quantification of UV filters in water samples.

    PubMed

    da Silva, Claudia Pereira; Emídio, Elissandro Soares; de Marchi, Mary Rosa Rodrigues

    2015-01-01

    This paper describes the validation of a method consisting of solid-phase extraction followed by gas chromatography-tandem mass spectrometry for the analysis of the ultraviolet (UV) filters benzophenone-3, ethylhexyl salicylate, ethylhexyl methoxycinnamate and octocrylene. The method validation criteria included evaluation of selectivity, analytical curve, trueness, precision, limits of detection and limits of quantification. The non-weighted linear regression model has traditionally been used for calibration, but it is not necessarily the optimal model in all cases. Because the assumption of homoscedasticity was not met for the analytical data in this work, a weighted least squares linear regression was used for the calibration method. The evaluated analytical parameters were satisfactory for the analytes and showed recoveries at four fortification levels between 62% and 107%, with relative standard deviations less than 14%. The detection limits ranged from 7.6 to 24.1 ng L(-1). The proposed method was used to determine the amount of UV filters in water samples from water treatment plants in Araraquara and Jau in São Paulo, Brazil. Copyright © 2014 Elsevier B.V. All rights reserved.
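    A weighted least-squares calibration line of the kind described can be fitted directly from the weighted normal equations. A minimal sketch, assuming each calibration level is weighted by 1/s_i**2, where s_i is the standard deviation estimated at that level (the paper's exact weighting scheme may differ):

```python
def wls_line(x, y, s):
    """Weighted least-squares fit of y = a*x + b with weights 1/s_i**2,
    so imprecise (high-variance) calibration levels count less."""
    w = [1.0 / si ** 2 for si in s]
    W = sum(w)
    xbar = sum(wi * xi for wi, xi in zip(w, x)) / W
    ybar = sum(wi * yi for wi, yi in zip(w, y)) / W
    sxx = sum(wi * (xi - xbar) ** 2 for wi, xi in zip(w, x))
    sxy = sum(wi * (xi - xbar) * (yi - ybar) for wi, xi, yi in zip(w, x, y))
    a = sxy / sxx          # slope
    b = ybar - a * xbar    # intercept
    return a, b
```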

  7. Validation of standard method EN ISO 11290-part 2 for the enumeration of Listeria monocytogenes in food.

    PubMed

    Rollier, Patricia; Lombard, Bertrand; Guillier, Laurent; François, Danièle; Romero, Karol; Pierru, Sylvie; Bouhier, Laurence; Gnanou Besse, Nathalie

    2018-05-01

    The reference methods for the detection and enumeration of L. monocytogenes in food (Standards EN ISO 11290-1 and -2) have been validated by inter-laboratory studies under Mandate M381 from the European Commission to CEN. In this paper, the inter-laboratory studies conducted in 2013 on 5 matrices (cold-smoked salmon, powdered-milk infant food formula, vegetables, environment, and cheese) to validate Standard EN ISO 11290-2 are reported. According to the results obtained, the method of the revised Standard EN ISO 11290-2 can be considered a good method for the enumeration of L. monocytogenes in foods and food-processing environments, in particular for the matrices included in the study. The repeatability and reproducibility standard deviations can be considered satisfactory for this type of method with a confirmation stage, since most were below 0.3 log10, even at low levels close to the regulatory limit of 100 CFU/g. Copyright © 2018 Elsevier B.V. All rights reserved.

  8. A systematic review and comparison of questionnaires in the management of spinal cord injury, multiple sclerosis and the neurogenic bladder.

    PubMed

    Tsang, B; Stothers, L; Macnab, A; Lazare, D; Nigro, M

    2016-03-01

    Validated questionnaires are increasingly the preferred method used to obtain historical information. Specialized questionnaires exist validated for patients with neurogenic disease including neurogenic bladder. Those currently available are systematically reviewed and their potential for clinical and research use is described. A systematic search was conducted via Medline and PubMed using the key terms questionnaire(s) crossed with Multiple Sclerosis (MS) and Spinal Cord Injury (SCI) for the years 1946 to January 22, 2014 inclusive. Additional articles were selected from review of references in the publications identified. Only peer-reviewed articles published in English were included. Eighteen questionnaires validated for patients with neurogenic bladder exist: 14 related to MS, 3 to SCI, and 1 to neurogenic bladder in general, with 4 cross-validated in both MS and SCI. All 18 are validated for both male and female patients; 59% are available only in English. The domains of psychological impact and physical function are represented in 71% and 76% of questionnaires, respectively. None for the female population included elements to measure symptoms of prolapse. The last decade has seen an expansion of validated questionnaires to document bladder symptoms in neurogenic disease. Disease-specific instruments are available for incorporation into the clinical setting for MS and SCI patients with neurogenic bladder. The availability of caregiver and interview options enhances suitability in clinical practice as they can be adapted to various extents of disability. Future developments should include expanded language validation to the top 10 global languages reported by the World Health Organization. © 2015 Wiley Periodicals, Inc.

  9. Integration of new biological and physical retrospective dosimetry methods into EU emergency response plans - joint RENEB and EURADOS inter-laboratory comparisons.

    PubMed

    Ainsbury, Elizabeth; Badie, Christophe; Barnard, Stephen; Manning, Grainne; Moquet, Jayne; Abend, Michael; Antunes, Ana Catarina; Barrios, Lleonard; Bassinet, Celine; Beinke, Christina; Bortolin, Emanuela; Bossin, Lily; Bricknell, Clare; Brzoska, Kamil; Buraczewska, Iwona; Castaño, Carlos Huertas; Čemusová, Zina; Christiansson, Maria; Cordero, Santiago Mateos; Cosler, Guillaume; Monaca, Sara Della; Desangles, François; Discher, Michael; Dominguez, Inmaculada; Doucha-Senf, Sven; Eakins, Jon; Fattibene, Paola; Filippi, Silvia; Frenzel, Monika; Georgieva, Dimka; Gregoire, Eric; Guogyte, Kamile; Hadjidekova, Valeria; Hadjiiska, Ljubomira; Hristova, Rositsa; Karakosta, Maria; Kis, Enikő; Kriehuber, Ralf; Lee, Jungil; Lloyd, David; Lumniczky, Katalin; Lyng, Fiona; Macaeva, Ellina; Majewski, Matthaeus; Vanda Martins, S; McKeever, Stephen W S; Meade, Aidan; Medipally, Dinesh; Meschini, Roberta; M'kacher, Radhia; Gil, Octávia Monteiro; Montero, Alegria; Moreno, Mercedes; Noditi, Mihaela; Oestreicher, Ursula; Oskamp, Dominik; Palitti, Fabrizio; Palma, Valentina; Pantelias, Gabriel; Pateux, Jerome; Patrono, Clarice; Pepe, Gaetano; Port, Matthias; Prieto, María Jesús; Quattrini, Maria Cristina; Quintens, Roel; Ricoul, Michelle; Roy, Laurence; Sabatier, Laure; Sebastià, Natividad; Sholom, Sergey; Sommer, Sylwester; Staynova, Albena; Strunz, Sonja; Terzoudi, Georgia; Testa, Antonella; Trompier, Francois; Valente, Marco; Hoey, Olivier Van; Veronese, Ivan; Wojcik, Andrzej; Woda, Clemens

    2017-01-01

    RENEB, 'Realising the European Network of Biodosimetry and Physical Retrospective Dosimetry,' is a network for research and emergency response mutual assistance in biodosimetry within the EU. Within this extremely active network, a number of new dosimetry methods have recently been proposed or developed. There is a requirement to test and/or validate these candidate techniques, and inter-comparison exercises are a well-established method for such validation. The authors present details of inter-comparisons of four such new methods: dicentric chromosome analysis including telomere and centromere staining; the gene expression assay carried out in whole blood; Raman spectroscopy on blood lymphocytes; and detection of radiation-induced thermoluminescent signals in glass screens taken from mobile phones. In general, the results show good agreement between the laboratories and methods within the expected levels of uncertainty, and thus demonstrate considerable potential for each of the candidate techniques. Further work is required before the new methods can be included within the suite of reliable dosimetry methods for use by RENEB partners and others in routine and emergency response scenarios.

  10. Testing alternative ground water models using cross-validation and other methods

    USGS Publications Warehouse

    Foglia, L.; Mehl, S.W.; Hill, M.C.; Perona, P.; Burlando, P.

    2007-01-01

    Many methods can be used to test alternative ground water models. Of concern in this work are methods able to (1) rank alternative models (also called model discrimination) and (2) identify observations important to parameter estimates and predictions (equivalent to the purpose served by some types of sensitivity analysis). Some of the measures investigated are computationally efficient; others are computationally demanding. The latter are generally needed to account for model nonlinearity. The efficient model discrimination methods investigated include the information criteria: the corrected Akaike information criterion, Bayesian information criterion, and generalized cross-validation. The efficient sensitivity analysis measures used are dimensionless scaled sensitivity (DSS), composite scaled sensitivity, and parameter correlation coefficient (PCC); the other statistics are DFBETAS, Cook's D, and observation-prediction statistic. Acronyms are explained in the introduction. Cross-validation (CV) is a computationally intensive nonlinear method that is used for both model discrimination and sensitivity analysis. The methods are tested using up to five alternative parsimoniously constructed models of the ground water system of the Maggia Valley in southern Switzerland. The alternative models differ in their representation of hydraulic conductivity. A new method for graphically representing CV and sensitivity analysis results for complex models is presented and used to evaluate the utility of the efficient statistics. The results indicate that for model selection, the information criteria produce similar results at much smaller computational cost than CV. For identifying important observations, the only obviously inferior linear measure is DSS; the poor performance was expected because DSS does not include the effects of parameter correlation and PCC reveals large parameter correlations. © 2007 National Ground Water Association.
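    The information criteria named above can be computed from a least-squares fit's residuals. A minimal sketch under the common Gaussian-error formulation, with the log-likelihood written up to an additive constant (the paper's exact formulation may differ):

```python
import math

def information_criteria(ssr, n, k):
    """AICc and BIC for a least-squares model with k estimated
    parameters fit to n observations; ssr is the sum of squared
    residuals. Lower values indicate the preferred model."""
    # Gaussian log-likelihood up to a constant shared by all models
    ll = -0.5 * n * math.log(ssr / n)
    aic = -2 * ll + 2 * k
    aicc = aic + 2 * k * (k + 1) / (n - k - 1)   # small-sample correction
    bic = -2 * ll + k * math.log(n)
    return aicc, bic
```

    With few observations, AICc and BIC both penalize the extra parameters of a marginally better-fitting model, which is the ranking behavior described above.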

  11. Alternative Methods of Accounting for Underreporting and Overreporting When Measuring Dietary Intake-Obesity Relations

    PubMed Central

    Mendez, Michelle A.; Popkin, Barry M.; Buckland, Genevieve; Schroder, Helmut; Amiano, Pilar; Barricarte, Aurelio; Huerta, José-María; Quirós, José R.; Sánchez, María-José; González, Carlos A

    2011-01-01

    Misreporting characterized by the reporting of implausible energy intakes may undermine the valid estimation of diet-disease relations, but the methods to best identify and account for misreporting are unknown. The present study compared how alternate approaches affected associations between selected dietary factors and body mass index (BMI) by using data from the European Prospective Investigation Into Cancer and Nutrition-Spain. A total of 24,332 women and 15,061 men 29–65 years of age recruited from 1992 to 1996 for whom measured height and weight and validated diet history data were available were included. Misreporters were identified on the basis of disparities between reported energy intakes and estimated requirements calculated using the original Goldberg method and 2 alternatives: one that substituted basal metabolic rate equations that are more valid at higher BMIs and another that used doubly labeled water-predicted total energy expenditure equations. Compared with results obtained using the original method, underreporting was considerably lower and overreporting higher with alternative methods, which were highly concordant. Accounting for misreporters with all methods yielded diet-BMI relations that were more consistent with expectations; alternative methods often strengthened associations. For example, among women, multivariable-adjusted differences in BMI for the highest versus lowest vegetable intake tertile (β = 0.37 (standard error, 0.07)) were neutral after adjusting with the original method (β = 0.01 (standard error, 0.07)) and negative using the predicted total energy expenditure method with stringent cutoffs (β = −0.15 (standard error, 0.07)). Alternative methods may yield more valid associations between diet and obesity-related outcomes. PMID:21242302

  12. Alternative methods of accounting for underreporting and overreporting when measuring dietary intake-obesity relations.

    PubMed

    Mendez, Michelle A; Popkin, Barry M; Buckland, Genevieve; Schroder, Helmut; Amiano, Pilar; Barricarte, Aurelio; Huerta, José-María; Quirós, José R; Sánchez, María-José; González, Carlos A

    2011-02-15

    Misreporting characterized by the reporting of implausible energy intakes may undermine the valid estimation of diet-disease relations, but the methods to best identify and account for misreporting are unknown. The present study compared how alternate approaches affected associations between selected dietary factors and body mass index (BMI) by using data from the European Prospective Investigation Into Cancer and Nutrition-Spain. A total of 24,332 women and 15,061 men 29-65 years of age recruited from 1992 to 1996 for whom measured height and weight and validated diet history data were available were included. Misreporters were identified on the basis of disparities between reported energy intakes and estimated requirements calculated using the original Goldberg method and 2 alternatives: one that substituted basal metabolic rate equations that are more valid at higher BMIs and another that used doubly labeled water-predicted total energy expenditure equations. Compared with results obtained using the original method, underreporting was considerably lower and overreporting higher with alternative methods, which were highly concordant. Accounting for misreporters with all methods yielded diet-BMI relations that were more consistent with expectations; alternative methods often strengthened associations. For example, among women, multivariable-adjusted differences in BMI for the highest versus lowest vegetable intake tertile (β = 0.37 (standard error, 0.07)) were neutral after adjusting with the original method (β = 0.01 (standard error, 0.07)) and negative using the predicted total energy expenditure method with stringent cutoffs (β = -0.15 (standard error, 0.07)). Alternative methods may yield more valid associations between diet and obesity-related outcomes.
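    Goldberg-style screening flags a record when the ratio of reported energy intake to estimated basal metabolic rate falls outside plausibility cutoffs. A minimal sketch of the idea; the cutoff values and the helper name `classify_reporting` are illustrative assumptions, not the study's actual limits:

```python
def classify_reporting(ei_kcal, bmr_kcal, low=1.06, high=2.28):
    """Flag implausible intakes from the ratio of reported energy
    intake (EI) to basal metabolic rate (BMR). Cutoffs here are
    placeholders; real cutoffs depend on study design and equations."""
    ratio = ei_kcal / bmr_kcal
    if ratio < low:
        return "under-reporter"
    if ratio > high:
        return "over-reporter"
    return "plausible"
```

    Swapping in BMR equations that are more valid at high BMI, or predicted total energy expenditure in place of BMR, changes the denominator and hence who gets flagged, which is the comparison the study makes.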

  13. Endocrine Disruptor Screening Program Reports to Congress

    EPA Pesticide Factsheets

    This page includes EPA reports to congress on pesticide licensing and endocrine disruptor screening activities, Endocrine Disruptor Methods Validation Subcommittee (EDMVS) progress, and Endocrine Disruptor Screening Program (EDSP) implementation progress.

  14. Review on pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work.

    PubMed

    Rahman, Mohd Nasrull Abdol; Mohamad, Siti Shafika

    2017-01-01

    Computer work is associated with musculoskeletal disorders (MSDs), and several methods have been developed to assess computer work risk factors related to MSDs. This review aims to give an overview of the pen-and-paper-based observational methods currently available for assessing ergonomic risk factors of computer work. We searched an electronic database for materials from 1992 until 2015. The selected methods focused on computer work, pen-and-paper observational methods, office risk factors and musculoskeletal disorders. This review was developed to assess the risk factors covered, and the reliability and validity, of pen-and-paper observational methods associated with computer work; two evaluators carried out the review independently. Seven observational methods used to assess exposure to office risk factors for work-related musculoskeletal disorders were identified. The risk factors involved in current pen-and-paper-based observational tools were postures, office components, force and repetition. Of the seven methods, only five had been tested for reliability; they were proven to be reliable and were rated as moderate to good. For validity, only four of the seven methods had been tested, and the results were moderate. Many observational tools already exist, but no single tool appears to cover all of the risk factors (working posture, office components, force, repetition and office environment) at office workstations and computer work. Although proper validation of exposure assessment techniques is the most important factor in developing a tool, several of the existing observational methods have not been tested for reliability and validity. Furthermore, this review could provide researchers with ways to improve pen-and-paper-based observational methods for assessing ergonomic risk factors of computer work.

  15. System and method for modeling and analyzing complex scenarios

    DOEpatents

    Shevitz, Daniel Wolf

    2013-04-09

    An embodiment of the present invention includes a method for analyzing and solving a possibility tree. A possibility tree having a plurality of programmable nodes is constructed and solved with a solver module executed by a processor element. The solver module executes the programming of said nodes and tracks the state of at least one variable through a branch. When a variable of said branch is out of tolerance with a parameter, the solver disables the remaining nodes of the branch and marks the branch as an invalid solution. The valid solutions are then aggregated and displayed as valid tree solutions.
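    The branch-pruning idea in this abstract can be sketched as follows. This is only an illustration of the concept, not the patented implementation: node programming is modeled here as functions mutating a shared state dict, and the tracked variable is assumed to be a single number.

```python
def solve_tree(branches, tolerance):
    """Evaluate each branch (a list of node functions updating a shared
    state); abandon a branch as soon as the tracked variable leaves
    tolerance, and aggregate the valid solutions."""
    lo, hi = tolerance
    valid = []
    for branch in branches:
        state = {"x": 0.0}
        ok = True
        for node in branch:
            node(state)
            if not (lo <= state["x"] <= hi):
                ok = False
                break  # disable the remaining nodes of this branch
        if ok:
            valid.append(state["x"])
    return valid
```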

  16. Behavioral Changes Based on a Course in Agroecology: A Mixed Methods Study

    ERIC Educational Resources Information Center

    Harms, Kristyn; King, James; Francis, Charles

    2009-01-01

    This study evaluated and described student perceptions of a course in agroecology to determine if participants experienced changed perceptions and behaviors resulting from the Agroecosystems Analysis course. A mixed-methods approach, using triangulation to validate the quantitative data, included a written survey comprised of both quantitative and open-ended…

  17. Development and validation of a Daphnia magna four-day survival and growth test method

    EPA Science Inventory

    Zooplankton are an important part of the aquatic ecology of all lakes and streams. As a result, numerous methods have been developed to assess the quality of waterbodies using various zooplankton species. Included in these is the freshwater species Daphnia magna. Current test me...

  18. Qualitative Analysis on Stage: Making the Research Process More Public.

    ERIC Educational Resources Information Center

    Anfara, Vincent A., Jr.; Brown, Kathleen M.

    The increased use of qualitative research methods has spurred interest in developing formal standards for assessing its validity. These standards, however, fall short if they do not include public disclosure of methods as a criterion. The researcher must be accountable in documenting the actions associated with establishing internal validity…

  19. The High School & Beyond Data Set: Academic Self-Concept Measures.

    ERIC Educational Resources Information Center

    Strein, William

    A series of confirmatory factor analyses using both LISREL VI (maximum likelihood method) and LISCOMP (weighted least squares method using a covariance matrix based on polychoric correlations), including cross-validation on independent samples, was applied to items from the High School and Beyond data set to explore the measurement…

  20. FastaValidator: an open-source Java library to parse and validate FASTA formatted sequences.

    PubMed

    Waldmann, Jost; Gerken, Jan; Hankeln, Wolfgang; Schweer, Timmy; Glöckner, Frank Oliver

    2014-06-14

    Advances in sequencing technologies challenge the efficient importing and validation of FASTA formatted sequence data, which is still a prerequisite for most bioinformatic tools and pipelines. Comparative analysis of commonly used Bio*-frameworks (BioPerl, BioJava and Biopython) shows that their scalability and accuracy are hampered. FastaValidator represents a platform-independent, standardized, light-weight software library written in the Java programming language. It targets computer scientists and bioinformaticians writing software that needs to parse large amounts of sequence data quickly and accurately. For end-users, FastaValidator includes an interactive out-of-the-box validation of FASTA formatted files, as well as a non-interactive mode designed for high-throughput validation in software pipelines. The accuracy and performance of the FastaValidator library qualify it for large data sets such as those commonly produced by massively parallel (NGS) technologies. It offers scientists a fast, accurate and standardized method for parsing and validating FASTA formatted sequence data.
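    FastaValidator itself is a Java library; purely to illustrate the kind of checks such a validator performs, here is a minimal sketch in Python. The alphabet and rules are simplified assumptions (real FASTA validation handles amino-acid codes, ambiguity characters, and streaming input):

```python
def validate_fasta(text, alphabet=set("ACGTNacgtn")):
    """Minimal FASTA check: the file starts with a '>' header, every
    sequence line contains only allowed characters, and the last
    record actually has sequence data."""
    lines = [ln for ln in text.splitlines() if ln.strip()]
    if not lines or not lines[0].startswith(">"):
        return False
    seen_seq = False
    for ln in lines:
        if ln.startswith(">"):
            seen_seq = False        # new record: expect sequence next
        else:
            if not set(ln) <= alphabet:
                return False        # illegal character in sequence
            seen_seq = True
    return seen_seq
```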

  1. Mediating the Cognitive Walkthrough with Patient Groups to achieve Personalized Health in Chronic Disease Self-Management System Evaluation.

    PubMed

    Georgsson, Mattias; Kushniruk, Andre

    2016-01-01

    The cognitive walkthrough (CW) is a task-based, expert-inspection usability evaluation method offering benefits such as cost-effectiveness and efficiency. A drawback of the method is that it does not involve the perspective of real users; instead, it is based on experts' predictions about the usability of the system and how users interact with it. In this paper, we propose a way of involving the user in an expert evaluation method by modifying the CW to use patient groups as mediators. This, along with other modifications, includes a dual-domain session facilitator, specific patient groups and three phases: 1) a preparation phase, in which suitable tasks are developed by a panel of experts and patients and validated through the content validity index; 2) a patient user evaluation phase, comprising an individual and a collaborative process part; 3) an analysis and coding phase, in which all data are digitized and synthesized using Qualitative Data Analysis Software (QDAS) to determine usability deficiencies. We predict that this way of evaluating will retain the benefits of the expert methods while also providing a way of including the patient users of these self-management systems. Results from this prospective study should provide evidence of the usefulness of this method modification.
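    The content validity index used in the preparation phase is commonly computed at item level as the proportion of experts rating an item relevant (3 or 4 on a 4-point relevance scale). A minimal sketch, assuming that common convention rather than this paper's exact procedure:

```python
def item_cvi(ratings, relevant=(3, 4)):
    """Item-level content validity index: the proportion of expert
    ratings (on a 1-4 relevance scale) that fall in the 'relevant'
    categories. Values near 1.0 indicate strong expert agreement."""
    hits = sum(1 for r in ratings if r in relevant)
    return hits / len(ratings)
```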

  2. Development And Evaluation Of Stable Isotope And Fluorescent Labeling And Detection Methodologies For Tracking Injected Bacteria During In Situ Bioremediation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Mark E. Fuller; Tullis C. Onstott

    2003-12-17

    This report summarizes the results of a research project conducted to develop new methods to label bacterial cells so that they could be tracked and enumerated as they move in the subsurface after they are introduced into the groundwater (i.e., during bioaugmentation). Labeling methods based on stable isotopes of carbon (13C) and vital fluorescent stains were developed. Both approaches proved successful with regard to the ability to effectively label bacterial cells. Several methods for enumeration of fluorescently-labeled cells were developed and validated, including near-real time microplate spectrofluorometry that could be performed in the field. However, the development of a novel enumeration method for the 13C-enriched cells, chemical reaction interface/mass spectrometry (CRIMS), was not successful due to difficulties with the proposed instrumentation. Both labeling methodologies were successfully evaluated and validated during laboratory- and field-scale bacterial transport experiments. The methods developed during this research should be useful for future bacterial transport work as well as other microbial ecology research in a variety of environments. A full bibliography of research articles and meeting presentations related to this project is included (including web links to abstracts and full text reprints).

  3. Translated Versions of Voice Handicap Index (VHI)-30 across Languages: A Systematic Review

    PubMed Central

    SEIFPANAHI, Sadegh; JALAIE, Shohreh; NIKOO, Mohammad Reza; SOBHANI-RAD, Davood

    2015-01-01

    Background: The aim of this systematic review was to investigate different VHI-30 versions across languages with regard to their validity, reliability, and translation process. Methods: Articles were extracted systematically from prime databases, including Cochrane, Google Scholar, MEDLINE (via the PubMed gateway), ScienceDirect, and Web of Science, and from their reference lists, using the keyword "Voice Handicap Index" with limitation only by title and time of publication (1997 to 2014). Other exclusion criteria (e.g., non-English papers, other versions of the VHI) were applied manually after the papers were studied. Three authors appraised the methodology of the papers using the 12-item diagnostic test checklist from the "Critical Appraisal Skills Programme" (CASP) site. After all screenings, the papers that met the eligibility criteria of reporting translation, validity, and reliability processes were included in this review. Results: Twelve non-duplicate articles in different languages remained. All of them reported validity, reliability, and translation methods, which are presented in detail in this review. Conclusion: The preferred translation method in the gathered papers was Brislin's classic back-translation model (1970); although the procedure was usually not performed completely, it was more prominent than other translation procedures. High test-retest reliability, high internal consistency, and moderate construct validity across languages for all three VHI-30 domains confirm the applicability of translated VHI-30 versions across languages. PMID:26056664

  4. Online registration of monthly sports participation after anterior cruciate ligament injury: a reliability and validity study

    PubMed Central

    Grindem, Hege; Eitzen, Ingrid; Snyder-Mackler, Lynn; Risberg, May Arna

    2013-01-01

    Background Current methods measuring sports activity after anterior cruciate ligament (ACL) injury are commonly restricted to the most knee-demanding sport and do not consider participation in multiple sports. We therefore developed an online activity survey to prospectively record monthly participation in all major sports relevant to our patient group. Objective To assess the reliability, content validity, and concurrent validity of the survey, and to evaluate whether it provided more complete data on sports participation than a routine activity questionnaire. Methods One hundred and forty-five consecutively included ACL-injured patients were eligible for the reliability study. The retest of the online activity survey was performed two days after the test response had been recorded. A subsample of 88 ACL-reconstructed patients was included in the validity study. The ACL-reconstructed patients completed the online activity survey from the first to the twelfth postoperative month, and a routine activity questionnaire 6 and 12 months postoperatively. Results The online activity survey was highly reliable (κ ranging from 0.81 to 1). It contained all the common sports reported on the routine activity questionnaire. There was substantial agreement between the two methods on return to preinjury main sport (κ = 0.71 and 0.74 at 6 and 12 months postoperatively). The online activity survey revealed that a significantly higher number of patients reported participating in running, cycling, and strength training, and that patients reported participating in a greater number of sports. Conclusion The online activity survey is a highly reliable way of recording detailed changes in sports participation after ACL injury. The findings of this study support the content and concurrent validity of the survey, and suggest that the online activity survey can provide more complete data on sports participation than a routine activity questionnaire. PMID:23645830
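
    The reliability figures above are kappa values; as an illustration (with hypothetical answers, not study data), Cohen's kappa for one paired test-retest item can be computed like this:

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two paired ratings."""
    assert len(rater1) == len(rater2)
    n = len(rater1)
    po = sum(a == b for a, b in zip(rater1, rater2)) / n  # observed agreement
    c1, c2 = Counter(rater1), Counter(rater2)
    # Expected chance agreement from each rater's marginal category frequencies:
    pe = sum(c1[c] / n * c2[c] / n for c in set(rater1) | set(rater2))
    return (po - pe) / (1 - pe)

# Hypothetical test/retest answers ("y" = participated, "n" = did not) for one sport:
test   = ["y", "y", "n", "n", "y", "n", "y", "n", "n", "y"]
retest = ["y", "y", "n", "n", "y", "n", "y", "n", "y", "y"]
print(round(cohens_kappa(test, retest), 3))  # → 0.8
```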

  5. A method and knowledge base for automated inference of patient problems from structured data in an electronic medical record

    PubMed Central

    Pang, Justine; Feblowitz, Joshua C; Maloney, Francine L; Wilcox, Allison R; Ramelson, Harley Z; Schneider, Louise I; Bates, David W

    2011-01-01

    Background Accurate knowledge of a patient's medical problems is critical for clinical decision making, quality measurement, research, billing and clinical decision support. Common structured sources of problem information include the patient problem list and billing data; however, these sources are often inaccurate or incomplete. Objective To develop and validate methods of automatically inferring patient problems from clinical and billing data, and to provide a knowledge base for inferring problems. Study design and methods We identified 17 target conditions and designed and validated a set of rules for identifying patient problems based on medications, laboratory results, billing codes, and vital signs. A panel of physicians provided input on a preliminary set of rules. Based on this input, we tested candidate rules on a sample of 100 000 patient records to assess their performance compared to gold standard manual chart review. The physician panel selected a final rule for each condition, which was validated on an independent sample of 100 000 records to assess its accuracy. Results Seventeen rules were developed for inferring patient problems. Analysis using a validation set of 100 000 randomly selected patients showed high sensitivity (range: 62.8–100.0%) and positive predictive value (range: 79.8–99.6%) for most rules. Overall, the inference rules performed better than using either the problem list or billing data alone. Conclusion We developed and validated a set of rules for inferring patient problems. These rules have a variety of applications, including clinical decision support, care improvement, augmentation of the problem list, and identification of patients for research cohorts. PMID:21613643
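
    As an illustration of this style of rule-based inference and its evaluation against chart review, here is a sketch with a hypothetical rule and toy records (none of this is from the study's actual knowledge base):

```python
def infer_diabetes(rec):
    """Hypothetical rule: on a diabetes medication OR HbA1c >= 6.5%."""
    return "metformin" in rec["meds"] or rec.get("hba1c", 0.0) >= 6.5

# Toy patient records; "gold" is the chart-review (gold standard) label.
records = [
    {"meds": ["metformin"],  "hba1c": 7.1, "gold": True},
    {"meds": [],             "hba1c": 6.8, "gold": True},
    {"meds": [],             "hba1c": 5.4, "gold": False},
    {"meds": ["lisinopril"], "hba1c": 5.9, "gold": False},
    {"meds": [],             "hba1c": 6.1, "gold": True},   # missed by the rule
    {"meds": ["metformin"],  "hba1c": 5.2, "gold": False},  # false positive
]

tp = sum(infer_diabetes(r) and r["gold"] for r in records)
fp = sum(infer_diabetes(r) and not r["gold"] for r in records)
fn = sum(not infer_diabetes(r) and r["gold"] for r in records)
sensitivity = tp / (tp + fn)  # share of true problems the rule finds
ppv = tp / (tp + fp)          # share of rule hits that are real problems
print(f"sensitivity={sensitivity:.2f} ppv={ppv:.2f}")
```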

  6. LACO-Wiki: A land cover validation tool and a new, innovative teaching resource for remote sensing and the geosciences

    NASA Astrophysics Data System (ADS)

    See, Linda; Perger, Christoph; Dresel, Christopher; Hofer, Martin; Weichselbaum, Juergen; Mondel, Thomas; Steffen, Fritz

    2016-04-01

    The validation of land cover products is an important step in the workflow of generating a land cover map from remotely-sensed imagery. Many students of remote sensing will be given exercises on classifying a land cover map followed by the validation process. Many algorithms exist for classification, embedded within proprietary image processing software or increasingly as open source tools. However, there is little standardization for land cover validation, nor a set of open tools available for implementing this process. The LACO-Wiki tool was developed as a way of filling this gap, bringing together standardized land cover validation methods and workflows into a single portal. This includes the storage and management of land cover maps and validation data; step-by-step instructions to guide users through the validation process; sound sampling designs; an easy-to-use environment for validation sample interpretation; and the generation of accuracy reports based on the validation process. The tool was developed for a range of users including producers of land cover maps, researchers, teachers and students. The use of such a tool could be embedded within the curriculum of remote sensing courses at a university level but is simple enough for use by students aged 13-18. A beta version of the tool is available for testing at: http://www.laco-wiki.net.
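
    The accuracy reports such a tool generates are derived from a confusion matrix of validation samples; a sketch with hypothetical counts (not tied to LACO-Wiki itself):

```python
# Hypothetical confusion matrix from land cover validation samples.
# Rows = reference class, columns = mapped class: forest, cropland, water.
cm = [
    [48,  2, 0],
    [ 5, 40, 1],
    [ 0,  1, 3],
]

n = sum(sum(row) for row in cm)
overall = sum(cm[i][i] for i in range(len(cm))) / n                  # overall accuracy
producers = [cm[i][i] / sum(cm[i]) for i in range(len(cm))]          # 1 - omission error
users = [cm[i][i] / sum(r[i] for r in cm) for i in range(len(cm))]   # 1 - commission error

print(f"overall accuracy = {overall:.2f}")
for name, p, u in zip(["forest", "cropland", "water"], producers, users):
    print(f"{name}: producer's = {p:.2f}, user's = {u:.2f}")
```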

  7. What does it cost to prevent on-duty firefighter cardiac events? A content valid method for calculating costs.

    PubMed

    Patterson, P Daniel; Suyama, Joe; Reis, Steven E; Weaver, Matthew D; Hostler, David

    2013-01-01

    Cardiac arrest is a leading cause of mortality among firefighters. We sought to develop a valid method for determining the costs of a workplace prevention program for firefighters. In 2012, we developed a draft framework using human resource accounting and in-depth interviews with experts in the firefighting and insurance industries. The interviews produced a draft cost model with 6 components and 26 subcomponents. In 2013, we randomly sampled 100 fire chiefs out of >7,400 affiliated with the International Association of Fire Chiefs. We used the Content Validity Index (CVI) to identify the content valid components of the draft cost model. This was accomplished by having fire chiefs rate the relevancy of cost components on a 4-point Likert scale (highly relevant to not relevant). We received complete survey data from 65 fire chiefs (65% response rate). We retained 5 components and 21 subcomponents based on CVI scores ≥0.70. The five main components are: (1) investment costs, (2) orientation and training costs, (3) medical and pharmaceutical costs, (4) education and continuing education costs, and (5) maintenance costs. Data from a diverse sample of fire chiefs have produced a content valid method for calculating the cost of a prevention program among firefighters.
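
    The item-level CVI used here is simply the share of raters who score an item relevant (3 or 4 on the 4-point scale); a sketch with hypothetical ratings and the study's ≥0.70 retention threshold:

```python
def item_cvi(ratings):
    """Item-level Content Validity Index: share of raters scoring 3 or 4
    on a 4-point relevance scale (4 = highly relevant, 1 = not relevant)."""
    return sum(r >= 3 for r in ratings) / len(ratings)

# Hypothetical relevance ratings from 10 raters for two cost subcomponents:
ratings = {
    "annual physical exam": [4, 4, 3, 4, 3, 4, 4, 3, 4, 4],
    "gym membership":       [2, 3, 1, 2, 3, 2, 4, 1, 2, 2],
}
for item, r in ratings.items():
    cvi = item_cvi(r)
    verdict = "retain" if cvi >= 0.70 else "drop"
    print(f"{item}: CVI={cvi:.2f} -> {verdict}")
```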

  8. [Validation of three screening tests used for early detection of cervical cancer].

    PubMed

    Rodriguez-Reyes, Esperanza Rosalba; Cerda-Flores, Ricardo M; Quiñones-Pérez, Juan M; Cortés-Gutiérrez, Elva I

    2008-01-01

    To evaluate the validity (sensitivity, specificity, and accuracy) of three screening methods used in the early detection of cervical carcinoma against the histopathology diagnosis. A selected sample of 107 women attended in the Opportune Detection of Cervicouterine Cancer Program at Hospital de Zona 46, Instituto Mexicano del Seguro Social, in Durango during 2003 was included. The Papanicolaou test, the acetic acid test, molecular detection of human papillomavirus, and histopathology diagnosis were performed in all patients at the time of the gynecological exam. Detection and typification of human papillomavirus were performed by polymerase chain reaction (PCR) and restriction fragment length polymorphism (RFLP) analysis. Histopathology diagnosis was considered the gold standard. The evaluation of validity was carried out by the Bayesian method for diagnostic tests. The positive cases for the acetic acid test, Papanicolaou, and PCR were 47, 22, and 19, respectively. The accuracy values were 0.70, 0.80, and 0.99, respectively. Since the molecular method showed the greatest validity in the early detection of cervical carcinoma, we consider its implementation in the Opportune Detection of Cervicouterine Cancer Program in Mexico to be of vital importance. However, in order to validate this conclusion, cross-sectional studies in different regions of the country must be carried out.
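
    The validity measures reported here follow directly from the 2×2 table of screening result versus gold standard; a sketch with hypothetical counts (chosen only to sum to the study's 107 women, not its actual data):

```python
def screen_stats(tp, fp, fn, tn):
    """Validity measures of a screening test against a gold standard."""
    return {
        "sensitivity": tp / (tp + fn),          # true positives found
        "specificity": tn / (tn + fp),          # true negatives found
        "accuracy": (tp + tn) / (tp + fp + fn + tn),
    }

# Hypothetical 2x2 counts (test result vs histopathology), n = 107:
stats = screen_stats(tp=15, fp=4, fn=3, tn=85)
for k, v in stats.items():
    print(f"{k}: {v:.2f}")
```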

  9. What Does It Cost to Prevent On-Duty Firefighter Cardiac Events? A Content Valid Method for Calculating Costs

    PubMed Central

    Patterson, P. Daniel; Suyama, Joe; Reis, Steven E.; Weaver, Matthew D.; Hostler, David

    2013-01-01

    Cardiac arrest is a leading cause of mortality among firefighters. We sought to develop a valid method for determining the costs of a workplace prevention program for firefighters. In 2012, we developed a draft framework using human resource accounting and in-depth interviews with experts in the firefighting and insurance industries. The interviews produced a draft cost model with 6 components and 26 subcomponents. In 2013, we randomly sampled 100 fire chiefs out of >7,400 affiliated with the International Association of Fire Chiefs. We used the Content Validity Index (CVI) to identify the content valid components of the draft cost model. This was accomplished by having fire chiefs rate the relevancy of cost components using a 4-point Likert scale (highly relevant to not relevant). We received complete survey data from 65 fire chiefs (65% response rate). We retained 5 components and 21 subcomponents based on CVI scores ≥0.70. The five main components include, (1) investment costs, (2) orientation and training costs, (3) medical and pharmaceutical costs, (4) education and continuing education costs, and (5) maintenance costs. Data from a diverse sample of fire chiefs has produced a content valid method for calculating the cost of a prevention program among firefighters. PMID:24455288

  10. BMI curves for preterm infants.

    PubMed

    Olsen, Irene E; Lawson, M Louise; Ferguson, A Nicole; Cantrell, Rebecca; Grabich, Shannon C; Zemel, Babette S; Clark, Reese H

    2015-03-01

    Preterm infants experience disproportionate growth failure postnatally and may be large weight for length despite being small weight for age by hospital discharge. The objective of this study was to create and validate intrauterine weight-for-length growth curves using the contemporary, large, racially diverse US birth parameters sample used to create the Olsen weight-, length-, and head-circumference-for-age curves. Data from 391 681 US infants (Pediatrix Medical Group) born at 22 to 42 weeks' gestational age (born in 1998-2006) included birth weight, length, and head circumference, estimated gestational age, and gender. Separate subsamples were used to create and validate curves. Established methods were used to determine the weight-for-length ratio that was most highly correlated with weight and uncorrelated with length. Final smoothed percentile curves (3rd to 97th) were created by the Lambda Mu Sigma (LMS) method. The validation sample was used to confirm results. The final sample included 254 454 singleton infants (57.2% male) who survived to discharge. BMI was the best overall weight-for-length ratio for both genders and a majority of gestational ages. Gender-specific BMI-for-age curves were created (n = 127 446) and successfully validated (n = 126 988). Mean z scores for the validation sample were ∼0 (∼1 SD). BMI was different across gender and gestational age. We provide a set of validated reference curves (gender-specific) to track changes in BMI for prematurely born infants cared for in the NICU for use with weight-, length-, and head-circumference-for-age intrauterine growth curves. Copyright © 2015 by the American Academy of Pediatrics.
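
    The LMS method used to smooth these curves expresses a measurement as a z-score via age- and gender-specific L (skewness), M (median), and S (coefficient of variation) parameters; a sketch with hypothetical parameter values (not the published Olsen curve values):

```python
import math

def lms_zscore(x, L, M, S):
    """Z-score for a measurement x given LMS parameters (Cole's LMS method)."""
    if abs(L) < 1e-12:
        return math.log(x / M) / S
    return ((x / M) ** L - 1) / (L * S)

# Hypothetical LMS parameters for BMI at one gestational age (illustrative only):
L_, M_, S_ = -0.3, 8.5, 0.12
for bmi in (7.0, 8.5, 10.5):
    print(f"BMI {bmi}: z = {lms_zscore(bmi, L_, M_, S_):+.2f}")
```

    A BMI equal to the median M maps to z = 0 by construction; percentile curves (3rd to 97th) are the inverse mapping at fixed z.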

  11. Determination of calcium, copper, iron, magnesium, manganese, potassium, phosphorus, sodium, and zinc in fortified food products by microwave digestion and inductively coupled plasma-optical emission spectrometry: single-laboratory validation and ring trial.

    PubMed

    Poitevin, Eric

    2012-01-01

    A single-laboratory validation (SLV) and a ring trial (RT) were undertaken to determine nine nutritional elements in food products by inductively coupled plasma-optical emission spectrometry in order to modernize AOAC Official Method 984.27. The improvements involved extension of the scope to all food matrixes (including infant formula), optimized microwave digestion, selected analytical lines, internal standardization, and ion buffering. Simultaneous determination of nine elements (calcium, copper, iron, potassium, magnesium, manganese, sodium, phosphorus, and zinc) was made in food products. Sample digestion was performed through wet digestion of food samples by microwave technology with either closed- or open-vessel systems. Validation was performed to characterize the method for selectivity, sensitivity, linearity, accuracy, precision, recovery, ruggedness, and uncertainty. The robustness and efficiency of this method were proven through a successful RT using experienced independent food industry laboratories. Performance characteristics are reported for 13 certified and in-house reference materials, populating the AOAC triangle food sectors, which fulfilled AOAC criteria and recommendations for accuracy (trueness, recovery, and z-scores) and precision (repeatability and reproducibility RSD, and HorRat values) regarding SLVs and RTs. This multielemental method is cost-efficient, time-saving, accurate, and fit-for-purpose according to the ISO 17025 standard and AOAC acceptability criteria, and is proposed as an extended, updated version of AOAC Official Method 984.27 for fortified food products, including infant formula.
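
    One of the precision criteria mentioned, the HorRat value, compares the observed reproducibility RSD with the RSD predicted by the Horwitz equation at the analyte's concentration; a sketch (the concentration and RSD below are hypothetical, not the study's results):

```python
import math

def horrat(rsd_r_percent, conc_mass_fraction):
    """HorRat = observed reproducibility RSD / Horwitz-predicted RSD.
    Horwitz: PRSD_R(%) = 2^(1 - 0.5*log10(C)), C a dimensionless mass fraction.
    Values roughly between 0.5 and 2 are conventionally considered acceptable."""
    prsd = 2 ** (1 - 0.5 * math.log10(conc_mass_fraction))
    return rsd_r_percent / prsd

# Hypothetical: iron at 10 mg/100 g = 1e-4 mass fraction, observed RSD_R = 6%
print(round(horrat(6.0, 1e-4), 2))  # → 0.75
```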

  12. Development of Advanced Verification and Validation Procedures and Tools for the Certification of Learning Systems in Aerospace Applications

    NASA Technical Reports Server (NTRS)

    Jacklin, Stephen; Schumann, Johann; Gupta, Pramod; Richard, Michael; Guenther, Kurt; Soares, Fola

    2005-01-01

    Adaptive control technologies that incorporate learning algorithms have been proposed to enable automatic flight control and vehicle recovery, autonomous flight, and to maintain vehicle performance in the face of unknown, changing, or poorly defined operating environments. In order for adaptive control systems to be used in safety-critical aerospace applications, they must be proven to be highly safe and reliable. Rigorous methods for adaptive software verification and validation must be developed to ensure that control system software failures will not occur. Of central importance in this regard is the need to establish reliable methods that guarantee convergent learning, a rapid convergence (learning) rate, and algorithm stability. This paper presents the major problems of adaptive control systems that use learning to improve performance. The paper then presents the major procedures and tools already developed or currently being developed to enable the verification, validation, and ultimate certification of these adaptive control systems. These technologies include the application of automated program analysis methods, techniques to improve the learning process, analytical methods to verify stability, methods to automatically synthesize code, simulation and test methods, and tools to provide on-line software assurance.

  13. Agreement between Omron 306 and Biospace InBody 720 Bioelectrical Impedance Analyzers (BIA) in Children and Adolescents

    ERIC Educational Resources Information Center

    Finn, Kevin J.; Saint-Maurice, Pedro F.; Karsai, István; Ihász, Ferenc; Csányi, Tamás

    2015-01-01

    Purpose: The purpose of this study was to test the convergent validity of Omron 306 using Biospace InBody 720. Method: A total of 267 participants (145 boys; aged 10.4-17.9 years) completed testing during a single session. Each measure provided percent body fat (%BF), while the InBody 720 included fat-free mass (FFM). The validity was examined…

  14. New Empirical Evidence on the Validity and the Reliability of the Early Life Stress Questionnaire in a Polish Sample.

    PubMed

    Sokołowski, Andrzej; Dragan, Wojciech Ł

    2017-01-01

    Background: The Early Life Stress Questionnaire (ELSQ) is widely used to estimate the prevalence of negative events during childhood, including emotional, physical, verbal, and sexual abuse, negligence, severe conflicts, separation, parental divorce, substance abuse, poverty, and so forth. Objective: This study presents the psychometric properties of the Polish adaptation of the ELSQ. It also verifies whether early life stress (ELS) is a good predictor of psychopathology symptoms during adulthood. Materials and Methods: We analyzed data from two samples. Sample 1 was selected by a random quota method from across the country and included 609 participants aged 18-50 years: 306 women (50.2%) and 303 men (49.8%). Sample 2 contained 503 young adults (253 women and 250 men) aged 18-25. Confirmatory and exploratory factor analyses were used to examine the ELSQ's internal structure and consistency. Validity was based on the relation to psychopathological symptoms and substance misuse. Results: The results showed good internal consistency and validity. Exploratory factor analysis indicates a six-factor structure of the ELSQ. ELS was related to psychopathology in adulthood, including depressive, sociophobic, vegetative, and pain symptoms. The ELSQ score also correlated with alcohol use, but not with nicotine dependence. Moreover, ELS was correlated with stress in adulthood. Conclusion: The findings indicate that the Polish version of the ELSQ is a valid and reliable instrument for assessing ELS in the Polish population and may be applied in both clinical and community samples.
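
    Internal consistency of the kind reported here is commonly quantified with Cronbach's alpha; a sketch on hypothetical binary ELSQ-style responses (not study data):

```python
def cronbach_alpha(items):
    """Cronbach's alpha; `items` is a list of per-item score lists,
    each over the same respondents, in the same order."""
    k = len(items)
    n = len(items[0])

    def var(xs):  # population variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)

    totals = [sum(item[i] for item in items) for i in range(n)]
    item_var = sum(var(item) for item in items)
    return k / (k - 1) * (1 - item_var / var(totals))

# Hypothetical responses (3 items x 5 respondents, 1 = event occurred):
items = [
    [1, 0, 1, 1, 0],
    [1, 0, 1, 0, 0],
    [1, 1, 1, 1, 0],
]
print(round(cronbach_alpha(items), 3))  # → 0.794
```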

  15. Cell Cycle Synchronization of HeLa Cells to Assay EGFR Pathway Activation.

    PubMed

    Wee, Ping; Wang, Zhixiang

    2017-01-01

    Progression through the cell cycle causes changes in the cell's signaling pathways that can alter EGFR signal transduction. Here, we describe drug-based protocols to synchronize HeLa cells in various phases of the cell cycle, including G1 phase, S phase, G2 phase, and mitosis, specifically in the mitotic stages of prometaphase, metaphase, and anaphase/telophase. The synchronization procedures are designed so that synchronized cells can be treated with EGF and collected for Western blotting of EGFR signal transduction components. S phase synchronization is performed by thymidine block, G2 phase with roscovitine, prometaphase with nocodazole, metaphase with MG132, and anaphase/telophase with blebbistatin. G1 phase synchronization is performed by culturing synchronized mitotic cells obtained by mitotic shake-off. We also provide methods to validate the synchronization. For validation by Western blotting, we provide the temporal expression of various cell cycle markers used to check the quality of the synchronization. For validation of mitotic synchronization by microscopy, we provide a guide describing the physical properties of each mitotic stage, based on cellular morphology and DNA appearance. For validation by flow cytometry, we describe the use of imaging flow cytometry to distinguish between the phases of the cell cycle, including between the individual stages of mitosis.

  16. Psychometric properties and confirmatory factor analysis of the CASP-19, a measure of quality of life in early old age: the HAPIEE study

    PubMed Central

    Kim, Gyu Ri; Netuveli, Gopalakrishnan; Blane, David; Peasey, Anne; Malyutina, Sofia; Simonova, Galina; Kubinova, Ruzena; Pajak, Andrzej; Croezen, Simone; Bobak, Martin; Pikhart, Hynek

    2015-01-01

    Objectives: The aim was to assess the reliability and validity of the quality of life (QoL) instrument CASP-19, and of three shorter 12-item versions (CASP-12), in a large population sample of older adults from the HAPIEE (Health, Alcohol, and Psychosocial factors In Eastern Europe) study. Methods: From the Czech Republic, Russia, and Poland, 13,210 HAPIEE participants aged 50 or older completed the retirement questionnaire, including CASP-19, at baseline. Three shorter 12-item versions were also derived from the original 19-item instrument. Psychometric validation used confirmatory factor analysis, Cronbach's alpha, Pearson's correlation, and construct validity. Results: The second-order four-factor model of CASP-19 did not provide a good fit to the data. The two-factor CASP-12v.3, including residual covariances to account for the method effect of negatively worded items, had the best fit to the data in all countries (CFI = 0.98, TLI = 0.97, RMSEA = 0.05, and WRMR = 1.65 in the Czech Republic; 0.96, 0.94, 0.07, and 2.70 in Poland; and 0.93, 0.90, 0.08, and 3.04 in Russia). Goodness-of-fit indices for the two-factor structure were substantially better than for the second-order models. Conclusions: This large population-based study is the first validation study of the CASP scale in Central and Eastern Europe (CEE), and it includes a general population sample in Russia, Poland, and the Czech Republic. The results demonstrate that the CASP-12v.3 is a valid and reliable tool for assessing QoL among adults aged 50 years or older. This version of CASP is recommended for use in future studies investigating QoL in CEE populations. PMID:25059754
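
    The fit indices quoted (CFI, TLI, RMSEA) are simple functions of the model and baseline chi-square statistics; a sketch with hypothetical chi-square values (not the study's):

```python
import math

def fit_indices(chi2_m, df_m, chi2_b, df_b, n):
    """Common CFA fit indices from model (m) and baseline (b) chi-squares."""
    cfi = 1 - max(chi2_m - df_m, 0) / max(chi2_m - df_m, chi2_b - df_b, 0)
    tli = ((chi2_b / df_b) - (chi2_m / df_m)) / ((chi2_b / df_b) - 1)
    rmsea = math.sqrt(max(chi2_m - df_m, 0) / (df_m * (n - 1)))
    return cfi, tli, rmsea

# Hypothetical chi-square values for a 12-item two-factor model, N = 1000:
cfi, tli, rmsea = fit_indices(chi2_m=150, df_m=53, chi2_b=2000, df_b=66, n=1000)
print(f"CFI={cfi:.2f} TLI={tli:.2f} RMSEA={rmsea:.3f}")
```

    Conventional cutoffs treat CFI/TLI above roughly 0.95 and RMSEA below roughly 0.06 as good fit, which is why the CASP-12v.3 values above indicate the best-fitting model.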

  17. Recent statistical methods for orientation data

    NASA Technical Reports Server (NTRS)

    Batschelet, E.

    1972-01-01

    The application of statistical methods for determining the areas of animal orientation and navigation is discussed. The method employed is limited to the two-dimensional case. Various tests for determining the validity of the statistical analysis are presented. Mathematical models are included to support the theoretical considerations, and tables of data are developed to show the value of information obtained by statistical analysis.
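
    For two-dimensional orientation data of this kind, the basic circular statistics are the mean direction and the mean resultant length; a sketch with hypothetical homing bearings:

```python
import math

def circular_summary(angles_deg):
    """Mean direction (deg) and mean resultant length r for directional data."""
    xs = [math.cos(math.radians(a)) for a in angles_deg]
    ys = [math.sin(math.radians(a)) for a in angles_deg]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    r = math.hypot(mx, my)                 # 0 = uniform scatter, 1 = all identical
    mean_dir = math.degrees(math.atan2(my, mx)) % 360
    return mean_dir, r

# Hypothetical vanishing bearings (degrees) of released birds, clustered near north:
bearings = [350, 10, 5, 355, 20, 340, 15, 0]
mean_dir, r = circular_summary(bearings)
print(f"mean direction = {mean_dir:.1f} deg, resultant length r = {r:.2f}")
```

    Angles must be averaged via unit vectors as above; the arithmetic mean of 350° and 10° is 180°, the opposite of the true mean direction.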

  18. Application of analytical quality by design principles for the determination of alkyl p-toluenesulfonates impurities in Aprepitant by HPLC. Validation using total-error concept.

    PubMed

    Zacharis, Constantinos K; Vastardi, Elli

    2018-02-20

    In the research presented we report the development of a simple and robust liquid chromatographic method for the quantification of two genotoxic alkyl sulphonate impurities (namely methyl p-toluenesulfonate and isopropyl p-toluenesulfonate) in Aprepitant API substances using the Analytical Quality by Design (AQbD) approach. Following the steps of the AQbD protocol, the selected critical method attributes (CMAs) were the separation criteria between the critical peak pairs, the analysis time, and the peak efficiencies of the analytes. The critical method parameters (CMPs) included the flow rate, the gradient slope, and the acetonitrile content at the first step of the gradient elution program. Multivariate experimental designs, namely Plackett-Burman and Box-Behnken designs, were conducted sequentially for factor screening and optimization of the method parameters. The optimal separation conditions were estimated using the desirability function. The method was fully validated in the range of 10-200% of the target concentration limit of the analytes using the "total error" approach. Accuracy profiles - a graphical decision-making tool - were constructed using the results of the validation procedures. The β-expectation tolerance intervals did not exceed the acceptance criteria of ±10%, meaning that 95% of future results will be included in the defined bias limits. The relative bias ranged between −1.3% and 3.8% for both analytes, while the RSD values for repeatability and intermediate precision were less than 1.9% in all cases. The achieved limit of detection (LOD) and limit of quantification (LOQ) were adequate for the specific purpose and found to be 0.02% (corresponding to 48 μg g−1 in sample) for both methyl and isopropyl p-toluenesulfonate. As proof of concept, the validated method was successfully applied in the analysis of several Aprepitant batches, indicating that this methodology could be used for routine quality control analyses. Copyright © 2017 Elsevier B.V. All rights reserved.
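
    The accuracy-profile quantities reported (relative bias, repeatability RSD) come from replicate determinations at each concentration level; a sketch with hypothetical replicates (not the study's data):

```python
def bias_and_rsd(measured, nominal):
    """Relative bias (%) and RSD (%) from replicate determinations at one level."""
    n = len(measured)
    mean = sum(measured) / n
    sd = (sum((x - mean) ** 2 for x in measured) / (n - 1)) ** 0.5  # sample SD
    return (mean - nominal) / nominal * 100, sd / mean * 100

# Hypothetical replicate results (% of target concentration) at the 100% level:
replicates = [101.2, 99.5, 100.8, 98.9, 100.1, 99.7]
rel_bias, rsd = bias_and_rsd(replicates, nominal=100.0)
print(f"relative bias = {rel_bias:+.2f}%, RSD = {rsd:.2f}%")
```

    In an accuracy profile, a β-expectation tolerance interval built from these bias and precision estimates at each level is then checked against the ±10% acceptance limits.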

  19. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods

    PubMed Central

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-01-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, a numerical optimization method for analyzing LR-NMR data by including non-negativity constraints and L1 regularization and by applying a convex optimization solver PDCO, a primal-dual interior method for convex objectives, that allows general linear constraints to be treated as linear operators is presented. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72–88, 2013. PMID:23847452

  20. Laplace Inversion of Low-Resolution NMR Relaxometry Data Using Sparse Representation Methods.

    PubMed

    Berman, Paula; Levi, Ofer; Parmet, Yisrael; Saunders, Michael; Wiesman, Zeev

    2013-05-01

    Low-resolution nuclear magnetic resonance (LR-NMR) relaxometry is a powerful tool that can be harnessed for characterizing constituents in complex materials. Conversion of the relaxation signal into a continuous distribution of relaxation components is an ill-posed inverse Laplace transform problem. The most common numerical method implemented today for dealing with this kind of problem is based on L2-norm regularization. However, sparse representation methods via L1 regularization and convex optimization are a relatively new approach for effective analysis and processing of digital images and signals. In this article, a numerical optimization method for analyzing LR-NMR data by including non-negativity constraints and L1 regularization and by applying a convex optimization solver PDCO, a primal-dual interior method for convex objectives, that allows general linear constraints to be treated as linear operators is presented. The integrated approach includes validation of analyses by simulations, testing repeatability of experiments, and validation of the model and its statistical assumptions. The proposed method provides better resolved and more accurate solutions when compared with those suggested by existing tools. © 2013 Wiley Periodicals, Inc. Concepts Magn Reson Part A 42A: 72-88, 2013.
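
    The core formulation (least-squares fit with non-negativity constraints and an L1 penalty) can be illustrated without the PDCO solver by a simple projected ISTA iteration on a simulated bi-exponential decay; this is a sketch of the problem class, not the paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate an LR-NMR relaxation decay: s(t) = sum_j x_j * exp(-t / T2_j)
t = np.linspace(0.001, 1.0, 200)           # acquisition times (s)
T2 = np.logspace(-3, 0.5, 80)              # candidate relaxation times (s)
A = np.exp(-t[:, None] / T2[None, :])      # discretized Laplace kernel
x_true = np.zeros(len(T2))
x_true[25], x_true[60] = 1.0, 0.5          # two relaxation components
b = A @ x_true + 0.001 * rng.standard_normal(len(t))

# Projected ISTA for: min ||Ax - b||^2 + lam * ||x||_1  subject to x >= 0.
# With x >= 0, the prox step for the L1 term is a shift followed by projection.
lam = 0.01
step = 0.5 / np.linalg.norm(A, 2) ** 2     # 1/L, L = Lipschitz const of gradient
x = np.zeros(len(T2))
for _ in range(5000):
    grad = 2 * A.T @ (A @ x - b)
    x = np.maximum(x - step * (grad + lam), 0.0)

print("nonzero components:", int((x > 1e-4).sum()), "of", len(T2))
```

    The L1 penalty drives most candidate amplitudes to exactly zero, which is what yields the sparser, better-resolved distributions the article reports relative to L2-regularized inversion.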

  1. [The Confusion Assessment Method: Transcultural adaptation of a French version].

    PubMed

    Antoine, V; Belmin, J; Blain, H; Bonin-Guillaume, S; Goldsmith, L; Guerin, O; Kergoat, M-J; Landais, P; Mahmoudi, R; Morais, J A; Rataboul, P; Saber, A; Sirvain, S; Wolfklein, G; de Wazieres, B

    2018-05-01

    The Confusion Assessment Method (CAM) is a validated key tool in clinical practice and research programs to diagnose delirium and assess its severity. There is no validated French version of the CAM training manual and coding guide (Inouye SK). The aim of this study was to establish a consensual French version of the CAM and its manual. Cross-cultural adaptation was performed to achieve equivalence between the original version and a French adapted version of the CAM manual. A rigorous process was conducted, including control of the cultural adequacy of the tool's components, double forward and back translations, reconciliation, expert committee review (including bilingual translators of different nationalities, a linguist, highly qualified clinicians, and methodologists), and pretesting. A consensual French version of the CAM was achieved. Implementation of the French version of the CAM in daily clinical practice will enable optimal diagnosis of delirium and enhance communication between health professionals in French-speaking countries. Validity and psychometric properties are being tested in a French multicenter cohort, opening up new perspectives for improved quality of care and research programs in French-speaking countries. Copyright © 2018 Elsevier Masson SAS. All rights reserved.

  2. How to test validity in orthodontic research: a mixed dentition analysis example.

    PubMed

    Donatelli, Richard E; Lee, Shin-Jae

    2015-02-01

    The data used to test the validity of a prediction method should be different from the data used to generate the prediction model. In this study, we explored whether an independent data set is mandatory for testing the validity of a new prediction method and how validity can be tested without independent new data. Several validation methods were compared in an example using the data from a mixed dentition analysis with a regression model. The validation errors of real mixed dentition analysis data and simulation data were analyzed for increasingly large data sets. The validation results of both the real and the simulation studies demonstrated that the leave-1-out cross-validation method had the smallest errors. The largest errors occurred in the traditional simple validation method. The differences between the validation methods diminished as the sample size increased. The leave-1-out cross-validation method seems to be an optimal validation method for improving the prediction accuracy in a data set with limited sample sizes. Copyright © 2015 American Association of Orthodontists. Published by Elsevier Inc. All rights reserved.
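    The leave-one-out procedure the study favours is easy to state in code. The sketch below uses synthetic data for a single-predictor linear model, not the mixed dentition measurements:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(20.0, 35.0, 40)                 # invented predictor values
y = 2.0 * x + 5.0 + rng.normal(0.0, 1.5, 40)    # linear response plus noise

# Leave-one-out cross-validation: each observation is predicted from a model
# fitted to all remaining observations, so every point serves once as test data.
sq_errors = []
for i in range(len(x)):
    mask = np.arange(len(x)) != i
    slope, intercept = np.polyfit(x[mask], y[mask], 1)
    pred = slope * x[i] + intercept
    sq_errors.append((y[i] - pred) ** 2)

loocv_rmse = float(np.sqrt(np.mean(sq_errors)))
```

    With n observations this fits n models, which is cheap for regression and explains why the method is attractive precisely for the limited sample sizes the authors mention.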

  3. Comparative study between derivative spectrophotometry and multivariate calibration as analytical tools applied for the simultaneous quantitation of Amlodipine, Valsartan and Hydrochlorothiazide.

    PubMed

    Darwish, Hany W; Hassan, Said A; Salem, Maissa Y; El-Zeany, Badr A

    2013-09-01

    Four simple, accurate and specific methods were developed and validated for the simultaneous estimation of Amlodipine (AML), Valsartan (VAL) and Hydrochlorothiazide (HCT) in commercial tablets. The derivative spectrophotometric methods include the Derivative Ratio Zero Crossing (DRZC) and Double Divisor Ratio Spectra-Derivative Spectrophotometry (DDRS-DS) methods, while the multivariate calibrations used are Principal Component Regression (PCR) and Partial Least Squares (PLS). The proposed methods were applied successfully to the determination of the drugs in laboratory-prepared mixtures and in commercial pharmaceutical preparations. The validity of the proposed methods was assessed using the standard addition technique. The linearity of the proposed methods was investigated over the ranges of 2-32, 4-44 and 2-20 μg/mL for AML, VAL and HCT, respectively. Copyright © 2013 Elsevier B.V. All rights reserved.

  4. A systematic review of validated methods for identifying pulmonary fibrosis and interstitial lung disease using administrative and claims data.

    PubMed

    Jones, Natalie; Schneider, Gary; Kachroo, Sumesh; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's Mini-Sentinel pilot program initially aimed to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest (HOIs) from administrative and claims data. This paper summarizes the process and findings of the algorithm review of pulmonary fibrosis and interstitial lung disease. PubMed and Iowa Drug Information Service Web searches were conducted to identify citations applicable to the pulmonary fibrosis/interstitial lung disease HOI. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify pulmonary fibrosis and interstitial lung disease, including validation estimates of the coding algorithms. Our search revealed a deficiency of literature focusing on pulmonary fibrosis and interstitial lung disease algorithms and validation estimates. Only five studies provided codes; none provided validation estimates. Because interstitial lung disease includes a broad spectrum of diseases, including pulmonary fibrosis, the scope of these studies varied, as did the corresponding diagnostic codes used. Research needs to be conducted on designing validation studies to test pulmonary fibrosis and interstitial lung disease algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.

  5. Validation methodology in publications describing epidemiological registration methods of dental caries: a systematic review.

    PubMed

    Sjögren, P; Ordell, S; Halling, A

    2003-12-01

    The aim was to describe and systematically review the methodology and reporting of validation in publications describing epidemiological registration methods for dental caries. BASIC RESEARCH METHODOLOGY: Literature searches were conducted in six scientific databases. All publications fulfilling the predetermined inclusion criteria were assessed for methodology and reporting of validation using a checklist including items described previously as well as new items. The frequency of endorsement of the assessed items was analysed, and the type and strength of evidence were evaluated. Reporting of predetermined items relating to the methodology of validation and the frequency of endorsement of the assessed items were of primary interest. Initially, 588 publications were located; 74 eligible publications were obtained, 23 of which fulfilled the inclusion criteria and remained throughout the analyses. A majority of the studies reported the methodology of validation. The reported methodology of validation was generally inadequate, according to the recommendations of evidence-based medicine. The frequencies of reporting the assessed items (frequencies of endorsement) ranged from 4 to 84 per cent. A majority of the publications contributed to a low strength of evidence. There seems to be a need to improve the methodology and the reporting of validation in publications describing professionally registered caries epidemiology. Four of the items assessed in this study are potentially discriminative for quality assessments of reported validation.

  6. Development of a time-dependent incompressible Navier-Stokes solver based on a fractional-step method

    NASA Technical Reports Server (NTRS)

    Rosenfeld, Moshe

    1990-01-01

    The development, validation and application of a fractional-step solution method for the time-dependent incompressible Navier-Stokes equations in generalized coordinate systems are discussed. A solution method that combines a finite-volume discretization with a novel choice of the dependent variables and a fractional-step splitting to obtain accurate solutions in arbitrary geometries was previously developed for fixed grids. In the present research effort, this solution method is extended to more general situations, including cases with moving grids. The numerical techniques are enhanced to gain efficiency and generality.

  7. Fast method for the simultaneous quantification of toxic polyphenols applied to the selection of genotypes of yam bean (Pachyrhizus sp.) seeds.

    PubMed

    Lautié, E; Rozet, E; Hubert, P; Vandelaer, N; Billard, F; Felde, T Zum; Grüneberg, W J; Quetin-Leclercq, J

    2013-12-15

    The purpose of the research was to develop and validate a rapid quantification method able to screen many samples of yam bean seeds for the content of two toxic polyphenols, pachyrrhizine and rotenone. The analytical procedure described is based on the use of an internal standard (dihydrorotenone) and is divided into three steps: microwave-assisted extraction, purification by solid phase extraction and assay by ultra high performance liquid chromatography (UHPLC). Each step was included in the validation protocol and the accuracy profiles methodology was used to fully validate the method. The method was fully validated between 0.25 mg and 5 mg pachyrrhizine per gram of seeds and between 0.58 mg/g and 4 mg/g for rotenone. More than one hundred samples from different accessions, locations of growth and harvest dates were screened. Pachyrrhizine concentrations ranged from 3.29 mg/g to lower than 0.25 mg/g, while rotenone concentrations ranged from 3.53 mg/g to lower than 0.58 mg/g. This screening, along with principal component analysis (PCA) and discriminant analysis (DA), allowed the selection of the most interesting genotypes in terms of low concentrations of these two toxic polyphenols. © 2013 Elsevier B.V. All rights reserved.

  8. Three-dimensional registration of intravascular optical coherence tomography and cryo-image volumes for microscopic-resolution validation.

    PubMed

    Prabhu, David; Mehanna, Emile; Gargesha, Madhusudhana; Brandt, Eric; Wen, Di; van Ditzhuijzen, Nienke S; Chamie, Daniel; Yamamoto, Hirosada; Fujino, Yusuke; Alian, Ali; Patel, Jaymin; Costa, Marco; Bezerra, Hiram G; Wilson, David L

    2016-04-01

    Evidence suggests high-resolution, high-contrast, [Formula: see text] intravascular optical coherence tomography (IVOCT) can distinguish plaque types, but further validation is needed, especially for automated plaque characterization. We developed experimental and three-dimensional (3-D) registration methods to provide validation of IVOCT pullback volumes using microscopic, color, and fluorescent cryo-image volumes with optional registered cryo-histology. A specialized registration method matched IVOCT pullback images acquired in the catheter reference frame to a true 3-D cryo-image volume. Briefly, an 11-parameter registration model including a polynomial virtual catheter was initialized within the cryo-image volume, and perpendicular images were extracted, mimicking IVOCT image acquisition. Virtual catheter parameters were optimized to maximize cryo and IVOCT lumen overlap. Multiple assessments suggested that the registration error was better than the [Formula: see text] spacing between IVOCT image frames. Tests on a digital synthetic phantom gave a registration error of only [Formula: see text] (signed distance). Visual assessment of randomly presented nearby frames suggested registration accuracy within 1 IVOCT frame interval ([Formula: see text]). This would eliminate the potential misinterpretations confronted by typical histological approaches to validation, with their estimated 1-mm errors. The method can be used to create annotated datasets and to develop automated plaque classification methods, and can be extended to other intravascular imaging modalities.

  9. Life cycle management of analytical methods.

    PubMed

    Parr, Maria Kristina; Schmidt, Alexander H

    2018-01-05

    In modern process management, the life cycle concept is gaining more and more importance. It focuses on the total costs of the process from investment through operation to final retirement. In recent years, interest in this concept has also grown for analytical procedures. The life cycle of an analytical method consists of design, development, validation (including instrumental qualification, continuous method performance verification and method transfer) and finally retirement of the method. Regulatory bodies, too, have increased their awareness of life cycle management for analytical methods. Thus, the International Council for Harmonisation of Technical Requirements for Pharmaceuticals for Human Use (ICH), as well as the United States Pharmacopeial Forum, are discussing the adoption of new guidelines that include life cycle management of analytical methods. The US Pharmacopeia (USP) Validation and Verification expert panel has already proposed a new General Chapter 〈1220〉 "The Analytical Procedure Lifecycle" for integration into the USP. Furthermore, a growing interest in life cycle management is also seen in the non-regulated environment. Quality-by-design based method development results in increased method robustness. This decreases the effort needed for method performance verification and post-approval changes, and minimizes the risk of method-related out-of-specification results, strongly contributing to reduced costs of the method during its life cycle. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. The predictive validity of selection for entry into postgraduate training in general practice: evidence from three longitudinal studies.

    PubMed

    Patterson, Fiona; Lievens, Filip; Kerrin, Máire; Munro, Neil; Irish, Bill

    2013-11-01

    The selection methodology for UK general practice is designed to accommodate several thousand applicants per year and targets six core attributes identified in a multi-method job-analysis study. The aim was to evaluate the predictive validity of the selection methods for entry into postgraduate training, comprising a clinical problem-solving test, a situational judgement test, and a selection centre, in a three-part longitudinal study of selection into training for UK general practice. In sample 1, participants were junior doctors applying for training in general practice (n = 6824). In sample 2, participants were GP registrars 1 year into training (n = 196). In sample 3, participants were GP registrars sitting the licensing examination after 3 years, at the end of training (n = 2292). The outcome measures included: assessor ratings of performance in a selection centre comprising job simulation exercises (sample 1); supervisor ratings of trainee job performance 1 year into training (sample 2); and licensing examination results, including an applied knowledge examination and a 12-station clinical skills objective structured clinical examination (OSCE; sample 3). Performance ratings at selection predicted subsequent supervisor ratings of job performance 1 year later. Selection results also significantly predicted performance on both the clinical skills OSCE and the applied knowledge examination for licensing at the end of training. In combination, these longitudinal findings provide good evidence of the predictive validity of the selection methods, and are the first reported for entry into postgraduate training. Results show that the best predictor of work performance and training outcomes is a combination of a clinical problem-solving test, a situational judgement test, and a selection centre. Implications for selection methods for all postgraduate specialties are considered.

  11. Basis Selection for Wavelet Regression

    NASA Technical Reports Server (NTRS)

    Wheeler, Kevin R.; Lau, Sonie (Technical Monitor)

    1998-01-01

    A wavelet basis selection procedure for wavelet regression is presented. Both the basis and the threshold are selected using cross-validation. The method includes the capability of incorporating prior knowledge of the smoothness (or shape of the basis functions) into the basis selection procedure. The method is demonstrated on sampled functions widely used in the wavelet regression literature, and its results are contrasted with other published methods.

  12. The validation of Huffaz Intelligence Test (HIT)

    NASA Astrophysics Data System (ADS)

    Rahim, Mohd Azrin Mohammad; Ahmad, Tahir; Awang, Siti Rahmah; Safar, Ajmain

    2017-08-01

    In general, a hafiz, who has memorized the Quran, has many special qualities, especially with respect to academic performance. In this study, the theory of multiple intelligences introduced by Howard Gardner is embedded in a developed psychometric instrument, namely the Huffaz Intelligence Test (HIT). This paper presents the validation and reliability testing of the HIT among tahfiz students in Malaysian Islamic schools. A pilot study was conducted involving 87 huffaz who were randomly selected to answer the items in the HIT. The analysis included Partial Least Squares (PLS) assessment of reliability and of convergent and discriminant validity. The study validated nine intelligences, and the composite reliabilities for the nine types of intelligences were all greater than 0.8. Thus, the HIT is a valid and reliable instrument for measuring multiple intelligences among huffaz.

  13. Traceability validation of a high speed short-pulse testing method used in LED production

    NASA Astrophysics Data System (ADS)

    Revtova, Elena; Vuelban, Edgar Moreno; Zhao, Dongsheng; Brenkman, Jacques; Ulden, Henk

    2017-12-01

    Industrial production of LEDs (light-emitting diodes) includes testing of LED light output performance. Most production processes are monitored and controlled by measuring LEDs optically, electrically and thermally with high speed short-pulse measurement methods. However, these methods are not standardized, and much of the information is proprietary, making it impossible for third parties, such as NMIs, to trace and validate them. These techniques are known to have traceability issues and metrological inadequacies. Partly as a result, the claimed performance specifications of LEDs are often overstated, which leads to customer dissatisfaction for manufacturers and a large percentage of failures in the daily use of LEDs. In this research, a traceable setup was developed to validate one of the high speed testing techniques, investigate its inadequacies and work out the traceability issues. A well-characterised short square pulse of 25 ms was applied to chip-on-board (CoB) LED modules to investigate the light output and colour content. We conclude that the short-pulse method is very efficient provided that a well-defined electrical current pulse is applied and the stabilization time of the device is accurately determined "a priori". No colour shift was observed. The largest contributors to the measurement uncertainty are a poorly defined current pulse and an inaccurate calibration factor.

  14. Design and validation of general biology learning program based on scientific inquiry skills

    NASA Astrophysics Data System (ADS)

    Cahyani, R.; Mardiana, D.; Noviantoro, N.

    2018-03-01

    Scientific inquiry is highly recommended for teaching science. The reality in schools and colleges is that many educators still have not implemented inquiry learning because of their lack of understanding. This study aims to 1) analyze students’ difficulties in learning General Biology, 2) design a General Biology learning program based on multimedia-assisted scientific inquiry learning, and 3) validate the proposed design. The method used was Research and Development. The subjects of the study were 27 pre-service teachers of general elementary schools/Islamic elementary schools. The workflow of the program design included identifying learning difficulties in General Biology, designing the course program, and designing instruments and assessment rubrics. The program design covers four lecture sessions. All learning tools were validated by expert judges. The results showed that: 1) several problems were identified in General Biology lectures; 2) the designed products include the learning program, multimedia characteristics, worksheet characteristics and scientific attitudes; and 3) expert validation shows that all program designs are valid and can be used with minor revisions.

  15. Validity of contents of a paediatric critical comfort scale using mixed methodology.

    PubMed

    Bosch-Alcaraz, A; Jordan-Garcia, I; Alcolea-Monge, S; Fernández-Lorenzo, R; Carrasquer-Feixa, E; Ferrer-Orona, M; Falcó-Pegueroles, A

    Critical illness in paediatric patients includes acute conditions in a previously healthy child as well as exacerbations of chronic disease, and these situations must be clinically managed in Critical Care Units. The role of the paediatric nurse is to ensure the comfort of these critically ill patients, and instruments are therefore required that correctly assess critical comfort. The objective was to describe the process of validating the content of a paediatric critical comfort scale using mixed-method research. Initially, a cross-cultural adaptation of the Comfort Behavior Scale from English to Spanish was made using the translation and back-translation method. Its content was then evaluated using mixed-method research, divided into a quantitative stage, in which an ad hoc questionnaire was used to assess the relevance and wording of each scale item, and a qualitative stage, with two meetings with health professionals, patients and a family member following the Delphi method recommendations. All scale items obtained a content validity index >0.80, except the relevance of the physical movement item, which obtained 0.76. The global content validity of the scale was 0.87 (high). During the qualitative stage, items from each of the scale domains were reformulated or eliminated in order to make the scale more comprehensible and applicable. The use of a mixed-method research methodology during the scale content validity phase allows the design of a richer and more assessment-sensitive instrument. Copyright © 2017 Sociedad Española de Enfermería Intensiva y Unidades Coronarias (SEEIUC). Publicado por Elsevier España, S.L.U. All rights reserved.

  16. A nearest neighbor approach for automated transporter prediction and categorization from protein sequences.

    PubMed

    Li, Haiquan; Dai, Xinbin; Zhao, Xuechun

    2008-05-01

    Membrane transport proteins play a crucial role in the import and export of ions, small molecules and macromolecules across biological membranes. Currently, only a limited number of published computational tools enable the systematic discovery and categorization of transporters prior to costly experimental validation. To approach this problem, we utilized a nearest neighbor method which seamlessly integrates homologous search and topological analysis into a machine-learning framework. Our approach satisfactorily distinguished 484 transporter families in the Transporter Classification Database, a curated and representative database for transporters. A five-fold cross-validation on the database achieved an average positive classification rate of 72.3%. Furthermore, this method successfully detected transporters in seven model and four non-model organisms, ranging from archaean to mammalian species. A preliminary literature-based validation confirmed 65.8% of our predictions across the 11 organisms; 55.9% of our predictions overlapped with 83.6% of the transporters predicted in TransportDB.
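    The evaluation protocol (a nearest-neighbour classifier scored by five-fold cross-validation) can be sketched with plain numpy. The features below are random toy vectors standing in for the paper's homology and topology features:

```python
import numpy as np

rng = np.random.default_rng(1)
# Two invented classes of 60 samples each, 20-dimensional features.
X = np.vstack([rng.normal(0.0, 1.0, (60, 20)), rng.normal(1.5, 1.0, (60, 20))])
y = np.array([0] * 60 + [1] * 60)

def nn_predict(X_tr, y_tr, X_te):
    # 1-nearest-neighbour: each test point takes the label of its closest training point
    d = ((X_te[:, None, :] - X_tr[None, :, :]) ** 2).sum(-1)
    return y_tr[d.argmin(axis=1)]

# Five-fold cross-validation over a shuffled index
idx = rng.permutation(120)
folds = np.array_split(idx, 5)
accs = []
for f in folds:
    mask = np.ones(120, bool)
    mask[f] = False
    pred = nn_predict(X[mask], y[mask], X[f])
    accs.append((pred == y[f]).mean())

acc = float(np.mean(accs))
```

    The paper's 484-family setting is multi-class rather than binary, but the fold structure and the nearest-neighbour decision rule are the same.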

  17. Safe surgery: validation of pre and postoperative checklists

    PubMed Central

    Alpendre, Francine Taporosky; Cruz, Elaine Drehmer de Almeida; Dyniewicz, Ana Maria; Mantovani, Maria de Fátima; Silva, Ana Elisa Bauer de Camargo e; dos Santos, Gabriela de Souza

    2017-01-01

    Objective: to develop, evaluate and validate a surgical safety checklist for patients in the pre- and postoperative periods in surgical hospitalization units. Method: methodological research carried out in a large public teaching hospital in the South of Brazil, applying the principles of the World Health Organization's Safe Surgery Saves Lives Programme. The checklist was applied to 16 nurses of 8 surgical units and submitted for validation by a group of eight experts using the Delphi method online. Results: the instrument was validated, achieving a mean score ≥1, a level of agreement ≥75% and a Cronbach’s alpha >0.90. The final version included 97 safety indicators organized into six categories: identification, preoperative, immediate postoperative, mediate postoperative, other surgical complications, and hospital discharge. Conclusion: the Surgical Safety Checklist in the Pre and Postoperative periods is another strategy to promote patient safety, as it allows the monitoring of predictive signs and symptoms of surgical complications and the early detection of adverse events. PMID:28699994

  18. Prediction of jump phenomena in rotationally-coupled maneuvers of aircraft, including nonlinear aerodynamic effects

    NASA Technical Reports Server (NTRS)

    Young, J. W.; Schy, A. A.; Johnson, K. G.

    1977-01-01

    An analytical method has been developed for predicting critical control inputs for which nonlinear rotational coupling may cause sudden jumps in aircraft response. The analysis includes the effect of aerodynamics which are nonlinear in angle of attack. The method involves the simultaneous solution of two polynomials in roll rate, whose coefficients are functions of angle of attack and the control inputs. Results obtained using this procedure are compared with calculated time histories to verify the validity of the method for predicting jump-like instabilities.
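    The polynomial root-finding at the heart of this procedure is straightforward to reproduce numerically. The cubic below uses invented coefficients; in the actual analysis they would be functions of angle of attack and the control inputs:

```python
import numpy as np

# Hypothetical cubic in roll rate p (coefficients invented for illustration):
# p^3 - 3p^2 - p + 3
coeffs = [1.0, -3.0, -1.0, 3.0]

roots = np.roots(coeffs)
# Keep only the (numerically) real roots; multiple real equilibrium roll
# rates are the situation in which jump-like behaviour can appear.
real_roots = sorted(r.real for r in roots if abs(r.imag) < 1e-9)
```

    For this cubic the real roots are -1, 1 and 3; the jump analysis in the paper additionally tracks how such roots appear and disappear as the control inputs vary.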

  19. Single-laboratory validation of a saponification method for the determination of four polycyclic aromatic hydrocarbons in edible oils by HPLC-fluorescence detection.

    PubMed

    Akdoğan, Abdullah; Buttinger, Gerhard; Wenzl, Thomas

    2016-01-01

    An analytical method is reported for the determination of four polycyclic aromatic hydrocarbons (benzo[a]pyrene (BaP), benz[a]anthracene (BaA), benzo[b]fluoranthene (BbF) and chrysene (CHR)) in edible oils (sesame, maize, sunflower and olive oil) by high-performance liquid chromatography. Sample preparation is based on three steps including saponification, liquid-liquid partitioning and, finally, clean-up by solid phase extraction on 2 g of silica. Guidance on single-laboratory validation of the proposed analysis method was taken from the second edition of the Eurachem guide on method validation. The lower level of the working range of the method was determined by the limits of quantification of the individual analytes, and the upper level was equal to 5.0 µg kg⁻¹. The limits of detection and quantification of the four PAHs ranged from 0.06 to 0.12 µg kg⁻¹ and from 0.13 to 0.24 µg kg⁻¹, respectively. Recoveries of more than 84.8% were achieved for all four PAHs at two concentration levels (2.5 and 5.0 µg kg⁻¹), and expanded relative measurement uncertainties were below 20%. The performance of the validated method was in all aspects compliant with the provisions set in European Union legislation for the performance of analytical methods employed in the official control of food. The applicability of the method to routine samples was evaluated on a limited number of commercial edible oil samples.
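    Limits of detection and quantification of this kind are commonly derived from the residual standard deviation of a calibration line (LOD = 3.3·σ/slope, LOQ = 10·σ/slope in the ICH convention); whether the authors used exactly this estimator is not stated in the abstract. A sketch with invented calibration data:

```python
import numpy as np

# Invented five-point calibration: concentration (ug/kg) vs detector response.
conc = np.array([0.1, 0.5, 1.0, 2.5, 5.0])
resp = np.array([12.1, 60.5, 119.8, 301.2, 599.0])

slope, intercept = np.polyfit(conc, resp, 1)
residuals = resp - (slope * conc + intercept)
sd = residuals.std(ddof=2)          # residual SD with n - 2 degrees of freedom

lod = 3.3 * sd / slope              # limit of detection
loq = 10.0 * sd / slope             # limit of quantification
```

    By construction LOQ/LOD = 10/3.3, so a reported LOQ roughly three times the LOD (as in the ranges above) is consistent with this calibration-based approach.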

  20. The revised EEMCO guidance for the in vivo measurement of water in the skin.

    PubMed

    Berardesca, Enzo; Loden, Marie; Serup, Jorgen; Masson, Philippe; Rodrigues, Luis Monteiro

    2018-06-20

    Noninvasive quantification of stratum corneum water content is widely used in skin research and topical product development. The original EEMCO guidelines on measurement of skin hydration by electrical methods and of transepidermal water loss (TEWL) by evaporimeter, published in 1997 and 2001, have been revisited and updated to incorporate recently available technologies. Electrical methods and open-chamber evaporimeters for measurement of TEWL are still the preferred techniques for measuring the water balance in the stratum corneum, and the background technology and biophysics of these instruments remain relevant and valid. However, new methods that can image surface hydration and measure depth profiles of dermal water content are now available. Open-chamber measurement of TEWL has been supplemented with semi-open and closed chamber probes, which are more robust to environmental influence and therefore convenient to use and more applicable to field studies. However, closed chamber methods interfere with the evaporation of water and cannot be used for continuous monitoring. Validation of methods with respect to intra- and inter-instrument variation remains challenging; no validation standard or test phantom is available. The established methods for measurement of epidermal water content and TEWL have thus been supplemented with important new technologies, including methods that allow imaging of epidermal water distribution and water depth profiles. A much more complete and sophisticated characterization of the various aspects of the dermal water barrier can now be accomplished by means of today's noninvasive techniques; however, instrument standardization and validation remain a challenge. © 2018 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  1. Development and validation of an educational booklet for healthy eating during pregnancy

    PubMed Central

    de Oliveira, Sheyla Costa; Lopes, Marcos Venícios de Oliveira; Fernandes, Ana Fátima Carvalho

    2014-01-01

    OBJECTIVE: to describe the validation process of an educational booklet for healthy eating in pregnancy using local and regional food. METHODS: methodological study, developed in three steps: construction of the educational booklet, validation of the educational material by judges, and validation by pregnant women. The validation process was conducted by 22 judges and 20 pregnant women, selected by convenience. A p-value < 0.85 was considered to validate the booklet's compliance and relevance, according to the six items of the instrument. For content validation, a minimum item-level Content Validity Index (I-CVI) score of at least 0.80 was required. RESULTS: five items were considered relevant by the judges. The mean I-CVI was 0.91. The pregnant women evaluated the booklet positively. The suggestions were accepted and included in the final version of the material. CONCLUSION: the booklet was validated in terms of content and relevance, and should be used by nurses for advice on healthy eating during pregnancy. PMID:25296145
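    The I-CVI used here has a simple definition: the proportion of judges rating an item 3 or 4 on a 4-point relevance scale, with 0.80 as the usual acceptance threshold. The ratings below are invented:

```python
# Hypothetical ratings from eight judges on a 4-point relevance scale.
ratings = {
    "item_1": [4, 4, 3, 4, 3, 4, 4, 3],
    "item_2": [4, 2, 3, 4, 3, 2, 4, 3],
}

def i_cvi(scores):
    # Item-level Content Validity Index: fraction of judges scoring 3 or 4
    return sum(1 for s in scores if s >= 3) / len(scores)

for item, scores in ratings.items():
    print(item, round(i_cvi(scores), 2))   # item_1 passes (1.0); item_2 fails (0.75)
```

    Averaging the I-CVI across items gives the scale-level figure reported in abstracts such as this one (mean I-CVI of 0.91).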

  2. Examining Factor Structure and Validating the Persian Version of the Pregnancy’s Worries and Stress Questionnaire for Pregnant Iranian Women

    PubMed Central

    Navidpour, Fariba; Dolatian, Mahrokh; Yaghmaei, Farideh; Majd, Hamid Alavi; Hashemi, Seyed Saeed

    2015-01-01

    Background and Objectives: Pregnant women tend to experience anxiety and stress when faced with changes to their biology, environment and personal relationships. The identification of these factors and the prevention of their side effects are vital for both mother and fetus. The present study was conducted to validate and examine the factor structure of the Persian version of the Pregnancy’s Worries and Stress Questionnaire (PWSQ). Materials and Methods: The 25-item PWSQ was first translated into Persian by specialists. The questionnaire’s validity was determined using face, content, criterion and construct validity, and its reliability was examined using Cronbach’s alpha. Confirmatory factor analysis was performed in AMOS and SPSS 21. Participants included healthy Iranian pregnant women (8-39 weeks) attending selected hospitals for prenatal care. Hospitals included private, social security and university hospitals, selected through a random cluster sampling method. Findings: The results of the validity and reliability assessments of the questionnaire were acceptable. The calculated Cronbach’s alpha of 0.89 showed high internal consistency. Confirmatory factor analysis using the χ2, CMIN/DF, IFI, CFI, NFI and NNFI indexes showed the 6-factor model to be the best-fitting model for the data. Conclusion: The questionnaire was translated into Persian to examine stress and worry specific to Iranian pregnant women. The psychometric results showed that the questionnaire is suitable for identifying Iranian pregnant women with pregnancy-related stress. PMID:26153186
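    Cronbach's alpha, the reliability measure reported above, is computed from the item variances and the variance of the total score: α = k/(k−1) · (1 − Σ var(item)/var(total)). A sketch with simulated 25-item responses (not the study's data):

```python
import numpy as np

rng = np.random.default_rng(2)
# Simulate 100 respondents answering 25 items that all track one latent trait.
latent = rng.normal(0.0, 1.0, 100)
items = np.array([latent + rng.normal(0.0, 0.5, 100) for _ in range(25)])  # (25, 100)

k = items.shape[0]
item_vars = items.var(axis=1, ddof=1)        # variance of each item
total_var = items.sum(axis=0).var(ddof=1)    # variance of the summed score
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
```

    With strongly inter-correlated items, as here, alpha comes out high; values around 0.89, as in this questionnaire, indicate good internal consistency.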

  3. Determination of glomerular filtration rate (GFR) from fractional renal accumulation of iodinated contrast material: a convenient and rapid single-kidney CT-GFR technique.

    PubMed

    Yuan, XiaoDong; Tang, Wei; Shi, WenWei; Yu, Libao; Zhang, Jing; Yuan, Qing; You, Shan; Wu, Ning; Ao, Guokun; Ma, Tingting

    2018-07-01

    To develop a convenient and rapid single-kidney CT-GFR technique. One hundred and twelve patients referred for multiphasic renal CT and 99mTc-DTPA renal dynamic imaging Gates-GFR measurement were prospectively included and randomly divided into two groups of 56 patients each: the training group and the validation group. On the basis of the nephrographic phase images, the fractional renal accumulation (FRA) was calculated and correlated with the Gates-GFR in the training group. From this correlation a formula was derived for single-kidney CT-GFR calculation, which was validated by a paired t test and linear regression analysis with the single-kidney Gates-GFR in the validation group. In the training group, the FRA (x-axis) correlated well (r = 0.95, p < 0.001) with single-kidney Gates-GFR (y-axis), producing a regression equation of y = 1665x + 1.5 for single-kidney CT-GFR calculation. In the validation group, the difference between the methods of single-kidney GFR measurements was 0.38 ± 5.57 mL/min (p = 0.471); the regression line is identical to the diagonal (intercept = 0 and slope = 1) (p = 0.727 and p = 0.473, respectively), with a standard deviation of residuals of 5.56 mL/min. A convenient and rapid single-kidney CT-GFR technique was presented and validated in this investigation. • The new CT-GFR method takes about 2.5 min of patient time. • The CT-GFR method demonstrated identical results to the Gates-GFR method. • The CT-GFR method is based on the fractional renal accumulation of iodinated CM. • The CT-GFR method is achieved without additional radiation dose to the patient.
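    The training-group step is an ordinary least-squares fit of single-kidney Gates-GFR against fractional renal accumulation. The sketch below simulates data scattered around the reported line y = 1665x + 1.5 (the patient values themselves are, of course, not public):

```python
import numpy as np

rng = np.random.default_rng(3)
fra = rng.uniform(0.01, 0.05, 56)                     # fractional renal accumulation
gfr = 1665.0 * fra + 1.5 + rng.normal(0.0, 5.0, 56)   # simulated Gates-GFR (mL/min)

# Fit the calibration line and check the correlation, mirroring the
# training-group analysis; the validation group would then compare the
# fitted predictions against Gates-GFR in independent patients.
slope, intercept = np.polyfit(fra, gfr, 1)
r = float(np.corrcoef(fra, gfr)[0, 1])
```

    The validation check reported in the abstract (regression of CT-GFR on Gates-GFR with intercept 0 and slope 1, plus a paired t test of the differences) follows the same pattern on held-out patients.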

  4. A user-targeted synthesis of the VALUE perfect predictor experiment

    NASA Astrophysics Data System (ADS)

    Maraun, Douglas; Widmann, Martin; Gutierrez, Jose; Kotlarski, Sven; Hertig, Elke; Wibig, Joanna; Rössler, Ole; Huth, Radan

    2016-04-01

    VALUE is an open European network to validate and compare downscaling methods for climate change research. A key deliverable of VALUE is the development of a systematic validation framework to enable the assessment and comparison of both dynamical and statistical downscaling methods. VALUE's main approach to validation is user-focused: starting from a specific user problem, a validation tree guides the selection of relevant validation indices and performance measures. We consider different aspects: (1) marginal aspects such as mean, variance and extremes; (2) temporal aspects such as spell length characteristics; (3) spatial aspects such as the de-correlation length of precipitation extremes; and (4) multi-variate aspects such as the interplay of temperature and precipitation or scale-interactions. Several experiments have been designed to isolate specific points in the downscaling procedure where problems may occur. Experiment 1 (perfect predictors): what is the isolated downscaling skill? How do statistical and dynamical methods compare? How do methods perform at different spatial scales? Experiment 2 (global climate model predictors): how well is regional climate represented overall, including errors inherited from global climate models? Experiment 3 (pseudo reality): do methods fail in representing regional climate change? Here, we present a user-targeted synthesis of the results of the first VALUE experiment. In this experiment, downscaling methods are driven with ERA-Interim reanalysis data to eliminate global climate model errors, over the period 1979-2008. As reference data we use, depending on the question addressed, (1) observations from 86 meteorological stations distributed across Europe; (2) gridded observations at the corresponding 86 locations; or (3) gridded spatially extended observations for selected European regions. With more than 40 contributing methods, this study is the most comprehensive downscaling inter-comparison project so far. The results clearly indicate that for several aspects, the downscaling skill varies considerably between different methods. For specific purposes, some methods can therefore clearly be excluded.

  5. Psychometric properties of an instrument to measure nursing students' quality of life.

    PubMed

    Chu, Yanxiang; Xu, Min; Li, Xiuyun

    2015-07-01

    It is important for clinical nursing teachers and managers to recognize the importance of nursing students' quality of life (QOL) since they are the source of future nurses. As yet, there is no quality of life evaluation scale (QOLES) specific to them. This study designed a quantitative instrument for evaluating QOL of nursing students. The study design was a descriptive survey with mixed methods including literature review, panel discussion, Delphi method, and statistical analysis. The data were collected from 880 nursing students from four teaching hospitals in Wuhan, China. The reliability and validity of the scale were tested through completion of the QOLES in a cluster sampling method. The total scale included 18 items in three domains: physical, psychological, and social functional. The cumulative contributing rate of the three common factors was 65.23%. Cronbach's alpha coefficient of the scale was 0.82. This scale had good reliability and validity to evaluate nursing students' QOL. Copyright © 2015 Elsevier Ltd. All rights reserved.

  6. Validity of a New Quantitative Evaluation Method that Uses the Depth of the Surface Imprint as an Indicator for Pitting Edema.

    PubMed

    Kogo, Haruki; Murata, Jun; Murata, Shin; Higashi, Toshio

    2017-01-01

    This study examined the validity of a practical evaluation method for pitting edema by comparing it to other methods, including circumference measurements and ultrasound image measurements. Fifty-one patients (102 legs) from a convalescent ward in Maruyama Hospital were recruited for study 1, and 47 patients (94 legs) from a convalescent ward in Morinaga Hospital were recruited for study 2. The relationship between the depth of the surface imprint and circumferential measurements, as well as the relationship between the depth of the surface imprint and the thickness of the subcutaneous soft tissue on an ultrasonogram, were analyzed using Spearman's rank correlation coefficient. There was no significant relationship between the surface imprint depth and circumferential measurements. However, there was a significant relationship between the depth of the surface imprint and the thickness of the subcutaneous soft tissue as measured on an ultrasonogram (correlation coefficient 0.736). Our findings suggest that our novel evaluation method for pitting edema, based on a measurement of the surface imprint depth, is both valid and useful.

  7. High-throughput method for the determination of residues of β-lactam antibiotics in bovine milk by LC-MS/MS.

    PubMed

    Jank, Louise; Martins, Magda Targa; Arsand, Juliana Bazzan; Hoff, Rodrigo Barcellos; Barreto, Fabiano; Pizzolato, Tânia Mara

    2015-01-01

    This study describes the development and validation procedures for scope extension of a method for the determination of β-lactam antibiotic residues (ampicillin, amoxicillin, penicillin G, penicillin V, oxacillin, cloxacillin, dicloxacillin, nafcillin, ceftiofur, cefquinome, cefoperazone, cephapirine, cefalexin and cephalonium) in bovine milk. Sample preparation was performed by liquid-liquid extraction (LLE) followed by two clean-up steps, including low temperature purification (LTP) and a solid phase dispersion clean-up. Extracts were analysed using a liquid chromatography-electrospray-tandem mass spectrometry system (LC-ESI-MS/MS). Chromatographic separation was performed on a C18 column, using methanol and water (both with 0.1% formic acid) as the mobile phase. Method validation was performed according to the criteria of Commission Decision 2002/657/EC. Main validation parameters such as linearity, limit of detection, decision limit (CCα), detection capability (CCβ), accuracy, and repeatability were determined and were shown to be adequate. The method was applied to real samples (more than 250), and two milk samples had levels above the maximum residue limits (MRLs) for cloxacillin (CLX) and cefapirin (CFAP).

  8. Experimental Methodology in English Teaching and Learning: Method Features, Validity Issues, and Embedded Experimental Design

    ERIC Educational Resources Information Center

    Lee, Jang Ho

    2012-01-01

    Experimental methods have played a significant role in the growth of English teaching and learning studies. The paper presented here outlines basic features of experimental design, including the manipulation of independent variables, the role and practicality of randomised controlled trials (RCTs) in educational research, and alternative methods…

  9. 16 CFR 1500.41 - Method of testing primary irritant substances.

    Code of Federal Regulations, 2014 CFR

    2014-01-01

    ... corrosivity properties of substances, including testing that does not require animals, are presented in the CPSC's animal testing policy set forth in 16 CFR 1500.232. A weight-of-evidence analysis or a validated... conducted, a sequential testing strategy is recommended to reduce the number of test animals. The method of...

  10. La pronunciacion espanola y los metodos de investigacion. (Spanish Pronunciation and Methods of Investigation.)

    ERIC Educational Resources Information Center

    Torreblanca, Maximo

    1988-01-01

    Discusses the validity of studies of Spanish pronunciation in terms of research methods employed. Topics include data collection in the laboratory vs. in a natural setting; recorded vs. non-recorded data; quality of the recording; aural analysis vs. spectrographic analysis; and transcriber reliability. Suggestions for improving data collection are…

  11. Contemporary Test Validity in Theory and Practice: A Primer for Discipline-Based Education Researchers.

    PubMed

    Reeves, Todd D; Marbach-Ad, Gili

    2016-01-01

    Most discipline-based education researchers (DBERs) were formally trained in the methods of scientific disciplines such as biology, chemistry, and physics, rather than social science disciplines such as psychology and education. As a result, DBERs may have never taken specific courses in the social science research methodology--either quantitative or qualitative--on which their scholarship often relies so heavily. One particular aspect of (quantitative) social science research that differs markedly from disciplines such as biology and chemistry is the instrumentation used to quantify phenomena. In response, this Research Methods essay offers a contemporary social science perspective on test validity and the validation process. The instructional piece explores the concepts of test validity, the validation process, validity evidence, and key threats to validity. The essay also includes an in-depth example of a validity argument and validation approach for a test of student argument analysis. In addition to DBERs, this essay should benefit practitioners (e.g., lab directors, faculty members) in the development, evaluation, and/or selection of instruments for their work assessing students or evaluating pedagogical innovations. © 2016 T. D. Reeves and G. Marbach-Ad. CBE—Life Sciences Education © 2016 The American Society for Cell Biology. This article is distributed by The American Society for Cell Biology under license from the author(s). It is available to the public under an Attribution–Noncommercial–Share Alike 3.0 Unported Creative Commons License (http://creativecommons.org/licenses/by-nc-sa/3.0).

  12. Development, validation and evaluation of an analytical method for the determination of monomeric and oligomeric procyanidins in apple extracts.

    PubMed

    Hollands, Wendy J; Voorspoels, Stefan; Jacobs, Griet; Aaby, Kjersti; Meisland, Ane; Garcia-Villalba, Rocio; Tomas-Barberan, Francisco; Piskula, Mariusz K; Mawson, Deborah; Vovk, Irena; Needs, Paul W; Kroon, Paul A

    2017-04-28

    There is a lack of data for individual oligomeric procyanidins in apples and apple extracts. Our aim was to develop, validate and evaluate an analytical method for the separation, identification and quantification of monomeric and oligomeric flavanols in apple extracts. To achieve this, we prepared two types of flavanol extracts from freeze-dried apples; one was an epicatechin-rich extract containing ∼30% (w/w) monomeric (-)-epicatechin which also contained oligomeric procyanidins (Extract A), the second was an oligomeric procyanidin-rich extract depleted of epicatechin (Extract B). The parameters considered for method optimisation were HPLC columns and conditions, sample heating, mass of extract and dilution volumes. The performance characteristics considered for method validation included standard linearity, method sensitivity, precision and trueness. Eight laboratories participated in the method evaluation. Chromatographic separation of the analytes was best achieved using a HILIC column with a binary mobile phase consisting of acidic acetonitrile and acidic aqueous methanol. The final method showed linearity for epicatechin in the range 5-100 μg/mL with a correlation coefficient >0.999. Intra-day and inter-day precision of the analytes ranged from 2 to 6% and 2 to 13%, respectively. Up to dp3, trueness of the method was >95% but decreased with increasing dp. Within-laboratory precision showed RSD values <5 and 10% for monomers and oligomers, respectively. Between-laboratory precision was 4 and 15% (Extract A) and 7 and 30% (Extract B) for monomers and oligomers, respectively. An analytical method for the separation, identification and quantification of procyanidins in an apple extract was developed, validated and assessed. The results of the inter-laboratory evaluation indicate that the method is reliable and reproducible. Copyright © 2017. Published by Elsevier B.V.
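    The precision figures in this record are relative standard deviations (RSD) of replicate results; a minimal sketch of the calculation, with hypothetical replicate values in place of the study's measurements:

```python
import statistics

def rsd_percent(values):
    """Relative standard deviation (%) of replicate measurements:
    100 * sample standard deviation / mean."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate (-)-epicatechin results (mg/g) from one laboratory
replicates = [29.8, 30.4, 30.1, 29.5, 30.2]
print(round(rsd_percent(replicates), 2))
```

    Intra-day, inter-day, within-laboratory and between-laboratory precision all use this statistic, computed over the corresponding set of replicates.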

  13. Determination of some phenolic compounds in red wine by RP-HPLC: method development and validation.

    PubMed

    Burin, Vívian Maria; Arcari, Stefany Grützmann; Costa, Léa Luzia Freitas; Bordignon-Luiz, Marilde T

    2011-09-01

    A methodology employing reversed-phase high-performance liquid chromatography (RP-HPLC) was developed and validated for simultaneous determination of five phenolic compounds in red wine. The chromatographic separation was carried out in a C(18) column with water acidified with acetic acid (pH 2.6) (solvent A) and 20% solvent A and 80% acetonitrile (solvent B) as the mobile phase. The validation parameters included: selectivity, linearity, range, limits of detection and quantitation, precision and accuracy, using an internal standard. All calibration curves were linear (R(2) > 0.999) within the range, and good precision (RSD < 2.6%) and recovery (80-120%) were obtained for all compounds. This method was applied to quantify phenolics in red wine samples from Santa Catarina State, Brazil, and good separation peaks for phenolic compounds in these wines were observed.

  14. Method Development in Forensic Toxicology.

    PubMed

    Peters, Frank T; Wissenbach, Dirk K; Busardo, Francesco Paolo; Marchei, Emilia; Pichini, Simona

    2017-01-01

    In the field of forensic toxicology, the quality of analytical methods is of great importance to ensure the reliability of results and to avoid unjustified legal consequences. A key to high quality analytical methods is a thorough method development. The presented article will provide an overview on the process of developing methods for forensic applications. This includes the definition of the method's purpose (e.g. qualitative vs quantitative) and the analytes to be included, choosing an appropriate sample matrix, setting up separation and detection systems as well as establishing a versatile sample preparation. Method development is concluded by an optimization process after which the new method is subject to method validation. Copyright© Bentham Science Publishers; For any queries, please email at epub@benthamscience.org.

  15. Validation of catchment models for predicting land-use and climate change impacts. 2. Case study for a Mediterranean catchment

    NASA Astrophysics Data System (ADS)

    Parkin, G.; O'Donnell, G.; Ewen, J.; Bathurst, J. C.; O'Connell, P. E.; Lavabre, J.

    1996-02-01

    Validation methods commonly used to test catchment models are not capable of demonstrating a model's fitness for making predictions for catchments where the catchment response is not known (including hypothetical catchments, and future conditions of existing catchments which are subject to land-use or climate change). This paper describes the first use of a new method of validation (Ewen and Parkin, 1996. J. Hydrol., 175: 583-594) designed to address these types of application; the method involves making 'blind' predictions of selected hydrological responses which are considered important for a particular application. SHETRAN (a physically based, distributed catchment modelling system) is tested on a small Mediterranean catchment. The test involves quantification of the uncertainty in four predicted features of the catchment response (continuous hydrograph, peak discharge rates, monthly runoff, and total runoff), and comparison of observations with the predicted ranges for these features. The results of this test are considered encouraging.

  16. Ab initio analytical Raman intensities for periodic systems through a coupled perturbed Hartree-Fock/Kohn-Sham method in an atomic orbital basis. II. Validation and comparison with experiments

    NASA Astrophysics Data System (ADS)

    Maschio, Lorenzo; Kirtman, Bernard; Rérat, Michel; Orlando, Roberto; Dovesi, Roberto

    2013-10-01

    In this work, we validate a new, fully analytical method for calculating Raman intensities of periodic systems, developed and presented in Paper I [L. Maschio, B. Kirtman, M. Rérat, R. Orlando, and R. Dovesi, J. Chem. Phys. 139, 164101 (2013)]. Our validation of this method and its implementation in the CRYSTAL code is done through several internal checks as well as comparison with experiment. The internal checks include consistency of results when increasing the number of periodic directions (from 0D to 1D, 2D, 3D), comparison with numerical differentiation, and a test of the sum rule for derivatives of the polarizability tensor. The choice of basis set as well as the Hamiltonian is also studied. Simulated Raman spectra of α-quartz and of the UiO-66 Metal-Organic Framework are compared with the experimental data.

  17. Validation and long-term evaluation of a modified on-line chiral analytical method for therapeutic drug monitoring of (R,S)-methadone in clinical samples.

    PubMed

    Ansermot, Nicolas; Rudaz, Serge; Brawand-Amey, Marlyse; Fleury-Souverain, Sandrine; Veuthey, Jean-Luc; Eap, Chin B

    2009-08-01

    Matrix effects, which represent an important issue in liquid chromatography coupled to mass spectrometry or tandem mass spectrometry detection, should be closely assessed during method development. In the case of quantitative analysis, the use of stable isotope-labelled internal standard with physico-chemical properties and ionization behaviour similar to the analyte is recommended. In this paper, an example of the choice of a co-eluting deuterated internal standard to compensate for short-term and long-term matrix effect in the case of chiral (R,S)-methadone plasma quantification is reported. The method was fully validated over a concentration range of 5-800 ng/mL for each methadone enantiomer with satisfactory relative bias (-1.0 to 1.0%), repeatability (0.9-4.9%) and intermediate precision (1.4-12.0%). From the results obtained during validation, a control chart process during 52 series of routine analysis was established using both intermediate precision standard deviation and FDA acceptance criteria. The results of routine quality control samples were generally included in the +/-15% variability around the target value and mainly in the two standard deviation interval illustrating the long-term stability of the method. The intermediate precision variability estimated in method validation was found to be coherent with the routine use of the method. During this period, 257 trough concentration and 54 peak concentration plasma samples of patients undergoing (R,S)-methadone treatment were successfully analysed for routine therapeutic drug monitoring.
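    The routine quality-control rule this record describes (results compared against an FDA-style ±15% acceptance window around the target value and a two-standard-deviation control-chart interval) can be sketched as follows; the target and SD values are hypothetical:

```python
def qc_check(measured, target, sd):
    """Classify one QC result: is it within +/-15% of the target (FDA-style
    acceptance) and within the 2-SD control-chart interval?"""
    within_fda = abs(measured - target) <= 0.15 * target
    within_2sd = abs(measured - target) <= 2 * sd
    return within_fda, within_2sd

# Hypothetical QC sample: 100 ng/mL target, intermediate-precision SD of 5 ng/mL
print(qc_check(112.0, 100.0, 5.0))  # within +/-15% of target, outside 2 SD
```

    The abstract's observation that routine QC results fell mostly inside both limits corresponds to the first flag nearly always being true and the second usually so.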

  18. Development and application of a validated HPLC method for the analysis of dissolution samples of levothyroxine sodium drug products.

    PubMed

    Collier, J W; Shah, R B; Bryant, A R; Habib, M J; Khan, M A; Faustino, P J

    2011-02-20

    A rapid, selective, and sensitive gradient HPLC method was developed for the analysis of dissolution samples of levothyroxine sodium tablets. Current USP methodology for levothyroxine (L-T(4)) was not adequate to resolve co-elutants from a variety of levothyroxine drug product formulations. The USP method for analyzing dissolution samples of the drug product has shown significant intra- and inter-day variability. The sources of method variability include chromatographic interferences introduced by the dissolution media and the formulation excipients. In the present work, chromatographic separation of levothyroxine was achieved on an Agilent 1100 Series HPLC with a Waters Nova-pak column (250 mm × 3.9 mm) using a 0.01 M phosphate buffer (pH 3.0)-methanol (55:45, v/v) in a gradient elution mobile phase at a flow rate of 1.0 mL/min and detection UV wavelength of 225 nm. The injection volume was 800 μL and the column temperature was maintained at 28°C. The method was validated according to USP Category I requirements. The validation characteristics included accuracy, precision, specificity, linearity, and analytical range. The standard curve was found to have a linear relationship (r(2)>0.99) over the analytical range of 0.08-0.8 μg/mL. Accuracy ranged from 90 to 110% for low quality control (QC) standards and 95 to 105% for medium and high QC standards. Precision was <2% at all QC levels. The method was found to be accurate, precise, selective, and linear for L-T(4) over the analytical range. The HPLC method was successfully applied to the analysis of dissolution samples of marketed levothyroxine sodium tablets. Published by Elsevier B.V.

  19. Development and application of a validated HPLC method for the analysis of dissolution samples of levothyroxine sodium drug products

    PubMed Central

    Collier, J.W.; Shah, R.B.; Bryant, A.R.; Habib, M.J.; Khan, M.A.; Faustino, P.J.

    2011-01-01

    A rapid, selective, and sensitive gradient HPLC method was developed for the analysis of dissolution samples of levothyroxine sodium tablets. Current USP methodology for levothyroxine (l-T4) was not adequate to resolve co-elutants from a variety of levothyroxine drug product formulations. The USP method for analyzing dissolution samples of the drug product has shown significant intra- and inter-day variability. The sources of method variability include chromatographic interferences introduced by the dissolution media and the formulation excipients. In the present work, chromatographic separation of levothyroxine was achieved on an Agilent 1100 Series HPLC with a Waters Nova-pak column (250mm × 3.9mm) using a 0.01 M phosphate buffer (pH 3.0)–methanol (55:45, v/v) in a gradient elution mobile phase at a flow rate of 1.0 mL/min and detection UV wavelength of 225 nm. The injection volume was 800 µL and the column temperature was maintained at 28 °C. The method was validated according to USP Category I requirements. The validation characteristics included accuracy, precision, specificity, linearity, and analytical range. The standard curve was found to have a linear relationship (r2 > 0.99) over the analytical range of 0.08–0.8 µg/mL. Accuracy ranged from 90 to 110% for low quality control (QC) standards and 95 to 105% for medium and high QC standards. Precision was <2% at all QC levels. The method was found to be accurate, precise, selective, and linear for l-T4 over the analytical range. The HPLC method was successfully applied to the analysis of dissolution samples of marketed levothyroxine sodium tablets. PMID:20947276

  20. Optimized multiparametric flow cytometric analysis of circulating endothelial cells and their subpopulations in peripheral blood of patients with solid tumors: a technical analysis.

    PubMed

    Zhou, Fangbin; Zhou, Yaying; Yang, Ming; Wen, Jinli; Dong, Jun; Tan, Wenyong

    2018-01-01

    Circulating endothelial cells (CECs) and their subpopulations could be potential novel biomarkers for various malignancies. However, reliable enumeration methods are warranted to further improve their clinical utility. This study aimed to optimize a flow cytometric method (FCM) assay for CECs and subpopulations in peripheral blood for patients with solid cancers. An FCM assay was used to detect and identify CECs. A panel of 60 blood samples, including 44 metastatic cancer patients and 16 healthy controls, were used in this study. Some key issues of CEC enumeration, including sample material and anticoagulant selection, optimal titration of antibodies, lysis/wash procedures of blood sample preparation, conditions of sample storage, sufficient cell events to enhance the signal, fluorescence-minus-one controls instead of isotype controls to reduce background noise, optimal selection of cell surface markers, and evaluating the reproducibility of our method, were integrated and investigated. Wilcoxon and Mann-Whitney U tests were used to determine statistically significant differences. In this validation study, we refined a five-color FCM method to detect CECs and their subpopulations in peripheral blood of patients with solid tumors. Several key technical issues regarding preanalytical elements, FCM data acquisition, and analysis were addressed. Furthermore, we clinically validated the utility of our method. The baseline levels of mature CECs, endothelial progenitor cells, and activated CECs were higher in cancer patients than healthy subjects (P < 0.01). However, there was no significant difference in resting CEC levels between healthy subjects and cancer patients (P = 0.193). We integrated and comprehensively addressed significant technical issues found in previously published assays and validated the reproducibility and sensitivity of our proposed method. Future work is required to explore the potential of our optimized method in clinical oncologic applications.

  1. Determination of lipophilic toxins by LC/MS/MS: single-laboratory validation.

    PubMed

    Villar-González, Adriano; Rodríguez-Velasco, María Luisa; Gago-Martínez, Ana

    2011-01-01

    An LC/MS/MS method has been developed, assessed, and intralaboratory-validated for the analysis of the lipophilic toxins currently regulated by European Union legislation: okadaic acid (OA) and dinophysistoxins 1 and 2, including their ester forms; azaspiracids 1, 2, and 3; pectenotoxins 1 and 2; yessotoxin (YTX), and the analogs 45 OH-YTX, Homo YTX, and 45 OH-Homo YTX; as well as for the analysis of 13-desmethyl-spirolide C. The method consists of duplicate sample extraction with methanol and direct analysis of the crude extract without further cleanup or concentration. Ester forms of OA and dinophysistoxins are detected as the parent ions after alkaline hydrolysis of the extract. The validation process of this method was performed using both fortified and naturally contaminated samples, and experiments were designed according to International Organization for Standardization, International Union of Pure and Applied Chemistry, and AOAC guidelines. With the exception of YTX in fortified samples, RSDr values were below 15% and RSDR values below 25%. Recovery values were between 77 and 95%, and LOQs were below 60 microg/kg. These data together with validation experiments for recovery, selectivity, robustness, traceability, and linearity, as well as uncertainty calculations, are presented in this paper.

  2. Validation of an instrument to evaluate health promotion at schools

    PubMed Central

    Pinto, Raquel Oliveira; Pattussi, Marcos Pascoal; Fontoura, Larissa do Prado; Poletto, Simone; Grapiglia, Valenca Lemes; Balbinot, Alexandre Didó; Teixeira, Vanessa Andina; Horta, Rogério Lessa

    2016-01-01

    ABSTRACT OBJECTIVE To validate an instrument designed to assess health promotion in the school environment. METHODS A questionnaire, based on guidelines from the World Health Organization and in line with the Brazilian school health context, was developed to validate the research instrument. There were 60 items in the instrument that included 40 questions for the school manager and 20 items with direct observations made by the interviewer. The items’ content validation was performed using the Delphi technique, with the instrument being applied in 53 schools from two medium-sized cities in the South region of Brazil. Reliability (Cronbach’s alpha and split-half) and validity (principal component analysis) analyses were performed. RESULTS The final instrument remained composed of 28 items, distributed into three dimensions: pedagogical, structural and relational. The resulting components showed good factorial loads (> 0.4) and acceptable reliability (> 0.6) for most items. The pedagogical dimension identifies educational activities regarding drugs and sexuality, violence and prejudice, self-care, and peace and quality of life. The structural dimension is comprised of access, sanitary structure, and conservation and equipment. The relational dimension includes relationships within the school and with the community. CONCLUSIONS The proposed instrument presents satisfactory validity and reliability values, which include aspects relevant to promote health in schools. Its use allows the description of the health promotion conditions to which students from each educational institution are exposed. Because this instrument includes items directly observed by the investigator, it should only be used during periods when there are full and regular activities at the school in question. PMID:26982958

  3. A new method for motion capture of the scapula using an optoelectronic tracking device: a feasibility study.

    PubMed

    Šenk, Miroslav; Chèze, Laurence

    2010-06-01

    Optoelectronic tracking systems are rarely used in 3D studies examining shoulder movements including the scapula. Among the reasons is the important slippage of skin markers with respect to scapula. Methods using electromagnetic tracking devices are validated and frequently applied. Thus, the aim of this study was to develop a new method for in vivo optoelectronic scapular capture dealing with the accepted accuracy issues of validated methods. Eleven arm positions in three anatomical planes were examined using five subjects in static mode. The method was based on local optimisation, and recalculation procedures were made using a set of five scapular surface markers. The scapular rotations derived from the recalculation-based method yielded RMS errors comparable with the frequently used electromagnetic scapular methods (RMS up to 12.6° for 150° arm elevation). The results indicate that the present method can be used under careful considerations for 3D kinematical studies examining different shoulder movements.

  4. Comparative Validation of the Determination of Sofosbuvir in Pharmaceuticals by Several Inexpensive Ecofriendly Chromatographic, Electrophoretic, and Spectrophotometric Methods.

    PubMed

    El-Yazbi, Amira F

    2017-07-01

    Sofosbuvir (SOFO) was approved by the U.S. Food and Drug Administration in 2013 for the treatment of hepatitis C virus infection with enhanced antiviral potency compared with earlier analogs. Notwithstanding, all current editions of the pharmacopeias still do not present any analytical methods for the quantification of SOFO. Thus, rapid, simple, and ecofriendly methods for the routine analysis of commercial formulations of SOFO are desirable. In this study, five accurate methods for the determination of SOFO in pharmaceutical tablets were developed and validated. These methods include HPLC, capillary zone electrophoresis, HPTLC, and UV spectrophotometric and derivative spectrometry methods. The proposed methods proved to be rapid, simple, sensitive, selective, and accurate analytical procedures that were suitable for the reliable determination of SOFO in pharmaceutical tablets. An analysis of variance test with P-value > 0.05 confirmed that there were no significant differences between the proposed assays. Thus, any of these methods can be used for the routine analysis of SOFO in commercial tablets.

  5. Development and Validation of a Porcine (Sus scrofa) Sepsis Model

    DTIC Science & Technology

    2018-03-01

    last IACUC approval, have any methods been identified to reduce the number of live animals used in this protocol? None 10. PUBLICATIONS...SUMMARY: (Please provide, in "ABSTRACT" format, a summary of the protocol objectives, materials and methods, results - include tables/figures, and...Materials and methods: Animals were anesthetized and instrumented for cardiovascular monitoring. Lipopolysaccharide (LPS, a large molecule present on the

  6. A gas-kinetic BGK scheme for the compressible Navier-Stokes equations

    NASA Technical Reports Server (NTRS)

    Xu, Kun

    2000-01-01

    This paper presents an improved gas-kinetic scheme based on the Bhatnagar-Gross-Krook (BGK) model for the compressible Navier-Stokes equations. The current method extends the previous gas-kinetic Navier-Stokes solver developed by Xu and Prendergast by implementing a general nonequilibrium state to represent the gas distribution function at the beginning of each time step. As a result, the requirement in the previous scheme, such as the particle collision time being less than the time step for the validity of the BGK Navier-Stokes solution, is removed. Therefore, the applicable regime of the current method is much enlarged and the Navier-Stokes solution can be obtained accurately regardless of the ratio between the collision time and the time step. The gas-kinetic Navier-Stokes solver developed by Chou and Baganoff is the limiting case of the current method, and it is valid only under such a limiting condition. Also, in this paper, the appropriate implementation of boundary condition for the kinetic scheme, different kinetic limiting cases, and the Prandtl number fix are presented. The connection among artificial dissipative central schemes, Godunov-type schemes, and the gas-kinetic BGK method is discussed. Many numerical tests are included to validate the current method.

  7. 21 CFR 200.10 - Contract facilities (including consulting laboratories) utilized as extramural facilities by...

    Code of Federal Regulations, 2010 CFR

    2010-04-01

    ... (IND) Application, any information obtained during the inspection of an extramural facility having a... Administration does not consider results of validation studies of analytical and assay methods and control...

  8. Apparatus and method for managing digital resources by passing digital resource tokens between queues

    DOEpatents

    Crawford, H.J.; Lindenstruth, V.

    1999-06-29

    A method of managing digital resources of a digital system includes the step of reserving token values for certain digital resources in the digital system. A selected token value in a free-buffer-queue is then matched to an incoming digital resource request. The selected token value is then moved to a valid-request-queue. The selected token is subsequently removed from the valid-request-queue to allow a digital agent in the digital system to process the incoming digital resource request associated with the selected token. Thereafter, the selected token is returned to the free-buffer-queue. 6 figs.
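    The queue discipline described in the abstract can be sketched as follows; the class and method names are illustrative, not taken from the patent:

    ```python
    from collections import deque

    class TokenResourceManager:
        """Minimal sketch of the queue-based token scheme in the abstract:
        tokens circulate between a free-buffer-queue and a
        valid-request-queue."""

        def __init__(self, num_tokens):
            # Reserve a token value per digital resource; all start free.
            self.free_buffer_queue = deque(range(num_tokens))
            self.valid_request_queue = deque()

        def accept_request(self, request):
            # Match an incoming request to a free token, then move the
            # token to the valid-request queue.
            if not self.free_buffer_queue:
                return None  # no resource currently available
            token = self.free_buffer_queue.popleft()
            self.valid_request_queue.append((token, request))
            return token

        def process_next(self):
            # A digital agent removes the token from the valid-request
            # queue, handles the request, and returns the token to the
            # free-buffer queue.
            token, request = self.valid_request_queue.popleft()
            self.free_buffer_queue.append(token)
            return token, request
    ```

    Because tokens only move between the two queues, the scheme bounds outstanding requests to the number of reserved tokens without any separate counter.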

  9. Apparatus and method for managing digital resources by passing digital resource tokens between queues

    DOEpatents

    Crawford, Henry J.; Lindenstruth, Volker

    1999-01-01

    A method of managing digital resources of a digital system includes the step of reserving token values for certain digital resources in the digital system. A selected token value in a free-buffer-queue is then matched to an incoming digital resource request. The selected token value is then moved to a valid-request-queue. The selected token is subsequently removed from the valid-request-queue to allow a digital agent in the digital system to process the incoming digital resource request associated with the selected token. Thereafter, the selected token is returned to the free-buffer-queue.

  10. Evolving forecasting classifications and applications in health forecasting

    PubMed Central

    Soyiri, Ireneous N; Reidpath, Daniel D

    2012-01-01

    Health forecasting forewarns the health community about future health situations and disease episodes so that health systems can better allocate resources and manage demand. The tools used for developing health forecasts and for measuring their accuracy and validity are commonly not well defined, although they are usually adapted forms of statistical procedures. This review identifies previous typologies used in classifying the forecasting methods commonly used in forecasting health conditions or situations. It then discusses the strengths and weaknesses of these methods and presents the choices available for measuring the accuracy of health-forecasting models, including a note on the discrepancies in the modes of validation. PMID:22615533

  11. Identification of validated questionnaires to measure adherence to pharmacological antihypertensive treatments

    PubMed Central

    Pérez-Escamilla, Beatriz; Franco-Trigo, Lucía; Moullin, Joanna C; Martínez-Martínez, Fernando; García-Corpas, José P

    2015-01-01

    Background Low adherence to pharmacological treatments is one of the factors associated with poor blood pressure control. Questionnaires are an indirect measurement method that is both economic and easy to use. However, questionnaires should meet specific criteria, to minimize error and ensure reproducibility of results. Numerous studies have been conducted to design questionnaires that quantify adherence to pharmacological antihypertensive treatments. Nevertheless, it is unknown whether questionnaires fulfil the minimum requirements of validity and reliability. The aim of this study was to compile validated questionnaires measuring adherence to pharmacological antihypertensive treatments that had at least one measure of validity and one measure of reliability. Methods A literature search was undertaken in PubMed, the Excerpta Medica Database (EMBASE), and the Latin American and Caribbean Health Sciences Literature database (Literatura Latino-Americana e do Caribe em Ciências da Saúde [LILACS]). References from included articles were hand-searched. The included papers were all that were published in English, French, Portuguese, and Spanish from the beginning of the database’s indexing until July 8, 2013, where a validation of a questionnaire (at least one demonstration of the validity and at least one of reliability) was performed to measure adherence to antihypertensive pharmacological treatments. Results A total of 234 potential papers were identified in the electronic database search; of these, 12 met the eligibility criteria. Within these 12 papers, six questionnaires were validated: the Morisky–Green–Levine; Brief Medication Questionnaire; Hill-Bone Compliance to High Blood Pressure Therapy Scale; Morisky Medication Adherence Scale; Treatment Adherence Questionnaire for Patients with Hypertension (TAQPH); and Martín–Bayarre–Grau. Questionnaire length ranged from four to 28 items. Internal consistency, assessed by Cronbach’s α, varied from 0.43 to 0.889. 
Additional statistical techniques utilized to assess the psychometric properties of the questionnaires varied greatly across studies. Conclusion At this stage, none of the six questionnaires included could be considered a gold standard. However, this review will assist health professionals in the selection of the most appropriate tool for their individual circumstances. PMID:25926723

  12. Identifying Wrist Fracture Patients with High Accuracy by Automatic Categorization of X-ray Reports

    PubMed Central

    de Bruijn, Berry; Cranney, Ann; O’Donnell, Siobhan; Martin, Joel D.; Forster, Alan J.

    2006-01-01

    The authors performed this study to determine the accuracy of several text classification methods to categorize wrist x-ray reports. We randomly sampled 751 textual wrist x-ray reports. Two expert reviewers rated the presence (n = 301) or absence (n = 450) of an acute fracture of the wrist. We developed two information retrieval (IR) text classification methods and a machine learning method using a support vector machine (TC-1). In cross-validation on the derivation set (n = 493), TC-1 outperformed the two IR-based methods and six benchmark classifiers, including Naive Bayes and a neural network. In the validation set (n = 258), TC-1 demonstrated consistent performance with 93.8% accuracy; 95.5% sensitivity; 92.9% specificity; and 87.5% positive predictive value. TC-1 was easy to implement and superior in performance to the other classification methods. PMID:16929046
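    The validation-set figures reported above (accuracy, sensitivity, specificity, positive predictive value) all derive from the counts of a 2x2 confusion matrix; a minimal sketch, with hypothetical counts in the usage note:

    ```python
    def classification_metrics(tp, fp, tn, fn):
        """Standard binary-classification metrics from confusion-matrix
        counts: true/false positives (tp/fp), true/false negatives (tn/fn)."""
        total = tp + fp + tn + fn
        return {
            "accuracy": (tp + tn) / total,
            "sensitivity": tp / (tp + fn),  # recall on fracture-present reports
            "specificity": tn / (tn + fp),  # recall on fracture-absent reports
            "ppv": tp / (tp + fp),          # positive predictive value
        }
    ```

    With hypothetical counts tp=90, fp=5, tn=95, fn=10, this yields accuracy 0.925, sensitivity 0.90, specificity 0.95, and PPV of about 0.947.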

  13. Persistency of accuracy of genomic breeding values for different simulated pig breeding programs in developing countries.

    PubMed

    Akanno, E C; Schenkel, F S; Sargolzaei, M; Friendship, R M; Robinson, J A B

    2014-10-01

    Genetic improvement of pigs in tropical developing countries has focused on imported exotic populations, which have been subjected to intensive selection with attendant high population-wide linkage disequilibrium (LD). Presently, indigenous pig populations with limited selection and low LD are being considered for improvement. Given that the infrastructure for genetic improvement using conventional BLUP selection methods is lacking, a genome-wide selection (GS) program was proposed for developing countries. A simulation study was conducted to evaluate the option of using a 60 K SNP panel and the observed amount of LD in the exotic and indigenous pig populations. Several scenarios were evaluated, including different sizes and structures of training and validation populations, different selection methods, and the long-term accuracy of GS in different population/breeding structures and traits. The training set included a previously selected exotic population, an unselected indigenous population and their crossbreds. Traits studied included number born alive (NBA), average daily gain (ADG) and back fat thickness (BFT). The ridge regression method was used to train the prediction model. The results showed that accuracies of genomic breeding values (GBVs) in the range of 0.30 (NBA) to 0.86 (BFT) in the validation population are expected if high-density marker panels are utilized. The GS method improved the accuracy of breeding values more than the pedigree-based approach for traits with low heritability and in young animals with no performance data. Crossbred training populations performed better than purebreds when validation was in populations with a similar or a different structure to the training set. Genome-wide selection holds promise for genetic improvement of pigs in the tropics. © 2014 Blackwell Verlag GmbH.
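    Ridge regression, the training method named in the abstract, has a closed-form solution that can be sketched briefly. The shrinkage parameter lam and the genotype coding below are illustrative assumptions, not values from the paper:

    ```python
    import numpy as np

    def ridge_gbv(X_train, y_train, X_valid, lam=1.0):
        """Sketch of ridge-regression training of SNP marker effects and
        prediction of genomic breeding values (GBVs). Rows of X are animals,
        columns are SNP genotypes (e.g. coded 0/1/2); lam is the shrinkage
        parameter, a tuning choice."""
        n, p = X_train.shape
        # Closed-form ridge solution: beta = (X'X + lam*I)^-1 X'y
        beta = np.linalg.solve(X_train.T @ X_train + lam * np.eye(p),
                               X_train.T @ y_train)
        return X_valid @ beta
    ```

    The accuracy of GBVs reported in such studies corresponds to the correlation between predicted and realized breeding values in the validation population, e.g. `np.corrcoef(gbv, y_valid)[0, 1]`.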

  14. Multilaboratory Validation of First Action Method 2016.04 for Determination of Four Arsenic Species in Fruit Juice by High-Performance Liquid Chromatography-Inductively Coupled Plasma-Mass Spectrometry.

    PubMed

    Kubachka, Kevin; Heitkemper, Douglas T; Conklin, Sean

    2017-07-01

    Before being designated AOAC First Action Official MethodSM 2016.04, the U.S. Food and Drug Administration's method, EAM 4.10 High Performance Liquid Chromatography-Inductively Coupled Plasma-Mass Spectrometric Determination of Four Arsenic Species in Fruit Juice, underwent both a single-laboratory validation and a multilaboratory validation (MLV) study. Three federal and five state regulatory laboratories participated in the MLV study, which is the primary focus of this manuscript. The method was validated for inorganic arsenic (iAs), measured as the sum of the two iAs species arsenite [As(III)] and arsenate [As(V)], dimethylarsinic acid (DMA), and monomethylarsonic acid (MMA) by analyses of 13 juice samples, including three apple juice, three apple juice concentrate, four grape juice, and three pear juice samples. In addition, two water Standard Reference Materials (SRMs) were analyzed. The method LODs and LOQs obtained among the eight laboratories were approximately 0.3 and 2 ng/g, respectively, for each of the analytes and were adequate for the intended purpose of the method. Each laboratory analyzed method blanks, fortified method blanks, reference materials, triplicate portions of each juice sample, and duplicate fortified juice samples (one for each matrix type) at three fortification levels. In general, the repeatability and reproducibility of the method were ≤15% RSD for each species present at a concentration >LOQ. The average recovery of fortified analytes for all laboratories ranged from 98 to 104% for iAs, DMA, and MMA for all four juice sample matrixes. The average iAs results for SRMs 1640a and 1643e agreed within the range of 96-98% of certified values for total arsenic.
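    The recovery and %RSD figures quoted above are standard calculations; a minimal sketch:

    ```python
    import statistics

    def recovery_percent(measured, fortified_level):
        """Percent recovery of a fortified (spiked) analyte: measured
        concentration relative to the known spike level."""
        return 100.0 * measured / fortified_level

    def rsd_percent(replicates):
        """Relative standard deviation (%RSD) across replicate analyses,
        the repeatability/reproducibility measure cited in the study."""
        return 100.0 * statistics.stdev(replicates) / statistics.mean(replicates)
    ```

    For example, a measured 9.8 ng/g against a 10 ng/g spike is a 98% recovery, and replicates of 9, 10, and 11 ng/g give a 10% RSD.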

  15. Nonimaging detectors in drug development and approval.

    PubMed

    Wagner, H N

    2001-07-01

    Regulatory applications for imaging biomarkers will expand in proportion to the validation of specific parameters as they apply to individual questions in the management of disease. This validation is likely to be applicable only to a particular class of drug or a single mechanism of action. Awareness among the world's regulatory authorities of the potential for these emerging technologies is high, but so is the cost to the sponsor (including the logistics of including images in a dossier), and therefore the pharmaceutical industry must evaluate carefully the potential benefit of each technology for its drug development programs, just as the authorities must consider carefully the extent to which the method is valid for the use to which the applicant has put it. For well-characterized tracer systems, it may be possible to design inexpensive cameras that make rapid assessments.

  16. Modified Confidence Intervals for the Mean of an Autoregressive Process.

    DTIC Science & Technology

    1985-08-01

    ...There are several standard methods of setting confidence intervals in simulations, including the regenerative method, batch means, and time series methods. We will focus on improved confidence intervals for the mean of an autoregressive process, and as such our...

  17. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing

    PubMed Central

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-01-01

    Aims A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R2), using R2 as the primary metric of assay agreement. However, the use of R2 alone does not adequately quantify constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. Methods We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing assays (NGS). NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Results Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. Conclusions The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. PMID:28747393
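    A minimal sketch of the Bland-Altman computation discussed in the abstract: the mean bias and 95% limits of agreement over paired assay results (the 1.96 multiplier assumes approximately normal differences):

    ```python
    import statistics

    def bland_altman(x, y):
        """Bland-Altman agreement statistics for paired measurements from
        two assays: mean bias and 95% limits of agreement, i.e. bias
        +/- 1.96 * SD of the paired differences. A constant error shows up
        as a nonzero bias; a proportional error appears as a trend in the
        differences when plotted against the pairwise means."""
        diffs = [a - b for a, b in zip(x, y)]
        bias = statistics.mean(diffs)
        sd = statistics.stdev(diffs)
        return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
    ```

    Unlike R2 alone, the bias and limits directly quantify how far, and in which direction, the assay in validation deviates from the reference method.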

  18. Validation of an optimized method for the determination of iodine in human breast milk by inductively coupled plasma mass spectrometry (ICPMS) after tetramethylammonium hydroxide extraction.

    PubMed

    Huynh, Dao; Zhou, Shao Jia; Gibson, Robert; Palmer, Lyndon; Muhlhausler, Beverly

    2015-01-01

    In this study a novel method to determine iodine concentrations in human breast milk was developed and validated. The iodine was analyzed by inductively coupled plasma mass spectrometry (ICPMS) following tetramethylammonium hydroxide (TMAH) extraction at 90°C in disposable polypropylene tubes. While similar approaches have been used previously, this method adopted a shorter extraction time (1h vs. 3h) and used antimony (Sb) as the internal standard, which exhibited greater stability in breast milk and milk powder matrices compared to tellurium (Te). Method validation included: defining iodine linearity up to 200μgL(-1); confirming recovery of iodine from NIST 1549 milk powder. A recovery of 94-98% was also achieved for the NIST 1549 milk powder and human breast milk samples spiked with sodium iodide and thyroxine (T4) solutions. The method quantitation limit (MQL) for human breast milk was 1.6μgL(-1). The intra-assay and inter-assay coefficient of variation for the breast milk samples and NIST powder were <1% and <3.5%, respectively. NIST 1549 milk powder, human breast milk samples and calibration standards spiked with the internal standard were all stable for at least 2.5 months after extraction. The results of the validation process confirmed that this newly developed method provides greater accuracy and precision in the assessment of iodine concentrations in human breast milk than previous methods and therefore offers a more reliable approach for assessing iodine concentrations in human breast milk. Copyright © 2014 Elsevier GmbH. All rights reserved.

  19. Quantification of free and total desmosine and isodesmosine in human urine by liquid chromatography tandem mass spectrometry: a comparison of the surrogate-analyte and the surrogate-matrix approach for quantitation.

    PubMed

    Ongay, Sara; Hendriks, Gert; Hermans, Jos; van den Berge, Maarten; ten Hacken, Nick H T; van de Merbel, Nico C; Bischoff, Rainer

    2014-01-24

    In spite of the data suggesting the potential of urinary desmosine (DES) and isodesmosine (IDS) as biomarkers for elevated lung elastic fiber turnover, further validation in large-scale studies of COPD populations, as well as the analysis of longitudinal samples is required. Validated analytical methods that allow the accurate and precise quantification of DES and IDS in human urine are mandatory in order to properly evaluate the outcome of such clinical studies. In this work, we present the development and full validation of two methods that allow DES and IDS measurement in human urine, one for the free and one for the total (free+peptide-bound) forms. To this end we compared the two principle approaches that are used for the absolute quantification of endogenous compounds in biological samples, analysis against calibrators containing authentic analyte in surrogate matrix or containing surrogate analyte in authentic matrix. The validated methods were employed for the analysis of a small set of samples including healthy never-smokers, healthy current-smokers and COPD patients. This is the first time that the analysis of urinary free DES, free IDS, total DES, and total IDS has been fully validated and that the surrogate analyte approach has been evaluated for their quantification in biological samples. Results indicate that the presented methods have the necessary quality and level of validation to assess the potential of urinary DES and IDS levels as biomarkers for the progression of COPD and the effect of therapeutic interventions. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. [A Simultaneous Determination Method with Acetonitrile-n-Hexane Partitioning and Solid-Phase Extraction for Pesticide Residues in Livestock and Marine Products by GC-MS].

    PubMed

    Yoshizaki, Mayuko; Kobayashi, Yukari; Shimizu, Masanori; Maruyama, Kouichi

    2015-01-01

    A simultaneous determination method was examined for 312 pesticides (including isomers) in muscle of livestock and marine products by GC-MS. The pesticide residues extracted from samples with acetone and n-hexane were purified by acetonitrile-n-hexane partitioning, and C18 and SAX/PSA solid-phase extraction without using GPC. Matrix components such as cholesterol were effectively removed. In recovery tests performed by this method using pork, beef, chicken and shrimp, 237-257 pesticides showed recoveries within the range of 70-120% in each sample. Validity was confirmed for 214 of the target pesticides by means of a validation test using pork. In comparison with the Japanese official method using GPC, the treatment time of samples and the quantity of solvent were reduced substantially.

  1. Reliability and validity of two multidimensional self-reported physical activity questionnaires in people with chronic low back pain.

    PubMed

    Carvalho, Flávia A; Morelhão, Priscila K; Franco, Marcia R; Maher, Chris G; Smeets, Rob J E M; Oliveira, Crystian B; Freitas Júnior, Ismael F; Pinto, Rafael Z

    2017-02-01

    Although there is some evidence for the reliability and validity of self-report physical activity (PA) questionnaires in the general adult population, it is unclear whether we can assume similar measurement properties in people with chronic low back pain (LBP). To determine the test-retest reliability of the International Physical Activity Questionnaire (IPAQ) long version and the Baecke Physical Activity Questionnaire (BPAQ) and their criterion-related validity against data derived from accelerometers in patients with chronic LBP. Cross-sectional study. Patients with non-specific chronic LBP were recruited. Each participant attended the clinic twice (one week apart) and completed the self-report PA questionnaires. Accelerometer measures over >7 days included time spent in moderate-and-vigorous physical activity, steps/day, counts/minute, and vector magnitude counts/minute. Intraclass Correlation Coefficients (ICC) and the Bland and Altman method were used to determine reliability, and Spearman's rho correlation was used for criterion-related validity. A total of 73 patients were included in our analyses. The reliability analyses revealed that the BPAQ and its subscales have moderate to excellent reliability (ICC(2,1): 0.61 to 0.81), whereas the IPAQ and most IPAQ domains (except walking) showed poor reliability (ICC(2,1): 0.20 to 0.40). The Bland and Altman method revealed larger discrepancies for the IPAQ. For the validity analysis, questionnaire and accelerometer measures showed at best fair correlation (rho < 0.37). Although the BPAQ showed better reliability than the IPAQ long version, neither questionnaire demonstrated acceptable validity against accelerometer data. These findings suggest that questionnaire and accelerometer PA measures should not be used interchangeably in this population. Copyright © 2016 Elsevier Ltd. All rights reserved.
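    The ICC(2,1) statistic reported above (two-way random effects, absolute agreement, single measurement) can be computed from two-way ANOVA mean squares. A sketch assuming a complete subjects-by-sessions table, e.g. test and retest scores per participant:

    ```python
    def icc_2_1(data):
        """ICC(2,1): two-way random effects, absolute agreement, single
        measurement. `data` is a list of n subjects, each with k repeated
        measurements (e.g. questionnaire scores at test and retest)."""
        n, k = len(data), len(data[0])
        grand = sum(sum(row) for row in data) / (n * k)
        row_means = [sum(row) / k for row in data]
        col_means = [sum(data[i][j] for i in range(n)) / n for j in range(k)]
        ss_total = sum((x - grand) ** 2 for row in data for x in row)
        ss_rows = k * sum((m - grand) ** 2 for m in row_means)   # subjects
        ss_cols = n * sum((m - grand) ** 2 for m in col_means)   # sessions
        ms_rows = ss_rows / (n - 1)
        ms_cols = ss_cols / (k - 1)
        ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
        return (ms_rows - ms_err) / (
            ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)
    ```

    Perfect test-retest agreement yields an ICC of 1, while session-to-session disagreement pulls the value down toward (and below) zero.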

  2. 2-D Circulation Control Airfoil Benchmark Experiments Intended for CFD Code Validation

    NASA Technical Reports Server (NTRS)

    Englar, Robert J.; Jones, Gregory S.; Allan, Brian G.; Lin, John C.

    2009-01-01

    A current NASA Research Announcement (NRA) project being conducted by Georgia Tech Research Institute (GTRI) personnel and NASA collaborators includes the development of Circulation Control (CC) blown airfoils to improve subsonic aircraft high-lift and cruise performance. The emphasis of this program is the development of CC active flow control concepts for high-lift augmentation, drag control, and cruise efficiency. Collaboration in this project includes work by NASA research engineers on CFD validation and flow physics experimental research, part of NASA's systematic approach to developing design and optimization tools for CC applications to fixed-wing aircraft. The design space for CESTOL-type aircraft is focusing on geometries that depend on advanced flow control technologies, including Circulation Control aerodynamics. The ability to consistently predict advanced aircraft performance requires improvements in design tools to include these advanced concepts. Validation of these tools will be based on experimental methods applied to complex flows that go beyond conventional aircraft modeling techniques. This paper focuses on recent and ongoing benchmark high-lift experiments and CFD efforts intended to provide 2-D CFD validation data sets related to NASA's Cruise Efficient Short Take Off and Landing (CESTOL) study. Both the experimental data and related CFD predictions are discussed.

  3. Validation of alternative methods for toxicity testing.

    PubMed Central

    Bruner, L H; Carr, G J; Curren, R D; Chamberlain, M

    1998-01-01

    Before nonanimal toxicity tests may be officially accepted by regulatory agencies, it is generally agreed that the validity of the new methods must be demonstrated in an independent, scientifically sound validation program. Validation has been defined as the demonstration of the reliability and relevance of a test method for a particular purpose. This paper provides a brief review of the development of the theoretical aspects of the validation process and updates current thinking about objectively testing the performance of an alternative method in a validation study. Validation of alternative methods for eye irritation testing is a specific example illustrating important concepts. Although discussion focuses on the validation of alternative methods intended to replace current in vivo toxicity tests, the procedures can be used to assess the performance of alternative methods intended for other uses. Images Figure 1 PMID:9599695

  4. Bad data packet capture device

    DOEpatents

    Chen, Dong; Gara, Alan; Heidelberger, Philip; Vranas, Pavlos

    2010-04-20

    An apparatus and method for capturing data packets for analysis on a network computing system includes a sending node and a receiving node connected by a bi-directional communication link. The sending node sends a data transmission to the receiving node on the bi-directional communication link, and the receiving node receives the data transmission and verifies the data transmission to determine valid data and invalid data and verify retransmissions of invalid data as corresponding valid data. A memory device communicates with the receiving node for storing the invalid data and the corresponding valid data. A computing node communicates with the memory device and receives and performs an analysis of the invalid data and the corresponding valid data received from the memory device.
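    The verify-store-pair flow described in the abstract can be sketched as follows; the packet layout (sequence number, payload, CRC) is an illustrative assumption, not the patent's format:

    ```python
    import zlib

    class BadPacketCapture:
        """Sketch of the capture scheme: the receiving node verifies each
        packet, keeps invalid packets, and pairs each one with the valid
        retransmission that later arrives for the same sequence number,
        so an analysis node can compare the two copies."""

        def __init__(self):
            self.pending = {}   # seq -> invalid packet awaiting retransmission
            self.captured = []  # (invalid packet, corresponding valid packet)

        def receive(self, seq, payload, crc):
            # Verify the transmission against its checksum.
            valid = zlib.crc32(payload) == crc
            if not valid:
                self.pending[seq] = (seq, payload, crc)
            elif seq in self.pending:
                # Pair the retransmitted valid data with the bad copy.
                self.captured.append((self.pending.pop(seq),
                                      (seq, payload, crc)))
            return valid
    ```

    Comparing each invalid packet with its valid retransmission is what lets the analysis node localize which bits the link corrupted.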

  5. Validation of DESS as a DNA Preservation Method for the Detection of Strongyloides spp. in Canine Feces.

    PubMed

    Beknazarova, Meruyert; Millsteed, Shelby; Robertson, Gemma; Whiley, Harriet; Ross, Kirstin

    2017-06-09

    Strongyloides stercoralis is a gastrointestinal parasitic nematode with a life cycle that includes free-living and parasitic forms. For both clinical (diagnostic) and environmental evaluation, it is important that we can detect Strongyloides spp. in both human and non-human fecal samples. Real-time PCR is the most feasible method for detecting the parasite in both clinical and environmental samples that have been preserved. However, one of the biggest challenges with PCR detection is DNA degradation during the postage time from rural and remote areas to the laboratory. This study included a laboratory assessment and field validation of DESS (dimethyl sulfoxide, disodium EDTA, and saturated NaCl) preservation of Strongyloides spp. DNA in fecal samples. The laboratory study investigated the capacity of 1:1 and 1:3 sample-to-DESS ratios to preserve Strongyloides ratti DNA in spiked canine feces. It was found that both ratios of DESS significantly prevented DNA degradation compared to the untreated sample. The method was then validated by applying it to field-collected canine feces and detecting Strongyloides DNA using PCR. A total of 37 canine feces samples were collected and preserved at the 1:3 (sample:DESS) ratio; of these, 17 were positive for Strongyloides spp. The study shows that both 1:1 and 1:3 sample-to-DESS ratios were able to preserve Strongyloides spp. DNA in canine feces samples stored at room temperature for up to 56 days. This DESS preservation method presents the most applicable and feasible method for Strongyloides DNA preservation in field-collected feces.

  6. The Quality and Validation of Structures from Structural Genomics

    PubMed Central

    Domagalski, Marcin J.; Zheng, Heping; Zimmerman, Matthew D.; Dauter, Zbigniew; Wlodawer, Alexander; Minor, Wladek

    2014-01-01

    Quality control of three-dimensional structures of macromolecules is a critical step to ensure the integrity of structural biology data, especially those produced by structural genomics centers. Whereas the Protein Data Bank (PDB) has proven to be a remarkable success overall, the inconsistent quality of structures reveals a lack of universal standards for structure/deposit validation. Here, we review the state-of-the-art methods used in macromolecular structure validation, focusing on validation of structures determined by X-ray crystallography. We describe some general protocols used in the rebuilding and re-refinement of problematic structural models. We also briefly discuss some frontier areas of structure validation, including refinement of protein–ligand complexes, automation of structure redetermination, and the use of NMR structures and computational models to solve X-ray crystal structures by molecular replacement. PMID:24203341

  7. Validate or falsify: Lessons learned from a microscopy method claimed to be useful for detecting Borrelia and Babesia organisms in human blood.

    PubMed

    Aase, Audun; Hajdusek, Ondrej; Øines, Øivind; Quarsten, Hanne; Wilhelmsson, Peter; Herstad, Tove K; Kjelland, Vivian; Sima, Radek; Jalovecka, Marie; Lindgren, Per-Eric; Aaberge, Ingeborg S

    2016-01-01

    A modified microscopy protocol (the LM-method) was used to demonstrate what was interpreted as Borrelia spirochetes, and later also Babesia sp., in peripheral blood from patients. The method gained much publicity but was not validated prior to publication; validating it with appropriate scientific methodology, including a control group, became the purpose of this study. Blood from 21 patients previously interpreted as positive for Borrelia and/or Babesia infection by the LM-method and from 41 healthy controls without a known history of tick bite was collected, blinded and analysed for these pathogens by microscopy in two laboratories (by the LM-method and the conventional method, respectively), by PCR methods in five laboratories and by serology in one laboratory. Microscopy by the LM-method identified structures claimed to be Borrelia and/or Babesia in 66% of the blood samples of the patient group and in 85% of the healthy control group. Conventional microscopy, performed for Babesia only, did not identify Babesia in any sample. PCR analysis detected Borrelia DNA in one sample of the patient group and in eight samples of the control group, whereas Babesia DNA was not detected in any of the blood samples using molecular methods. The structures interpreted as Borrelia and Babesia by the LM-method could not be verified by PCR. The method was thus falsified. This study underlines the importance of proper test validation before new or modified assays are introduced.

  8. Support Vector Data Description Model to Map Specific Land Cover with Optimal Parameters Determined from a Window-Based Validation Set.

    PubMed

    Zhang, Jinshui; Yuan, Zhoumiqi; Shuai, Guanyuan; Pan, Yaozhong; Zhu, Xiufang

    2017-04-26

    This paper developed an approach, the window-based validation set for support vector data description (WVS-SVDD), to determine optimal parameters for the support vector data description (SVDD) model to map specific land cover by integrating training and window-based validation sets. Compared to the conventional approach, where the validation set includes target and outlier pixels selected visually and randomly, the validation set derived from WVS-SVDD constructed a tightened hypersphere because of the compact constraint imposed by outlier pixels located adjacent to the target class in the spectral feature space. The overall accuracies achieved for wheat and bare land were as high as 89.25% and 83.65%, respectively. However, the target class was underestimated because the validation set covered only a small fraction of the heterogeneous spectra of the target class. Different window sizes were then tested to acquire more wheat pixels for the validation set. The results showed that classification accuracy increased with increasing window size, and the overall accuracies were higher than 88% at all window sizes. Moreover, WVS-SVDD showed much less sensitivity to untrained classes than the multi-class support vector machine (SVM) method. Therefore, the developed method showed its merits in using the optimal parameters, the tradeoff coefficient (C) and kernel width (s), in mapping homogeneous specific land cover.
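    SVDD itself is trained by solving a quadratic program; as a much-simplified stand-in, the one-class hypersphere idea can be sketched with a centroid center and a max-distance radius. The real method's boundary instead depends on the tradeoff coefficient C and kernel width s:

    ```python
    import math

    def fit_hypersphere(targets):
        """Crude stand-in for SVDD training: center = centroid of the
        target-class pixels, radius = largest training distance. The actual
        SVDD solves a quadratic program, with C and the kernel width
        controlling how tight the boundary is."""
        dim = len(targets[0])
        center = [sum(p[d] for p in targets) / len(targets) for d in range(dim)]
        radius = max(math.dist(p, center) for p in targets)
        return center, radius

    def is_target(pixel, center, radius):
        """A pixel is mapped to the target land-cover class if it falls
        inside the hypersphere."""
        return math.dist(pixel, center) <= radius
    ```

    The paper's window-based validation set, in effect, supplies nearby outlier pixels that justify shrinking such a boundary around the target class.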

  9. Multiclass method for the determination of 62 antibiotics in milk.

    PubMed

    Moretti, Simone; Cruciani, Gabriele; Romanelli, Sara; Rossi, Rosanna; Saluti, Giorgio; Galarini, Roberta

    2016-09-01

    A multiclass method for screening and confirmatory analysis of antimicrobial residues in milk has been developed and validated. Sixty-two antibiotics belonging to ten different drug families (amphenicols, cephalosporins, lincosamides, macrolides, penicillins, pleuromutilins, quinolones, rifamycins, sulfonamides and tetracyclines) were included. After the addition of an aqueous solution of EDTA, the milk samples were extracted twice with acetonitrile, evaporated and dissolved in ammonium acetate. After centrifugation, 10 µL were analysed using an LC-Q-Orbitrap operating in positive electrospray ionization mode. The method was validated in bovine milk in the range 2-150 µg/kg for all antibiotics; for four compounds with maximum residue limits higher than 100 µg/kg, the validation interval was extended to 333 µg/kg. The estimated performance characteristics were satisfactory, complying with the requirements of Commission Decision 2002/657/EC. Good accuracy was also obtained by taking advantage of the versatility of the hybrid mass analyser. Identification criteria were met by verifying the mass accuracy and ion ratio of two ions, including the pseudomolecular ion where possible. Finally, the developed procedure was applied to 13 real cases of suspect milk samples (microbiological assay), confirming the presence of one or more antibiotics, although the maximum residue limits were frequently not exceeded. The availability of rapid multiclass confirmatory methods can avoid the waste of suspect, but compliant, raw milk. Copyright © 2016 John Wiley & Sons, Ltd.

  10. Landing Gear Noise Prediction and Analysis for Tube-and-Wing and Hybrid-Wing-Body Aircraft

    NASA Technical Reports Server (NTRS)

    Guo, Yueping; Burley, Casey L.; Thomas, Russell H.

    2016-01-01

    Improvements and extensions to landing gear noise prediction methods are developed. New features include installation effects such as reflection from the aircraft, gear truck angle effect, local flow calculation at the landing gear locations, gear size effect, and directivity for various gear designs. These new features have not only significantly improved the accuracy and robustness of the prediction tools, but also have enabled applications to unconventional aircraft designs and installations. Systematic validations of the improved prediction capability are then presented, including parametric validations in functional trends as well as validations in absolute amplitudes, covering a wide variety of landing gear designs, sizes, and testing conditions. The new method is then applied to selected concept aircraft configurations in the portfolio of the NASA Environmentally Responsible Aviation Project envisioned for the timeframe of 2025. The landing gear noise levels are on the order of 2 to 4 dB higher than previously reported predictions due to increased fidelity in accounting for installation effects and gear design details. With the new method, it is now possible to reveal and assess the unique noise characteristics of landing gear systems for each type of aircraft. To address the inevitable uncertainties in predictions of landing gear noise models for future aircraft, an uncertainty analysis is given, using the method of Monte Carlo simulation. The standard deviation of the uncertainty in predicting the absolute level of landing gear noise is quantified and determined to be 1.4 EPNL dB.
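
    The Monte Carlo uncertainty analysis mentioned in this record can be illustrated with a toy model: draw the uncertain inputs from assumed distributions, push each draw through the noise model, and read off the spread of the predicted level. The model form and all parameter distributions below are invented for illustration; only the procedure mirrors the abstract:

```python
import numpy as np

rng = np.random.default_rng(42)

def gear_noise_db(flow_speed, gear_size, install_factor):
    # Toy stand-in for a landing gear noise model: level rises with a high
    # power of flow speed plus gear-size and installation terms.
    return (60 * np.log10(flow_speed / 100.0)
            + 20 * np.log10(gear_size)
            + install_factor)

n = 10_000
# Hypothetical input uncertainties (means and spreads are illustrative)
speed = rng.normal(140.0, 2.0, n)    # approach speed, m/s
size = rng.normal(1.2, 0.05, n)      # characteristic gear size, m
install = rng.normal(3.0, 1.0, n)    # installation effect, dB

levels = gear_noise_db(speed, size, install)
print(f"mean = {levels.mean():.1f} dB, std = {levels.std():.2f} dB")
```

    The standard deviation of the sampled levels is the prediction uncertainty; with these assumed spreads it lands on the order of 1 dB, comparable in spirit to the 1.4 EPNL dB the study reports.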

  11. A new hybrid double divisor ratio spectra method for the analysis of ternary mixtures

    NASA Astrophysics Data System (ADS)

    Youssef, Rasha M.; Maher, Hadir M.

    2008-10-01

    A new spectrophotometric method was developed for the simultaneous determination of ternary mixtures, without prior separation steps. This method is based on convolution of the double divisor ratio spectra, obtained by dividing the absorption spectrum of the ternary mixture by a standard spectrum of two of the three compounds in the mixture, using combined trigonometric Fourier functions. The magnitude of the Fourier function coefficients, at either maximum or minimum points, is related to the concentration of each drug in the mixture. The mathematical basis of the procedure is illustrated. The method was applied for the assay of a model mixture consisting of isoniazid (ISN), rifampicin (RIF) and pyrazinamide (PYZ) in synthetic mixtures, commercial tablets and human urine samples. The developed method was compared with the double divisor ratio spectra derivative method (DDRD) and the derivative ratio spectra-zero-crossing method (DRSZ). Linearity, accuracy, precision, limits of detection, limits of quantitation, and other aspects of analytical validation are included in the text.

  12. Validation on milk and sprouts of EN ISO 16654:2001 - Microbiology of food and animal feeding stuffs - Horizontal method for the detection of Escherichia coli O157.

    PubMed

    Tozzoli, Rosangela; Maugliani, Antonella; Michelacci, Valeria; Minelli, Fabio; Caprioli, Alfredo; Morabito, Stefano

    2018-05-08

    In 2006, the European Committee for Standardisation (CEN)/Technical Committee 275 - Food analysis - Horizontal methods/Working Group 6 - Microbiology of the food chain (TC275/WG6) launched the project of validating the method ISO 16654:2001 for the detection of Escherichia coli O157 in foodstuffs by evaluating its performance, in terms of sensitivity and specificity, through collaborative studies. Previously, a validation study had been conducted to assess the performance of Method No 164 developed by the Nordic Committee for Food Analysis (NMKL), which also aims at detecting E. coli O157 in food and is based on a procedure equivalent to that of the ISO 16654:2001 standard. Therefore, CEN established that the validation data obtained for the NMKL Method 164 could be exploited for the ISO 16654:2001 validation project, integrated with new data obtained through two additional interlaboratory studies on milk and sprouts, run in the framework of CEN mandate No. M381. The ISO 16654:2001 validation project was led by the European Union Reference Laboratory for Escherichia coli including VTEC (EURL-VTEC), which organized the collaborative validation study on milk in 2012, with 15 participating laboratories, and that on sprouts in 2014, with 14 participating laboratories. In both studies, a total of 24 samples were tested by each laboratory. Test materials were spiked with different concentrations of E. coli O157, and the 24 samples corresponded to eight replicates of three levels of contamination: zero, low and high spiking level. The results submitted by the participating laboratories were analyzed to evaluate the sensitivity and specificity of the ISO 16654:2001 method when applied to milk and sprouts. The performance characteristics calculated on the data of the collaborative validation studies run under CEN mandate No. M381 returned a sensitivity and specificity of 100% and 94.4%, respectively, for the milk study.
For the sprouts matrix, sensitivity was 75.9% for samples with the low level of contamination and 96.4% for samples spiked with the high level of E. coli O157, while specificity was calculated as 99.1%. Copyright © 2018 Elsevier B.V. All rights reserved.
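
    The sensitivity and specificity figures reported in such collaborative studies reduce to simple proportions over the spiked and blank samples. A minimal sketch with hypothetical counts (not the study's raw data):

```python
def sensitivity(true_pos, false_neg):
    # Fraction of genuinely contaminated samples the method detects
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    # Fraction of blank samples the method correctly reports as negative
    return true_neg / (true_neg + false_pos)

# Hypothetical counts pooled across participating laboratories
spiked_detected, spiked_missed = 120, 0   # spiked samples, all detected
blank_negative, blank_positive = 102, 6   # zero-spike samples

print(f"sensitivity = {sensitivity(spiked_detected, spiked_missed):.1%}")
print(f"specificity = {specificity(blank_negative, blank_positive):.1%}")
```

    With these illustrative counts the proportions come out at 100% and about 94.4%, the same arithmetic as the milk-study figures in the abstract.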

  13. LC-MS/MS determination of 2-(4-((2-(2S,5R)-2-Cyano-5-ethynyl-1-pyrrolidinyl)-2-oxoethylamino)-4-methyl-1-piperidinyl)-4-pyridinecarboxylic acid (ABT-279) in dog plasma with high-throughput protein precipitation sample preparation.

    PubMed

    Kim, Joseph; Flick, Jeanette; Reimer, Michael T; Rodila, Ramona; Wang, Perry G; Zhang, Jun; Ji, Qin C; El-Shourbagy, Tawakol A

    2007-11-01

    As an effective DPP-IV inhibitor, 2-(4-((2-(2S,5R)-2-Cyano-5-ethynyl-1-pyrrolidinyl)-2-oxoethylamino)-4-methyl-1-piperidinyl)-4-pyridinecarboxylic acid (ABT-279) is an investigational drug candidate under development at Abbott Laboratories for the potential treatment of type 2 diabetes. To support the development of ABT-279, multiple analytical methods for accurate, precise and selective determination of ABT-279 concentrations in different matrices were developed and validated in accordance with the US Food and Drug Administration Guidance on Bioanalytical Method Validation. The analytical method for ABT-279 in dog plasma was validated in parallel with other validations for ABT-279 determination in different matrices. To shorten the sample preparation time and increase method precision, an automated multi-channel liquid handler was used to perform high-throughput protein precipitation and all other liquid transfers. The separation was performed on a Waters YMC ODS-AQ column (2.0 x 150 mm, 5 µm, 120 Å) with a mobile phase of 20 mM ammonium acetate in 20% acetonitrile at a flow rate of 0.3 mL/min. Data collection started at 2.2 min and continued for 2.0 min. The validated linear dynamic range in dog plasma was 3.05-2033.64 ng/mL using a 50 µL sample volume. The r² coefficient of determination from three consecutive runs was between 0.998625 and 0.999085. The mean bias was between -4.1 and 4.3% for all calibration standards, including the lower limit of quantitation, and between -8.0 and 0.4% for the quality control samples. The precision, expressed as a coefficient of variation (CV), was ≤4.1% for all levels of quality control samples. The validation results demonstrated that the high-throughput method was accurate, precise and selective for the determination of ABT-279 in dog plasma. The validated method was also employed to support two toxicology studies.
The passing rate was 100% for all 49 runs from one validation study and two toxicology studies. Copyright (c) 2007 John Wiley & Sons, Ltd.

  14. Bibliometrics for Social Validation.

    PubMed

    Hicks, Daniel J

    2016-01-01

    This paper introduces a bibliometric, citation network-based method for assessing the social validation of novel research, and applies this method to the development of high-throughput toxicology research at the US Environmental Protection Agency. Social validation refers to the acceptance of novel research methods by a relevant scientific community; it is formally independent of the technical validation of methods, and is frequently studied in history, philosophy, and social studies of science using qualitative methods. The quantitative methods introduced here find that high-throughput toxicology methods are spread throughout a large and well-connected research community, which suggests high social validation. Further assessment of social validation involving mixed qualitative and quantitative methods are discussed in the conclusion.

  15. Bibliometrics for Social Validation

    PubMed Central

    2016-01-01

    This paper introduces a bibliometric, citation network-based method for assessing the social validation of novel research, and applies this method to the development of high-throughput toxicology research at the US Environmental Protection Agency. Social validation refers to the acceptance of novel research methods by a relevant scientific community; it is formally independent of the technical validation of methods, and is frequently studied in history, philosophy, and social studies of science using qualitative methods. The quantitative methods introduced here find that high-throughput toxicology methods are spread throughout a large and well-connected research community, which suggests high social validation. Further assessment of social validation involving mixed qualitative and quantitative methods are discussed in the conclusion. PMID:28005974

  16. A systematic review of publications assessing reliability and validity of the Behavioral Risk Factor Surveillance System (BRFSS), 2004–2011

    PubMed Central

    2013-01-01

    Background In recent years response rates on telephone surveys have been declining. Rates for the Behavioral Risk Factor Surveillance System (BRFSS) have also declined, prompting the use of new methods of weighting and the inclusion of cell phone sampling frames. A number of scholars and researchers have conducted studies of the reliability and validity of the BRFSS estimates in the context of these changes. As the BRFSS makes changes in its methods of sampling and weighting, a review of reliability and validity studies of the BRFSS is needed. Methods In order to assess the reliability and validity of prevalence estimates taken from the BRFSS, scholarship published from 2004 to 2011 dealing with tests of reliability and validity of BRFSS measures was compiled and presented by topic of health risk behavior. Assessments of the quality of each publication were undertaken using a categorical rubric. Higher rankings were achieved by authors who conducted reliability tests using repeated test/retest measures, or who conducted tests using multiple samples. A similar rubric was used to rank validity assessments. Validity tests which compared the BRFSS to physical measures were ranked higher than those comparing the BRFSS to other self-reported data. Literature which undertook more sophisticated statistical comparisons was also ranked higher. Results Overall findings indicated that BRFSS prevalence rates were comparable to other national surveys which rely on self-reports, although specific differences are noted for some categories of response. BRFSS prevalence rates were less similar to surveys which utilize physical measures in addition to self-reported data. There is very little research on reliability and validity for some health topics, but a great deal of information supporting the validity of the BRFSS data for others.
Conclusions Limitations of the examination of the BRFSS were due to question differences among the surveys used as comparisons, as well as differences in mode of data collection. As the BRFSS moves to incorporating cell phone data and changing weighting methods, a review of reliability and validity research indicated that past BRFSS landline-only data were reliable and valid as measured against other surveys. New analyses and comparisons of BRFSS data which include the new methodologies and cell phone data will be needed to ascertain the impact of these changes on estimates in the future. PMID:23522349

  17. Validating Alternative Modes of Scoring for Coloured Progressive Matrices.

    ERIC Educational Resources Information Center

    Razel, Micha; Eylon, Bat-Sheva

    Conventional scoring of the Coloured Progressive Matrices (CPM) was compared with three methods of multiple weight scoring. The methods include: (1) theoretical weighting in which the weights were based on a theory of cognitive processing; (2) judged weighting in which the weights were given by a group of nine adult expert judges; and (3)…

  18. Senate Concurrent Resolution 83: Screening for Learning Disabilities. A Report to the 70th Legislature.

    ERIC Educational Resources Information Center

    Texas Education Agency, Austin.

    In response to Senate Concurrent Resolution 83, the Texas Education Agency studied methods for screening all students upon entry to school for significant developmental lags that could lead to learning disabilities. The resulting report includes: (1) identification of screening techniques; (2) methods currently in use and validated for treatment…

  19. Considerations regarding the validation of chromatographic mass spectrometric methods for the quantification of endogenous substances in forensics.

    PubMed

    Hess, Cornelius; Sydow, Konrad; Kueting, Theresa; Kraemer, Michael; Maas, Alexandra

    2018-02-01

    The requirement for correct evaluation of forensic toxicological results in daily routine work and scientific studies is reliable analytical data based on validated methods. Validation of a method gives the analyst tools to estimate the efficacy and reliability of the analytical method. Without validation, data might be contested in court and lead to unjustified legal consequences for a defendant. Therefore, new analytical methods to be used in forensic toxicology require careful method development and validation of the final method. Until now, there have been no publications on the validation of chromatographic mass spectrometric methods for the detection of endogenous substances, although endogenous analytes can be important in forensic toxicology (alcohol consumption markers, congener alcohols, gamma-hydroxybutyric acid, human insulin and C-peptide, creatinine, postmortem clinical parameters). For these analytes, conventional validation instructions cannot be followed completely. In this paper, important practical considerations in analytical method validation for endogenous substances are discussed, which may be used as guidance for scientists wishing to develop and validate analytical methods for analytes produced naturally in the human body. In particular, the validation parameters calibration model, analytical limits, accuracy (bias and precision), matrix effects, and recovery have to be approached differently. The highest attention should be paid to selectivity experiments. Copyright © 2017 Elsevier B.V. All rights reserved.

  20. Unsteady Analysis of Separated Aerodynamic Flows Using an Unstructured Multigrid Algorithm

    NASA Technical Reports Server (NTRS)

    Pelaez, Juan; Mavriplis, Dimitri J.; Kandil, Osama

    2001-01-01

    An implicit method for the computation of unsteady flows on unstructured grids is presented. The resulting nonlinear system of equations is solved at each time step using an agglomeration multigrid procedure. The method allows for arbitrarily large time steps and is efficient in terms of computational effort and storage. Validation of the code using a one-equation turbulence model is performed for the well-known case of flow over a cylinder. A Detached Eddy Simulation (DES) model is also implemented and its performance compared with the one-equation Spalart-Allmaras Reynolds-Averaged Navier-Stokes (RANS) turbulence model. Validation cases using DES and RANS include flow over a sphere and flow over a NACA 0012 wing in massive stall regimes. The project was driven by the ultimate goal of computing separated flows of aerodynamic interest, such as massive stall or flows over complex non-streamlined geometries.

  1. Validation and Assessment of Three Methods to Estimate 24-h Urinary Sodium Excretion from Spot Urine Samples in Chinese Adults

    PubMed Central

    Peng, Yaguang; Li, Wei; Wang, Yang; Chen, Hui; Bo, Jian; Wang, Xingyu; Liu, Lisheng

    2016-01-01

    24-h urinary sodium excretion is the gold standard for evaluating dietary sodium intake, but it is often not feasible in large epidemiological studies due to high participant burden and cost. Three methods (Kawasaki, INTERSALT, and Tanaka) have been proposed to estimate 24-h urinary sodium excretion from a spot urine sample, but these methods have not been validated in the general Chinese population. The aim of this study was to assess the validity of the three methods for estimating 24-h urinary sodium excretion from spot urine samples against measured 24-h urinary sodium excretion in a Chinese sample population. Data are from a substudy of the Prospective Urban Rural Epidemiology (PURE) study that enrolled 120 participants aged 35 to 70 years and collected their morning fasting urine and 24-h urine specimens. Bias calculations (estimated values minus measured values) and Bland-Altman plots were used to assess the validity of the three estimation methods. A total of 116 participants were included in the final analysis. Mean bias for the Kawasaki method was -740 mg/day (95% CI: -1219, -262 mg/day), the lowest among the three methods. Mean bias for the Tanaka method was -2305 mg/day (95% CI: -2735, -1875 mg/day). Mean bias for the INTERSALT method was -2797 mg/day (95% CI: -3245, -2349 mg/day), the highest of the three methods. Bland-Altman plots indicated that all three methods underestimated 24-h urinary sodium excretion. The Kawasaki, INTERSALT and Tanaka methods all underestimated true 24-h urinary sodium excretion in this sample of Chinese adults. Among the three methods, the Kawasaki method was the least biased, but it was still relatively inaccurate. A more accurate method is needed to estimate 24-h urinary sodium excretion from spot urine for the assessment of dietary sodium intake in China. PMID:26895296
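
    The bias calculation used in this record (estimated minus measured, with a 95% confidence interval on the mean difference) is straightforward to reproduce. This sketch uses simulated paired values, not the PURE substudy data; the -740 mg/day offset is injected only to mimic the reported direction of the Kawasaki bias:

```python
import numpy as np

def bias_with_ci(estimated, measured, z=1.96):
    """Mean bias (estimated - measured) with a normal-approximation 95% CI."""
    d = np.asarray(estimated, float) - np.asarray(measured, float)
    mean = d.mean()
    half = z * d.std(ddof=1) / np.sqrt(d.size)
    return mean, (mean - half, mean + half)

# Hypothetical paired sodium values (mg/day): spot-urine estimate vs. 24-h measurement
rng = np.random.default_rng(1)
measured = rng.normal(4000, 800, 116)
estimated = measured - 740 + rng.normal(0, 500, 116)  # simulated systematic underestimate

mean_bias, ci = bias_with_ci(estimated, measured)
print(round(mean_bias), [round(x) for x in ci])
```

    The same differences, plotted against the pairwise means with limits of agreement at mean ± 1.96 standard deviations, give the Bland-Altman view the study reports.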

  2. Development and validation of an HPLC–MS/MS method to determine clopidogrel in human plasma

    PubMed Central

    Liu, Gangyi; Dong, Chunxia; Shen, Weiwei; Lu, Xiaopei; Zhang, Mengqi; Gui, Yuzhou; Zhou, Qinyi; Yu, Chen

    2015-01-01

    A quantitative method for clopidogrel using online-SPE tandem LC–MS/MS was developed and fully validated according to the well-established FDA guidelines. The method achieves adequate sensitivity for pharmacokinetic studies, with a lower limit of quantification (LLOQ) as low as 10 pg/mL. Chromatographic separation was performed on a reversed-phase Kromasil Eternity-2.5-C18-UHPLC column. Positive electrospray ionization in multiple reaction monitoring (MRM) mode was employed for signal detection, and a deuterated analogue (clopidogrel-d4) was used as the internal standard (IS). Adjustments in sample preparation, including the introduction of an online-SPE system, proved to be the most effective way to solve the problem of analyte back-conversion in clinical samples. Pooled clinical samples (two levels) were prepared and successfully used as real-sample quality controls (QCs) in the validation of back-conversion testing under different conditions. The results showed that the real samples were stable at room temperature for 24 h. Linearity, precision, extraction recovery, matrix effect on spiked QC samples and stability tests on both spiked QCs and real-sample QCs stored under different conditions met the acceptance criteria. This online-SPE method was successfully applied to a bioequivalence study of 75 mg single-dose clopidogrel tablets in 48 healthy male subjects. PMID:26904399

  3. Validating Analytical Protocols to Determine Selected Pesticides and PCBs Using Routine Samples.

    PubMed

    Pindado Jiménez, Oscar; García Alonso, Susana; Pérez Pastor, Rosa María

    2017-01-01

    This study aims to provide recommendations concerning the validation of analytical protocols using routine samples. It is intended as a case study of how to validate analytical methods in different environmental matrices. To analyse the selected compounds (pesticides and polychlorinated biphenyls) in two different environmental matrices, the current work developed and validated two analytical procedures by GC-MS. A description is given of the validation of the two protocols through the analysis of more than 30 samples of water and sediments collected over nine months. The present work also assesses the uncertainty associated with both analytical protocols. Uncertainty for the water samples was estimated through a conventional approach, while for the sediment matrices the estimation of proportional/constant bias was also included because of their inhomogeneity. Results for the sediment matrix are reliable, showing 25-35% analytical variability associated with intermediate conditions. The analytical methodology for the water matrix determines the selected compounds with acceptable recoveries, and the combined uncertainty ranges between 20 and 30%. Analysis of routine samples is rarely used to assess the trueness of novel analytical methods, and until now this approach had not been applied to organochlorine compounds in environmental matrices.

  4. Validation Evidence for the Elementary School Version of the MUSIC® Model of Academic Motivation Inventory (Pruebas de validación para el Modelo MUSIC® de Inventario de Motivación Educativa para Escuela Primaria)

    ERIC Educational Resources Information Center

    Jones, Brett D.; Sigmon, Miranda L.

    2016-01-01

    Introduction: The purpose of our study was to assess whether the Elementary School version of the MUSIC® Model of Academic Motivation Inventory was valid for use with elementary students in classrooms with regular classroom teachers and student teachers enrolled in a university teacher preparation program. Method: The participants included 535…

  5. Validating the Inactivation Effectiveness of Chemicals on Ebola Virus.

    PubMed

    Haddock, Elaine; Feldmann, Friederike

    2017-01-01

    While viruses such as Ebola virus must be handled in high-containment laboratories, there remains the need to process virus-infected samples for downstream research testing. This processing often includes removal to lower containment areas and therefore requires assurance of complete viral inactivation within the sample before removal from high-containment. Here we describe methods for the removal of chemical reagents used in inactivation procedures, allowing for validation of the effectiveness of various inactivation protocols.

  6. Comparison of the prevalence of malnutrition diagnosis in head and neck, gastrointestinal and lung cancer patients by three classification methods

    PubMed Central

    Platek, Mary E.; Popp KPf, Johann V.; Possinger, Candi S.; DeNysschen, Carol A.; Horvath, Peter; Brown, Jean K.

    2011-01-01

    Background Malnutrition is prevalent among patients with certain cancer types. There is a lack of a universal standard of care for nutrition screening, and a lack of agreement on an operational definition of malnutrition and on the validity of malnutrition indicators. Objective In a secondary data analysis, we investigated the prevalence of malnutrition diagnosis by three classification methods using data from medical records of a National Cancer Institute (NCI)-designated comprehensive cancer center. Interventions/Methods Records of 227 patients hospitalized during 1998 with head and neck, gastrointestinal or lung cancer were reviewed for malnutrition based on three methods: 1) physician-diagnosed malnutrition-related ICD-9 codes; 2) in-hospital nutritional assessment summaries conducted by registered dietitians; and 3) body mass index (BMI). For patients with multiple admissions, only data from the first hospitalization were included. Results Prevalence of malnutrition diagnosis ranged from 8.8% based on BMI to approximately 26% of all cases based on dietitian assessment. Kappa coefficients indicated weak (kappa=0.23 for BMI vs. dietitians; kappa=0.28 for dietitians vs. physicians) to fair (kappa=0.38 for BMI vs. physicians) strength of agreement between any two methods. Conclusions Available methods to identify patients with malnutrition in an NCI-designated comprehensive cancer center resulted in varied prevalence of malnutrition diagnosis. A universal standard of care for nutrition screening that utilizes validated tools is needed. Implications for Practice The Joint Commission on the Accreditation of Healthcare Organizations requires nutritional screening of patients within 24 hours of admission. For this purpose, implementation of a validated tool that can be used by various healthcare practitioners, including nurses, needs to be considered. PMID:21242767
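
    The strength-of-agreement statistic used in this record is Cohen's kappa, which corrects observed agreement for the agreement expected by chance from each rater's marginal frequencies. A self-contained sketch with hypothetical malnutrition calls (the patient labels are invented, not the study's records):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa for two raters' labels on the same cases."""
    assert len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    pa, pb = Counter(a), Counter(b)
    expected = sum(pa[c] * pb[c] for c in set(a) | set(b)) / n**2
    return (observed - expected) / (1 - expected)

# Hypothetical malnutrition calls (1 = malnourished) by two methods on 10 patients
bmi_calls = [0, 0, 0, 1, 0, 0, 0, 0, 1, 0]
dietitian_calls = [0, 1, 0, 1, 0, 0, 1, 0, 1, 0]
print(round(cohens_kappa(bmi_calls, dietitian_calls), 2))
```

    Here observed agreement is 0.8 and chance agreement 0.56, giving kappa of about 0.55; values around 0.2-0.4, as in the study, indicate weak-to-fair agreement.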

  7. Do Current Recommendations for Upper Instrumented Vertebra Predict Shoulder Imbalance? An Attempted Validation of Level Selection for Adolescent Idiopathic Scoliosis.

    PubMed

    Bjerke, Benjamin T; Cheung, Zoe B; Shifflett, Grant D; Iyer, Sravisht; Derman, Peter B; Cunningham, Matthew E

    2015-10-01

    Shoulder balance in adolescent idiopathic scoliosis (AIS) patients is associated with patient satisfaction and self-image. However, few validated systems exist for selecting the upper instrumented vertebra (UIV) to achieve post-surgical shoulder balance. The purpose of this study was to examine existing UIV selection criteria and correlate them with post-surgical shoulder balance in AIS patients. Patients who underwent spinal fusion at age 10-18 years for AIS over a 6-year period were reviewed. All patients with a minimum of 1 year of radiographic follow-up were included. Imbalance was defined as a radiographic shoulder height |RSH| ≥ 15 mm at the latest follow-up. Three UIV selection methods were considered: Lenke, Ilharreborde, and Trobisch. A recommended UIV was determined using each method from pre-surgical radiographs. The recommended UIV for each method was compared with the UIV actually instrumented; concordance between these levels was defined as "Correct" UIV selection, and discordance as "Incorrect" selection. One hundred seventy-one patients were included, with 2.3 ± 1.1 years of follow-up. For all methods, "Correct" UIV selection resulted in more shoulder imbalance than "Incorrect" UIV selection. Overall shoulder imbalance incidence improved from 31.0% (53/171) to 15.2% (26/171). The incidence of new shoulder imbalance in patients with previously level shoulders was 8.8%. We could not identify a set of UIV selection criteria that accurately predicted post-surgical shoulder balance; further validated measures are needed in this area. The complexity of proximal thoracic curve correction is underscored in a case example in which shoulder imbalance occurred despite "Correct" UIV selection by all methods.

  8. Single-Laboratory Validation for the Determination of Flavonoids in Hawthorn Leaves and Finished Products by LC-UV.

    PubMed

    Mudge, Elizabeth M; Liu, Ying; Lund, Jensen A; Brown, Paula N

    2016-11-01

    Suitably validated analytical methods that can be used to quantify medicinally active phytochemicals in natural health products are required by regulators, manufacturers, and consumers. Hawthorn (Crataegus) is a botanical ingredient in natural health products used for the treatment of cardiovascular disorders. A method for the quantitation of vitexin-2″-O-rhamnoside, vitexin, isovitexin, rutin, and hyperoside in hawthorn leaf and flower raw materials and finished products was optimized and validated according to AOAC International guidelines. A two-level partial factorial study was used to guide the optimization of the sample preparation. The optimal conditions were found to be a 60-minute extraction using 50:48:2 methanol:water:acetic acid followed by a 25-minute separation using a reversed-phase liquid chromatography column with ultraviolet absorbance detection. The single-laboratory validation study evaluated method selectivity, accuracy, repeatability, linearity, limit of quantitation, and limit of detection. Individual flavonoid content ranged from 0.05 mg/g to 17.5 mg/g in solid dosage forms and raw materials. Repeatability ranged from 0.7 to 11.7% relative standard deviation, corresponding to HorRat values from 0.2 to 1.6. Calibration curves for each flavonoid were linear within the analytical ranges, with correlation coefficients greater than 99.9%. Herein is the first report of a validated method that is fit for the purpose of quantifying five major phytochemical marker compounds in both raw materials and finished products made from North American (Crataegus douglasii) and European (Crataegus monogyna and Crataegus laevigata) hawthorn species. The method includes optimized extraction of samples without a prolonged drying process and reduced liquid chromatography separation time. Georg Thieme Verlag KG Stuttgart · New York.
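
    The linearity criterion cited in this record (correlation coefficients above 99.9%) can be checked with an ordinary least-squares fit of peak area against standard concentration. The calibration points below are hypothetical, not the study's flavonoid data:

```python
import numpy as np

# Hypothetical flavonoid calibration standards: concentration (µg/mL) vs. peak area
conc = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0])
area = np.array([12.4, 24.9, 50.1, 124.6, 250.3, 499.8])

slope, intercept = np.polyfit(conc, area, 1)   # least-squares calibration line
r = np.corrcoef(conc, area)[0, 1]              # Pearson correlation coefficient

print(f"area = {slope:.2f} * conc + {intercept:.2f}, r = {r:.5f}")
assert r > 0.999  # linearity acceptance threshold of the kind used in the study
```

    In practice each of the five analytes would get its own curve over its validated range, with the LOQ set where the response is no longer reliably quantifiable.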

  9. Enhancement of Chemical Entity Identification in Text Using Semantic Similarity Validation

    PubMed Central

    Grego, Tiago; Couto, Francisco M.

    2013-01-01

    With the amount of chemical data being produced and reported in the literature growing at a fast pace, it is increasingly important to retrieve this information efficiently. To tackle this issue, text mining tools have been applied, but despite their good performance they still produce many errors that we believe can be filtered out using semantic similarity. Thus, this paper proposes a novel method that receives the results of chemical entity identification systems, such as Whatizit, and exploits the semantic relationships in ChEBI to measure the similarity between the entities found in the text. The method assigns a single validation score to each entity based on its similarities with the other entities also identified in the text. Then, by using a given threshold, the method selects a set of validated entities and a set of outlier entities. We evaluated our method using the results of two state-of-the-art chemical entity identification tools, three semantic similarity measures and two text window sizes. The method was able to increase precision without filtering out a significant number of correctly identified entities. This means that the method can effectively discriminate the correctly identified chemical entities while discarding a significant number of identification errors. For example, selecting a validation set with 75% of all identified entities, we were able to increase precision by 28% for one of the chemical entity identification tools (Whatizit), while retaining 97% of the correctly identified entities in that subset. Our method can be used directly as an add-on by any state-of-the-art entity identification tool that provides mappings to a database, in order to improve its results. The proposed method is included in a freely accessible web tool at www.lasige.di.fc.ul.pt/webtools/ice/. PMID:23658791
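
    The scoring step described in this record (each entity rated by its similarity to the other entities found in the same text, then thresholded into validated and outlier sets) can be sketched as follows. The toy similarity function and entity classes are invented stand-ins for a real ChEBI-based semantic similarity measure:

```python
def validation_scores(similarity, entities):
    """Score each entity by its mean pairwise similarity to the others."""
    scores = {}
    for e in entities:
        others = [x for x in entities if x != e]
        scores[e] = sum(similarity(e, o) for o in others) / len(others)
    return scores

# Toy similarity: shared (hypothetical) chemical class counts as related
classes = {"glucose": "sugar", "fructose": "sugar", "sucrose": "sugar", "lead": "metal"}

def sim(a, b):
    return 1.0 if classes[a] == classes[b] else 0.0

scores = validation_scores(sim, list(classes))
threshold = 0.5  # illustrative cutoff between validated entities and outliers
validated = {e for e, s in scores.items() if s >= threshold}
outliers = set(classes) - validated
print(validated, outliers)
```

    An entity unrelated to everything else in the text ("lead" here) gets a low score and is flagged as a likely identification error, which is how the filter raises precision.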

  10. A newly validated high-performance liquid chromatography method with diode array ultraviolet detection for analysis of the antimalarial drug primaquine in the blood plasma.

    PubMed

    Carmo, Ana Paula Barbosa do; Borborema, Manoella; Ribeiro, Stephan; De-Oliveira, Ana Cecilia Xavier; Paumgartten, Francisco Jose Roma; Moreira, Davyson de Lima

    2017-01-01

    Primaquine (PQ) diphosphate is an 8-aminoquinoline antimalarial drug with unique therapeutic properties. It is the only drug that prevents relapses of Plasmodium vivax or Plasmodium ovale infections. In this study, a fast, sensitive, cost-effective, and robust method for the extraction and high-performance liquid chromatography with diode array ultraviolet detection (HPLC-DAD-UV) analysis of PQ in blood plasma was developed and validated. After plasma protein precipitation, PQ was obtained by liquid-liquid extraction and analyzed by HPLC-DAD-UV with a modified-silica cyanopropyl column (250 mm × 4.6 mm i.d. × 5 μm) as the stationary phase and a mixture of acetonitrile and 10 mM ammonium acetate buffer (pH = 3.80) (45:55) as the mobile phase. The flow rate was 1.0 mL·min⁻¹, the oven temperature was 50 °C, and absorbance was measured at 264 nm. The method was validated for linearity, intra-day and inter-day precision, accuracy, recovery, and robustness. The detection (LOD) and quantification (LOQ) limits were 1.0 and 3.5 ng·mL⁻¹, respectively. The method was used to analyze the plasma of female DBA-2 mice treated with 20 mg·kg⁻¹ (oral) PQ diphosphate. By combining a simple, low-cost extraction procedure with a sensitive, precise, accurate, and robust method, it was possible to analyze PQ in small volumes of plasma. The new method presents lower LOD and LOQ limits and requires a shorter analysis time and smaller plasma volumes than those of previously reported HPLC methods with DAD-UV detection. The new validated method is suitable for kinetic studies of PQ in small rodents, including mouse models for the study of malaria.
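    The abstract reports LOD and LOQ without stating how they were derived. A common convention (the ICH Q2 calibration-curve approach, assumed here rather than taken from the paper) estimates them from the residual standard deviation σ and slope S of the calibration line as LOD = 3.3 σ/S and LOQ = 10 σ/S:

    ```python
    import numpy as np

    def lod_loq(conc, signal):
        """ICH-style LOD/LOQ estimate from a linear calibration curve.
        conc: known concentrations; signal: measured responses."""
        conc, signal = np.asarray(conc, float), np.asarray(signal, float)
        slope, intercept = np.polyfit(conc, signal, 1)
        # residual standard deviation about the fitted line (n - 2 dof)
        sigma = (signal - (slope * conc + intercept)).std(ddof=2)
        return 3.3 * sigma / slope, 10.0 * sigma / slope
    ```

    Note that LOQ/LOD is fixed at 10/3.3 under this convention, so the paper's 1.0 and 3.5 ng·mL⁻¹ values are consistent with it only approximately; the authors may have used a signal-to-noise criterion instead.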

  11. Development and Validation of Chemometric Spectrophotometric Methods for Simultaneous Determination of Simvastatin and Nicotinic Acid in Binary Combinations.

    PubMed

    Alahmad, Shoeb; Elfatatry, Hamed M; Mabrouk, Mokhtar M; Hammad, Sherin F; Mansour, Fotouh R

    2018-01-01

    Combined therapy presents a challenge for analysis, due to severe overlapping of the components' UV spectra in the case of spectroscopy, or the requirement of a long, tedious and costly separation technique in the case of chromatography. Quality control laboratories have to develop and validate suitable analytical procedures in order to assay such multi-component preparations. New spectrophotometric methods for the simultaneous determination of simvastatin (SIM) and nicotinic acid (NIA) in binary combinations were developed. These methods are based on chemometric treatment of the data; the applied techniques are multivariate methods, including classical least squares (CLS), principal component regression (PCR) and partial least squares (PLS). In these techniques, the concentration data matrix was prepared from synthetic mixtures containing SIM and NIA dissolved in ethanol. The corresponding absorbance data matrix was obtained by measuring the zero-order absorbance at 12 wavelengths in the range 216-240 nm at 2 nm intervals. The spectrophotometric procedures do not require any separation step. The accuracy, precision and linearity ranges of the methods were determined and validated by analyzing synthetic mixtures containing the studied drugs, as well as mixtures with possible excipients present in the tablet dosage form. The validation was performed successfully. The developed methods have been shown to be accurate, linear, precise and simple, and can be used routinely for the determination of both drugs in their dosage form.
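    Of the three multivariate techniques named above, CLS is the simplest to sketch: Beer's law in matrix form, A ≈ CK, where rows of A are mixture spectra, rows of C are component concentrations, and K holds the pure-component spectra. The sketch below uses simulated spectra, not the paper's data:

    ```python
    import numpy as np

    def cls_fit(C, A):
        """Classical least squares calibration: solve A ≈ C @ K for K.
        C: (mixtures × components); A: (mixtures × wavelengths)."""
        K, *_ = np.linalg.lstsq(C, A, rcond=None)
        return K

    def cls_predict(K, a_unknown):
        """Estimate component concentrations from an unknown spectrum."""
        c, *_ = np.linalg.lstsq(K.T, np.asarray(a_unknown), rcond=None)
        return c
    ```

    PCR and PLS replace the direct least-squares step with regression on latent variables, which copes better with collinearity and unmodelled interferents.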

  12. Validation and evaluation of the advanced aeronautical CFD system SAUNA: A method developer's view

    NASA Astrophysics Data System (ADS)

    Shaw, J. A.; Peace, A. J.; Georgala, J. M.; Childs, P. N.

    1993-09-01

    This paper is concerned with a detailed validation and evaluation of the SAUNA CFD system for complex aircraft configurations. The methodology of the complete system is described in brief, including its unique use of differing grid generation strategies (structured, unstructured or both) depending on the geometric complexity of the configuration. A wide range of configurations and flow conditions is chosen for the validation and evaluation exercise to demonstrate the scope of SAUNA. A detailed description of the results from the method is preceded by a discussion of the philosophy behind the strategy followed in the exercise, in terms of quality assessment and the differing roles of the code developer and the code user. It is considered that SAUNA has grown into a highly usable tool for the aircraft designer, combining flexibility and accuracy in an efficient manner.

  13. Method development and validation for simultaneous determination of IEA-R1 reactor’s pool water uranium and silicon content by ICP OES

    NASA Astrophysics Data System (ADS)

    Ulrich, J. C.; Guilhen, S. N.; Cotrim, M. E. B.; Pires, M. A. F.

    2018-03-01

    IPEN’s research reactor, IEA-R1, is an open pool-type research reactor moderated and cooled by light water. High-quality water is a key factor in preventing corrosion of the spent fuel stored in the pool. Leaching of radionuclides from corroded fuel cladding may be prevented by an efficient water treatment and purification system. Nevertheless, as a safety management policy, IPEN has adopted a water chemistry control programme that periodically monitors the levels of uranium (U) and silicon (Si) in the reactor’s pool, since IEA-R1 employs U3Si2-Al dispersion fuel. An analytical method was developed and validated for the determination of uranium and silicon by ICP OES. This work describes the validation process, in a context of quality assurance, including the parameters selectivity, linearity, quantification limit, precision and recovery.

  14. The Application of the FDTD Method to Millimeter-Wave Filter Circuits Including the Design and Analysis of a Compact Coplanar

    NASA Technical Reports Server (NTRS)

    Oswald, J. E.; Siegel, P. H.

    1994-01-01

    The finite difference time domain (FDTD) method is applied to the analysis of microwave, millimeter-wave and submillimeter-wave filter circuits. In each case, the validity of this method is confirmed by comparison with measured data. In addition, the FDTD calculations are used to design a new ultra-thin coplanar-strip filter for feeding a THz planar-antenna mixer.

  15. [Finite Element Modelling of the Eye for the Investigation of Accommodation].

    PubMed

    Martin, H; Stachs, O; Guthoff, R; Grabow, N

    2016-12-01

    Background: Accommodation research increasingly uses engineering methods. This article presents the use of the finite element method in accommodation research. Material and Methods: Geometry, material data and boundary conditions are prerequisites for the application of the finite element method. Published data on geometry and materials are reviewed. It is shown how important the boundary conditions are and how they influence the results. Results: Two-dimensional and three-dimensional models of the anterior chamber of the eye are presented. With simple two-dimensional models, it is shown that realistic results for the accommodation amplitude can always be achieved. More complex three-dimensional models of the accommodation mechanism, including the ciliary muscle, require further investigation of the material data and of the morphology of the ciliary muscle if they are to achieve realistic results for accommodation. Discussion and Conclusion: Both the efficiency and the limitations of the finite element method become especially clear for accommodation. Application of the method requires extensive preparation, including the acquisition of geometric and material data and experimental validation. However, a validated model can be used as a basis for parametric studies, by systematically varying material data and geometric dimensions. This allows systematic investigation of how essential input parameters influence the results.

  16. Development and validation of an automated, microscopy-based method for enumeration of groups of intestinal bacteria.

    PubMed

    Jansen, G J; Wildeboer-Veloo, A C; Tonk, R H; Franks, A H; Welling, G W

    1999-09-01

    An automated microscopy-based method using fluorescently labelled 16S rRNA-targeted oligonucleotide probes directed against the predominant groups of intestinal bacteria was developed and validated. The method makes use of the Leica 600HR image analysis system, a Kodak MegaPlus camera model 1.4 and a servo-controlled Leica DM/RXA ultra-violet microscope. Software for automated image acquisition and analysis was developed and tested. The performance of the method was validated using a set of four fluorescent oligonucleotide probes: a universal probe for the detection of all bacterial species, one probe specific for Bifidobacterium spp., a digenus-probe specific for Bacteroides spp. and Prevotella spp. and a trigenus-probe specific for Ruminococcus spp., Clostridium spp. and Eubacterium spp. A nucleic acid stain, 4',6-diamidino-2-phenylindole (DAPI), was also included in the validation. In order to quantify the assay-error, one faecal sample was measured 20 times using each separate probe. Thereafter faecal samples of 20 different volunteers were measured following the same procedure in order to quantify the error due to individual-related differences in gut flora composition. It was concluded that the combination of automated microscopy and fluorescent whole-cell hybridisation enables distinction in gut flora-composition between volunteers at a significant level. With this method it is possible to process 48 faecal samples overnight, with coefficients of variation ranging from 0.07 to 0.30.
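    The assay-error figures quoted above (0.07 to 0.30) are coefficients of variation over repeated measurements of the same sample. As a reminder of the computation (generic, not the authors' code):

    ```python
    import statistics

    def coefficient_of_variation(counts):
        """Relative spread of repeated counts: sample std dev / mean."""
        return statistics.stdev(counts) / statistics.mean(counts)
    ```

    For example, twenty repeated counts of one faecal sample with a mean of 10⁹ cells and a standard deviation of 10⁸ cells would give a CV of 0.1, well inside the range the authors report.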

  17. Comparison of scoring approaches for the NEI VFQ-25 in low vision.

    PubMed

    Dougherty, Bradley E; Bullimore, Mark A

    2010-08-01

    The aim of this study was to evaluate different approaches to scoring the National Eye Institute Visual Functioning Questionnaire-25 (NEI VFQ-25) in patients with low vision, including scoring by the standard method, by Rasch analysis, and by use of an algorithm created by Massof to approximate the Rasch person measure. Subscale validity and the use of a 7-item short-form instrument proposed by Ryan et al. were also investigated. NEI VFQ-25 data from 50 patients with low vision were analyzed using the standard method of summing Likert-type scores and calculating an overall average, Rasch analysis using Winsteps software, and the Massof algorithm in Excel. Correlations between scores were calculated. Rasch person separation reliability and other indicators were calculated to determine the validity of the subscales and of the 7-item instrument. Scores calculated using all three methods were highly correlated, but evidence of floor and ceiling effects was found with the standard scoring method. None of the subscales investigated proved valid. The 7-item instrument showed acceptable person separation reliability and good targeting and item performance. Although standard scores and Rasch scores are highly correlated, Rasch analysis has the advantages of eliminating floor and ceiling effects and producing interval-scaled data. The Massof algorithm for approximation of the Rasch person measure performed well in this group of low-vision patients. The validity of the VFQ-25 subscales should be reconsidered.

  18. SHERMAN, a shape-based thermophysical model. I. Model description and validation

    NASA Astrophysics Data System (ADS)

    Magri, Christopher; Howell, Ellen S.; Vervack, Ronald J.; Nolan, Michael C.; Fernández, Yanga R.; Marshall, Sean E.; Crowell, Jenna L.

    2018-03-01

    SHERMAN, a new thermophysical modeling package designed for analyzing near-infrared spectra of asteroids and other solid bodies, is presented. The model's features, the methods it uses to solve for surface and subsurface temperatures, and the synthetic data it outputs are described. A set of validation tests demonstrates that SHERMAN produces accurate output in a variety of special cases for which correct results can be derived from theory. These cases include a family of solutions to the heat equation for which thermal inertia can have any value and thermophysical properties can vary with depth and with temperature. An appendix describes a new approximation method for estimating surface temperatures within spherical-section craters, more suitable for modeling infrared beaming at short wavelengths than the standard method.
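    Thermophysical models of this kind solve the 1-D heat equation for subsurface temperatures driven by a surface boundary condition. The sketch below is a schematic explicit finite-difference solver, not SHERMAN's actual scheme; the grid size, the dimensionless step `alpha = kappa*dt/dz**2`, and the insulated lower boundary are all illustrative choices:

    ```python
    import numpy as np

    def subsurface_temperatures(t_surface, n_layers=20, n_steps=200, alpha=0.2):
        """Explicit finite-difference solution of dT/dt = kappa * d2T/dz2
        on a uniform grid, driven by a cyclic surface temperature history.
        Requires alpha <= 0.5 for numerical stability."""
        T = np.full(n_layers, float(t_surface[0]))
        for step in range(n_steps):
            T[0] = t_surface[step % len(t_surface)]  # imposed surface value
            T[-1] = T[-2]                            # insulated bottom
            # second-difference update of the interior layers
            T[1:-1] += alpha * (T[2:] - 2 * T[1:-1] + T[:-2])
        return T
    ```

    One of the validation strategies the abstract describes, checking the solver against analytic solutions of the heat equation, applies directly to a sketch like this: with a constant surface temperature the profile must stay constant, and with an oscillating surface forcing the interior must remain bounded by the forcing extremes.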

  19. Validating MEDIQUAL Constructs

    NASA Astrophysics Data System (ADS)

    Lee, Sang-Gun; Min, Jae H.

    In this paper, we validate MEDIQUAL constructs across different media users in a help desk service. Previous research used only two end-user constructs: assurance and responsiveness. In this paper, we extend the MEDIQUAL constructs to include reliability, empathy, assurance, tangibles, and responsiveness, based on the SERVQUAL theory. The results suggest that: 1) the five MEDIQUAL constructs are validated by factor analysis; that is, the constructs have relatively high correlations between measures of the same construct obtained with different methods and low correlations between measures of constructs that are expected to differ; and 2) regression analysis shows that the five MEDIQUAL constructs are statistically significant predictors of media users' satisfaction in help desk service.

  20. Fast Running Urban Dispersion Model for Radiological Dispersal Device (RDD) Releases: Model Description and Validation

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Gowardhan, Akshay; Neuscamman, Stephanie; Donetti, John

    Aeolus is an efficient three-dimensional computational fluid dynamics code, based on the finite volume method, developed for predicting transport and dispersion of contaminants in a complex urban area. It solves the time-dependent incompressible Navier-Stokes equation on a regular Cartesian staggered grid using a fractional step method, and solves a scalar transport equation for temperature using the Boussinesq approximation. The model also includes a Lagrangian dispersion model for predicting the transport and dispersion of atmospheric contaminants. The model can be run in an efficient Reynolds-Averaged Navier-Stokes (RANS) mode with a run time of several minutes, or a more detailed Large Eddy Simulation (LES) mode with a run time of hours for a typical simulation. This report describes the model components, including details on the physics models used in the code, as well as several model validation efforts. Aeolus wind and dispersion predictions are compared to field data from the Joint Urban Field Trials 2003 conducted in Oklahoma City (Allwine et al 2004), including both continuous and instantaneous releases. Newly implemented Aeolus capabilities, including a decay chain model and an explosive Radiological Dispersal Device (RDD) source term, are described. Aeolus predictions using the buoyant explosive RDD source are validated against two experimental data sets: the Green Field explosive cloud rise experiments conducted in Israel (Sharon et al 2012) and the Full-Scale RDD Field Trials conducted in Canada (Green et al 2016).

  1. 78 FR 56718 - Draft Guidance for Industry on Bioanalytical Method Validation; Availability

    Federal Register 2010, 2011, 2012, 2013, 2014

    2013-09-13

    ...] Draft Guidance for Industry on Bioanalytical Method Validation; Availability AGENCY: Food and Drug... availability of a draft guidance for industry entitled ``Bioanalytical Method Validation.'' The draft guidance is intended to provide recommendations regarding analytical method development and validation for the...

  2. Validity and consistency assessment of accident analysis methods in the petroleum industry.

    PubMed

    Ahmadi, Omran; Mortazavi, Seyed Bagher; Khavanin, Ali; Mokarami, Hamidreza

    2017-11-17

    Accident analysis is the main aspect of accident investigation; it connects the different causes of an accident in a procedural way. It is therefore important to use valid and reliable methods for the investigation of the different causal factors of accidents, especially the noteworthy ones. This study assessed the accuracy (sensitivity index [SI]) and consistency of the six accident analysis methods most commonly used in the petroleum industry. In order to evaluate the methods of accident analysis, two real case studies (a process safety accident and a personal accident) from the petroleum industry were analyzed by 10 assessors, and the accuracy and consistency of these methods were then evaluated. The assessors were trained in a workshop on accident analysis methods. The systematic cause analysis technique and bowtie methods obtained the greatest SI scores for the personal and the process safety accident, respectively. The best average consistency results for a single method (based on 10 independent assessors) were in the region of 70%. This study confirmed that the application of methods with pre-defined causes and a logic tree can enhance the sensitivity and consistency of accident analysis.

  3. Computer-Assisted Update of a Consumer Health Vocabulary Through Mining of Social Network Data

    PubMed Central

    2011-01-01

    Background Consumer health vocabularies (CHVs) have been developed to aid consumer health informatics applications. This purpose is best served if the vocabulary evolves with consumers’ language. Objective Our objective was to create a computer assisted update (CAU) system that works with live corpora to identify new candidate terms for inclusion in the open access and collaborative (OAC) CHV. Methods The CAU system consisted of three main parts: a Web crawler and an HTML parser, a candidate term filter that utilizes natural language processing tools including term recognition methods, and a human review interface. In evaluation, the CAU system was applied to the health-related social network website PatientsLikeMe.com. The system’s utility was assessed by comparing the candidate term list it generated to a list of valid terms hand extracted from the text of the crawled webpages. Results The CAU system identified 88,994 unique 1- to 7-grams (“n-grams” are n consecutive words within a sentence) in 300 crawled PatientsLikeMe.com webpages. The manual review of the crawled webpages identified 651 valid terms not yet included in the OAC CHV or the Unified Medical Language System (UMLS) Metathesaurus, a collection of vocabularies amalgamated to form an ontology of medical terms (ie, 1 valid term per 136.7 candidate n-grams). The term filter selected 774 candidate terms, of which 237 were valid terms, that is, 1 valid term among every 3 or 4 candidates reviewed. Conclusion The CAU system is effective for generating a list of candidate terms for human review during CHV development. PMID:21586386
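    The n-gram candidate generation described in the Methods can be sketched generically. This is a simplified illustration only; the real pipeline also applies NLP term-recognition filters before human review:

    ```python
    def ngrams(text, max_n=7):
        """All unique 1- to max_n-grams, where an n-gram is n consecutive
        words within a sentence (naive period-based sentence split)."""
        terms = set()
        for sentence in text.split("."):
            words = sentence.lower().split()
            for n in range(1, max_n + 1):
                for i in range(len(words) - n + 1):
                    terms.add(" ".join(words[i:i + n]))
        return terms
    ```

    A 4-word sentence with distinct words yields 4 + 3 + 2 + 1 = 10 unique n-grams, which is why the raw candidate list (88,994 n-grams from 300 pages) is so much larger than the set of valid terms.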

  4. A method and knowledge base for automated inference of patient problems from structured data in an electronic medical record.

    PubMed

    Wright, Adam; Pang, Justine; Feblowitz, Joshua C; Maloney, Francine L; Wilcox, Allison R; Ramelson, Harley Z; Schneider, Louise I; Bates, David W

    2011-01-01

    Accurate knowledge of a patient's medical problems is critical for clinical decision making, quality measurement, research, billing and clinical decision support. Common structured sources of problem information include the patient problem list and billing data; however, these sources are often inaccurate or incomplete. To develop and validate methods of automatically inferring patient problems from clinical and billing data, and to provide a knowledge base for inferring problems. We identified 17 target conditions and designed and validated a set of rules for identifying patient problems based on medications, laboratory results, billing codes, and vital signs. A panel of physicians provided input on a preliminary set of rules. Based on this input, we tested candidate rules on a sample of 100,000 patient records to assess their performance compared to gold standard manual chart review. The physician panel selected a final rule for each condition, which was validated on an independent sample of 100,000 records to assess its accuracy. Seventeen rules were developed for inferring patient problems. Analysis using a validation set of 100,000 randomly selected patients showed high sensitivity (range: 62.8-100.0%) and positive predictive value (range: 79.8-99.6%) for most rules. Overall, the inference rules performed better than using either the problem list or billing data alone. We developed and validated a set of rules for inferring patient problems. These rules have a variety of applications, including clinical decision support, care improvement, augmentation of the problem list, and identification of patients for research cohorts.
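    A toy version of one such inference rule, together with the validation metrics reported above (sensitivity and positive predictive value against gold-standard chart review), can be sketched as follows. The rule's threshold and field names are invented for illustration and are not the study's actual rule set:

    ```python
    def infer_diabetes(patient):
        """Hypothetical rule: infer a diabetes problem-list entry from
        structured medication and laboratory data."""
        return ("metformin" in patient.get("medications", ())
                or patient.get("hba1c", 0.0) >= 6.5)

    def sensitivity_ppv(predicted, gold):
        """Compare the set of patients a rule flags against the set truly
        having the problem: sensitivity = TP/gold, PPV = TP/predicted."""
        tp = len(predicted & gold)
        return tp / len(gold), tp / len(predicted)
    ```

    In the study, each candidate rule was scored this way on 100,000 records, and the physician panel then chose the final rule per condition before it was re-validated on an independent 100,000-record sample.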

  5. Validation of experimental molecular crystal structures with dispersion-corrected density functional theory calculations.

    PubMed

    van de Streek, Jacco; Neumann, Marcus A

    2010-10-01

    This paper describes the validation of a dispersion-corrected density functional theory (d-DFT) method for the purpose of assessing the correctness of experimental organic crystal structures and enhancing the information content of purely experimental data. 241 experimental organic crystal structures from the August 2008 issue of Acta Cryst. Section E were energy-minimized in full, including unit-cell parameters. The differences between the experimental and the minimized crystal structures were subjected to statistical analysis. The r.m.s. Cartesian displacement excluding H atoms upon energy minimization with flexible unit-cell parameters is selected as a pertinent indicator of the correctness of a crystal structure. All 241 experimental crystal structures are reproduced very well: the average r.m.s. Cartesian displacement for the 241 crystal structures, including 16 disordered structures, is only 0.095 Å (0.084 Å for the 225 ordered structures). R.m.s. Cartesian displacements above 0.25 Å either indicate incorrect experimental crystal structures or reveal interesting structural features such as exceptionally large temperature effects, incorrectly modelled disorder or symmetry-breaking H atoms. After validation, the method is applied to nine examples that are known to be ambiguous or subtly incorrect.
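    The r.m.s. Cartesian displacement indicator is straightforward to compute once the experimental and minimized structures are expressed in a common frame; a minimal sketch (atom matching and structure overlay assumed already done):

    ```python
    import numpy as np

    def rms_displacement(coords_a, coords_b, elements):
        """R.m.s. Cartesian displacement between matched atom positions,
        excluding hydrogen atoms as in the paper's indicator."""
        mask = np.array([el != "H" for el in elements])
        d = np.asarray(coords_a)[mask] - np.asarray(coords_b)[mask]
        return np.sqrt((d ** 2).sum(axis=1).mean())
    ```

    A structure whose heavy atoms each move about 0.1 Å on minimization would score near the paper's 0.095 Å average, while a value above the 0.25 Å flag would prompt closer inspection.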

  6. Risks Associated with Federal Construction Projects

    DTIC Science & Technology

    2011-06-01

    awarding contracts for construction projects (USACE, 2010). BIM offers a method to effectively design a facility while maximizing work performance during...includes Requirements, Programming, Funding, Solicitation, AEC Evaluation, Award, Project Validation, Design and Construction, and Project Management...includes the Solicitation, AEC Evaluation, and Award Steps. In this Phase, BIM is only used in the Solicitation and the AEC Evaluation steps

  7. Combined quantification of paclitaxel, docetaxel and ritonavir in human feces and urine using LC-MS/MS.

    PubMed

    Hendrikx, Jeroen J M A; Rosing, Hilde; Schinkel, Alfred H; Schellens, Jan H M; Beijnen, Jos H

    2014-02-01

    A combined assay for the determination of paclitaxel, docetaxel and ritonavir in human feces and urine is described. The drugs were extracted from 200 μL urine or 50 mg feces followed by high-performance liquid chromatography analysis coupled with positive ionization electrospray tandem mass spectrometry. The validation program included calibration model, accuracy and precision, carry-over, dilution test, specificity and selectivity, matrix effect, recovery and stability. Acceptance criteria were according to US Food and Drug Administration guidelines on bioanalytical method validation. The validated range was 0.5-500 ng/mL for paclitaxel and docetaxel and 2-2000 ng/mL for ritonavir in urine, and 2-2000 ng/mg for paclitaxel and docetaxel and 8-8000 ng/mg for ritonavir in feces. Inter-assay accuracy and precision were tested for all analytes at four concentration levels and were within 8.5% and <10.2%, respectively, in both matrices. Recovery at three concentration levels was between 77 and 94% in feces samples and between 69 and 85% in urine samples. Method development, including feces homogenization and the spiking of blank urine samples, is discussed. We demonstrated that each of the applied drugs could be quantified successfully in urine and feces using the described assay. The method was successfully applied for quantification of the analytes in feces and urine samples of patients.

  8. Development and Validation of a Job Exposure Matrix for Physical Risk Factors in Low Back Pain

    PubMed Central

    Solovieva, Svetlana; Pehkonen, Irmeli; Kausto, Johanna; Miranda, Helena; Shiri, Rahman; Kauppinen, Timo; Heliövaara, Markku; Burdorf, Alex; Husgafvel-Pursiainen, Kirsti; Viikari-Juntura, Eira

    2012-01-01

    Objectives The aim was to construct and validate a gender-specific job exposure matrix (JEM) for physical exposures to be used in epidemiological studies of low back pain (LBP). Materials and Methods We utilized two large Finnish population surveys, one to construct the JEM and another to test matrix validity. The exposure axis of the matrix included exposures relevant to LBP (heavy physical work, heavy lifting, awkward trunk posture and whole body vibration) and exposures that increase the biomechanical load on the low back (arm elevation) or those that in combination with other known risk factors could be related to LBP (kneeling or squatting). Job titles with similar work tasks and exposures were grouped. Exposure information was based on face-to-face interviews. Validity of the matrix was explored by comparing the JEM (group-based) binary measures with individual-based measures. The predictive validity of the matrix against LBP was evaluated by comparing the associations of the group-based (JEM) exposures with those of individual-based exposures. Results The matrix includes 348 job titles, representing 81% of all Finnish job titles in the early 2000s. The specificity of the constructed matrix was good, especially in women. The validity measured with the kappa statistic ranged from good to poor, being fair for most exposures. In men, all group-based (JEM) exposures were statistically significantly associated with one-month prevalence of LBP. In women, four out of six group-based exposures showed an association with LBP. Conclusions The gender-specific JEM for physical exposures showed relatively high specificity without compromising sensitivity. The matrix can therefore be considered as a valid instrument for exposure assessment in large-scale epidemiological studies, when more precise but more labour-intensive methods are not feasible. Although the matrix was based on Finnish data we foresee that it could be applicable, with some modifications, in other countries with a similar level of technology. PMID:23152793
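    The kappa statistic used above to compare group-based (JEM) and individual-based binary exposure classifications measures agreement beyond chance; a generic two-rater implementation (not the authors' code):

    ```python
    def cohens_kappa(x, y):
        """Cohen's kappa for two binary (0/1) classifications of the same
        subjects: (observed agreement - chance agreement) / (1 - chance)."""
        n = len(x)
        po = sum(a == b for a, b in zip(x, y)) / n      # observed agreement
        px, py = sum(x) / n, sum(y) / n
        pe = px * py + (1 - px) * (1 - py)              # chance agreement
        return (po - pe) / (1 - pe)
    ```

    Kappa is 1 for perfect agreement and 0 when agreement is no better than chance, which is how the "good to poor" range reported above is interpreted.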

  9. Validity of Cognitive Load Measures in Simulation-Based Training: A Systematic Review.

    PubMed

    Naismith, Laura M; Cavalcanti, Rodrigo B

    2015-11-01

    Cognitive load theory (CLT) provides a rich framework to inform instructional design. Despite the applicability of CLT to simulation-based medical training, findings from multimedia learning have not been consistently replicated in this context. This lack of transferability may be related to issues in measuring cognitive load (CL) during simulation. The authors conducted a review of CLT studies across simulation training contexts to assess the validity evidence for different CL measures. PRISMA standards were followed. For 48 studies selected from a search of MEDLINE, EMBASE, PsycInfo, CINAHL, and ERIC databases, information was extracted about study aims, methods, validity evidence of measures, and findings. Studies were categorized on the basis of findings and prevalence of validity evidence collected, and statistical comparisons between measurement types and research domains were pursued. CL during simulation training has been measured in diverse populations including medical trainees, pilots, and university students. Most studies (71%; 34) used self-report measures; others included secondary task performance, physiological indices, and observer ratings. Correlations between CL and learning varied from positive to negative. Overall validity evidence for CL measures was low (mean score 1.55/5). Studies reporting greater validity evidence were more likely to report that high CL impaired learning. The authors found evidence that inconsistent correlations between CL and learning may be related to issues of validity in CL measures. Further research would benefit from rigorous documentation of validity and from triangulating measures of CL. This can better inform CLT instructional design for simulation-based medical training.

  10. ANALYSIS OF FISH HOMOGENATES FOR PERFLUORINATED COMPOUNDS

    EPA Science Inventory

    Perfluorinated compounds (PFCs) which include PFOS and PFOA are widely distributed in wildlife. Whole fish homogenates were analyzed for PFCs from the upper Mississippi, the Missouri and the Ohio rivers. Methods development, validation data, and preliminary study results will b...

  11. Procedures for Constructing and Using Criterion-Referenced Performance Tests.

    ERIC Educational Resources Information Center

    Campbell, Clifton P.; Allender, Bill R.

    1988-01-01

    Criterion-referenced performance tests (CRPT) provide a realistic method for objectively measuring task proficiency against predetermined attainment standards. This article explains the procedures of constructing, validating, and scoring CRPTs and includes a checklist for a welding test. (JOW)

  12. Student mathematical imagination instruments: construction, cultural adaptation and validity

    NASA Astrophysics Data System (ADS)

    Dwijayanti, I.; Budayasa, I. K.; Siswono, T. Y. E.

    2018-03-01

    Imagination plays an important role as the center of the students' sensorimotor activity. The purpose of this research was to construct an instrument for measuring students' mathematical imagination in understanding the concept of algebraic expressions. The researchers assessed validity using questionnaire and test techniques, and analyzed the data using a descriptive method. The stages performed included: 1) constructing the embodiment of the imagination; 2) determining the learning style questionnaire; 3) constructing the instruments; 4) translating them into Indonesian and adapting the learning style questionnaire content to the students' culture; 5) performing content validation. The results state that the constructed instrument is valid by content validation and empirical validation, so that it can be used with revisions. Content validation involved Indonesian linguists, English linguists and mathematics material experts. Empirical validation was done through a legibility test (10 students), which showed that in general the language used can be understood. In addition, a questionnaire trial (86 students) was analyzed using the point-biserial correlation technique, resulting in 16 valid items, with reliability assessed using KR-20 (medium reliability). The test instrument trial (32 students) found all items valid, with a KR-21 reliability of 0.62.
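    The KR-20 reliability coefficient mentioned above has a closed form for dichotomous (0/1) items: r = k/(k-1) · (1 - Σp(1-p)/σ²), where k is the item count, p the per-item proportion correct, and σ² the variance of total scores. A generic implementation (KR-21, also used in the abstract, is a simplification of this formula assuming equal item difficulty):

    ```python
    def kr20(item_responses):
        """Kuder-Richardson 20 reliability.
        item_responses: one list of 0/1 item scores per student."""
        k = len(item_responses[0])
        n = len(item_responses)
        totals = [sum(r) for r in item_responses]
        mean_t = sum(totals) / n
        var_t = sum((t - mean_t) ** 2 for t in totals) / n  # population variance
        pq = 0.0
        for i in range(k):
            p = sum(r[i] for r in item_responses) / n       # item difficulty
            pq += p * (1 - p)
        return (k / (k - 1)) * (1 - pq / var_t)
    ```

    Values near 1 indicate highly internally consistent items; the 0.62 reported above falls in the medium range.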

  13. Development and validation of a yoga module for Parkinson disease.

    PubMed

    Kakde, Noopur; Metri, Kashinath G; Varambally, Shivarama; Nagaratna, Raghuram; Nagendra, H R

    2017-03-25

Background Parkinson's disease (PD), a progressive neurodegenerative disease, affects motor and nonmotor functions, leading to severe debility and poor quality of life. Studies have reported the beneficial role of yoga in alleviating the symptoms of PD; however, a validated yoga module for PD is unavailable. This study developed and validated an integrated yoga module (IYM) for PD. Methods The IYM was prepared after a thorough review of classical yoga texts and previous findings. Twenty experienced yoga experts, who fulfilled the inclusion criteria, were selected for validating the content of the IYM. A total of 28 practices were included in the IYM, and each practice was discussed and rated as (i) not essential, (ii) useful but not essential, or (iii) essential; the content validity ratio (CVR) was calculated using Lawshe's formula. Results Data analysis revealed that of the 28 IYM practices, 21 exhibited significant content validity (cut-off value: 0.42, as calculated by applying Lawshe's formula for the CVR). Conclusions The IYM is valid for PD, with good content validity. However, future studies must determine the feasibility and efficacy of the developed module.
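
Lawshe's content validity ratio, used above to retain 21 of the 28 practices, is simple to compute; a sketch with hypothetical panel counts (the 0.42 cut-off for a 20-member panel is taken from the abstract):

```python
def content_validity_ratio(n_essential, n_panelists):
    """Lawshe's CVR: (n_e - N/2) / (N/2), ranging from -1 to +1."""
    half = n_panelists / 2
    return (n_essential - half) / half

# Hypothetical panel of 20 experts: 17 rate a practice "essential".
cvr = content_validity_ratio(17, 20)
print(round(cvr, 2))   # 0.7
print(cvr >= 0.42)     # True -> practice retained at the stated cut-off
```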

  14. State of the art in the validation of screening methods for the control of antibiotic residues: is there a need for further development?

    PubMed

    Gaudin, Valérie

    2017-09-01

Screening methods are used as a first-line approach to detect the presence of antibiotic residues in food of animal origin. The validation process guarantees that the method is fit-for-purpose, suited to regulatory requirements, and provides evidence of its performance. This article is focused on intra-laboratory validation. The first step in validation is characterisation of performance, and the second step is the validation itself with regard to pre-established criteria. The validation approaches can be absolute (a single method) or relative (comparison of methods), overall (combination of several characteristics in one) or criterion-by-criterion. Various approaches to validation, in the form of regulations, guidelines or standards, are presented and discussed to draw conclusions on their potential application for different residue screening methods, and to determine whether or not they reach the same conclusions. The approach by comparison of methods is not suitable for screening methods for antibiotic residues. The overall approaches, such as probability of detection (POD) and accuracy profile, are increasingly used in other fields of application. They may be of interest for screening methods for antibiotic residues. Finally, the criterion-by-criterion approach (Decision 2002/657/EC and the European guideline for the validation of screening methods), usually applied to screening methods for antibiotic residues, introduced a major characteristic and an improvement in validation, i.e. the detection capability (CCβ). In conclusion, screening methods are constantly evolving, thanks to the development of new biosensors and liquid chromatography coupled to tandem-mass spectrometry (LC-MS/MS) methods. There have been clear changes in validation approaches over the last 20 years. Continued progress is required, and perspectives for future development of guidelines, regulations and standards for validation are presented here.
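
The detection capability CCβ singled out above can be illustrated with a toy calculation: estimate the probability of detection (POD) from replicate screens at each spiked level, then take the lowest tested level whose false-negative rate is at most β (5% by default). The concentrations and screening outcomes below are hypothetical:

```python
def pod(results):
    """Probability of detection: fraction of positive screens (1 = detected)."""
    return sum(results) / len(results)

def cc_beta(pods_by_conc, beta=0.05):
    """Smallest tested concentration whose false-negative rate <= beta,
    i.e. whose POD >= 1 - beta. Returns None if no level qualifies."""
    for conc in sorted(pods_by_conc):
        if pods_by_conc[conc] >= 1 - beta:
            return conc
    return None

# Hypothetical replicate screens at spiked concentrations (ug/kg)
screens = {25:  [1, 0, 1, 1, 0, 1, 1, 0, 1, 1],   # POD 0.7
           50:  [1, 1, 1, 1, 1, 1, 1, 1, 1, 0],   # POD 0.9
           100: [1, 1, 1, 1, 1, 1, 1, 1, 1, 1]}   # POD 1.0
pods = {c: pod(r) for c, r in screens.items()}
print(cc_beta(pods))   # 100
```

A real validation would use many more replicates per level; the structure of the estimate is the same.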

  15. Determination of the transmission coefficients for quantum structures using FDTD method.

    PubMed

    Peng, Yangyang; Wang, Xiaoying; Sui, Wenquan

    2011-12-01

The purpose of this work is to develop a simple method to incorporate quantum effects into traditional finite-difference time-domain (FDTD) simulators, which could make it possible to co-simulate systems that include both quantum structures and traditional components. In this paper, the tunneling transmission coefficient is calculated by solving the time-domain Schrödinger equation with a developed FDTD technique, called the FDTD-S method. To validate the feasibility of the method, a simple resonant tunneling diode (RTD) structure model was simulated using the proposed method. The good agreement between the numerical and analytical results proves its accuracy. The effectiveness and accuracy of this approach make it a potential method for the analysis and design of hybrid systems that include quantum structures and traditional components.

  16. Extension of the ratio method to low energy

    DOE PAGES

    Colomer, Frederic; Capel, Pierre; Nunes, F. M.; ...

    2016-05-25

The ratio method has been proposed as a means to remove the reaction model dependence in the study of halo nuclei. Originally, it was developed for higher energies, but given the potential interest in applying the method at lower energy, in this work we explore its validity at 20 MeV/nucleon. The ratio method takes the ratio of the breakup angular distribution and the summed angular distribution (which includes elastic, inelastic and breakup) and uses this observable to constrain the features of the original halo wave function. In this work we use the Continuum Discretized Coupled Channel method and the Coulomb-corrected Dynamical Eikonal Approximation for the study. We study the reactions of 11Be on 12C, 40Ca and 208Pb at 20 MeV/nucleon. We compare the various theoretical descriptions and explore the dependence of our result on the core-target interaction. Lastly, our study demonstrates that the ratio method is valid at these lower beam energies.

  17. Method of analysis at the U.S. Geological Survey California Water Science Center, Sacramento Laboratory - determination of haloacetic acid formation potential, method validation, and quality-control practices

    USGS Publications Warehouse

    Zazzi, Barbara C.; Crepeau, Kathryn L.; Fram, Miranda S.; Bergamaschi, Brian A.

    2005-01-01

    An analytical method for the determination of haloacetic acid formation potential of water samples has been developed by the U.S. Geological Survey California Water Science Center Sacramento Laboratory. The haloacetic acid formation potential is measured by dosing water samples with chlorine under specified conditions of pH, temperature, incubation time, darkness, and residual-free chlorine. The haloacetic acids formed are bromochloroacetic acid, bromodichloroacetic acid, dibromochloroacetic acid, dibromoacetic acid, dichloroacetic acid, monobromoacetic acid, monochloroacetic acid, tribromoacetic acid, and trichloroacetic acid. They are extracted, methylated, and then analyzed using a gas chromatograph equipped with an electron capture detector. Method validation experiments were performed to determine the method accuracy, precision, and detection limit for each of the compounds. Method detection limits for these nine haloacetic acids ranged from 0.11 to 0.45 microgram per liter. Quality-control practices include the use of blanks, quality-control samples, calibration verification standards, surrogate recovery, internal standard, matrix spikes, and duplicates.
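
The method detection limits reported above are conventionally derived from replicate low-level spikes; a sketch of the common EPA-style calculation (hypothetical replicate values, with the Student's t value hard-coded for seven replicates):

```python
import statistics

def method_detection_limit(replicates, t_99=3.143):
    """EPA-style MDL: Student's t (n-1 degrees of freedom, 99% confidence)
    times the sample standard deviation of low-level spike replicates.
    t_99 = 3.143 corresponds to n = 7 replicates."""
    return t_99 * statistics.stdev(replicates)

# Hypothetical 7 replicate measurements of a spiked haloacetic acid (ug/L)
spikes = [0.52, 0.44, 0.48, 0.55, 0.41, 0.50, 0.46]
print(round(method_detection_limit(spikes), 2))   # 0.15
```

The 0.11-0.45 microgram-per-liter range quoted in the abstract is consistent with this order of magnitude.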

  18. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    DOE PAGES

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; ...

    2015-12-21

This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptops to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction, such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  19. Design and Optimization of a Chemometric-Assisted Spectrophotometric Determination of Telmisartan and Hydrochlorothiazide in Pharmaceutical Dosage Form

    PubMed Central

    Lakshmi, KS; Lakshmi, S

    2010-01-01

    Two chemometric methods were developed for the simultaneous determination of telmisartan and hydrochlorothiazide. The chemometric methods applied were principal component regression (PCR) and partial least square (PLS-1). These approaches were successfully applied to quantify the two drugs in the mixture using the information included in the UV absorption spectra of appropriate solutions in the range of 200-350 nm with the intervals Δλ = 1 nm. The calibration of PCR and PLS-1 models was evaluated by internal validation (prediction of compounds in its own designed training set of calibration) and by external validation over laboratory prepared mixtures and pharmaceutical preparations. The PCR and PLS-1 methods require neither any separation step, nor any prior graphical treatment of the overlapping spectra of the two drugs in a mixture. The results of PCR and PLS-1 methods were compared with each other and a good agreement was found. PMID:21331198
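
PCR and PLS-1 extract latent variables from the full 200-350 nm spectra, which is beyond a short sketch; as a much-simplified illustration of resolving two overlapping absorbers without a separation step, here is a classical two-wavelength Beer-Lambert solution with hypothetical absorptivities (not the chemometric models of the paper):

```python
def solve_two_component(a11, a12, a21, a22, abs1, abs2):
    """Solve the 2x2 Beer-Lambert system
         abs1 = a11*c1 + a12*c2
         abs2 = a21*c1 + a22*c2
    for the concentrations c1, c2 by Cramer's rule."""
    det = a11 * a22 - a12 * a21
    c1 = (abs1 * a22 - a12 * abs2) / det
    c2 = (a11 * abs2 - abs1 * a21) / det
    return c1, c2

# Hypothetical absorptivities of drug 1 and drug 2 at two wavelengths
c1_true, c2_true = 10.0, 5.0
abs1 = 0.8 * c1_true + 0.3 * c2_true   # mixture absorbance at wavelength 1
abs2 = 0.2 * c1_true + 0.9 * c2_true   # mixture absorbance at wavelength 2
c1_est, c2_est = solve_two_component(0.8, 0.3, 0.2, 0.9, abs1, abs2)
print(round(c1_est, 6), round(c2_est, 6))   # 10.0 5.0
```

PCR/PLS generalize this idea to hundreds of wavelengths, trading the exact solution for a regression that is robust to noise and overlap.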

  1. Quantitative determination of additive Chlorantraniliprole in Abamectin preparation: Investigation of bootstrapping soft shrinkage approach by mid-infrared spectroscopy

    NASA Astrophysics Data System (ADS)

    Yan, Hong; Song, Xiangzhong; Tian, Kuangda; Chen, Yilin; Xiong, Yanmei; Min, Shungeng

    2018-02-01

A novel method based on mid-infrared (MIR) spectroscopy, which enables the determination of Chlorantraniliprole in Abamectin within minutes, is proposed. We further evaluate the prediction ability of four wavelength selection methods: the bootstrapping soft shrinkage approach (BOSS), Monte Carlo uninformative variable elimination (MCUVE), genetic algorithm partial least squares (GA-PLS) and competitive adaptive reweighted sampling (CARS). The results showed that the BOSS method obtained the lowest root mean squared error of cross-validation (RMSECV, 0.0245) and root mean squared error of prediction (RMSEP, 0.0271), as well as the highest coefficient of determination of cross-validation (Q²cv, 0.9998) and coefficient of determination of the test set (Q²test, 0.9989), which demonstrates that mid-infrared spectroscopy can be used to detect Chlorantraniliprole in Abamectin conveniently. Meanwhile, a suitable wavelength selection method (BOSS) is essential for conducting a component spectral analysis.
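
The RMSEP and Q² figures quoted above are standard validation statistics; a minimal sketch with hypothetical reference and predicted concentrations:

```python
import math

def rmsep(y_true, y_pred):
    """Root mean squared error of prediction."""
    return math.sqrt(sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true))

def q2(y_true, y_pred):
    """Coefficient of determination for a test set: 1 - SSres/SStot."""
    mean_y = sum(y_true) / len(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1 - ss_res / ss_tot

# Hypothetical reference vs. predicted concentrations
y_ref = [0.10, 0.20, 0.30, 0.40]
y_hat = [0.11, 0.19, 0.32, 0.38]
print(round(rmsep(y_ref, y_hat), 4))   # 0.0158
print(round(q2(y_ref, y_hat), 3))      # 0.98
```

RMSECV has the same form, but each prediction comes from a model fitted with that sample held out.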

  2. Bayesian data analysis in observational comparative effectiveness research: rationale and examples.

    PubMed

    Olson, William H; Crivera, Concetta; Ma, Yi-Wen; Panish, Jessica; Mao, Lian; Lynch, Scott M

    2013-11-01

    Many comparative effectiveness research and patient-centered outcomes research studies will need to be observational for one or both of two reasons: first, randomized trials are expensive and time-consuming; and second, only observational studies can answer some research questions. It is generally recognized that there is a need to increase the scientific validity and efficiency of observational studies. Bayesian methods for the design and analysis of observational studies are scientifically valid and offer many advantages over frequentist methods, including, importantly, the ability to conduct comparative effectiveness research/patient-centered outcomes research more efficiently. Bayesian data analysis is being introduced into outcomes studies that we are conducting. Our purpose here is to describe our view of some of the advantages of Bayesian methods for observational studies and to illustrate both realized and potential advantages by describing studies we are conducting in which various Bayesian methods have been or could be implemented.
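
As a toy example of the Bayesian machinery the authors advocate, a conjugate Beta-Binomial update for a response rate (hypothetical prior and counts, not from the studies described) shows how prior evidence and observed data combine into a posterior:

```python
def posterior_beta(prior_a, prior_b, successes, failures):
    """Beta(prior_a, prior_b) prior + Binomial likelihood -> Beta posterior.
    Conjugacy means the update is just addition of counts."""
    return prior_a + successes, prior_b + failures

# Weak Beta(2, 2) prior; 30 responders and 10 non-responders observed
a, b = posterior_beta(2, 2, 30, 10)
post_mean = a / (a + b)
print(a, b, round(post_mean, 3))   # 32 12 0.727
```

The same conjugate logic underlies the efficiency gains mentioned above: accumulating evidence across studies is a matter of updating, not refitting.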

  3. Discrimination of whisky brands and counterfeit identification by UV-Vis spectroscopy and multivariate data analysis.

    PubMed

    Martins, Angélica Rocha; Talhavini, Márcio; Vieira, Maurício Leite; Zacca, Jorge Jardim; Braga, Jez Willian Batista

    2017-08-15

The discrimination of whisky brands and counterfeit identification were performed by UV-Vis spectroscopy combined with partial least squares for discriminant analysis (PLS-DA). In the proposed method all spectra were obtained with no sample preparation. The discrimination models were built with the employment of seven whisky brands: Red Label, Black Label, White Horse, Chivas Regal (12 years), Ballantine's Finest, Old Parr and Natu Nobilis. The method was validated with an independent test set of authentic samples belonging to the seven selected brands and another eleven brands not included in the training samples. Furthermore, seventy-three counterfeit samples were also used to validate the method. Results showed correct classification rates for genuine and false samples over 98.6% and 93.1%, respectively, indicating that the method can be helpful for the forensic analysis of whisky samples.
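
PLS-DA itself needs a full linear-algebra stack; as a simplified stand-in, a nearest-centroid classifier over hypothetical three-point "spectra" illustrates how brand discrimination and a correct-classification rate can be computed:

```python
import math

def centroid(spectra):
    """Mean spectrum of a list of equal-length spectra."""
    return [sum(v) / len(spectra) for v in zip(*spectra)]

def classify(spectrum, centroids):
    """Assign a spectrum to the brand with the nearest centroid (Euclidean)."""
    return min(centroids, key=lambda b: math.dist(spectrum, centroids[b]))

# Hypothetical training spectra (absorbance at three wavelengths) per brand
train = {"brand_A": [[0.9, 0.5, 0.1], [1.0, 0.4, 0.2]],
         "brand_B": [[0.2, 0.8, 0.9], [0.3, 0.9, 1.0]]}
cents = {b: centroid(s) for b, s in train.items()}

# Independent test set: (spectrum, true brand)
test = [([0.95, 0.45, 0.15], "brand_A"), ([0.25, 0.85, 0.95], "brand_B")]
correct = sum(classify(x, cents) == y for x, y in test)
print(correct / len(test))   # 1.0
```

PLS-DA replaces raw distances with projections onto latent variables, which is what makes it robust on real, overlapping UV-Vis spectra.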

  4. Assessment of Hybrid High-Order methods on curved meshes and comparison with discontinuous Galerkin methods

    NASA Astrophysics Data System (ADS)

    Botti, Lorenzo; Di Pietro, Daniele A.

    2018-10-01

    We propose and validate a novel extension of Hybrid High-Order (HHO) methods to meshes featuring curved elements. HHO methods are based on discrete unknowns that are broken polynomials on the mesh and its skeleton. We propose here the use of physical frame polynomials over mesh elements and reference frame polynomials over mesh faces. With this choice, the degree of face unknowns must be suitably selected in order to recover on curved meshes the same convergence rates as on straight meshes. We provide an estimate of the optimal face polynomial degree depending on the element polynomial degree and on the so-called effective mapping order. The estimate is numerically validated through specifically crafted numerical tests. All test cases are conducted considering two- and three-dimensional pure diffusion problems, and include comparisons with discontinuous Galerkin discretizations. The extension to agglomerated meshes with curved boundaries is also considered.

  5. Validation of a Smartphone Image-Based Dietary Assessment Method for Pregnant Women

    PubMed Central

    Ashman, Amy M.; Collins, Clare E.; Brown, Leanne J.; Rae, Kym M.; Rollo, Megan E.

    2017-01-01

    Image-based dietary records could lower participant burden associated with traditional prospective methods of dietary assessment. They have been used in children, adolescents and adults, but have not been evaluated in pregnant women. The current study evaluated relative validity of the DietBytes image-based dietary assessment method for assessing energy and nutrient intakes. Pregnant women collected image-based dietary records (via a smartphone application) of all food, drinks and supplements consumed over three non-consecutive days. Intakes from the image-based method were compared to intakes collected from three 24-h recalls, taken on random days; once per week, in the weeks following the image-based record. Data were analyzed using nutrient analysis software. Agreement between methods was ascertained using Pearson correlations and Bland-Altman plots. Twenty-five women (27 recruited, one withdrew, one incomplete), median age 29 years, 15 primiparas, eight Aboriginal Australians, completed image-based records for analysis. Significant correlations between the two methods were observed for energy, macronutrients and fiber (r = 0.58–0.84, all p < 0.05), and for micronutrients both including (r = 0.47–0.94, all p < 0.05) and excluding (r = 0.40–0.85, all p < 0.05) supplements in the analysis. Bland-Altman plots confirmed acceptable agreement with no systematic bias. The DietBytes method demonstrated acceptable relative validity for assessment of nutrient intakes of pregnant women. PMID:28106758
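
The Bland-Altman analysis used above reduces to a mean bias and 95% limits of agreement on the paired differences; a sketch with hypothetical energy intakes:

```python
import statistics

def bland_altman(method_a, method_b):
    """Mean bias and 95% limits of agreement between two paired methods."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical energy intakes (MJ/day): image-based record vs. 24-h recall
image = [8.2, 9.1, 7.5, 10.0, 8.8]
recall = [8.0, 9.4, 7.2, 9.7, 9.0]
bias, lo, hi = bland_altman(image, recall)
print(round(bias, 2), round(lo, 2), round(hi, 2))   # 0.06 -0.5 0.62
```

"No systematic bias", as reported in the abstract, corresponds to a bias near zero with the limits of agreement straddling it symmetrically.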

  6. Risk-Based Prioritization Method for the Classification of Groundwater Pollution from Hazardous Waste Landfills.

    PubMed

    Yang, Yu; Jiang, Yong-Hai; Lian, Xin-Ying; Xi, Bei-Dou; Ma, Zhi-Fei; Xu, Xiang-Jian; An, Da

    2016-12-01

Hazardous waste landfill sites are a significant source of groundwater pollution. To ensure that landfills with a significantly high risk of groundwater contamination are properly managed, a risk-based ranking method for groundwater contamination is needed. In this research, a risk-based prioritization method for the classification of groundwater pollution from hazardous waste landfills was established. The method encompasses five phases: risk pre-screening, indicator selection, characterization, classification and, lastly, validation. In the risk ranking index system employed here, 14 indicators involving hazardous waste landfills and migration in the vadose zone as well as the aquifer were selected. The boundary of each indicator was determined by K-means cluster analysis, and the weight of each indicator was calculated by principal component analysis. These methods were applied to 37 hazardous waste landfills in China. The result showed that the risk of groundwater contamination from hazardous waste landfills could be ranked into three classes, from low to high risk. In all, 62.2% of the hazardous waste landfill sites were classified in the low and medium risk classes. The process simulation method and standardized anomalies were used to validate the result of the risk ranking; the results were consistent with the simulated results related to the characteristics of contamination. The risk ranking method is feasible and valid and can provide reference data for risk management of groundwater contamination at hazardous waste landfill sites.
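
A minimal sketch of the ranking idea, with hypothetical indicators, weights and class boundaries rather than the paper's 14-indicator, cluster-derived system: min-max normalize each indicator, combine with weights, and cut the score into low/medium/high classes:

```python
def normalize(values):
    """Min-max normalize one indicator column to [0, 1]."""
    lo, hi = min(values), max(values)
    return [(v - lo) / (hi - lo) for v in values]

def risk_scores(sites, weights):
    """Weighted sum of normalized indicators, one score per site."""
    cols = [normalize(col) for col in zip(*sites)]
    rows = list(zip(*cols))
    return [sum(w * v for w, v in zip(weights, row)) for row in rows]

def classify(score):
    """Cut scores into three classes (hypothetical boundaries)."""
    return "low" if score < 0.33 else "medium" if score < 0.66 else "high"

# Each site: [leachate volume, liner failure index, aquifer vulnerability]
sites = [[10, 0.1, 0.2], [50, 0.5, 0.6], [90, 0.9, 0.9]]
weights = [0.4, 0.3, 0.3]   # e.g. derived from a principal component analysis
scores = risk_scores(sites, weights)
print([classify(s) for s in scores])   # ['low', 'medium', 'high']
```

In the paper, the class boundaries come from K-means clustering and the weights from PCA loadings; the combination step is the same.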

  7. Fracture mechanics life analytical methods verification testing

    NASA Technical Reports Server (NTRS)

    Favenesi, J. A.; Clemmons, T. G.; Lambert, T. J.

    1994-01-01

Verification and validation of the basic information capabilities in NASCRAC have been completed. The basic information includes computation of K versus a, J versus a, and crack opening area versus a. These quantities represent building blocks which NASCRAC uses in its other computations, such as fatigue crack life and tearing instability. Several methods were used to verify and validate the basic information capabilities. Simple configurations, such as the compact tension specimen and a crack in a finite plate, were verified and validated against handbook solutions for simple loads. For general loads using weight functions, offline integration using standard FORTRAN routines was performed. For more complicated configurations, such as corner cracks and semielliptical cracks, NASCRAC solutions were verified and validated against published results and finite element analyses. A few minor problems were identified in the basic information capabilities for the simple configurations. In the more complicated configurations, significant differences between NASCRAC and reference solutions were observed because NASCRAC calculates its solutions as averaged values across the entire crack front, whereas the reference solutions were computed for a single point.
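
The K-versus-a building block can be checked against handbook solutions such as the center crack in a finite-width plate with the secant width correction; a sketch with hypothetical stress and geometry (not NASCRAC's cases):

```python
import math

def k_center_crack(stress, a, width):
    """Stress intensity factor K = sigma * sqrt(pi*a) * Y for a center crack
    of half-length a in a plate of width 2b = width, using the secant
    finite-width correction Y = sqrt(sec(pi*a / (2*b)))."""
    b = width / 2
    y = math.sqrt(1 / math.cos(math.pi * a / (2 * b)))
    return stress * math.sqrt(math.pi * a) * y

# K grows with crack length a (stress in MPa, lengths in m -> K in MPa*sqrt(m))
for a in (0.005, 0.010, 0.020):
    print(round(k_center_crack(100.0, a, 0.100), 2))
```

Plotting such a handbook curve against a code's K-versus-a output is exactly the kind of basic-information check described above.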

  8. A systematic review of measures of HIV/AIDS stigma in paediatric HIV-infected and HIV-affected populations

    PubMed Central

    McAteer, Carole Ian; Truong, Nhan-Ai Thi; Aluoch, Josephine; Deathe, Andrew Roland; Nyandiko, Winstone M; Marete, Irene; Vreeman, Rachel Christine

    2016-01-01

Introduction HIV-related stigma impacts the quality of life and care management of HIV-infected and HIV-affected individuals, but how we measure stigma and its impact on children and adolescents has less often been described. Methods We conducted a systematic review of studies that measured HIV-related stigma with a quantitative tool in paediatric HIV-infected and HIV-affected populations. Results and discussion Varying measures have been used to assess stigma in paediatric populations, with most studies utilizing the full or a variant form of the HIV Stigma Scale, which has been validated in adult populations and utilized with paediatric populations in Africa, Asia and the United States. Other common measures included the Perceived Public Stigma Against Children Affected by HIV, primarily utilized and validated in China. Few studies employed item validation techniques with the population of interest, although scales were used in a different cultural context from the origin of the scale. Conclusions Many stigma measures have been used to assess HIV stigma in paediatric populations, globally, but few have employed methods for cultural adaptation and content validity. PMID:27717409

  9. Forensic Science Research and Development at the National Institute of Justice: Opportunities in Applied Physics

    NASA Astrophysics Data System (ADS)

    Dutton, Gregory

Forensic science is a collection of applied disciplines that draws from all branches of science. A key question in forensic analysis is: to what degree do a piece of evidence and a known reference sample share characteristics? Quantification of similarity, estimation of uncertainty, and determination of relevant population statistics are of current concern. A 2016 PCAST report questioned the foundational validity and the validity in practice of several forensic disciplines, including latent fingerprints, firearms comparisons and DNA mixture interpretation. One recommendation was the advancement of objective, automated comparison methods based on image analysis and machine learning. These concerns parallel the National Institute of Justice's ongoing R&D investments in applied chemistry, biology and physics. NIJ maintains a funding program spanning from fundamental research with potential for forensic application to the validation of novel instruments and methods. Since 2009, NIJ has funded over $179 million in external research to support the advancement of accuracy, validity and efficiency in the forensic sciences. An overview of NIJ's programs will be presented, with examples of relevant projects from fluid dynamics, 3D imaging, acoustics, and materials science.

  10. Validation of verbal autopsy methods using hospital medical records: a case study in Vietnam.

    PubMed

    Tran, Hong Thi; Nguyen, Hoa Phuong; Walker, Sue M; Hill, Peter S; Rao, Chalapati

    2018-05-18

Information on causes of death (COD) is crucial for measuring the health outcomes of populations and progress towards the Sustainable Development Goals. In many countries, such as Vietnam, where the civil registration and vital statistics (CRVS) system is dysfunctional, information on vital events will continue to rely on verbal autopsy (VA) methods. This study assesses the validity of VA methods used in Vietnam and provides recommendations on methods for implementing VA validation studies in Vietnam. This validation study was conducted on a sample of 670 deaths from a recent VA study in Quang Ninh province. The study covered 116 cases from this sample, which met three inclusion criteria: (a) the death occurred within 30 days of discharge after the last hospitalisation; (b) medical records (MRs) for the deceased were available from the respective hospitals; and (c) the medical record mentioned that the patient was terminally ill at discharge. For each death, the underlying cause of death (UCOD) identified from MRs was compared to the UCOD from VA. The validity of VA diagnoses for major causes of death was measured using sensitivity, specificity and positive predictive value (PPV). The sensitivity of VA was at least 75% in identifying some leading CODs such as stroke, road traffic accidents and several site-specific cancers. However, sensitivity was less than 50% for other important causes, including ischemic heart disease, chronic obstructive pulmonary disease, and diabetes. Overall, there was 57% agreement between UCOD from VA and MR, which increased to 76% when multiple causes from VA were compared to the UCOD from MR. Our findings suggest that VA is a valid method to ascertain UCOD in contexts such as Vietnam. Furthermore, within cultural contexts in which patients prefer to die at home instead of in a healthcare facility, using the available MRs as the gold standard may be meaningful to the extent that recall bias from the interval between last hospital discharge and death can be minimized. Therefore, future studies should evaluate the validity of MRs as a gold standard for VA studies in contexts similar to the Vietnamese context.
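
The validity measures used in the study follow directly from a VA-versus-MR confusion matrix for each cause of death; a sketch with hypothetical counts (chosen so sensitivity is 75%, as for the leading causes above):

```python
def validity_measures(tp, fp, fn, tn):
    """Sensitivity, specificity and PPV of VA against the MR gold standard
    for one cause of death."""
    sensitivity = tp / (tp + fn)   # MR-positive deaths that VA also flags
    specificity = tn / (tn + fp)   # MR-negative deaths that VA also clears
    ppv = tp / (tp + fp)           # VA-positive calls that MR confirms
    return sensitivity, specificity, ppv

# Hypothetical counts for one cause (e.g. stroke): VA vs. medical records
sens, spec, ppv = validity_measures(tp=30, fp=10, fn=10, tn=66)
print(round(sens, 2), round(spec, 2), round(ppv, 2))   # 0.75 0.87 0.75
```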

  11. GetReal in mathematical modelling: a review of studies predicting drug effectiveness in the real world.

    PubMed

    Panayidou, Klea; Gsteiger, Sandro; Egger, Matthias; Kilcher, Gablu; Carreras, Máximo; Efthimiou, Orestis; Debray, Thomas P A; Trelle, Sven; Hummel, Noemi

    2016-09-01

The performance of a drug in a clinical trial setting often does not reflect its effect in daily clinical practice. In this third of three reviews, we examine the applications that have been used in the literature to predict real-world effectiveness from randomized controlled trial efficacy data. We searched MEDLINE, EMBASE from inception to March 2014, the Cochrane Methodology Register, and websites of key journals and organisations and reference lists. We extracted data on the type of model and predictions, data sources, validation and sensitivity analyses, disease area and software. We identified 12 articles in which four approaches were used: multi-state models, discrete event simulation models, physiology-based models and survival and generalized linear models. Studies predicted outcomes over longer time periods in different patient populations, including patients with lower levels of adherence or persistence to treatment or examined doses not tested in trials. Eight studies included individual patient data. Seven examined cardiovascular and metabolic diseases and three neurological conditions. Most studies included sensitivity analyses, but external validation was performed in only three studies. We conclude that mathematical modelling to predict real-world effectiveness of drug interventions is not widely used at present and not well validated. © 2016 The Authors Research Synthesis Methods Published by John Wiley & Sons Ltd.

  12. A satellite relative motion model including J_2 and J_3 via Vinti's intermediary

    NASA Astrophysics Data System (ADS)

    Biria, Ashley D.; Russell, Ryan P.

    2018-03-01

    Vinti's potential is revisited for analytical propagation of the main satellite problem, this time in the context of relative motion. A particular version of Vinti's spheroidal method is chosen that is valid for arbitrary elliptical orbits, encapsulating J_2, J_3, and generally a partial J_4 in an orbit propagation theory without recourse to perturbation methods. As a child of Vinti's solution, the proposed relative motion model inherits these properties. Furthermore, the problem is solved in oblate spheroidal elements, leading to large regions of validity for the linearization approximation. After offering several enhancements to Vinti's solution, including boosts in accuracy and removal of some singularities, the proposed model is derived and subsequently reformulated so that Vinti's solution is piecewise differentiable. While the model is valid for the critical inclination and nonsingular in the element space, singularities remain in the linear transformation from Earth-centered inertial coordinates to spheroidal elements when the eccentricity is zero or for nearly equatorial orbits. The new state transition matrix is evaluated against numerical solutions including the J_2 through J_5 terms for a wide range of chief orbits and separation distances. The solution is also compared with side-by-side simulations of the original Gim-Alfriend state transition matrix, which considers the J_2 perturbation. Code for computing the resulting state transition matrix and associated reference frame and coordinate transformations is provided online as supplementary material.

  13. Research Methods in Healthcare Epidemiology and Antimicrobial Stewardship: Use of Administrative and Surveillance Databases.

    PubMed

    Drees, Marci; Gerber, Jeffrey S; Morgan, Daniel J; Lee, Grace M

    2016-11-01

    Administrative and surveillance data are used frequently in healthcare epidemiology and antimicrobial stewardship (HE&AS) research because of their wide availability and efficiency. However, data quality issues exist, requiring careful consideration and potential validation of data. This methods paper presents key considerations for using administrative and surveillance data in HE&AS, including types of data available and potential use, data limitations, and the importance of validation. After discussing these issues, we review examples of HE&AS research using administrative data with a focus on scenarios when their use may be advantageous. A checklist is provided to help aid study development in HE&AS using administrative data. Infect Control Hosp Epidemiol 2016;1-10.

  14. Hinge Moment Coefficient Prediction Tool and Control Force Analysis of Extra-300 Aerobatic Aircraft

    NASA Astrophysics Data System (ADS)

    Nurohman, Chandra; Arifianto, Ony; Barecasco, Agra

    2018-04-01

This paper presents the development of a tool for predicting the hinge moment coefficients of subsonic aircraft based on Roskam's method, including its validation and its application to an Extra-300. The hinge moment coefficients are used to predict the stick forces of the aircraft during several aerobatic maneuvers, i.e. inside loop, half Cuban 8, split-S, and aileron roll. The maximum longitudinal stick force is 566.97 N, occurring in the inside loop, while the maximum lateral stick force is 340.82 N, occurring in the aileron roll. Furthermore, validation of the hinge moment prediction method is performed using Cessna 172 data.
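
The chain from hinge moment coefficient to stick force can be sketched with the standard relations H = Ch·q·Se·ce and F = G·H; all coefficients, geometry and the gearing ratio below are hypothetical, not the Extra-300 data:

```python
def hinge_moment(ch, q, s_e, c_e):
    """Control-surface hinge moment H = Ch * q * S_e * c_e, with dynamic
    pressure q, surface area S_e and mean control-surface chord c_e."""
    return ch * q * s_e * c_e

def stick_force(gearing, h):
    """Pilot force F = G * H, with gearing G relating stick travel to
    surface deflection."""
    return gearing * h

# Hypothetical flight condition: 70 m/s at sea level (rho = 1.225 kg/m^3)
q = 0.5 * 1.225 * 70.0 ** 2
h = hinge_moment(ch=-0.02, q=q, s_e=0.8, c_e=0.25)
print(round(stick_force(3.0, abs(h)), 1))   # 36.0
```

In the paper, Ch itself is built up from Roskam's component estimates (Ch0, Ch_alpha, Ch_delta) before entering this chain.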

  15. Semipermeable Hollow Fiber Phantoms for Development and Validation of Perfusion-Sensitive MR Methods and Signal Models

    PubMed Central

    Anderson, J.R.; Ackerman, J.J.H.; Garbow, J.R.

    2015-01-01

    Two semipermeable, hollow fiber phantoms for the validation of perfusion-sensitive magnetic resonance methods and signal models are described. Semipermeable hollow fibers harvested from a standard commercial hemodialysis cartridge serve to mimic tissue capillary function. Flow of aqueous media through the fiber lumen is achieved with a laboratory-grade peristaltic pump. Diffusion of water and solute species (e.g., Gd-based contrast agent) occurs across the fiber wall, allowing exchange between the lumen and the extralumenal space. Phantom design attributes include: i) small physical size, ii) easy and low-cost construction, iii) definable compartment volumes, and iv) experimental control over media content and flow rate. PMID:26167136

  16. Excavator Design Validation

    NASA Technical Reports Server (NTRS)

    Pholsiri, Chalongrath; English, James; Seberino, Charles; Lim, Yi-Je

    2010-01-01

    The Excavator Design Validation tool verifies excavator designs by automatically generating control systems and modeling their performance in an accurate simulation of their expected environment. Part of this software design includes interfacing with human operators, who can be included in simulation-based studies and validation. This is essential for assessing productivity, versatility, and reliability. This software combines automatic control system generation from CAD (computer-aided design) models, rapid validation of complex mechanism designs, and detailed models of the environment including soil, dust, temperature, remote supervision, and communication latency to create a system of high value. Unique algorithms have been created for controlling and simulating complex robotic mechanisms automatically from just a CAD description. These algorithms are implemented as a commercial cross-platform C++ software toolkit that is configurable using the Extensible Markup Language (XML). The algorithms work with virtually any mobile robotic mechanism using module descriptions that adhere to the XML standard. In addition, high-fidelity, real-time physics-based simulation algorithms have also been developed that include models of internal forces and the forces produced when a mechanism interacts with the outside world. This capability is combined with an innovative organization for simulation algorithms, new regolith simulation methods, and a unique control and study architecture to make powerful tools with the potential to transform the way NASA verifies and compares excavator designs. Energid's Actin software has been leveraged for this design validation. The architecture includes parametric and Monte Carlo studies tailored for validation of excavator designs and their control by remote human operators. It also includes the ability to interface with third-party software and human-input devices.
Two types of simulation models have been adapted: high-fidelity discrete element models and fast analytical models. By using the first to establish parameters for the second, a system has been created that can be executed in real time, or faster than real time, on a desktop PC. This allows Monte Carlo simulations to be performed on a computer platform available to all researchers, and it allows human interaction to be included in a real-time simulation process. Metrics on excavator performance are established that work with the simulation architecture. Both static and dynamic metrics are included.

  17. Quantitative determination and validation of octreotide acetate using 1H-NMR spectroscopy with internal standard method.

    PubMed

    Yu, Chen; Zhang, Qian; Xu, Peng-Yao; Bai, Yin; Shen, Wen-Bin; Di, Bin; Su, Meng-Xiang

    2018-01-01

    Quantitative nuclear magnetic resonance (qNMR) is a well-established technique in quantitative analysis. We present a validated 1H-qNMR method for the assay of octreotide acetate, a cyclic octapeptide. Deuterium oxide was used to remove the undesired exchangeable peaks (proton exchange), in order to isolate the quantitative signals in the crowded spectrum of the peptide and ensure precise quantitative analysis. Gemcitabine hydrochloride was chosen as the internal standard. Experimental conditions, including relaxation delay time, number of scans, and pulse angle, were optimized first. Method validation was then carried out in terms of selectivity, stability, linearity, precision, and robustness. The assay result was compared with that obtained by high-performance liquid chromatography, the method given in the Chinese Pharmacopoeia. The statistical F test, Student's t test, and a nonparametric test at the 95% confidence level indicate that there was no significant difference between the two methods. qNMR is a simple and accurate quantitative tool with no need for specific corresponding reference standards. It has potential for the quantitative analysis of other peptide drugs and for the standardization of the corresponding reference standards. Copyright © 2017 John Wiley & Sons, Ltd.
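The internal-standard assay described above reduces to a simple ratio calculation. A minimal sketch, assuming the standard qNMR purity relation (the function name and the illustrative numbers are ours, not taken from the paper):

```python
def qnmr_purity(I_x, I_std, N_x, N_std, M_x, M_std, m_x, m_std, P_std):
    """Mass-fraction purity of the analyte from relative peak integrals (I),
    numbers of protons behind each signal (N), molar masses (M), weighed
    masses (m), and the purity of the internal standard (P_std).
    Standard qNMR internal-standard relation (assumed, illustrative)."""
    return (I_x / I_std) * (N_std / N_x) * (M_x / M_std) * (m_std / m_x) * P_std

# Illustrative call: equal integrals, analyte signal from 2 protons vs. 1,
# analyte molar mass twice the standard's, equal weighed masses, pure standard.
purity = qnmr_purity(1.0, 1.0, 2, 1, 100.0, 50.0, 10.0, 10.0, 1.0)
```

A quick sanity check on such an implementation: the proton-count and molar-mass ratios above exactly cancel, so the computed purity equals the standard's.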

  18. In silico prediction of Tetrahymena pyriformis toxicity for diverse industrial chemicals with substructure pattern recognition and machine learning methods.

    PubMed

    Cheng, Feixiong; Shen, Jie; Yu, Yue; Li, Weihua; Liu, Guixia; Lee, Philip W; Tang, Yun

    2011-03-01

    There is an increasing need for rapid safety assessment of chemicals by both industry and regulatory agencies throughout the world. In silico techniques are practical alternatives in environmental hazard assessment, especially for addressing the persistence, bioaccumulation, and toxicity potentials of organic chemicals. Tetrahymena pyriformis toxicity is often used as a toxic endpoint. In this study, 1571 diverse unique chemicals were collected from the literature, composing the largest diverse data set for T. pyriformis toxicity. Classification models of T. pyriformis toxicity were developed with substructure pattern recognition and several machine learning methods, including support vector machine (SVM), C4.5 decision tree, k-nearest neighbors, and random forest. The results of 5-fold cross-validation showed that the SVM method performed better than the other algorithms. The overall predictive accuracies of the SVM classification model with a radial basis function kernel were 92.2% for the 5-fold cross-validation and 92.6% for the external validation set, respectively. Furthermore, several representative substructure patterns characterizing T. pyriformis toxicity were identified via information gain analysis. Copyright © 2010 Elsevier Ltd. All rights reserved.
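The 5-fold cross-validation protocol reported above can be illustrated independently of any particular learner. A toy pure-Python sketch, with a nearest-centroid classifier standing in for the SVM and made-up separable data (purely to show the protocol, not the paper's pipeline):

```python
def k_fold_indices(n, k=5):
    """Yield (train, test) index lists for k contiguous folds of n samples."""
    fold = n // k
    for i in range(k):
        test = list(range(i * fold, (i + 1) * fold if i < k - 1 else n))
        in_test = set(test)
        train = [j for j in range(n) if j not in in_test]
        yield train, test

def nearest_centroid_acc(X, y, train, test):
    """Accuracy of a toy nearest-centroid classifier on one fold."""
    centroids = {}
    for c in set(y[i] for i in train):
        pts = [X[i] for i in train if y[i] == c]
        centroids[c] = [sum(col) / len(pts) for col in zip(*pts)]
    hits = sum(
        min(centroids,
            key=lambda c: sum((a - b) ** 2
                              for a, b in zip(X[i], centroids[c]))) == y[i]
        for i in test)
    return hits / len(test)

# made-up, cleanly separable data: class 0 near (0, 0), class 1 near (5, 5)
X = [(0.1 * i, 0.1 * i) for i in range(10)] + \
    [(5 + 0.1 * i, 5 + 0.1 * i) for i in range(10)]
y = [0] * 10 + [1] * 10
scores = [nearest_centroid_acc(X, y, tr, te)
          for tr, te in k_fold_indices(len(X), 5)]
```

Averaging the per-fold accuracies gives the cross-validated estimate that the abstract's 92.2% figure refers to (for its own SVM model, on real data).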

  19. Novel rapid liquid chromatography tandem mass spectrometry method for vemurafenib and metabolites in human plasma, including metabolite concentrations at steady state.

    PubMed

    Vikingsson, Svante; Strömqvist, Malin; Svedberg, Anna; Hansson, Johan; Höiom, Veronica; Gréen, Henrik

    2016-08-01

    A novel, rapid, and sensitive liquid chromatography tandem mass spectrometry method for quantification of vemurafenib in human plasma, which for the first time also allows metabolite semi-quantification, was developed and validated to support clinical trials and therapeutic drug monitoring. Vemurafenib was analysed by precipitation with methanol followed by a 1.9 min isocratic liquid chromatography tandem mass spectrometry analysis on an Acquity BEH C18 column with methanol and formic acid as mobile phase, using isotope-labelled internal standards. Analytes were detected in multiple-reaction-monitoring mode on a Xevo TQ. Semi-quantification of vemurafenib metabolites was performed using the same analytical system and sample preparation with gradient elution. The vemurafenib method was successfully validated in the range 0.5-100 μg/mL according to international guidelines. The metabolite method was partially validated owing to the lack of commercially available reference materials. For the first time, steady-state concentration levels for melanoma patients treated with vemurafenib are presented. The low abundance of vemurafenib metabolites suggests that they lack clinical significance. Copyright © 2016 John Wiley & Sons, Ltd.

  20. Development, validation and utilisation of food-frequency questionnaires - a review.

    PubMed

    Cade, Janet; Thompson, Rachel; Burley, Victoria; Warm, Daniel

    2002-08-01

    The purpose of this review is to provide guidance on the development, validation and use of food-frequency questionnaires (FFQs) for different study designs. It does not include any recommendations about the most appropriate method for dietary assessment (e.g. food-frequency questionnaire versus weighed record). A comprehensive search of electronic databases was carried out for publications from 1980 to 1999. Findings from the review were then commented upon and added to by a group of international experts. Recommendations have been developed to aid in the design, validation and use of FFQs. Specific details of each of these areas are discussed in the text. FFQs are being used in a variety of ways and in different study designs. There is no gold standard for directly assessing the validity of FFQs. Nevertheless, the outcome of this review should help those wishing to develop or adapt an FFQ to validate it for its intended use.

  1. A validation of the construct and reliability of an emotional intelligence scale applied to nursing students

    PubMed Central

    Espinoza-Venegas, Maritza; Sanhueza-Alvarado, Olivia; Ramírez-Elizondo, Noé; Sáez-Carrillo, Katia

    2015-01-01

    OBJECTIVE: The current study aimed to validate the construct and reliability of an emotional intelligence scale. METHOD: The Trait Meta-Mood Scale-24 was applied to 349 nursing students. The process included content validation, which involved expert reviews, pilot testing, measurements of reliability using Cronbach's alpha, and factor analysis to corroborate the validity of the theoretical model's construct. RESULTS: Adequate Cronbach coefficients were obtained for all three dimensions, and factor analysis confirmed the scale's dimensions (perception, comprehension, and regulation). CONCLUSION: The Trait Meta-Mood Scale is a reliable and valid tool to measure the emotional intelligence of nursing students. Its use allows for accurate determinations of individuals' abilities to interpret and manage emotions. At the same time, this new construct is of potential importance for measurements in nursing leadership; educational, organizational, and personal improvements; and the establishment of effective relationships with patients. PMID:25806642
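Cronbach's alpha, the reliability measure used above, is straightforward to compute from a respondents-by-items score matrix. A minimal sketch of the standard formula with made-up scores (nothing here is taken from the study's data):

```python
def cronbach_alpha(scores):
    """Cronbach's alpha for a list of respondent rows over k items:
    alpha = k/(k-1) * (1 - sum(item variances) / variance of total scores)."""
    k = len(scores[0])

    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [var([row[j] for row in scores]) for j in range(k)]
    total_var = var([sum(row) for row in scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# perfectly consistent items give alpha = 1.0
alpha = cronbach_alpha([[1, 1], [2, 2], [3, 3]])
```

Values around 0.7 or higher are conventionally read as adequate internal consistency, which is the sense in which the abstract reports "adequate Cronbach coefficients".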

  2. Simulation of anisoplanatic imaging through optical turbulence using numerical wave propagation with new validation analysis

    NASA Astrophysics Data System (ADS)

    Hardie, Russell C.; Power, Jonathan D.; LeMaster, Daniel A.; Droege, Douglas R.; Gladysz, Szymon; Bose-Pillai, Santasri

    2017-07-01

    We present a numerical wave propagation method for simulating imaging of an extended scene under anisoplanatic conditions. While isoplanatic simulation is relatively common, few tools are specifically designed for simulating the imaging of extended scenes under anisoplanatic conditions. We provide a complete description of the proposed simulation tool, including the wave propagation method used. Our approach computes an array of point spread functions (PSFs) for a two-dimensional grid on the object plane. The PSFs are then used in a spatially varying weighted sum operation, with an ideal image, to produce a simulated image with realistic optical turbulence degradation. The degradation includes spatially varying warping and blurring. To produce the PSF array, we generate a series of extended phase screens. Simulated point sources are numerically propagated from an array of positions on the object plane, through the phase screens, and ultimately to the focal plane of the simulated camera. Note that the optical path for each PSF will be different, and thus passes through a different portion of the extended phase screens. These different paths give rise to a spatially varying PSF to produce anisoplanatic effects. We use a method for defining the individual phase screen statistics that we have not seen used in previous anisoplanatic simulations. We also present a validation analysis. In particular, we compare simulated outputs with the theoretical anisoplanatic tilt correlation and a derived differential tilt variance statistic. This is in addition to comparing the long- and short-exposure PSFs and isoplanatic angle. We believe this analysis represents the most thorough validation of an anisoplanatic simulation to date. The current work is also unique in that we simulate and validate both constant and varying Cn2(z) profiles. Furthermore, we simulate sequences with both temporally independent and temporally correlated turbulence effects.
Temporal correlation is introduced by generating even larger extended phase screens and translating this block of screens in front of the propagation area. Our validation analysis shows an excellent match between the simulation statistics and the theoretical predictions. Thus, we think this tool can be used effectively to study optical anisoplanatic turbulence and to aid in the development of image restoration methods.
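The spatially varying weighted-sum operation described above can be sketched in one dimension: blur the ideal image once per grid PSF, then linearly interpolate between the two nearest blurred copies at each sample. Everything below (names, PSFs, data) is an illustrative toy, not the paper's simulation tool:

```python
def convolve(img, psf):
    """1-D convolution with edge clamping; psf length must be odd."""
    r = len(psf) // 2
    n = len(img)
    return [sum(psf[k] * img[min(max(i + k - r, 0), n - 1)]
                for k in range(len(psf)))
            for i in range(n)]

def anisoplanatic_blur(img, psf_grid, grid_pos):
    """Spatially varying blur: one convolution per grid PSF, then linear
    interpolation between the two bracketing blurred copies at each sample."""
    blurred = [convolve(img, p) for p in psf_grid]
    out = []
    for i in range(len(img)):
        j = max(k for k, g in enumerate(grid_pos) if g <= i)
        j2 = min(j + 1, len(grid_pos) - 1)
        w = 0.0 if j == j2 else (i - grid_pos[j]) / (grid_pos[j2] - grid_pos[j])
        out.append((1 - w) * blurred[j][i] + w * blurred[j2][i])
    return out

# sanity case: identity PSFs at both grid points leave the image unchanged
img = [1.0, 2.0, 3.0, 4.0]
ident = [0.0, 1.0, 0.0]
out = anisoplanatic_blur(img, [ident, ident], [0, 3])
```

In the real tool the grid is two-dimensional and each PSF comes from numerical propagation through the phase screens; only the weighted-sum step is mimicked here.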

  3. Leadership: validation of a self-report scale: comment on Dussault, Frenette, and Fernet (2013).

    PubMed

    Chakrabarty, Subhra

    2014-10-01

    In a recent study, Dussault, Frenette, and Fernet (2013) developed a 21-item self-report instrument to measure leadership based on Bass's (1985) transformational/transactional leadership paradigm. The final specification included a third-order dimension (leadership), two second-order dimensions (transactional leadership and transformational leadership), and a first-order dimension (laissez-faire leadership). This note focuses on the need for assessing convergent and discriminant validity of the scale, and on ruling out the potential for common method bias.

  4. Sonic Boom Modeling Technical Challenge

    NASA Technical Reports Server (NTRS)

    Sullivan, Brenda M.

    2007-01-01

    This viewgraph presentation reviews the technical challenges in modeling sonic booms. The goal of this program is to develop knowledge, capabilities and technologies to enable overland supersonic flight. The specific objectives of the modeling are: (1) Develop and validate sonic boom propagation model through realistic atmospheres, including effects of turbulence (2) Develop methods enabling prediction of response of and acoustic transmission into structures impacted by sonic booms (3) Develop and validate psychoacoustic model of human response to sonic booms under both indoor and outdoor listening conditions, using simulators.

  5. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Lundstrom, Blake; Chakraborty, Sudipta; Lauss, Georg

    This paper presents a concise description of state-of-the-art real-time simulation-based testing methods and demonstrates how they can be used independently and/or in combination as an integrated development and validation approach for smart grid DERs and systems. A three-part case study demonstrating the application of this integrated approach at the different stages of development and validation of a system-integrated smart photovoltaic (PV) inverter is also presented. Laboratory testing results and perspectives from two international research laboratories are included in the case study.

  6. Puerto Rican understandings of child disability: methods for the cultural validation of standardized measures of child health.

    PubMed

    Gannotti, Mary E; Handwerker, W Penn

    2002-12-01

    Validating the cultural context of health is important for obtaining accurate and useful information from standardized measures of child health adapted for cross-cultural applications. This paper describes the application of ethnographic triangulation for cultural validation of a measure of childhood disability, the Pediatric Evaluation of Disability Inventory (PEDI) for use with children living in Puerto Rico. The key concepts include macro-level forces such as geography, demography, and economics, specific activities children performed and their key social interactions, beliefs, attitudes, emotions, and patterns of behavior surrounding independence in children and childhood disability, as well as the definition of childhood disability. Methods utilize principal components analysis to establish the validity of cultural concepts and multiple regression analysis to identify intracultural variation. Findings suggest culturally specific modifications to the PEDI, provide contextual information for informed interpretation of test scores, and point to the need to re-standardize normative values for use with Puerto Rican children. Without this type of information, Puerto Rican children may appear more disabled than expected for their level of impairment or not to be making improvements in functional status. The methods also allow for cultural boundaries to be quantitatively established, rather than presupposed. Copyright 2002 Elsevier Science Ltd.

  7. Registration of in vivo MR to histology of rodent brains using blockface imaging

    NASA Astrophysics Data System (ADS)

    Uberti, Mariano; Liu, Yutong; Dou, Huanyu; Mosley, R. Lee; Gendelman, Howard E.; Boska, Michael

    2009-02-01

    Registration of MRI to histopathological sections can enhance bioimaging validation for use in pathobiologic, diagnostic, and therapeutic evaluations. However, commonly used registration methods fall short of this goal due to tissue shrinkage and tearing after brain extraction and preparation. In an attempt to overcome these limitations, we developed a software toolbox using 3D blockface imaging as the common space of reference. This toolbox includes a semi-automatic brain extraction technique using constraint level sets (CLS), 3D reconstruction methods for the blockface and MR volume, and a 2D warping technique using thin-plate splines with landmark optimization. Using this toolbox, the rodent brain volume is first extracted from the whole-head MRI using CLS. The blockface volume is reconstructed, followed by 3D brain MRI registration to the blockface volume to correct the global deformations due to brain extraction and fixation. Finally, registered MRI and histological slices are warped to corresponding blockface images to correct slice-specific deformations. The CLS brain extraction technique was validated by comparison with manual extraction, showing 94% overlap. The image warping technique was validated by calculating target registration error (TRE). Results showed a registration accuracy of TRE < 1 pixel. Lastly, the registration method and the software tools developed were used to validate cell migration in murine human immunodeficiency virus type one encephalitis.
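Target registration error, the validation metric quoted above, is simply the mean distance between corresponding landmarks after registration. A minimal sketch (the landmark values are invented; the toolbox's own TRE computation is not published in the abstract):

```python
import math

def target_registration_error(landmarks_ref, landmarks_reg):
    """Mean Euclidean distance (e.g. in pixels) between corresponding
    landmark pairs in the reference and registered images."""
    dists = [math.dist(p, q) for p, q in zip(landmarks_ref, landmarks_reg)]
    return sum(dists) / len(dists)

# two landmark pairs: one perfectly aligned, one off by a 3-4-5 triangle
tre = target_registration_error([(0, 0), (3, 4)], [(0, 0), (0, 0)])
```

A result below 1 pixel, as reported above, means registered landmarks land within a pixel of their true positions on average.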

  8. Alternatives to animal testing: research, trends, validation, regulatory acceptance.

    PubMed

    Huggins, Jane

    2003-01-01

    Current trends and issues in the development of alternatives to the use of animals in biomedical experimentation are discussed in this position paper. Eight topics are considered and include refinement of acute toxicity assays; eye corrosion/irritation alternatives; skin corrosion/irritation alternatives; contact sensitization alternatives; developmental/reproductive testing alternatives; genetic engineering (transgenic) assays; toxicogenomics; and validation of alternative methods. The discussion of refinement of acute toxicity assays is focused primarily on developments with regard to reduction of the number of animals used in the LD(50) assay. However, the substitution of humane endpoints such as clinical signs of toxicity for lethality in these assays is also evaluated. Alternative assays for eye corrosion/irritation as well as those for skin corrosion/irritation are described with particular attention paid to the outcomes, both successful and unsuccessful, of several validation efforts. Alternative assays for contact sensitization and developmental/reproductive toxicity are presented as examples of methods designed for the examination of interactions between toxins and somewhat more complex physiological systems. Moreover, genetic engineering and toxicogenomics are discussed with an eye toward the future of biological experimentation in general. The implications of gene manipulation for research animals, specifically, are also examined. Finally, validation methods are investigated as to their effectiveness, or lack thereof, and suggestions for their standardization and improvement, as well as implementation are reviewed.

  9. Validity of a Simple Method for Measuring Force-Velocity-Power Profile in Countermovement Jump.

    PubMed

    Jiménez-Reyes, Pedro; Samozino, Pierre; Pareja-Blanco, Fernando; Conceição, Filipe; Cuadrado-Peñafiel, Víctor; González-Badillo, Juan José; Morin, Jean-Benoît

    2017-01-01

    To analyze the reliability and validity of a simple computation method to evaluate force (F), velocity (v), and power (P) output during a countermovement jump (CMJ) suitable for use in field conditions, and to verify the validity of this computation method for computing the CMJ force-velocity (F-v) profile (including unloaded and loaded jumps) in trained athletes. Sixteen high-level male sprinters and jumpers performed maximal CMJs under 6 different load conditions (0-87 kg). A force plate sampling at 1000 Hz was used to record vertical ground-reaction force and derive vertical-displacement data during CMJ trials. For each condition, mean F, v, and P of the push-off phase were determined from both force-plate data (reference method) and simple computation measures based on body mass, jump height (from flight time), and push-off distance, and used to establish the linear F-v relationship for each individual. Mean absolute bias values were 0.9% (± 1.6%), 4.7% (± 6.2%), 3.7% (± 4.8%), and 5% (± 6.8%) for F, v, P, and the slope of the F-v relationship (SFv), respectively. Both methods showed high correlations for F-v-profile-related variables (r = .985-.991). Finally, all variables computed from the simple method showed high reliability, with ICC > .980 and CV < 1.0%. These results suggest that the simple method presented here is valid and reliable for computing CMJ force, velocity, power, and F-v profiles in athletes and could be used in practice under field conditions when body mass, push-off distance, and jump height are known.
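A sketch of the simple-computation idea: assuming it follows the well-known Samozino-style equations (jump height from flight time, mean force from jump height and push-off distance, mean velocity from jump height), the three field inputs named in the abstract suffice. The equations and example values here are our assumption, not quoted from the paper:

```python
import math

G = 9.81  # gravitational acceleration, m/s^2

def simple_cmj(mass_kg, flight_time_s, push_off_m):
    """Mean force (N), velocity (m/s), and power (W) of the CMJ push-off
    from body mass, flight time, and push-off distance
    (Samozino-style equations, assumed here for illustration)."""
    h = G * flight_time_s ** 2 / 8.0          # jump height from flight time
    F = mass_kg * G * (h / push_off_m + 1.0)  # mean vertical force
    v = math.sqrt(G * h / 2.0)                # mean push-off velocity
    return F, v, F * v                        # power P = F * v

# illustrative athlete: 75 kg, 0.5 s flight time, 0.4 m push-off distance
F, v, P = simple_cmj(75.0, 0.5, 0.4)
```

Repeating this across the loaded conditions and fitting a line through the (v, F) points yields the individual F-v profile whose slope (SFv) the abstract evaluates.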

  10. SAS molecular tests Escherichia coli O157 detection kit. Performance tested method 031203.

    PubMed

    Bapanpally, Chandra; Montier, Laura; Khan, Shah; Kasra, Akif; Brunelle, Sharon L

    2014-01-01

    The SAS Molecular tests Escherichia coli O157 Detection method, a loop-mediated isothermal amplification method, performed as well as or better than the U.S. Department of Agriculture, Food Safety Inspection Service Microbiology Laboratory Guidebook and the U.S. Food and Drug Administration Bacteriological Analytical Manual reference methods for ground beef, beef trim, bagged mixed lettuce, and fresh spinach. Ground beef (30% fat, 25 g test portion) was validated for a 7-8 h enrichment, leafy greens were validated for a 6-7 h enrichment, and ground beef (30% fat, 375 g composite test portion) and beef trim (375 g composite test portion) were validated for a 16-20 h enrichment. The method performance for meat and leafy green matrixes was also shown to be acceptable under conditions of co-enrichment with Salmonella. Thus, after a short co-enrichment step, ground beef, beef trim, lettuce, and spinach can be tested for both Salmonella and E. coli O157. The SAS Molecular tests Salmonella Detection Kit was validated using the same test portions as the SAS Molecular tests E. coli O157 Detection Kit, and those results are presented in a separate report. Inclusivity and exclusivity testing revealed no false negatives and no false positives among the 50 E. coli O157 strains, including H7 and non-motile strains, and 30 non-E. coli O157 strains examined. Finally, the method was shown to be robust when DNA extract hold time and DNA volume were varied. The method comparison and robustness data suggest a full 7 h enrichment time should be used for 25 g ground beef test portions.

  11. A comparison of accuracy validation methods for genomic and pedigree-based predictions of swine litter size traits using Large White and simulated data.

    PubMed

    Putz, A M; Tiezzi, F; Maltecca, C; Gray, K A; Knauer, M T

    2018-02-01

    The objective of this study was to compare and determine the optimal validation method when comparing accuracy from single-step GBLUP (ssGBLUP) to traditional pedigree-based BLUP. Field data included six litter size traits. Simulated data included ten replicates designed to mimic the field data in order to determine the method that was closest to the true accuracy. Data were split into training and validation sets. The methods used were as follows: (i) theoretical accuracy derived from the prediction error variance (PEV) of the direct inverse (iLHS), (ii) approximated accuracies from the accf90(GS) program in the BLUPF90 family of programs (Approx), (iii) correlation between predictions and the single-step GEBVs from the full data set (GEBVfull), (iv) correlation between predictions and the corrected phenotypes of females from the full data set (Yc), (v) correlation from method iv divided by the square root of the heritability (Ych), and (vi) correlation between sire predictions and the average of their daughters' corrected phenotypes (Ycs). Accuracies from iLHS increased from 0.27 to 0.37 (37%) in the Large White. Approximation accuracies were very consistent and close in absolute value (0.41 to 0.43). Both iLHS and Approx were much less variable than the corrected phenotype methods (ranging from 0.04 to 0.27). On average, simulated data showed an increase in accuracy from 0.34 to 0.44 (29%) using ssGBLUP. Both iLHS and Ych approximated the increase well, 0.30 to 0.46 and 0.36 to 0.45, respectively. GEBVfull performed poorly in both data sets and is not recommended. Results suggest that for within-breed selection, theoretical accuracy using PEV was consistent and accurate. When direct inversion is infeasible to get the PEV, correlating predictions to the corrected phenotypes divided by the square root of heritability is adequate given a large enough validation data set. © 2017 Blackwell Verlag GmbH.
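Method (v) above, correlating predictions with corrected phenotypes and scaling by the square root of heritability, is a one-liner once a Pearson correlation is available. A pure-Python sketch with invented numbers (not the study's data):

```python
import math

def pearson(a, b):
    """Pearson correlation coefficient of two equal-length sequences."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

def accuracy_ych(predictions, corrected_phenotypes, h2):
    """Method (v): correlation with corrected phenotypes, scaled by 1/sqrt(h2)."""
    return pearson(predictions, corrected_phenotypes) / math.sqrt(h2)

# toy perfectly correlated case, heritability 0.25
acc = accuracy_ych([1, 2, 3, 4], [2, 4, 6, 8], 0.25)
```

The 1/sqrt(h2) scaling corrects for the fact that a phenotype is a noisy proxy for the breeding value; with realistic data the scaled estimate stays at or below 1.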

  12. Reliability Validation and Improvement Framework

    DTIC Science & Technology

    2012-11-01

    systems. Steps in that direction include the use of the Architecture Tradeoff Analysis Method® (ATAM®) developed at the Carnegie Mellon... embedded software • cyber-physical systems (CPSs) to indicate that the embedded software interacts with, manages, and controls a physical system [Lee... the use of formal static analysis methods to increase our confidence in system operation beyond testing. However, analysis results

  13. A step-by-step approach to improve data quality when using commercial business lists to characterize retail food environments.

    PubMed

    Jones, Kelly K; Zenk, Shannon N; Tarlov, Elizabeth; Powell, Lisa M; Matthews, Stephen A; Horoi, Irina

    2017-01-07

    Food environment characterization in health studies often requires data on the location of food stores and restaurants. While commercial business lists are commonly used as data sources for such studies, current literature provides little guidance on how to use validation study results to make decisions on which commercial business list to use and how to maximize the accuracy of those lists. Using data from a retrospective cohort study [Weight And Veterans' Environments Study (WAVES)], we (a) explain how validity and bias information from existing validation studies (count accuracy, classification accuracy, locational accuracy, as well as potential bias by neighborhood racial/ethnic composition, economic characteristics, and urbanicity) was used to determine which commercial business list to purchase for retail food outlet data and (b) describe the methods used to maximize the quality of the data and the results of this approach. We developed data improvement methods based on existing validation studies. These methods included purchasing records from commercial business lists (InfoUSA and Dun and Bradstreet) based on store/restaurant names as well as standard industrial classification (SIC) codes, reclassifying records by store type, improving geographic accuracy of records, and deduplicating records. We examined the impact of these procedures on food outlet counts in US census tracts. After cleaning and deduplicating, our strategy resulted in a 17.5% reduction in the count of food stores that were valid from those purchased from InfoUSA and a 5.6% reduction in valid counts of restaurants purchased from Dun and Bradstreet. Locational accuracy was improved for 7.5% of records by applying street addresses of subsequent years to records with post-office (PO) box addresses. In total, up to 83% of US census tracts annually experienced a change (either positive or negative) in the count of retail food outlets between the initial purchase and the final dataset.
Our study provides a step-by-step approach to purchase and process business list data obtained from commercial vendors. The approach can be followed by studies of any size, including those with datasets too large to process each record by hand and will promote consistency in characterization of the retail food environment across studies.
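The deduplication step in the pipeline above can be sketched as collapsing records that share a normalized name and address. The field names and normalization rules below are illustrative, not taken from the WAVES protocol:

```python
import re

def normalize(s):
    """Crude normalization: lowercase and strip non-alphanumerics,
    so "Joe's Grocery" and "JOES GROCERY" compare equal."""
    return re.sub(r"[^a-z0-9]", "", s.lower())

def deduplicate(records):
    """Keep the first record for each (normalized name, normalized address)."""
    seen, out = set(), []
    for r in records:
        key = (normalize(r["name"]), normalize(r["address"]))
        if key not in seen:
            seen.add(key)
            out.append(r)
    return out

records = [
    {"name": "Joe's Grocery", "address": "12 Main St."},
    {"name": "JOES GROCERY",  "address": "12 Main St"},   # duplicate
    {"name": "Joe's Grocery", "address": "99 Oak Ave"},   # different outlet
]
unique = deduplicate(records)
```

Real pipelines typically add fuzzier matching (edit distance, geocoded coordinates) on top of exact normalized keys; this sketch shows only the exact-key pass.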

  14. Determination of total arsenic in fish by hydride-generation atomic absorption spectrometry: method validation, traceability and uncertainty evaluation

    NASA Astrophysics Data System (ADS)

    Nugraha, W. C.; Elishian, C.; Ketrin, R.

    2017-03-01

    Arsenic-containing fish is one of the important indicators of arsenic contamination in water monitoring; high arsenic levels in fish result from absorption through the food chain and accumulation in the habitat. Hydride generation (HG) coupled with atomic absorption spectrometric (AAS) detection is one of the most popular techniques employed for arsenic determination in a variety of matrices, including fish. This study aimed to develop a method for the determination of total arsenic in fish by HG-AAS. Sample preparation followed Association of Official Analytical Chemists (AOAC) Method 999.10-2005 for acid digestion using a microwave digestion system and AOAC Method 986.15-2005 for dry ashing. The method was developed and validated using the Certified Reference Material DORM-3 Fish Protein for trace metals, to ensure the accuracy and traceability of the results. The sources of uncertainty of the method were also evaluated. Using this method, the total arsenic concentration in the fish was found to be 45.6 ± 1.22 mg kg-1 with a coverage factor k = 2 at the 95% confidence level. The uncertainty evaluation was dominated by the calibration curve contribution. The result was also traceable to the international measurement system through analysis of Certified Reference Material DORM-3, with 97.5% recovery. In summary, the preparation method and the HG-AAS technique for total arsenic determination in fish were shown to be valid and reliable.
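The coverage-factor arithmetic behind a result like 45.6 ± 1.22 mg kg-1 (k = 2) can be sketched as follows: combine independent standard-uncertainty components in quadrature, then multiply by k. The component values below are invented for illustration, not the paper's uncertainty budget:

```python
import math

def expanded_uncertainty(components, k=2.0):
    """Expanded uncertainty U = k * u_c, where u_c is the combined standard
    uncertainty of independent components added in quadrature.
    k = 2 corresponds to roughly 95% confidence."""
    u_c = math.sqrt(sum(u ** 2 for u in components))
    return k * u_c

# e.g. calibration-curve, recovery, and repeatability contributions (mg/kg)
U = expanded_uncertainty([0.55, 0.20, 0.15])
```

Because components add in quadrature, the largest one (here the calibration-curve term) dominates U, consistent with the abstract's observation that the calibration curve drove the uncertainty.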

  15. National Institutes of Health Pathways to Prevention Workshop: Methods for Evaluating Natural Experiments in Obesity.

    PubMed

    Emmons, Karen M; Doubeni, Chyke A; Fernandez, Maria E; Miglioretti, Diana L; Samet, Jonathan M

    2018-06-05

    On 5 and 6 December 2017, the National Institutes of Health (NIH) convened the Pathways to Prevention Workshop: Methods for Evaluating Natural Experiments in Obesity to identify the status of methods for assessing natural experiments to reduce obesity, areas in which these methods could be improved, and research needs for advancing the field. This article considers findings from a systematic evidence review on methods for evaluating natural experiments in obesity, workshop presentations by experts and stakeholders, and public comment. Research gaps are identified, and recommendations related to 4 key issues are provided. Recommendations on population-based data sources and data integration include maximizing use and sharing of existing surveillance and research databases and ensuring significant effort to integrate and link databases. Recommendations on measurement include use of standardized and validated measures of obesity-related outcomes and exposures, systematic measurement of co-benefits and unintended consequences, and expanded use of validated technologies for measurement. Study design recommendations include improving guidance, documentation, and communication about methods used; increasing use of designs that minimize bias in natural experiments; and more carefully selecting control groups. Cross-cutting recommendations target activities that the NIH and other funders might undertake to improve the rigor of natural experiments in obesity, including training and collaboration on modeling and causal inference, promoting the importance of community engagement in the conduct of natural experiments, ensuring maintenance of relevant surveillance systems, and supporting extended follow-up assessments for exemplar natural experiments. To combat the significant public health threat posed by obesity, researchers should continue to take advantage of natural experiments. The recommendations in this report aim to strengthen evidence from such studies.

  16. Assessment of Interobserver Reliability in Nutrition Studies that Use Direct Observation of School Meals

    PubMed Central

    BAGLIO, MICHELLE L.; BAXTER, SUZANNE DOMEL; GUINN, CAROLINE H.; THOMPSON, WILLIAM O.; SHAFFER, NICOLE M.; FRYE, FRANCESCA H. A.

    2005-01-01

    This article (a) provides a general review of interobserver reliability (IOR) and (b) describes our method for assessing IOR for items and amounts consumed during school meals for a series of studies regarding the accuracy of fourth-grade children's dietary recalls validated with direct observation of school meals. A widely used validation method for dietary assessment is direct observation of meals. Although many studies utilize several people to conduct direct observations, few published studies indicate whether IOR was assessed. Assessment of IOR is necessary to determine that the information collected does not depend on who conducted the observation. Two strengths of our method for assessing IOR are that IOR was assessed regularly throughout the data collection period and that IOR was assessed for foods at the item and amount level instead of at the nutrient level. Adequate agreement among observers is essential to the reasoning behind using observation as a validation tool. Readers are encouraged to question the results of studies that fail to mention and/or to include the results for assessment of IOR when multiple people have conducted observations. PMID:15354155
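Item-level interobserver agreement of the kind described is often summarized as percent agreement between paired observations. A minimal sketch in Python, using hypothetical observation codes (not the authors' data or coding scheme):

```python
def percent_agreement(obs_a, obs_b):
    """Fraction of items on which two observers record the same code."""
    if len(obs_a) != len(obs_b):
        raise ValueError("observers must code the same set of items")
    matches = sum(1 for a, b in zip(obs_a, obs_b) if a == b)
    return matches / len(obs_a)

# Hypothetical amount-consumed codes for five meal items, two observers
obs_a = ["all", "most", "none", "half", "all"]
obs_b = ["all", "most", "some", "half", "all"]
agreement = percent_agreement(obs_a, obs_b)  # 4 of 5 items match
```

Percent agreement does not correct for chance agreement; chance-corrected statistics such as Cohen's kappa are often reported alongside it.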

  17. Fuzzy Logic Controller Stability Analysis Using a Satisfiability Modulo Theories Approach

    NASA Technical Reports Server (NTRS)

    Arnett, Timothy; Cook, Brandon; Clark, Matthew A.; Rattan, Kuldip

    2017-01-01

While many widely accepted methods and techniques exist for validation and verification of traditional controllers, at this time no solutions have been accepted for Fuzzy Logic Controllers (FLCs). Due to the highly nonlinear nature of such systems, and the fact that developing a valid FLC does not require a mathematical model of the system, it is quite difficult to use conventional techniques to prove controller stability. Since safety-critical systems must be tested and verified to work as expected under all possible circumstances, the fact that FLCs cannot be tested to such requirements limits the applications of this technology. Therefore, alternative methods for verification and validation of FLCs need to be explored. In this study, a novel approach using formal verification methods to ensure the stability of an FLC is proposed. The main research challenges include specification of requirements for a complex system, conversion of a traditional FLC to a piecewise polynomial representation, and use of a formal verification tool in a nonlinear solution space. Using the proposed architecture, the Fuzzy Logic Controller was found to always generate negative feedback, but the results were inconclusive for Lyapunov stability.
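The negative-feedback property reported above can at least be spot-checked numerically, although exhaustive sampling is far weaker than the formal SMT-based verification the study describes. A toy sketch in Python, with an assumed single-input FLC (triangular memberships, centroid defuzzification) that is not the controller from the study:

```python
def fuzzy_controller(error):
    """Toy single-input FLC on [-1, 1]: memberships N/Z/P, centroid defuzzification.

    Rules: Negative error -> +1, Zero -> 0, Positive error -> -1,
    i.e. the output always opposes the error.
    """
    neg = max(0.0, min(1.0, -error))    # membership of "negative"
    pos = max(0.0, min(1.0, error))     # membership of "positive"
    zero = max(0.0, 1.0 - abs(error))   # membership of "zero"
    num = neg * 1.0 + zero * 0.0 + pos * (-1.0)
    den = neg + zero + pos
    return num / den

# Sample the input range and check the sign condition u(e) * e <= 0
samples = [i / 100 for i in range(-100, 101)]
always_negative_feedback = all(fuzzy_controller(e) * e <= 0.0 for e in samples)
```

A formal tool would instead prove the sign condition over the entire input domain, e.g. from a piecewise-polynomial encoding of the controller, rather than at sampled points.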

  18. Simultaneous determination of multi-residue and multi-class antibiotics in aquaculture shrimps by UPLC-MS/MS.

    PubMed

    Saxena, Sushil Kumar; Rangasamy, Rajesh; Krishnan, Anoop A; Singh, Dhirendra P; Uke, Sumedh P; Malekadi, Praveen Kumar; Sengar, Anoop S; Mohamed, D Peer; Gupta, Ananda

    2018-09-15

An accurate, reliable and fast multi-residue, multi-class method using ultra-performance liquid chromatography-tandem mass spectrometry (UPLC-MS/MS) was developed and validated for simultaneous determination and quantification of 24 pharmacologically active substances of three different classes (quinolones including fluoroquinolones, sulphonamides and tetracyclines) in aquaculture shrimps. Sample preparation involves extraction with acetonitrile containing 0.1% formic acid, followed by clean-up with n-hexane and 0.1% methanol in water; analysis is performed by UPLC-MS/MS within 8 min. The method was validated according to European Commission Decision 2002/657/EC. Acceptable values were obtained for linearity (5-200 μg kg⁻¹), specificity, limit of quantification (5-10 μg kg⁻¹), recovery (between 83 and 100%), repeatability (RSD < 9%), within-lab reproducibility (RSD < 15%), reproducibility (RSD ≤ 22%), decision limit (105-116 μg kg⁻¹) and detection capability (110-132 μg kg⁻¹). The validated method was applied to aquaculture shrimp samples from India. Copyright © 2018 Elsevier Ltd. All rights reserved.
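Recovery and RSD figures like those reported are computed from replicate measurements of spiked samples. A minimal sketch in Python (the replicate values and spike level below are hypothetical, not the study's data):

```python
import statistics

def recovery_percent(measured, spiked_level):
    """Mean measured concentration as a percentage of the known spiked level."""
    return 100.0 * statistics.mean(measured) / spiked_level

def rsd_percent(values):
    """Relative standard deviation, reported for repeatability/reproducibility."""
    return 100.0 * statistics.stdev(values) / statistics.mean(values)

# Hypothetical replicate results (µg/kg) at a 50 µg/kg spike level
replicates = [46.0, 48.5, 47.2, 49.1, 45.8, 47.9]
recovery = recovery_percent(replicates, 50.0)   # ~95% recovery
repeatability_rsd = rsd_percent(replicates)     # ~3% RSD
```

The decision limit (CCα) and detection capability (CCβ) in Decision 2002/657/EC are derived from the same kind of replicate data at and around the permitted limit.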

  19. Development plan for the External Hazards Experimental Group. Light Water Reactor Sustainability Program

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Coleman, Justin Leigh; Smith, Curtis Lee; Burns, Douglas Edward

This report describes the development plan for a new multi-partner External Hazards Experimental Group (EHEG) coordinated by Idaho National Laboratory (INL) within the Risk-Informed Safety Margin Characterization (RISMC) technical pathway of the Light Water Reactor Sustainability Program. Currently, limited data are available for development and validation of the tools and methods being developed in the RISMC Toolkit. The EHEG is being developed to obtain high-quality, small- and large-scale experimental data for validation of RISMC tools and methods in a timely and cost-effective way. The universities and national laboratories that will eventually form the EHEG (which is ultimately expected to include both the initial participants and other universities and national laboratories that have been identified) have the expertise and experimental capabilities needed both to obtain and compile existing data archives and to perform additional seismic and flooding experiments. The data developed by the EHEG will be stored in databases for use within RISMC. These databases will be used to validate the advanced external hazard tools and methods.

  20. Polynomial modal analysis of slanted lamellar gratings.

    PubMed

    Granet, Gérard; Randriamihaja, Manjakavola Honore; Raniriharinosy, Karyl

    2017-06-01

The problem of diffraction by slanted lamellar dielectric and metallic gratings in classical mounting is formulated as an eigenvalue-eigenvector problem. The numerical solution is obtained by using the moment method with Legendre polynomials as expansion and test functions, which allows us to enforce in an exact manner the boundary conditions that determine the eigensolutions. Our method is successfully validated by comparison with other methods, including for highly slanted gratings.

  1. Nontechnical skill training and the use of scenarios in modern surgical education.

    PubMed

    Brunckhorst, Oliver; Khan, Muhammad S; Dasgupta, Prokar; Ahmed, Kamran

    2017-07-01

Nontechnical skills are increasingly recognized as a core cause of surgical errors. Combined with the changing nature of surgical training, this has led to an increase in nontechnical skill research in the literature. This review therefore aims to define nontechnical skillsets, assess current training methods, explore assessment modalities and suggest future research aims. The literature demonstrates an increasing understanding of the components of nontechnical skills within surgery. This has led to a greater availability of validated methods for training them, including didactic teaching, e-learning and simulation-based scenarios. In addition, there are now various extensively validated assessment tools for nontechnical skills, including NOTSS, the Oxford NOTECHS and OTAS. Finally, there is now more focus on the development of tools that target individual nontechnical skill components and on understanding which of these play a greater role in specific procedures such as laparoscopic or robotic surgery. Current evidence demonstrates various training methods and tools for the training of nontechnical skills. Future research is likely to focus increasingly on individual nontechnical skill components and procedure-specific skills.

  2. FT-midIR determination of fatty acid profiles, including trans fatty acids, in bakery products after focused microwave-assisted Soxhlet extraction.

    PubMed

    Ruiz-Jiménez, J; Priego-Capote, F; Luque de Castro, M D

    2006-08-01

A study of the feasibility of Fourier transform medium infrared spectroscopy (FT-midIR) for analytical determination of fatty acid profiles, including trans fatty acids, is presented. The training and validation sets, comprising 75% (102 samples) and 25% (36 samples) of the samples once the spectral outliers had been removed, were built from 140 commercial and home-made bakery products to develop FT-midIR general equations. The concentration of the analytes in the samples used for this study is within the typical range found in these kinds of products. Both sets were independent; thus, the validation set was only used for testing the equations. The criterion used for the selection of the validation set was samples with the highest number of neighbours and the most separation between them (H<0.6). Partial least squares regression and cross validation were used for multivariate calibration. The FT-midIR method does not require post-extraction manipulation and gives information about the fatty acid profile in 2 min. The 14:0, 16:0, 18:0, 18:1 and 18:2 fatty acids can be determined with excellent precision and other fatty acids with good precision according to the Shenk criteria (R² ≥ 0.90, SEP = 1-1.5 SEL and R² = 0.70-0.89, SEP = 2-3 SEL, respectively). The results obtained with the proposed method were compared with those provided by the conventional method based on GC-MS. At the 95% significance level, the differences between the values obtained for the different fatty acids were within the experimental error.
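The Shenk-style precision criteria quoted (R² and SEP relative to SEL) are computed from reference versus predicted values on the validation set. A sketch in Python with hypothetical paired values (not the study's data):

```python
import math

def r_squared(ref, pred):
    """Coefficient of determination between reference and predicted values."""
    mean_ref = sum(ref) / len(ref)
    ss_tot = sum((r - mean_ref) ** 2 for r in ref)
    ss_res = sum((r - p) ** 2 for r, p in zip(ref, pred))
    return 1 - ss_res / ss_tot

def sep(ref, pred):
    """Standard error of prediction (bias-corrected) on a validation set."""
    n = len(ref)
    residuals = [p - r for r, p in zip(ref, pred)]
    bias = sum(residuals) / n
    return math.sqrt(sum((e - bias) ** 2 for e in residuals) / (n - 1))

# Hypothetical GC-MS reference vs FT-midIR predicted values (g/100 g)
ref = [10.2, 14.8, 9.5, 20.1, 12.3, 17.6]
pred = [10.0, 15.1, 9.9, 19.7, 12.5, 17.2]
```

Under the Shenk criteria, a pair like this (high R², SEP of the same order as the laboratory error SEL) would fall in the "excellent precision" band.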

  3. Quality of survey reporting in nephrology journals: a methodologic review.

    PubMed

    Li, Alvin Ho-Ting; Thomas, Sonia M; Farag, Alexandra; Duffett, Mark; Garg, Amit X; Naylor, Kyla L

    2014-12-05

Survey research is an important research method used to determine individuals' attitudes, knowledge, and behaviors; however, as with other research methods, inadequate reporting threatens the validity of results. This study aimed to describe the quality of reporting of surveys published between 2001 and 2011 in the field of nephrology. The top nephrology journals were systematically reviewed (2001-2011: American Journal of Kidney Diseases, Nephrology Dialysis Transplantation, and Kidney International; 2006-2011: Clinical Journal of the American Society of Nephrology) for studies whose primary objective was to collect and report survey results. Included were nephrology journals with a heavy focus on clinical research and high impact factors. All titles and abstracts were screened in duplicate. Surveys were excluded if they were part of a multimethod study, evaluated only psychometric characteristics, or used semi-structured interviews. Information was collected on survey and respondent characteristics, questionnaire development (e.g., pilot testing), psychometric characteristics (e.g., validity and reliability), survey methods used to optimize response rate (e.g., system of multiple contacts), and response rate. After screening of 19,970 citations, 216 full-text articles were reviewed and 102 surveys were included. Approximately 85% of studies reported a response rate. Almost half of the studies (46%) discussed how they developed their questionnaire, and only a quarter (28%) mentioned the validity or reliability of the questionnaire. The only characteristic that improved over the years was the proportion of articles reporting missing data (2001-2004: 46.4%; 2005-2008: 61.9%; 2009-2011: 84.8%) (P<0.01). The quality of survey reporting in nephrology journals remains suboptimal. In particular, reporting of the validity and reliability of the questionnaire must be improved.
Guidelines to improve survey reporting and increase transparency are clearly needed. Copyright © 2014 by the American Society of Nephrology.

  4. Application of High-Performance Liquid Chromatography Coupled with Linear Ion Trap Quadrupole Orbitrap Mass Spectrometry for Qualitative and Quantitative Assessment of Shejin-Liyan Granule Supplements.

    PubMed

    Gu, Jifeng; Wu, Weijun; Huang, Mengwei; Long, Fen; Liu, Xinhua; Zhu, Yizhun

    2018-04-11

A method for high-performance liquid chromatography coupled with linear ion trap quadrupole Orbitrap high-resolution mass spectrometry (HPLC-LTQ-Orbitrap MS) was developed and validated for the qualitative and quantitative assessment of Shejin-liyan Granule. According to the fragmentation mechanisms and high-resolution MS data, 54 compounds, including fourteen isoflavones, eleven lignans, eight flavonoids, six physalins, six organic acids, four triterpenoid saponins, two xanthones, two alkaloids, and one licorice coumarin, were identified or tentatively characterized. In addition, ten of the representative compounds (matrine, galuteolin, tectoridin, iridin, arctiin, tectorigenin, glycyrrhizic acid, irigenin, arctigenin, and irisflorentin) were quantified using the validated HPLC-LTQ-Orbitrap MS method. The method validation showed good linearity, with coefficients of determination (r²) above 0.9914 for all analytes. The accuracy of the intra- and inter-day variation of the investigated compounds was 95.0-105.0%, and the precision values were less than 4.89%. The mean recoveries and reproducibilities of each analyte were 95.1-104.8%, with relative standard deviations below 4.91%. The method successfully quantified the ten compounds in Shejin-liyan Granule, and the results show that the method is accurate, sensitive, and reliable.

  5. Determination of polarimetric parameters of honey by near-infrared transflectance spectroscopy.

    PubMed

    García-Alvarez, M; Ceresuela, S; Huidobro, J F; Hermida, M; Rodríguez-Otero, J L

    2002-01-30

NIR transflectance spectroscopy was used to determine polarimetric parameters (direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides) and sucrose in honey. In total, 156 honey samples were collected during 1992 (45 samples), 1995 (56 samples), and 1996 (55 samples). Samples were analyzed by NIR spectroscopy and polarimetric methods. Calibration (118 samples) and validation (38 samples) sets were made up; honeys from the three years were included in both sets. Calibrations were performed by modified partial least-squares regression, with scatter correction by the standard normal variate and detrend methods. For direct polarization, polarization after inversion, specific rotation in dry matter, and polarization due to nonmonosaccharides, good statistics (bias, SEV, and R²) were obtained for the validation set, and no statistically significant (p = 0.05) differences were found between the instrumental and polarimetric methods for these parameters. Statistical data for sucrose were not as good as those of the other parameters. Therefore, NIR spectroscopy is not an effective method for quantitative analysis of sucrose in these honey samples. However, NIR spectroscopy may be an acceptable method for semiquantitative evaluation of sucrose for honeys, such as those in our study, containing up to 3% sucrose. Further work is necessary to validate the uncertainty at higher levels.

  6. 76 FR 1138 - Enhanced Assessment Instruments

    Federal Register 2010, 2011, 2012, 2013, 2014

    2011-01-07

    ... priorities: (a) Collaborating with institutions of higher education, other research institutions, or other... a member State may hold); (2) The consortium's method and process (e.g., consensus, majority) for... available on an ongoing basis for research, including for prospective linking, validity, and program...

  7. Comparative Validation of the Determination of Sofosbuvir in Pharmaceuticals by Several Inexpensive Ecofriendly Chromatographic, Electrophoretic, and Spectrophotometric Methods.

    PubMed

    El-Yazbi, Amira F

    2017-01-20

Sofosbuvir (SOFO) was approved by the U.S. Food and Drug Administration in 2013 for the treatment of hepatitis C virus infection, with enhanced antiviral potency compared with earlier analogs. Nevertheless, current editions of the pharmacopeias still do not present any analytical methods for the quantification of SOFO. Thus, rapid, simple, and ecofriendly methods for the routine analysis of commercial formulations of SOFO are desirable. In this study, five accurate methods for the determination of SOFO in pharmaceutical tablets were developed and validated. These methods include HPLC, capillary zone electrophoresis, HPTLC, and UV spectrophotometric and derivative spectrometry methods. The proposed methods proved to be rapid, simple, sensitive, selective, and accurate analytical procedures suitable for the reliable determination of SOFO in pharmaceutical tablets. An analysis of variance test with P-value > 0.05 confirmed that there were no significant differences between the proposed assays. Thus, any of these methods can be used for the routine analysis of SOFO in commercial tablets.

  8. Soil Moisture Active Passive (SMAP) L-Band Microwave Radiometer Post-Launch Calibration

    NASA Technical Reports Server (NTRS)

    Peng, Jinzheng; Piepmeier, Jeffrey R.; Misra, Sidharth; Dinnat, Emmanuel P.; Hudson, Derek; Le Vine, David M.; De Amici, Giovanni; Mohammed, Priscilla N.; Yueh, Simon H.; Meissner, Thomas

    2016-01-01

The SMAP microwave radiometer is a fully polarimetric L-band radiometer flown on the SMAP satellite in a 6 AM/6 PM sun-synchronous orbit at 685 km altitude. Since April 2015, the radiometer has been under calibration and validation to assess the quality of the radiometer L1B data product. Calibration methods, including the SMAP L1B TA2TB (from Antenna Temperature (TA) to the Earth's surface Brightness Temperature (TB)) algorithm and TA forward models, are outlined, and validation approaches to calibration stability/quality are described, along with future work. Results show that the current radiometer L1B data satisfies its requirements.

  10. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Grigg, Reid; McPherson, Brian; Lee, Rober

The Southwest Regional Partnership on Carbon Sequestration (SWP), one of seven regional partnerships sponsored by the U.S. Department of Energy (USDOE), carried out five field pilot tests in its Phase II Carbon Sequestration Demonstration effort to validate the most promising sequestration technologies and infrastructure concepts, including three geologic pilot tests and two terrestrial pilot programs. This field testing demonstrated the efficacy of proposed sequestration technologies to reduce or offset greenhouse gas emissions in the region. Risk mitigation; optimization of monitoring, verification, and accounting (MVA) protocols; and effective outreach and communication were additional critical goals of these field validation tests. The program included geologic pilot tests located in Utah, New Mexico, and Texas, and a region-wide terrestrial analysis. Each geologic sequestration test site was intended to include injection of a minimum of ~75,000 tons/year CO₂, with a minimum injection duration of one year. These pilots represent medium-scale validation tests in sinks that host capacity for possible larger-scale sequestration operations in the future. These validation tests also demonstrated a broad variety of carbon sink targets and multiple value-added benefits, including testing of enhanced oil recovery and sequestration, enhanced coalbed methane production, and a geologic sequestration test combined with a local terrestrial sequestration pilot. A regional terrestrial sequestration demonstration was also carried out, with a focus on improved terrestrial MVA methods and reporting approaches specific to the Southwest region.

  11. New clinical validation method for automated sphygmomanometer: a proposal by Japan ISO-WG for sphygmomanometer standard.

    PubMed

    Shirasaki, Osamu; Asou, Yosuke; Takahashi, Yukio

    2007-12-01

Owing to fast or stepwise cuff deflation, or measurement at places other than the upper arm, the clinical accuracy of most recent automated sphygmomanometers (auto-BPMs) cannot be validated by one-arm simultaneous comparison, which would be the only accurate validation method based on auscultation. Two main alternative methods are provided by current standards, namely two-arm simultaneous comparison (method 1) and one-arm sequential comparison (method 2); however, the accuracy of these validation methods might not be sufficient to compensate for lateral blood pressure (BP) differences (LD) and/or BP variations (BPV) between the device and reference readings. Thus, the Japan ISO-WG for sphygmomanometer standards has been studying a new method that might improve validation accuracy (method 3). The purpose of this study is to determine the appropriateness of method 3 by comparing its immunity to LD and BPV with that of the current validation methods (methods 1 and 2). The validation accuracy of the three methods was assessed in human participants [N=120, 45±15.3 years (mean±SD)]. An oscillometric automated monitor, Omron HEM-762, was used as the tested device. When compared with the others, methods 1 and 3 showed a smaller intra-individual standard deviation of device error (SD1), suggesting higher reproducibility of validation. The SD1 by method 2 significantly correlated with the participant's BP (P=0.004), supporting our hypothesis that the increased SD of device error by method 2 is at least partially caused by essential BPV. Method 3 showed a significantly (P=0.0044) smaller interparticipant SD of device error (SD2), suggesting higher interparticipant consistency of validation. Among the methods for validating the clinical accuracy of auto-BPMs, method 3, which showed the highest reproducibility and highest interparticipant consistency, is proposed as the most appropriate.
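The two spread statistics compared across validation methods, the intra-individual SD of device error (SD1) and the interparticipant SD of mean device error (SD2), can be sketched as follows in Python (the error values below are hypothetical, and the study's exact estimators may differ):

```python
import statistics

def validation_spread(errors_by_participant):
    """Summarize device-error spread in a BP-monitor validation.

    errors_by_participant: list of lists; each inner list holds the
    device-minus-reference errors (mmHg) for one participant.
    Returns (sd1, sd2): the mean intra-participant SD of error, and the
    SD of per-participant mean errors across participants.
    """
    sd1 = statistics.mean(statistics.stdev(e) for e in errors_by_participant)
    means = [statistics.mean(e) for e in errors_by_participant]
    sd2 = statistics.stdev(means)
    return sd1, sd2

# Hypothetical errors for three participants, three readings each
errors = [[2.0, 3.0, 2.5], [-1.0, 0.0, -0.5], [4.0, 5.0, 4.5]]
sd1, sd2 = validation_spread(errors)
```

A lower SD1 corresponds to the higher reproducibility of validation reported for methods 1 and 3; a lower SD2 corresponds to method 3's higher interparticipant consistency.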

  12. Determination of 74 new psychoactive substances in serum using automated in-line solid-phase extraction-liquid chromatography-tandem mass spectrometry.

    PubMed

    Lehmann, Sabrina; Kieliba, Tobias; Beike, Justus; Thevis, Mario; Mercer-Chalmers-Bender, Katja

    2017-10-01

A detailed description is given of the development and validation of a fully automated in-line solid-phase extraction-liquid chromatography-tandem mass spectrometry (SPE-LC-MS/MS) method capable of detecting 90 central-stimulating new psychoactive substances (NPS) and 5 conventional amphetamine-type stimulants (amphetamine, 3,4-methylenedioxy-methamphetamine (MDMA), 3,4-methylenedioxy-amphetamine (MDA), 3,4-methylenedioxy-N-ethyl-amphetamine (MDEA), methamphetamine) in serum. The aim was to apply the validated method to forensic samples. The preparation of 150 μL of serum was performed by an Instrument Top Sample Preparation (ITSP)-SPE with mixed-mode cation exchanger cartridges. The extracts were directly injected into an LC-MS/MS system, using a biphenyl column and gradient elution with 2 mM ammonium formate/0.1% formic acid and acetonitrile/0.1% formic acid as mobile phases. The chromatographic run time amounts to 9.3 min (including re-equilibration). The total cycle time is 11 min, due to the interlacing between sample preparation and analysis. The method was fully validated using 69 NPS and five conventional amphetamine-type stimulants, according to the guidelines of the Society of Toxicological and Forensic Chemistry (GTFCh). The guidelines were fully achieved for 62 analytes (with a limit of detection (LOD) between 0.2 and 4 μg/L), whilst full validation was not feasible for the remaining 12 analytes. For the fully validated analytes, the method achieved linearity in the 5 μg/L (lower limit of quantification, LLOQ) to 250 μg/L range (coefficients of determination > 0.99). Recoveries for 69 of these compounds were greater than 50%, with relative standard deviations ≤ 15%. The validated method was then tested for its capability in detecting a further 21 NPS, thus totalling 95 tested substances. An LOD between 0.4 and 1.6 μg/L was obtained for these 21 additional qualitatively-measured substances.
The method was subsequently successfully applied to 28 specimens from routine forensic case work, of which 7 samples were determined to be positive for NPS consumption. Copyright © 2017 Elsevier B.V. All rights reserved.
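Linearity figures such as the coefficients of determination > 0.99 quoted above come from fitting a least-squares calibration line to calibrator responses across the working range. A self-contained sketch in Python with hypothetical calibrators spanning a 5-250 μg/L range:

```python
def linear_fit(x, y):
    """Ordinary least-squares line y = a + b*x for a calibration curve.

    Returns (intercept, slope, r_squared).
    """
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    r2 = sxy ** 2 / (sxx * syy)
    return a, b, r2

# Hypothetical calibrator concentrations (µg/L) vs instrument response
conc = [5, 10, 50, 100, 250]
resp = [0.9, 2.1, 10.2, 19.8, 50.3]
a, b, r2 = linear_fit(conc, resp)
```

In practice, calibration models for bioanalytical methods are often weighted (e.g. 1/x or 1/x²) to balance the influence of low and high calibrators; the unweighted fit above is only the simplest case.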

  13. Segmental analysis of amphetamines in hair using a sensitive UHPLC-MS/MS method.

    PubMed

    Jakobsson, Gerd; Kronstrand, Robert

    2014-06-01

A sensitive and robust ultra high performance liquid chromatography-tandem mass spectrometry (UHPLC-MS/MS) method was developed and validated for quantification of amphetamine, methamphetamine, 3,4-methylenedioxyamphetamine and 3,4-methylenedioxy methamphetamine in hair samples. Segmented hair (10 mg) was incubated in 2 M sodium hydroxide (80°C, 10 min) before liquid-liquid extraction with isooctane, followed by centrifugation and evaporation of the organic phase to dryness. The residue was reconstituted in methanol:formate buffer pH 3 (20:80). The total run time was 4 min, and after optimization of the UHPLC-MS/MS parameters, validation included selectivity, matrix effects, recovery, process efficiency, calibration model and range, lower limit of quantification, precision and bias. The calibration curve ranged from 0.02 to 12.5 ng/mg, and the recovery was between 62 and 83%. During validation the bias was less than ±7% and the imprecision was less than 5% for all analytes. In routine analysis, fortified control samples demonstrated an imprecision <13% and control samples made from authentic hair demonstrated an imprecision <26%. The method was applied to samples from a controlled study of amphetamine intake as well as forensic hair samples previously analyzed with an ultra high performance liquid chromatography time of flight mass spectrometry (UHPLC-TOF-MS) screening method. The proposed method was suitable for quantification of these drugs in forensic cases including violent crimes, autopsy cases, drug testing and re-granting of driving licences. This study also demonstrated that if hair samples are divided into several short segments, the time point for intake of a small dose of amphetamine can be estimated, which might be useful when drug facilitated crimes are investigated. Copyright © 2014 John Wiley & Sons, Ltd.

  14. Public health information in crisis-affected populations: a review of methods and their use for advocacy and action.

    PubMed

    Checchi, Francesco; Warsame, Abdihamid; Treacy-Wong, Victoria; Polonsky, Jonathan; van Ommeren, Mark; Prudhon, Claudine

    2017-11-18

Valid and timely information about various domains of public health underpins the effectiveness of humanitarian public health interventions in crises. However, obstacles including insecurity, insufficient resources and skills for data collection and analysis, and the absence of validated methods combine to hamper the quantity and quality of public health information available to humanitarian responders. This paper, the second in a series of four papers, reviews available methods to collect public health data pertaining to different domains of health and health services in crisis settings, including population size and composition, exposure to armed attacks, sexual and gender-based violence, food security and feeding practices, nutritional status, physical and mental health outcomes, public health service availability, coverage and effectiveness, and mortality. The paper also quantifies the availability of a minimal essential set of information in large armed conflict and natural disaster crises since 2010: we show that information was available and timely only in a small minority of cases. On the basis of this observation, we propose an agenda for methodological research and the steps required to improve on the current use of available methods. This proposition includes setting up a dedicated interagency service for public health information and epidemiology in crises. Copyright © 2017 World Health Organization. Published by Elsevier Ltd. All rights reserved.

  15. Concurrent validity of different functional and neuroproteomic pain assessment methods in the rat osteoarthritis monosodium iodoacetate (MIA) model.

    PubMed

    Otis, Colombe; Gervais, Julie; Guillot, Martin; Gervais, Julie-Anne; Gauvin, Dominique; Péthel, Catherine; Authier, Simon; Dansereau, Marc-André; Sarret, Philippe; Martel-Pelletier, Johanne; Pelletier, Jean-Pierre; Beaudry, Francis; Troncy, Eric

    2016-06-23

Lack of validity in osteoarthritis pain models and assessment methods is suspected. Our goals were to 1) assess the repeatability and reproducibility of measurement and the influence of environment and acclimatization on different pain assessment outcomes in normal rats, and 2) test the concurrent validity of the most reliable methods in relation to the expression of different spinal neuropeptides in a chemical model of osteoarthritic pain. Repeatability and inter-rater reliability of reflexive nociceptive mechanical thresholds, spontaneous static weight-bearing, treadmill, rotarod, and the operant place escape/avoidance paradigm (PEAP) were assessed by the intraclass correlation coefficient (ICC). The most reliable acclimatization protocol was determined by comparing coefficients of variation. In a pilot comparative study, the sensitivity and responsiveness to treatment of the most reliable methods were tested in the monosodium iodoacetate (MIA) model over 21 days. Two MIA (2 mg) groups (including one lidocaine treatment group) and one sham group (0.9% saline) received an intra-articular (50 μL) injection. No effect of environment (observer, inverted circadian cycle, or exercise) was observed; all tested methods except mechanical sensitivity (ICC <0.3) offered good repeatability (ICC ≥0.7). The most reliable acclimatization protocol included five assessments over two weeks. MIA-related osteoarthritic change in pain was demonstrated with static weight-bearing, punctate tactile allodynia evaluation, treadmill exercise and operant PEAP, the latter being the most responsive to analgesic intra-articular lidocaine. Substance P and calcitonin gene-related peptide were higher in MIA groups compared to naive (adjusted P (adj-P) = 0.016) or sham-treated (adj-P = 0.029) rats.
Repeated post-MIA lidocaine injection resulted in 34 times lower downregulation for spinal substance P compared to MIA alone (adj-P = 0.029), with a concomitant increase of 17 % in time spent on the PEAP dark side (indicative of increased comfort). This study of normal rats and rats with pain established the most reliable and sensitive pain assessment methods and an optimized acclimatization protocol. Operant PEAP testing was more responsive to lidocaine analgesia than other tests used, while neuropeptide spinal concentration is an objective quantification method attractive to support and validate different centralized pain functional assessment methods.
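Repeatability in the study was summarized with the intraclass correlation coefficient (ICC ≥ 0.7 counted as good). A one-way random-effects ICC(1,1) can be sketched in Python as follows (the scores below are hypothetical, and the study's exact ICC form may differ):

```python
def icc_oneway(ratings):
    """One-way random-effects ICC(1,1) for test-retest repeatability.

    ratings: list of lists, one inner list of repeated measurements per
    subject (equal numbers of repeats assumed).
    ICC(1,1) = (MSB - MSW) / (MSB + (k - 1) * MSW).
    """
    n = len(ratings)        # number of subjects
    k = len(ratings[0])     # repeats per subject
    grand = sum(sum(r) for r in ratings) / (n * k)
    subj_means = [sum(r) / k for r in ratings]
    msb = k * sum((m - grand) ** 2 for m in subj_means) / (n - 1)
    msw = sum((x - m) ** 2
              for r, m in zip(ratings, subj_means) for x in r) / (n * (k - 1))
    return (msb - msw) / (msb + (k - 1) * msw)

# Hypothetical repeated weight-bearing scores for four rats, two sessions each
scores = [[10.0, 10.4], [12.1, 12.0], [8.9, 9.3], [11.5, 11.2]]
icc = icc_oneway(scores)
```

An ICC near 1 means between-subject variance dominates the within-subject (repeat-to-repeat) variance, i.e. the outcome is repeatable.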

  16. Computation of Pressurized Gas Bearings Using CE/SE Method

    NASA Technical Reports Server (NTRS)

    Cioc, Sorin; Dimofte, Florin; Keith, Theo G., Jr.; Fleming, David P.

    2003-01-01

    The space-time conservation element and solution element (CE/SE) method is extended to compute compressible viscous flows in pressurized thin fluid films. This numerical scheme has previously been used successfully to solve a wide variety of compressible flow problems, including flows with large and small discontinuities. In this paper, the method is applied to calculate the pressure distribution in a hybrid gas journal bearing. The formulation of the problem is presented, including the modeling of the feeding system. The numerical results obtained are compared with experimental data. Good agreement between the computed results and the test data was obtained, thereby validating the CE/SE method for such problems.

  17. A calibration method of infrared LVF based spectroradiometer

    NASA Astrophysics Data System (ADS)

    Liu, Jiaqing; Han, Shunli; Liu, Lei; Hu, Dexin

    2017-10-01

    In this paper, a calibration method for an LVF-based spectroradiometer is summarized, including spectral calibration and radiometric calibration. The spectral calibration process is as follows: first, the relationship between the stepping motor's step number and the transmission wavelength is derived by theoretical calculation, including a non-linearity correction of the LVF; second, a line-to-line method is used to correct the theoretical wavelength; finally, 3.39 μm and 10.69 μm lasers are used to validate the spectral calibration, showing that the sought accuracy of 0.1% or better is achieved. A new sub-region, multi-point calibration method is used for radiometric calibration to improve accuracy; results show that the sought accuracy of 1% or better is achieved.
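    The step-to-wavelength mapping described above can be illustrated with a small sketch on hypothetical data; the quadratic term stands in for the LVF non-linearity correction, and the step/wavelength values are invented for illustration:

```python
import numpy as np

# Hypothetical reference points: motor step number vs. known line wavelength (um)
steps = np.array([0.0, 100.0, 200.0, 300.0, 400.0, 500.0])
true_wl = 3.0 + 0.004 * steps + 1e-6 * steps ** 2  # linear map + small non-linearity

# Fit a quadratic step-to-wavelength calibration curve
coeffs = np.polyfit(steps, true_wl, deg=2)
predicted = np.polyval(coeffs, steps)

# Relative calibration error at each reference point
rel_err = np.abs(predicted - true_wl) / true_wl
```

    In practice the residual at independent validation lines (here the 3.39 μm and 10.69 μm lasers) would be checked against the 0.1% target.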

  18. Full immersion simulation: validation of a distributed simulation environment for technical and non-technical skills training in Urology.

    PubMed

    Brewin, James; Tang, Jessica; Dasgupta, Prokar; Khan, Muhammad S; Ahmed, Kamran; Bello, Fernando; Kneebone, Roger; Jaye, Peter

    2015-07-01

    To evaluate the face, content and construct validity of the distributed simulation (DS) environment for technical and non-technical skills training in endourology. To evaluate the educational impact of DS for urology training. DS offers a portable, low-cost simulated operating room environment that can be set up in any open space. A prospective mixed methods design using established validation methodology was conducted in this simulated environment with 10 experienced and 10 trainee urologists. All participants performed a simulated prostate resection in the DS environment. Outcome measures included surveys to evaluate the DS, as well as comparative analyses of experienced and trainee urologists' performance using real-time and 'blinded' video analysis and validated performance metrics. Non-parametric statistical methods were used to compare differences between groups. The DS environment demonstrated face, content and construct validity for both non-technical and technical skills. Kirkpatrick level 1 evidence for the educational impact of the DS environment was shown. Further studies are needed to evaluate the effect of simulated operating room training on real operating room performance. This study has shown the validity of the DS environment for non-technical, as well as technical skills training. DS-based simulation appears to be a valuable addition to traditional classroom-based simulation training. © 2014 The Authors BJU International © 2014 BJU International Published by John Wiley & Sons Ltd.

  19. The Use of Virtual Reality in the Study of People's Responses to Violent Incidents.

    PubMed

    Rovira, Aitor; Swapp, David; Spanlang, Bernhard; Slater, Mel

    2009-01-01

    This paper reviews experimental methods for the study of the responses of people to violence in digital media, and in particular considers the issues of internal validity and ecological validity or generalisability of results to events in the real world. Experimental methods typically involve a significant level of abstraction from reality, with participants required to carry out tasks that are far removed from violence in real life, and hence their ecological validity is questionable. On the other hand studies based on field data, while having ecological validity, cannot control multiple confounding variables that may have an impact on observed results, so that their internal validity is questionable. It is argued that immersive virtual reality may provide a unification of these two approaches. Since people tend to respond realistically to situations and events that occur in virtual reality, and since virtual reality simulations can be completely controlled for experimental purposes, studies of responses to violence within virtual reality are likely to have both ecological and internal validity. This depends on a property that we call 'plausibility' - including the fidelity of the depicted situation with prior knowledge and expectations. We illustrate this with data from a previously published experiment, a virtual reprise of Stanley Milgram's 1960s obedience experiment, and also with pilot data from a new study being developed that looks at bystander responses to violent incidents.

  20. The Use of Virtual Reality in the Study of People's Responses to Violent Incidents

    PubMed Central

    Rovira, Aitor; Swapp, David; Spanlang, Bernhard; Slater, Mel

    2009-01-01

    This paper reviews experimental methods for the study of the responses of people to violence in digital media, and in particular considers the issues of internal validity and ecological validity or generalisability of results to events in the real world. Experimental methods typically involve a significant level of abstraction from reality, with participants required to carry out tasks that are far removed from violence in real life, and hence their ecological validity is questionable. On the other hand studies based on field data, while having ecological validity, cannot control multiple confounding variables that may have an impact on observed results, so that their internal validity is questionable. It is argued that immersive virtual reality may provide a unification of these two approaches. Since people tend to respond realistically to situations and events that occur in virtual reality, and since virtual reality simulations can be completely controlled for experimental purposes, studies of responses to violence within virtual reality are likely to have both ecological and internal validity. This depends on a property that we call ‘plausibility’ – including the fidelity of the depicted situation with prior knowledge and expectations. We illustrate this with data from a previously published experiment, a virtual reprise of Stanley Milgram's 1960s obedience experiment, and also with pilot data from a new study being developed that looks at bystander responses to violent incidents. PMID:20076762

  1. Instrumental Variable Methods for Continuous Outcomes That Accommodate Nonignorable Missing Baseline Values.

    PubMed

    Ertefaie, Ashkan; Flory, James H; Hennessy, Sean; Small, Dylan S

    2017-06-15

    Instrumental variable (IV) methods provide unbiased treatment effect estimation in the presence of unmeasured confounders under certain assumptions. To provide valid estimates of treatment effect, treatment effect confounders that are associated with the IV (IV-confounders) must be included in the analysis, and not including observations with missing values may lead to bias. Missing covariate data are particularly problematic when the probability that a value is missing is related to the value itself, which is known as nonignorable missingness. In such cases, imputation-based methods are biased. Using health-care provider preference as an IV method, we propose a 2-step procedure with which to estimate a valid treatment effect in the presence of baseline variables with nonignorable missing values. First, the provider preference IV value is estimated by performing a complete-case analysis using a random-effects model that includes IV-confounders. Second, the treatment effect is estimated using a 2-stage least squares IV approach that excludes IV-confounders with missing values. Simulation results are presented, and the method is applied to an analysis comparing the effects of sulfonylureas versus metformin on body mass index, where the variables baseline body mass index and glycosylated hemoglobin have missing values. Our result supports the association of sulfonylureas with weight gain. © The Author 2017. Published by Oxford University Press on behalf of the Johns Hopkins Bloomberg School of Public Health. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.
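    A minimal simulation of the two-stage least squares idea, with made-up data and a single instrument standing in for provider preference, illustrates why the IV estimate escapes the confounder bias that afflicts naive regression (this is a sketch of generic 2SLS, not the authors' two-step missing-data procedure):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20000

z = rng.normal(size=n)                        # instrument (e.g., provider preference)
u = rng.normal(size=n)                        # unmeasured confounder
d = 0.8 * z + u + rng.normal(size=n)          # treatment driven by IV and confounder
y = 2.0 * d + u + rng.normal(size=n)          # outcome; true treatment effect = 2

# Stage 1: project the treatment onto the instrument
X1 = np.column_stack([np.ones(n), z])
d_hat = X1 @ np.linalg.lstsq(X1, d, rcond=None)[0]

# Stage 2: regress the outcome on the fitted treatment
X2 = np.column_stack([np.ones(n), d_hat])
beta_iv = np.linalg.lstsq(X2, y, rcond=None)[0][1]

# Naive OLS for comparison, biased upward by the shared confounder u
X_ols = np.column_stack([np.ones(n), d])
beta_ols = np.linalg.lstsq(X_ols, y, rcond=None)[0][1]
```

    With this data-generating process the OLS slope converges to roughly 2.38 while the 2SLS slope converges to the true effect of 2.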

  2. Validating and determining the weight of items used for evaluating clinical governance implementation based on analytic hierarchy process model.

    PubMed

    Hooshmand, Elaheh; Tourani, Sogand; Ravaghi, Hamid; Vafaee Najar, Ali; Meraji, Marziye; Ebrahimipour, Hossein

    2015-04-08

    The purpose of implementing a system such as Clinical Governance (CG) is to integrate, establish and globalize distinct policies in order to improve quality through increasing professional knowledge and the accountability of healthcare professionals toward providing clinical excellence. Since CG is related to change, and change requires money and time, CG implementation has to be focused on priority areas that are in more dire need of change. The purpose of the present study was to validate and determine the significance of items used for evaluating CG implementation. The present study was descriptive-quantitative in method and design. Items used for evaluating CG implementation were first validated by the Delphi method and then compared with one another and ranked based on the Analytical Hierarchy Process (AHP) model. The items that were validated for evaluating CG implementation in Iran include performance evaluation, training and development, personnel motivation, clinical audit, clinical effectiveness, risk management, resource allocation, policies and strategies, external audit, information system management, research and development, CG structure, implementation prerequisites, the management of patients' non-medical needs, complaints and patients' participation in the treatment process. The most important items based on their degree of significance were training and development, performance evaluation, and risk management. The least important items included the management of patients' non-medical needs, patients' participation in the treatment process and research and development. The fundamental requirements of CG implementation included having an effective policy at the national level, avoiding perfectionism, using the expertise and potentials of the entire country and the coordination of this model with other models of quality improvement such as accreditation and patient safety. © 2015 by Kerman University of Medical Sciences.
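    The AHP ranking step used above can be sketched as follows; the geometric-mean weighting and Saaty's random-index values are standard textbook choices, not details taken from this study, and the example weights are invented:

```python
import numpy as np

def ahp_weights(A):
    """Priority weights from a pairwise comparison matrix via the
    geometric-mean method, plus Saaty's consistency ratio."""
    n = A.shape[0]
    w = np.prod(A, axis=1) ** (1.0 / n)   # row geometric means
    w /= w.sum()                          # normalize to priority weights
    lam_max = (A @ w / w).mean()          # principal eigenvalue estimate
    ci = (lam_max - n) / (n - 1)          # consistency index
    ri = {3: 0.58, 4: 0.90, 5: 1.12}[n]   # Saaty's random index (textbook values)
    return w, ci / ri                     # weights, consistency ratio

# Perfectly consistent comparison matrix built from known weights
w_true = np.array([0.5, 0.3, 0.2])
A = w_true[:, None] / w_true[None, :]
w, cr = ahp_weights(A)
```

    A consistency ratio below 0.1 is the conventional acceptance threshold for expert judgments; a perfectly consistent matrix yields a ratio of zero.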

  3. Patient simulation: a literary synthesis of assessment tools in anesthesiology.

    PubMed

    Edler, Alice A; Fanning, Ruth G; Chen, Michael I; Claure, Rebecca; Almazan, Dondee; Struyk, Brain; Seiden, Samuel C

    2009-12-20

    High-fidelity patient simulation (HFPS) has been hypothesized as a modality for assessing competency of knowledge and skill in patient simulation, but uniform methods for HFPS performance assessment (PA) have not yet been completely achieved. Anesthesiology as a field founded the HFPS discipline and also leads in its PA. This project reviews the types, quality, and designated purpose of HFPS PA tools in anesthesiology. We systematically reviewed anesthesiology literature referenced in PubMed to assess the quality and reliability of available PA tools in HFPS. Of 412 articles identified, 50 met our inclusion criteria. Seventy-seven percent of the studies have been published since 2000; more recent studies demonstrated higher quality. Investigators reported a variety of test construction and validation methods. The most commonly reported test construction methods included "modified Delphi techniques" for item selection, reliability measurement using inter-rater agreement, and intra-class correlations between test items or subtests. Modern test theory, in particular generalizability theory, was used in nine (18%) of the studies. Test score validity has been addressed in multiple investigations and has shown a significant improvement in reporting accuracy. However, the assessment of predictive validity has been low across the majority of studies. Usability and practicality of testing occasions and tools were only anecdotally reported. To more completely comply with the gold standards for PA design, both shared experience of experts and recognition of test construction standards, including reliability and validity measurements, instrument piloting, rater training, and explicit identification of the purpose and proposed use of the assessment tool, are required.

  4. Validation of amino-acids measurement in dried blood spot by FIA-MS/MS for PKU management.

    PubMed

    Bruno, C; Dufour-Rainfray, D; Patin, F; Vourc'h, P; Guilloteau, D; Maillot, F; Labarthe, F; Tardieu, M; Andres, C R; Emond, P; Blasco, H

    2016-09-01

    Phenylketonuria (PKU) is a metabolic disorder leading to high concentrations of phenylalanine (Phe) and low concentrations of tyrosine (Tyr) in blood and brain that may be neurotoxic. This disease requires regular monitoring of plasma Phe and Tyr as well as branched-chain amino-acid concentrations to adapt the Phe-restricted diet and other therapy that may be prescribed in PKU. We validated a Flow Injection Analysis tandem Mass Spectrometry (FIA-MS/MS) method to replace the enzymatic method routinely used for neonatal screening, in order to monitor, in parallel to Phe, the Tyr and branched-chain amino-acids not detected by the enzymatic method. We ascertained the performance of the method: linearity, detection and quantification limits, contamination index, and accuracy. We cross-validated the FIA-MS/MS and enzymatic methods, and we established our own reference ranges for monitoring Phe, Tyr, Leu, and Val in 59 dried blood spots from normal controls. We also evaluated Tyr, Leu and Val concentrations in PKU patients to detect potential abnormalities not evaluated by the enzymatic method. We developed a rapid method with excellent performance, including precision and accuracy <15%. We noted an excellent correlation of Phe concentrations between the FIA-MS/MS and enzymatic methods (p < 0.0001), with reference ranges in our database similar to those published. We observed that 50% of PKU patients had lower concentrations of Tyr, Leu and/or Val that could not be detected by the enzymatic method. Based on laboratory accreditation recommendations, we validated a robust, rapid and reliable FIA-MS/MS method to monitor plasma Phe concentrations as well as Tyr, Leu and Val concentrations, suitable for PKU management. We established our own reference ranges of concentration for a routine application of this method. Copyright © 2016 The Canadian Society of Clinical Chemists. Published by Elsevier Inc. All rights reserved.
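    Detection and quantification limits of the kind validated here are commonly derived from calibration residuals via the ICH-style formulas LOD = 3.3σ/S and LOQ = 10σ/S; a sketch on hypothetical calibration data (concentrations and responses invented for illustration):

```python
import numpy as np

# Hypothetical calibration: known concentrations vs. instrument response
conc = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
resp = 2.0 * conc + np.array([0.1, -0.1, 0.1, -0.1, 0.1])  # fixed "noise"

# Fit the calibration line and compute the residual standard deviation
slope, intercept = np.polyfit(conc, resp, 1)
residuals = resp - (slope * conc + intercept)
sigma = np.sqrt((residuals ** 2).sum() / (len(conc) - 2))  # ddof = 2 for a line fit

lod = 3.3 * sigma / slope    # limit of detection
loq = 10.0 * sigma / slope   # limit of quantification
```

    This is a generic illustration of the linearity/LOD/LOQ part of method validation, not the specific acceptance criteria used by the authors.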

  5. Validation of design procedure and performance modeling of a heat and fluid transport field experiment in the unsaturated zone

    NASA Astrophysics Data System (ADS)

    Nir, A.; Doughty, C.; Tsang, C. F.

    Validation methods that were developed in the context of deterministic concepts of past generations often cannot be directly applied to environmental problems, which may be characterized by limited reproducibility of results and highly complex models. Instead, validation is interpreted here as a series of activities, including both theoretical and experimental tests, designed to enhance our confidence in the capability of a proposed model to describe some aspect of reality. We examine the validation process applied to a project concerned with heat and fluid transport in porous media, in which mathematical modeling, simulation, and results of field experiments are evaluated in order to determine the feasibility of a system for seasonal thermal energy storage in shallow unsaturated soils. Technical details of the field experiments are not included, but appear in previous publications. Validation activities are divided into three stages. The first stage, carried out prior to the field experiments, is concerned with modeling the relevant physical processes, optimization of the heat-exchanger configuration and the shape of the storage volume, and multi-year simulation. Subjects requiring further theoretical and experimental study are identified at this stage. The second stage encompasses the planning and evaluation of the initial field experiment. Simulations are made to determine the experimental time scale and optimal sensor locations. Soil thermal parameters and temperature boundary conditions are estimated using an inverse method. Then results of the experiment are compared with model predictions using different parameter values and modeling approximations. In the third stage, results of an experiment performed under different boundary conditions are compared to predictions made by the models developed in the second stage. Various aspects of this theoretical and experimental field study are described as examples of the verification and validation procedure. 
There is no attempt to validate a specific model; rather, several models of increasing complexity are compared with experimental results. The outcome is interpreted as a demonstration of the paradigm proposed by van der Heijde [26], that different constituencies have different objectives for the validation process and therefore their acceptance criteria also differ.

  6. Creation of a novel simulator for minimally invasive neurosurgery: fusion of 3D printing and special effects.

    PubMed

    Weinstock, Peter; Rehder, Roberta; Prabhu, Sanjay P; Forbes, Peter W; Roussin, Christopher J; Cohen, Alan R

    2017-07-01

    OBJECTIVE Recent advances in optics and miniaturization have enabled the development of a growing number of minimally invasive procedures, yet innovative training methods for the use of these techniques remain lacking. Conventional teaching models, including cadavers and physical trainers as well as virtual reality platforms, are often expensive and ineffective. Newly developed 3D printing technologies can recreate patient-specific anatomy, but the stiffness of the materials limits fidelity to real-life surgical situations. Hollywood special effects techniques can create ultrarealistic features, including lifelike tactile properties, to enhance accuracy and effectiveness of the surgical models. The authors created a highly realistic model of a pediatric patient with hydrocephalus via a unique combination of 3D printing and special effects techniques and validated the use of this model in training neurosurgery fellows and residents to perform endoscopic third ventriculostomy (ETV), an effective minimally invasive method increasingly used in treating hydrocephalus. METHODS A full-scale reproduction of the head of a 14-year-old adolescent patient with hydrocephalus, including external physical details and internal neuroanatomy, was developed via a unique collaboration of neurosurgeons, simulation engineers, and a group of special effects experts. The model contains "plug-and-play" replaceable components for repetitive practice. The appearance of the training model (face validity) and the reproducibility of the ETV training procedure (content validity) were assessed by neurosurgery fellows and residents of different experience levels based on a 14-item Likert-like questionnaire. The usefulness of the training model for evaluating the performance of the trainees at different levels of experience (construct validity) was measured by blinded observers using the Objective Structured Assessment of Technical Skills (OSATS) scale for the performance of ETV. 
RESULTS A combination of 3D printing technology and casting processes led to the creation of realistic surgical models that include high-fidelity reproductions of the anatomical features of hydrocephalus and allow for the performance of ETV for training purposes. The models reproduced the pulsations of the basilar artery, ventricles, and cerebrospinal fluid (CSF), thus simulating the experience of performing ETV on an actual patient. The results of the 14-item questionnaire showed limited variability among participants' scores, and the neurosurgery fellows and residents gave the models consistently high ratings for face and content validity. The mean score for the content validity questions (4.88) was higher than the mean score for face validity (4.69) (p = 0.03). On construct validity scores, the blinded observers rated performance of fellows significantly higher than that of residents, indicating that the model provided a means to distinguish between novice and expert surgical skills. CONCLUSIONS A plug-and-play lifelike ETV training model was developed through a combination of 3D printing and special effects techniques, providing both anatomical and haptic accuracy. Such simulators offer opportunities to accelerate the development of expertise with respect to new and novel procedures as well as iterate new surgical approaches and innovations, thus allowing novice neurosurgeons to gain valuable experience in surgical techniques without exposing patients to risk of harm.

  7. Identifying outliers of non-Gaussian groundwater state data based on ensemble estimation for long-term trends

    NASA Astrophysics Data System (ADS)

    Jeong, Jina; Park, Eungyu; Han, Weon Shik; Kim, Kueyoung; Choung, Sungwook; Chung, Il Moon

    2017-05-01

    A hydrogeological dataset often includes substantial deviations that need to be inspected. In the present study, three outlier identification methods - the three sigma rule (3σ), interquartile range (IQR), and median absolute deviation (MAD) - that take advantage of the ensemble regression method are proposed by considering non-Gaussian characteristics of groundwater data. For validation purposes, the performance of the methods is compared using simulated and actual groundwater data with a few hypothetical conditions. In the validations using simulated data, all of the proposed methods reasonably identify outliers at a 5% outlier level; whereas, only the IQR method performs well for identifying outliers at a 30% outlier level. When applying the methods to real groundwater data, the outlier identification performance of the IQR method is found to be superior to the other two methods. However, the IQR method shows a limitation in that it identifies excessive false outliers, which may be overcome by its joint application with other methods (for example, the 3σ rule and MAD methods). The proposed methods can also be applied as potential tools for the detection of future anomalies by model training based on currently available data.
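    The three rules compared above can be sketched in their textbook forms (without the paper's ensemble-regression step). Note how a single gross outlier can inflate the standard deviation enough to mask itself from the 3σ rule, while the IQR and MAD rules still flag it:

```python
import numpy as np

def outliers_3sigma(x):
    m, s = x.mean(), x.std()
    return x[np.abs(x - m) > 3 * s]

def outliers_iqr(x, k=1.5):
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    return x[(x < q1 - k * iqr) | (x > q3 + k * iqr)]

def outliers_mad(x, cutoff=3.5):
    med = np.median(x)
    mad = np.median(np.abs(x - med))
    return x[0.6745 * np.abs(x - med) / mad > cutoff]  # modified z-score

# Hypothetical groundwater-level readings with one gross error
x = np.array([10.0, 11.0, 9.0, 10.0, 12.0, 10.0, 11.0, 100.0])
```

    Here the value 100 drags the mean to about 21.6 and the standard deviation to about 29.6, so it sits inside the 3σ band, while the median-based rules are unaffected by the contamination.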

  8. Time series modeling of human operator dynamics in manual control tasks

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.

    1984-01-01

    A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency responses of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.
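    The least-squares flavor of such time-series identification can be sketched on a hypothetical first-order operator model; the frequency response then follows from the fitted coefficients (this is a generic ARX sketch, not the multi-channel method of the report):

```python
import numpy as np

rng = np.random.default_rng(1)
u = rng.normal(size=500)                  # excitation signal, not experimenter-controlled

# Simulate a first-order operator model: y[k] = a*y[k-1] + b*u[k-1]
a_true, b_true = 0.5, 1.0
y = np.zeros(500)
for k in range(1, 500):
    y[k] = a_true * y[k - 1] + b_true * u[k - 1]

# Least-squares ARX identification from the recorded signals
Phi = np.column_stack([y[:-1], u[:-1]])
a_hat, b_hat = np.linalg.lstsq(Phi, y[1:], rcond=None)[0]

# Frequency response of the identified model: H(z) = b / (z - a), z = e^{jw}
w = np.linspace(0.01, np.pi, 100)
H = b_hat / (np.exp(1j * w) - a_hat)
```

    With noise-free data the regression recovers the coefficients exactly; with short noisy records, statistical validity tests of the kind the paper describes decide whether the fitted model order is justified.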

  9. A Methodology for Validating Safety Heuristics Using Clinical Simulations: Identifying and Preventing Possible Technology-Induced Errors Related to Using Health Information Systems

    PubMed Central

    Borycki, Elizabeth; Kushniruk, Andre; Carvalho, Christopher

    2013-01-01

    Internationally, health information systems (HIS) safety has emerged as a significant concern for governments. Recently, research has emerged that has documented the ability of HIS to be implicated in the harm and death of patients. Researchers have attempted to develop methods that can be used to prevent or reduce technology-induced errors. Some researchers are developing methods that can be employed prior to systems release. These methods include the development of safety heuristics and clinical simulations. In this paper, we outline our methodology for developing safety heuristics specific to identifying the features or functions of a HIS user interface design that may lead to technology-induced errors. We follow this with a description of a methodological approach to validate these heuristics using clinical simulations. PMID:23606902

  10. Time Series Modeling of Human Operator Dynamics in Manual Control Tasks

    NASA Technical Reports Server (NTRS)

    Biezad, D. J.; Schmidt, D. K.

    1984-01-01

    A time-series technique is presented for identifying the dynamic characteristics of the human operator in manual control tasks from relatively short records of experimental data. Control of system excitation signals used in the identification is not required. The approach is a multi-channel identification technique for modeling multi-input/multi-output situations. The method presented includes statistical tests for validity, is designed for digital computation, and yields estimates for the frequency response of the human operator. A comprehensive relative power analysis may also be performed for validated models. This method is applied to several sets of experimental data; the results are discussed and shown to compare favorably with previous research findings. New results are also presented for a multi-input task that has not been previously modeled to demonstrate the strengths of the method.

  11. The Importance of Method Selection in Determining Product Integrity for Nutrition Research

    PubMed Central

    Mudge, Elizabeth M; Brown, Paula N

    2016-01-01

    The American Herbal Products Association estimates that there are as many as 3000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. PMID:26980823

  12. The Importance of Method Selection in Determining Product Integrity for Nutrition Research.

    PubMed

    Mudge, Elizabeth M; Betz, Joseph M; Brown, Paula N

    2016-03-01

    The American Herbal Products Association estimates that there are as many as 3000 plant species in commerce. The FDA estimates that there are about 85,000 dietary supplement products in the marketplace. The pace of product innovation far exceeds that of analytical methods development and validation, with new ingredients, matrixes, and combinations resulting in an analytical community that has been unable to keep up. This has led to a lack of validated analytical methods for dietary supplements and to inappropriate method selection where methods do exist. Only after rigorous validation procedures to ensure that methods are fit for purpose should they be used in a routine setting to verify product authenticity and quality. By following systematic procedures and establishing performance requirements for analytical methods before method development and validation, methods can be developed that are both valid and fit for purpose. This review summarizes advances in method selection, development, and validation regarding herbal supplement analysis and provides several documented examples of inappropriate method selection and application. © 2016 American Society for Nutrition.

  13. Development and validation of LC-HRMS and GC-NICI-MS methods for stereoselective determination of MDMA and its phase I and II metabolites in human urine

    PubMed Central

    Schwaninger, Andrea E.; Meyer, Markus R.; Huestis, Marilyn A.; Maurer, Hans H.

    2013-01-01

    3,4-Methylenedioxymethamphetamine (MDMA) is a racemic drug of abuse, and its R- and S-enantiomers are known to differ in their dose-response curve. The S-enantiomer was shown to be eliminated at a higher rate than the R-enantiomer, most likely explained by stereoselective metabolism observed in various in vitro experiments. The aim of this work was the development and validation of methods for evaluating the stereoselective elimination of phase I and particularly phase II metabolites of MDMA in human urine. Urine samples were analyzed by three different methods. Method A allowed stereoselective determination of the 4-hydroxy-3-methoxymethamphetamine (HMMA) glucuronides and only achiral determination of the intact sulfate conjugates of HMMA and 3,4-dihydroxymethamphetamine (DHMA) after C18 solid-phase extraction by liquid chromatography–high-resolution mass spectrometry with electrospray ionization. Method B allowed the determination of the enantiomer ratios of DHMA and HMMA sulfate conjugates after selective enzymatic cleavage and chiral analysis of the corresponding deconjugated metabolites after chiral derivatization with S-heptafluorobutyrylprolyl chloride using gas chromatography–mass spectrometry with negative-ion chemical ionization. Method C allowed the chiral determination of MDMA and its unconjugated metabolites using method B without sulfate cleavage. The validation process, including specificity, recovery, matrix effects, process efficiency, accuracy and precision, stability, and limits of quantification and detection, showed that all methods were selective, sensitive, accurate and precise for all tested analytes. PMID:21656610

  14. A Comparison of Three Different Scoring Methods for Self-Report Measures of Psychological Aggression in a Sample of College Females

    PubMed Central

    Shorey, Ryan C.; Brasfield, Hope; Febres, Jeniimarie; Cornelius, Tara L.; Stuart, Gregory L.

    2012-01-01

    Psychological aggression in females’ dating relationships has received increased empirical attention in recent years. However, researchers have used numerous measures of psychological aggression, and various scoring methods with these measures, making it difficult to compare across studies on psychological aggression. In addition, research has yet to examine whether different scoring methods for psychological aggression measures may affect the psychometric properties of these instruments. The current study examined three self-report measures of psychological aggression within a sample of female college students (N = 108), including their psychometric properties when scored using frequency, sum, and variety scores. Results showed that the Revised Conflict Tactics Scales (CTS2) had variable internal consistency depending on the scoring method used and good validity; the Multidimensional Measure of Emotional Abuse (MMEA) and the Follingstad Psychological Aggression Scale (FPAS) both had good internal consistency and validity across scoring methods. Implications of these findings for the assessment of psychological aggression and future research are discussed. PMID:23393957
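    The three scoring methods compared above can be sketched for CTS2-style 0-6 response categories; the midpoint weights used for frequency scoring below are the commonly cited ones and are an assumption here, not values taken from this study:

```python
# CTS2-style item responses: 0 = never, 1 = once, 2 = twice, 3 = 3-5 times,
# 4 = 6-10 times, 5 = 11-20 times, 6 = more than 20 times in the past year.
# Midpoint weights commonly used for frequency scoring (an assumption here):
MIDPOINTS = [0, 1, 2, 4, 8, 15, 25]

def variety_score(items):
    """Number of distinct acts endorsed at all."""
    return sum(1 for r in items if r > 0)

def sum_score(items):
    """Raw sum of the ordinal category codes."""
    return sum(items)

def frequency_score(items):
    """Estimated number of acts in the past year, via category midpoints."""
    return sum(MIDPOINTS[r] for r in items)

responses = [0, 1, 3, 6]  # hypothetical answers to four items
```

    The three scores weight the same responses very differently, which is one reason internal consistency can vary with the scoring method chosen.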

  16. Estimation of Monthly Near Surface Air Temperature Using Geographically Weighted Regression in China

    NASA Astrophysics Data System (ADS)

    Wang, M. M.; He, G. J.; Zhang, Z. M.; Zhang, Z. J.; Liu, X. G.

    2018-04-01

    Near surface air temperature (NSAT) is a primary descriptor of terrestrial environmental conditions. The availability of NSAT with high spatial resolution is necessary for several applications such as hydrology, meteorology and ecology. In this study, a regression-based NSAT mapping method is proposed. This method combines remote sensing variables with geographical variables and uses geographically weighted regression to estimate NSAT. Altitude was selected as the geographical variable, and the remote sensing variables include land surface temperature (LST) and the Normalized Difference Vegetation Index (NDVI). The performance of the proposed method was assessed by predicting monthly minimum, mean, and maximum NSAT from point station measurements in China, a domain with a large area, complex topography, and highly variable station density, and the NSAT maps were validated against the meteorological observations. Validation results with meteorological data show the proposed method achieved an accuracy of 1.58 °C. It is concluded that the proposed method for mapping NSAT is operational and has good precision.
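
    The core of geographically weighted regression is a local weighted least-squares fit at each target location, with weights that decay with distance from that location. The following is a minimal NumPy sketch under assumed names and a fixed Gaussian bandwidth; the paper's actual predictors are LST, NDVI and altitude, while the synthetic data here use altitude alone with an assumed lapse rate.

```python
# Minimal geographically weighted regression (GWR) sketch:
# one local weighted least-squares fit per prediction location.
import numpy as np

def gwr_predict(coords, X, y, target_coord, target_x, bandwidth):
    """Predict y at one location from nearby station observations."""
    d = np.linalg.norm(coords - target_coord, axis=1)
    w = np.exp(-0.5 * (d / bandwidth) ** 2)        # Gaussian kernel weights
    Xd = np.column_stack([np.ones(len(X)), X])     # add intercept column
    W = np.diag(w)
    beta = np.linalg.solve(Xd.T @ W @ Xd, Xd.T @ W @ y)  # weighted LS
    return np.concatenate([[1.0], target_x]) @ beta

# Synthetic stations: temperature falls with altitude (~6.5 C per km).
rng = np.random.default_rng(0)
coords = rng.uniform(0, 100, size=(200, 2))        # station x, y (km)
altitude = rng.uniform(0, 3, size=200)             # km
temp = 25.0 - 6.5 * altitude                       # deg C, noise-free

pred = gwr_predict(coords, altitude[:, None], temp,
                   target_coord=np.array([50.0, 50.0]),
                   target_x=np.array([1.0]), bandwidth=30.0)
print(round(pred, 2))  # 18.5 (= 25 - 6.5 * 1 km)
```

    In a full GWR the bandwidth is usually selected by cross-validation rather than fixed, and the fit is repeated for every cell of the output temperature map.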

  17. Performing skin microbiome research: A method to the madness

    PubMed Central

    Kong, Heidi H.; Andersson, Björn; Clavel, Thomas; Common, John E.; Jackson, Scott A.; Olson, Nathan D.; Segre, Julia A.; Traidl-Hoffmann, Claudia

    2017-01-01

    Growing interest in microbial contributions to human health and disease has increasingly led investigators to examine the microbiome in both healthy skin and cutaneous disorders, including acne, psoriasis and atopic dermatitis. A common language, effective study design, and validated methods are critical for high-quality, standardized research. Features unique to skin pose particular challenges when conducting microbiome research. This review discusses microbiome research standards and highlights important factors to consider, including clinical study design, skin sampling, sample processing, DNA sequencing, control inclusion, and data analysis. PMID:28063650

  18. Mobile device geo-localization and object visualization in sensor networks

    NASA Astrophysics Data System (ADS)

    Lemaire, Simon; Bodensteiner, Christoph; Arens, Michael

    2014-10-01

    In this paper we present a method to visualize geo-referenced objects on modern smartphones using a multifunctional application design. The application applies different localization and visualization methods, including use of the smartphone camera image. The presented application copes well with different scenarios. A generic application work flow and augmented reality visualization techniques are described. The feasibility of the approach is experimentally validated using an online desktop selection application in a network with a modern off-the-shelf smartphone. Applications are widespread and include, for instance, crisis and disaster management as well as military applications.

  19. Internal Cluster Validation on Earthquake Data in the Province of Bengkulu

    NASA Astrophysics Data System (ADS)

    Rini, D. S.; Novianti, P.; Fransiska, H.

    2018-04-01

    The k-means method is an algorithm that clusters n objects into k partitions (k < n) based on their attributes. A deficiency of the algorithm is that the k initial points are chosen randomly before execution, so the resulting clustering can differ between runs; if the random initialization is poor, the clustering becomes less than optimal. Cluster validation is a technique to determine the optimum number of clusters without prior information about the data. There are two types of cluster validation: internal cluster validation and external cluster validation. This study aims to examine and apply several internal cluster validation indices, including the Calinski-Harabasz (CH) index, Silhouette (S) index, Davies-Bouldin (DB) index, Dunn (D) index, and S-Dbw index, to earthquake data from Bengkulu Province. The optimum number of clusters based on internal cluster validation is k = 2 for the CH, S, and S-Dbw indices, k = 6 for the DB index, and k = 15 for the D index. The optimum clustering (k = 6) based on the DB index gives good results for clustering earthquakes in Bengkulu Province.
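
    As an illustration of internal cluster validation, the sketch below computes one of the indices named in this record, the Calinski-Harabasz index: the ratio of between-cluster to within-cluster dispersion, penalized by the number of clusters (higher is better). The toy data and labelings are assumptions of this sketch, not the study's earthquake data.

```python
# Calinski-Harabasz index: (B / (k - 1)) / (W / (n - k)), where B is the
# between-cluster and W the within-cluster sum of squared distances.
import numpy as np

def calinski_harabasz(X, labels):
    n, k = len(X), len(set(labels))
    overall_mean = X.mean(axis=0)
    between, within = 0.0, 0.0
    for c in set(labels):
        cluster = X[labels == c]
        centroid = cluster.mean(axis=0)
        between += len(cluster) * np.sum((centroid - overall_mean) ** 2)
        within += np.sum((cluster - centroid) ** 2)
    return (between / (k - 1)) / (within / (n - k))

# Two well-separated blobs: the natural 2-cluster labeling should score
# higher than an arbitrary 3-way split of the same points.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0, 0.5, (50, 2)), rng.normal(5, 0.5, (50, 2))])
labels2 = np.array([0] * 50 + [1] * 50)
labels3 = np.array([0] * 25 + [1] * 25 + [2] * 50)
print(calinski_harabasz(X, labels2) > calinski_harabasz(X, labels3))  # True
```

    In practice the index is evaluated for a range of k values and the k that maximizes it is taken as the optimum, which is how the record above arrives at different optima for different indices.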

  20. Validation of an instrument to assess toddler feeding practices of Latino mothers.

    PubMed

    Chaidez, Virginia; Kaiser, Lucia L

    2011-08-01

    This paper describes qualitative and quantitative aspects of testing a 34-item Toddler-Feeding Questionnaire (TFQ), designed for use in Latino families, and the associations between feeding practices and toddler dietary outcomes. Qualitative methods included review by an expert panel for content validity and cognitive testing of the tool to assess face validity. Quantitative analyses included use of exploratory factor analysis for construct validity; Pearson's correlations for test-retest reliability; Cronbach's alpha (α) for internal reliability; and multivariate regression for investigating relationships between feeding practices and toddler diet and anthropometry. Interviews were conducted using a convenience sample of 94 Latino mother and toddler dyads obtained largely through the Supplemental Nutrition Program for Women, Infants and Children (WIC). Data collection included household characteristics, self-reported early-infant feeding practices, the toddler's dietary intake, and anthropometric measurements. Factor analysis suggests the TFQ contains three subscales: indulgent; authoritative; and environmental influences. The TFQ demonstrated acceptable reliability for most measures. As hypothesized, indulgent practices in Latino toddlers were associated with increased energy consumption and higher intakes of total fat, saturated fat, and sweetened beverages. This tool may be useful in future research exploring the relationship of toddler feeding practices to nutritional outcomes in Latino families. Copyright © 2011 Elsevier Ltd. All rights reserved.

  1. 36 CFR 223.222 - Appraisal.

    Code of Federal Regulations, 2010 CFR

    2010-07-01

    ... DISPOSAL OF NATIONAL FOREST SYSTEM TIMBER Special Forest Products Appraisal and Pricing § 223.222 Appraisal. The Chief of the Forest Service shall determine the appraised value of special forest products. Valid methods to determine appraised value include, but are not limited to, transaction evidence appraisals...

  2. Development and psychometric evaluation of the Undergraduate Clinical Education Environment Measure (UCEEM).

    PubMed

    Strand, Pia; Sjöborg, Karolina; Stalmeijer, Renée; Wichmann-Hansen, Gitte; Jakobsson, Ulf; Edgren, Gudrun

    2013-12-01

    There is a paucity of instruments designed to evaluate the multiple dimensions of the workplace as an educational environment for undergraduate medical students. The aim was to develop and psychometrically evaluate an instrument to measure how undergraduate medical students perceive the clinical workplace environment, based on workplace learning theories and empirical findings. Development of the instrument relied on established standards including theoretical and empirical grounding, systematic item development and expert review at various stages to ensure content validity. Qualitative and quantitative methods were employed using a series of steps from conceptualization through psychometric analysis of scores in a Swedish medical student population. The final result was a 25-item instrument with two overarching dimensions, experiential learning and social participation, and four subscales that coincided well with theory and empirical findings: Opportunities to learn in and through work & quality of supervision; Preparedness for student entry; Workplace interaction patterns & student inclusion; and Equal treatment. Evidence from various sources supported content validity, construct validity and reliability of the instrument. The Undergraduate Clinical Education Environment Measure represents a valid, reliable and feasible multidimensional instrument for evaluation of the clinical workplace as a learning environment for undergraduate medical students. Further validation in different populations using various psychometric methods is needed.

  3. Subarachnoid hemorrhage admissions retrospectively identified using a prediction model

    PubMed Central

    McIntyre, Lauralyn; Fergusson, Dean; Turgeon, Alexis; dos Santos, Marlise P.; Lum, Cheemun; Chassé, Michaël; Sinclair, John; Forster, Alan; van Walraven, Carl

    2016-01-01

    Objective: To create an accurate prediction model using variables collected in widely available health administrative data records to identify hospitalizations for primary subarachnoid hemorrhage (SAH). Methods: A previously established complete cohort of consecutive primary SAH patients was combined with a random sample of control hospitalizations. Chi-square recursive partitioning was used to derive and internally validate a model to predict the probability that a patient had primary SAH (due to aneurysm or arteriovenous malformation) using health administrative data. Results: A total of 10,322 hospitalizations with 631 having primary SAH (6.1%) were included in the study (5,122 derivation, 5,200 validation). In the validation patients, our recursive partitioning algorithm had a sensitivity of 96.5% (95% confidence interval [CI] 93.9–98.0), a specificity of 99.8% (95% CI 99.6–99.9), and a positive likelihood ratio of 483 (95% CI 254–879). In this population, patients meeting criteria for the algorithm had a probability of 45% of truly having primary SAH. Conclusions: Routinely collected health administrative data can be used to accurately identify hospitalized patients with a high probability of having a primary SAH. This algorithm may allow, upon validation, an easy and accurate method to create validated cohorts of primary SAH from either ruptured aneurysm or arteriovenous malformation. PMID:27629096
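
    The validation statistics reported for this algorithm (sensitivity, specificity, and positive likelihood ratio) all derive from a 2x2 confusion matrix. A minimal sketch, using illustrative counts rather than the study's data:

```python
# Screening statistics from a 2x2 confusion matrix.
def screening_stats(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)                   # true-positive rate
    specificity = tn / (tn + fp)                   # true-negative rate
    lr_positive = sensitivity / (1 - specificity)  # undefined if spec == 1
    return sensitivity, specificity, lr_positive

# Illustrative counts only (not the study's 10,322 hospitalizations).
sens, spec, lr = screening_stats(tp=96, fn=4, tn=998, fp=2)
print(f"sensitivity={sens:.3f} specificity={spec:.3f} LR+={lr:.0f}")
# sensitivity=0.960 specificity=0.998 LR+=480
```

    A large positive likelihood ratio like the study's 483 means a positive algorithm result raises the odds of true primary SAH by that factor, which is why the post-test probability can be high even though SAH is rare in the source population.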

  4. A systematic review of methods to assess intake of sugar-sweetened beverages among healthy European adults and children: a DEDIPAC (DEterminants of DIet and Physical Activity) study.

    PubMed

    Riordan, Fiona; Ryan, Kathleen; Perry, Ivan J; Schulze, Matthias B; Andersen, Lene Frost; Geelen, Anouk; Van't Veer, Pieter; Eussen, Simone; van Dongen, Martien; Wijckmans-Duysens, Nicole; Harrington, Janas M

    2017-03-01

    Research indicates that intake of sugar-sweetened beverages (SSB) may be associated with negative health consequences. However, differences between assessment methods can affect the comparability of intake data across studies. The current review aimed to identify methods used to assess SSB intake among children and adults in pan-European studies and to inform the development of the DEDIPAC (DEterminants of DIet and Physical Activity) toolbox of methods suitable for use in future European studies. A literature search was conducted using three electronic databases and by hand-searching reference lists. English-language studies of any design that assessed SSB consumption and involved two or more European countries were included in the review. The populations of interest were healthy, free-living children and adults. The review identified twenty-three pan-European studies which assessed intake of SSB. The FFQ was the most commonly used method (n 24), followed by the 24 h recall (n 6) and diet records (n 1). There were several differences between the identified FFQ, including the definition of SSB used. In total, seven instruments that were tested for validity were selected as potentially suitable to assess SSB intake among adults (n 1), adolescents (n 3) and children (n 3). The current review highlights the need for instruments to use an agreed definition of SSB. Methods that were tested for validity and used in pan-European populations encompassing a range of countries were identified. These methods should be considered for use in future studies evaluating consumption of SSB.

  5. Often Asked but Rarely Answered: Can Asians Meet DSM-5/ICD-10 Autism Spectrum Disorder Criteria?

    PubMed Central

    Kim, So Hyun; Koh, Yun-Joo; Lim, Eun-Chung; Kim, Soo-Jeong; Leventhal, Bennett L.

    2016-01-01

    Abstract Objectives: To evaluate whether Asian (Korean children) populations can be validly diagnosed with autism spectrum disorder (ASD) using Western-based diagnostic instruments and criteria based on Diagnostic and Statistical Manual on Mental Disorders, 5th edition (DSM-5). Methods: Participants included an epidemiologically ascertained 7–14-year-old (N = 292) South Korean cohort from a larger prevalence study (N = 55,266). Main outcomes were based on Western-based diagnostic methods for Korean children using gold standard instruments, Autism Diagnostic Interview-Revised, and Autism Diagnostic Observation Schedule. Factor analysis and ANOVAs were performed to examine factor structure of autism symptoms and identify phenotypic differences between Korean children with ASD and non-ASD diagnoses. Results: Using Western-based diagnostic methods, Korean children with ASD were successfully identified with moderate-to-high diagnostic validity (sensitivities/specificities ranging 64%–93%), strong internal consistency, and convergent/concurrent validity. The patterns of autism phenotypes in a Korean population were similar to those observed in a Western population with two symptom domains (social communication and restricted and repetitive behavior factors). Statistically significant differences in the use of socially acceptable communicative behaviors (e.g., direct gaze, range of facial expressions) emerged between ASD versus non-ASD cases (mostly p < 0.001), ensuring that these can be a similarly valid part of the ASD phenotype in both Asian and Western populations. Conclusions: Despite myths, biases, and stereotypes about Asian social behavior, Asians (at least Korean children) typically use elements of reciprocal social interactions similar to those in the West. Therefore, standardized diagnostic methods widely used for ASD in Western culture can be validly used as part of the assessment process and research with Koreans and, possibly, other Asians. PMID:27315155

  6. OWL-based reasoning methods for validating archetypes.

    PubMed

    Menárguez-Tortosa, Marcos; Fernández-Breis, Jesualdo Tomás

    2013-04-01

    Some modern Electronic Healthcare Record (EHR) architectures and standards are based on the dual model-based architecture, which defines two conceptual levels: reference model and archetype model. Such architectures represent EHR domain knowledge by means of archetypes, which are considered by many researchers to play a fundamental role for the achievement of semantic interoperability in healthcare. Consequently, formal methods for validating archetypes are necessary. In recent years, there has been an increasing interest in exploring how semantic web technologies in general, and ontologies in particular, can facilitate the representation and management of archetypes, including binding to terminologies, but no solution based on such technologies has been provided to date to validate archetypes. Our approach represents archetypes by means of OWL ontologies. This permits combining the two levels of the dual model-based architecture in one modeling framework, which can also integrate terminologies available in OWL format. The validation method consists of reasoning on those ontologies to find modeling errors in archetypes: incorrect restrictions over the reference model, non-conformant archetype specializations and inconsistent terminological bindings. The archetypes available in the repositories supported by the openEHR Foundation and the NHS Connecting for Health Program, which are the two largest publicly available ones, have been analyzed with our validation method. For this purpose, we have implemented a software tool called Archeck. Our results show that around 1/5 of archetype specializations contain modeling errors, the most common mistakes being related to coded terms and terminological bindings. The analysis of each repository reveals that different patterns of errors are found in both repositories. This result reinforces the need for making serious efforts in improving archetype design processes. Copyright © 2012 Elsevier Inc. All rights reserved.

  7. Development and Validation of a Monte Carlo Simulation Tool for Multi-Pinhole SPECT

    PubMed Central

    Mok, Greta S. P.; Du, Yong; Wang, Yuchuan; Frey, Eric C.; Tsui, Benjamin M. W.

    2011-01-01

    Purpose In this work, we developed and validated a Monte Carlo simulation (MCS) tool for investigation and evaluation of multi-pinhole (MPH) SPECT imaging. Procedures This tool was based on a combination of the SimSET and MCNP codes. Photon attenuation and scatter in the object, as well as penetration and scatter through the collimator detector, are modeled in this tool. It allows accurate and efficient simulation of MPH SPECT with focused pinhole apertures and user-specified photon energy, aperture material, and imaging geometry. The MCS method was validated by comparing the point response function (PRF), detection efficiency (DE), and image profiles obtained from point sources and phantom experiments. A prototype single-pinhole collimator and focused four- and five-pinhole collimators fitted on a small animal imager were used for the experimental validations. We have also compared computational speed among various simulation tools for MPH SPECT, including SimSET-MCNP, MCNP, SimSET-GATE, and GATE for simulating projections of a hot sphere phantom. Results We found good agreement between the MCS and experimental results for PRF, DE, and image profiles, indicating the validity of the simulation method. The relative computational speeds for SimSET-MCNP, MCNP, SimSET-GATE, and GATE are 1: 2.73: 3.54: 7.34, respectively, for 120-view simulations. We also demonstrated the application of this MCS tool in small animal imaging by generating a set of low-noise MPH projection data of a 3D digital mouse whole body phantom. Conclusions The new method is useful for studying MPH collimator designs, data acquisition protocols, image reconstructions, and compensation techniques. It also has great potential to be applied for modeling the collimator-detector response with penetration and scatter effects for MPH in the quantitative reconstruction method. PMID:19779896

  8. Development and validation of a Food Frequency Questionnaire (FFQ) for assessing sugar consumption among adults in Klang Valley, Malaysia.

    PubMed

    Shanita, Nik S; Norimah, A K; Abu Hanifah, S

    2012-12-01

    The aim of this study was to develop and validate a semiquantitative food frequency questionnaire (FFQ) for assessing habitual added sugar consumption of adults in the Klang Valley. In the development phase, a 24-hour dietary recall (24-hr DR) was used to determine food items to be included into the FFQ among adults from three major ethnicities (n = 51). In the validation phase, the FFQ was further validated against a reference method which was a multiple-pass 24-hr DR among 125 adults in Klang Valley. The response rate for the latter phase was 96.1%. The semi-quantitative FFQ consisting of 64 food items was categorised into 10 food groups. The mean added sugar intake determined by the reference method was 44.2 +/- 20.2 g/day while that from the FFQ was 49.4 +/- 21.4 g/day. The difference in mean intake between the two methods was 5.2 g (95% CI = 2.6-7.9; SD = 14.9, p < 0.05) or 11.8%. Pearson correlation was r = 0.74 (p < 0.001) for the two methods while Spearman rank correlations for the various food groups ranged between 0.11 (cake and related foods) to 0.61 (self-prepared drinks), with most groups correlating significantly (p < 0.05). Cross-classification of subjects into quintiles of intake showed 47.2% of the subjects correctly classifying into the same quintile, 34.4% into adjacent quintiles while none were grossly misclassified. The Bland-Altman plot was concentrated in the y-axis range (-24.14 g to 34.8 g) with a mean of 5.22 g. This semi-quantitative FFQ provides a validated tool for estimating habitual intake of added sugar in the adult population of the Klang Valley.
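
    The agreement statistics used in this validation (mean difference, Bland-Altman limits of agreement, and Pearson correlation between the FFQ and the 24-hr DR) can be sketched as follows. The paired intake values are illustrative assumptions, not the study's data.

```python
# Agreement between two dietary assessment methods.
import math

def bland_altman(method_a, method_b):
    """Return the bias (mean difference) and 95% limits of agreement."""
    diffs = [a - b for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

def pearson_r(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

ffq    = [49, 55, 40, 62, 45]   # added sugar, g/day (illustrative)
recall = [44, 50, 38, 55, 42]   # 24-hr recall, g/day (illustrative)
bias, limits = bland_altman(ffq, recall)
print(round(bias, 1))           # 4.4  -> FFQ overestimates on average
print(round(pearson_r(ffq, recall), 2))
```

    A positive bias with narrow limits of agreement, combined with a high correlation and good quintile cross-classification, is the typical pattern accepted as adequate relative validity for an FFQ.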

  9. Validated flow-injection method for rapid aluminium determination in anti-perspirants.

    PubMed

    López-Gonzálvez, A; Ruiz, M A; Barbas, C

    2008-09-29

    A flow-injection (FI) method for the rapid determination of aluminium in anti-perspirants has been developed. The method is based on the spectrophotometric detection at 535 nm of the complex formed between Al ions and the chromogenic reagent eriochrome cyanine R. Both the batch and FI methods were validated by checking the parameters included in the ISO-3543-1 regulation. Variables involved in the FI method were optimized by using appropriate statistical tools. The method does not exhibit interference from other substances present in anti-perspirants and it shows a high precision with a R.S.D. value (n=6) of 0.9%. Moreover, the accuracy of the method was evaluated by comparison with a back complexometric titration method, which is currently used for routine analysis in pharmaceutical laboratories. The Student's t-test showed that the results obtained by both methods were not significantly different for a significance level of 95%. A response time of 12 s and a sample analysis time, by performing triplicate injections, of 60 s were achieved. The analytical figures of merit make the method highly appropriate to replace the time-consuming complexometric method for this kind of analysis.

  10. Consortium on Methods Evaluating Tobacco: Research Tools to Inform FDA Regulation of Snus.

    PubMed

    Berman, Micah L; Bickel, Warren K; Harris, Andrew C; LeSage, Mark G; O'Connor, Richard J; Stepanov, Irina; Shields, Peter G; Hatsukami, Dorothy K

    2017-10-04

    The U.S. Food and Drug Administration (FDA) has purview over tobacco products. To set policy, the FDA must rely on sound science, yet most existing tobacco research methods have not been designed specifically to inform regulation. The NCI- and FDA-funded Consortium on Methods Evaluating Tobacco (COMET) was established to develop and assess valid and reliable methods for tobacco product evaluation. The goal of this paper is to describe these assessment methods using a U.S.-manufactured "snus" as the test product. In designing studies that could inform FDA regulation, COMET has taken a multidisciplinary approach that includes experimental animal models and a range of human studies that examine tobacco product appeal, addictiveness, and toxicity. This paper integrates COMET's findings over the last 4 years. Consistency in results was observed across the various studies, lending validity to our methods. Studies showed low abuse liability for snus and low levels of consumer demand. Toxicity was less than cigarettes on some biomarkers but higher than medicinal nicotine. Given our study methods and the convergence of results, the snus that we tested as a potential modified risk tobacco product is likely to result in neither substantial public health harm nor benefit. This review describes methods that were used to assess the appeal, abuse liability, and toxicity of snus. These methods included animal, behavioral economics, and consumer perception studies, and clinical trials. Across these varied methods, study results showed low abuse liability and appeal of the snus product we tested. In several studies, demand for snus was lower than for less toxic nicotine gum. The consistency and convergence of results across a range of multidisciplinary studies lends validity to our methods and suggests that promotion of snus as a modified risk tobacco product is unlikely to produce substantial public health benefit or harm. © The Author 2017. 
Published by Oxford University Press on behalf of the Society for Research on Nicotine and Tobacco. All rights reserved. For permissions, please e-mail: journals.permissions@oup.com.

  11. Optimization and validation of a rapid method to determine citrate and inorganic phosphate in milk by capillary electrophoresis.

    PubMed

    Izco, J M; Tormo, M; Harris, A; Tong, P S; Jimenez-Flores, R

    2003-01-01

    Quantification of phosphate and citrate compounds is very important because their distribution between soluble and colloidal phases of milk and their interactions with milk proteins influence the stability and some functional properties of dairy products. The aim of this work was to optimize and validate a capillary electrophoresis method for the rapid determination of these compounds in milk. Various parameters affecting analysis have been optimized, including type, composition, and pH of the electrolyte, and sample extraction. Ethanol, acetonitrile, sulfuric acid, and water at 50 degrees C or at room temperature were tested as sample buffers (SB). Water at room temperature yielded the best overall results and was chosen for further validation. The extraction time was checked and could be shortened to less than 1 min. Also, sample preparation was simplified to pipetting 12 microl of milk into 1 ml of water containing 20 ppm of tartaric acid as an internal standard. The linearity of the method was excellent (R2 > 0.999) with CV values of response factors <3%. The detection limits for phosphate and citrate were 5.1 and 2.4 nM, respectively. The accuracy of the method was calculated for each compound (103.2 and 100.3%). In addition, citrate and phosphate content of several commercial milk samples were analyzed by this method, and the results deviated less than 5% from values obtained when analyzing the samples by official methods. To study the versatility of the technique, other dairy products such as cream cheese, yogurt, or Cheddar cheese were analyzed and accuracy was similar to milk in all products tested. The procedure is rapid and offers a very fast and simple sample preparation. Once the sample has arrived at the laboratory, less than 5 min (including handling, preparation, running, integration, and quantification) are necessary to determine the concentration of citric acid and inorganic phosphate. 
Because of the speed and accuracy of this method, it is promising as an analytical quantitative testing technique.
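
    The two linearity criteria cited in this record (calibration R2 > 0.999 and response-factor CV < 3%) can be checked with a few lines of code. The calibration data below are illustrative assumptions, not the paper's measurements.

```python
# Linearity checks for a calibration curve: coefficient of determination
# and coefficient of variation of the response factors (response / conc).
import math

def r_squared(conc, response):
    n = len(conc)
    mx, my = sum(conc) / n, sum(response) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(conc, response))
    sxx = sum((x - mx) ** 2 for x in conc)
    syy = sum((y - my) ** 2 for y in response)
    return sxy ** 2 / (sxx * syy)

def response_factor_cv(conc, response):
    rf = [y / x for x, y in zip(conc, response)]
    mean = sum(rf) / len(rf)
    sd = math.sqrt(sum((r - mean) ** 2 for r in rf) / (len(rf) - 1))
    return 100 * sd / mean  # percent

conc = [5, 10, 20, 40, 80]               # standard concentrations (a.u.)
resp = [10.1, 19.8, 40.3, 79.9, 160.2]   # detector response (a.u.)
print(r_squared(conc, resp) > 0.999)     # True
print(response_factor_cv(conc, resp) < 3)  # True
```

    A near-constant response factor across the calibration range is what justifies single-point or internal-standard quantification in routine runs.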

  12. Development of an integrated laboratory system for the monitoring of cyanotoxins in surface and drinking waters.

    PubMed

    Triantis, Theodoros; Tsimeli, Katerina; Kaloudis, Triantafyllos; Thanassoulias, Nicholas; Lytras, Efthymios; Hiskia, Anastasia

    2010-05-01

    A system of analytical processes has been developed in order to serve as a cost-effective scheme for the monitoring of cyanobacterial toxins on a quantitative basis, in surface and drinking waters. Five cyclic peptide hepatotoxins, microcystin-LR, -RR, -YR, -LA and nodularin were chosen as the target compounds. Two different enzyme-linked immunosorbent assays (ELISA) were validated in order to serve as primary quantitative screening tools. Validation results showed that the ELISA methods are sufficiently specific and sensitive with limits of detection (LODs) around 0.1 microg/L; however, matrix effects should be considered, especially with surface water samples or bacterial mass methanolic extracts. A colorimetric protein phosphatase inhibition assay (PPIA) utilizing protein phosphatase 2A and p-nitrophenyl phosphate as substrate, was applied in microplate format in order to serve as a quantitative screening method for the detection of the toxic activity associated with cyclic peptide hepatotoxins, at concentration levels >0.2 microg/L of MC-LR equivalents. A fast HPLC/PDA method has been developed for the determination of microcystins, using a short 50 mm C18 column with 1.8 microm particle size. Using this method a 10-fold reduction of sample run time was achieved and sufficient separation of microcystins was accomplished in less than 3 min. Finally, the analytical system includes an LC/MS/MS method that was developed for the determination of the 5 target compounds after SPE extraction. The method achieves extremely low limits of detection (<0.02 microg/L), in both surface and drinking waters and it is used for identification and verification purposes as well as for determinations at the ppt level. An analytical protocol that includes the above methods has been designed and validated through the analysis of a number of real samples. Copyright 2009 Elsevier Ltd. All rights reserved.

  13. Validation approach for a fast and simple targeted screening method for 75 antibiotics in meat and aquaculture products using LC-MS/MS.

    PubMed

    Dubreil, Estelle; Gautier, Sophie; Fourmond, Marie-Pierre; Bessiral, Mélaine; Gaugain, Murielle; Verdon, Eric; Pessel, Dominique

    2017-04-01

    An approach is described to validate a fast and simple targeted screening method for antibiotic analysis in meat and aquaculture products by LC-MS/MS. The strategy of validation was applied for a panel of 75 antibiotics belonging to different families, i.e., penicillins, cephalosporins, sulfonamides, macrolides, quinolones and phenicols. The samples were extracted once with acetonitrile, concentrated by evaporation and injected into the LC-MS/MS system. The approach chosen for the validation was based on the Community Reference Laboratory (CRL) guidelines for the validation of screening qualitative methods. The aim of the validation was to prove sufficient sensitivity of the method to detect all the targeted antibiotics at the level of interest, generally the maximum residue limit (MRL). A robustness study was also performed to test the influence of different factors. The validation showed that the method is valid to detect and identify 73 antibiotics of the 75 antibiotics studied in meat and aquaculture products at the validation levels.
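
    The CRL guidelines cited here judge a qualitative screening method mainly by its sensitivity at the screening target concentration: the method is acceptable if nearly all spiked samples are detected. A minimal sketch of that decision rule, with the 5% false-negative threshold and the sample counts as assumptions of this sketch:

```python
# Sensitivity check for a qualitative screening method: the method is
# considered valid for an analyte at the screening target concentration
# (e.g., the MRL) if the false-negative rate among spiked samples stays
# at or below a chosen threshold (5% here, an assumed value).
def screening_valid(detections, max_false_negative_rate=0.05):
    """detections: list of True/False results for spiked samples."""
    fn_rate = detections.count(False) / len(detections)
    return fn_rate <= max_false_negative_rate

print(screening_valid([True] * 20))               # True  (0% missed)
print(screening_valid([True] * 18 + [False] * 2)) # False (10% missed)
```

    Applying such a rule analyte by analyte is how a panel method like this one ends up validated for 73 of 75 antibiotics: the two failures are the analytes whose false-negative rate at the level of interest exceeds the threshold.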

  14. Pesticide analysis in teas and chamomile by liquid chromatography and gas chromatography tandem mass spectrometry using a modified QuEChERS method: validation and pilot survey in real samples.

    PubMed

    Lozano, Ana; Rajski, Łukasz; Belmonte-Valles, Noelia; Uclés, Ana; Uclés, Samanta; Mezcua, Milagros; Fernández-Alba, Amadeo R

    2012-12-14

    This paper presents the validation of a modified QuEChERS method in four matrices - green tea, red tea, black tea and chamomile. The experiments were carried out using blank samples spiked with a solution of 86 pesticides (insecticides, fungicides and herbicides) at four levels - 10, 25, 50 and 100 μg/kg. The samples were extracted according to the citrate QuEChERS protocol; however, to reduce the amount of coextracted matrix compounds, calcium chloride was employed instead of magnesium sulphate in the clean-up step. The samples were analysed by LC-MS/MS and GC-MS/MS. Included in the scope of validation were: recovery, linearity, matrix effects, limits of detection and quantitation as well as intra-day and inter-day precision. The validated method was used in a real sample survey carried out on 75 samples purchased in ten different countries. In all matrices, recoveries of the majority of compounds were in the 70-120% range and were characterised by precision lower than 20%. In 85% of pesticide/matrix combinations the analytes can be detected quantitatively by the proposed method at the European Union Maximum Residue Level. The analysis of the real samples revealed that a large number of teas and chamomiles sold in the European Union contain pesticides whose usage is not approved, as well as pesticides at concentrations above the EU MRLs. Copyright © 2012 Elsevier B.V. All rights reserved.

  15. Development and validation of sensitive LC/MS/MS method for quantitative bioanalysis of levonorgestrel in rat plasma and application to pharmacokinetics study.

    PubMed

    Ananthula, Suryatheja; Janagam, Dileep R; Jamalapuram, Seshulatha; Johnson, James R; Mandrell, Timothy D; Lowe, Tao L

    2015-10-15

    A rapid, sensitive, selective and accurate LC/MS/MS method was developed for quantitative determination of levonorgestrel (LNG) in rat plasma and further validated for specificity, linearity, accuracy, precision, sensitivity, matrix effect, recovery efficiency and stability. A liquid-liquid extraction procedure using a hexane:ethyl acetate mixture at an 80:20 (v/v) ratio was employed to efficiently extract LNG from rat plasma. A reversed-phase Luna C18(2) column (50 × 2.0 mm i.d., 3 μm) installed on an AB SCIEX Triple Quad™ 4500 LC/MS/MS system was used to perform the chromatographic separation. LNG was identified within 2 min with high specificity. A linear calibration curve was obtained over the 0.5-50 ng·mL(-1) concentration range. The developed method was validated for intra-day and inter-day accuracy and precision, whose values fell within acceptable limits. The matrix effect was found to be minimal. Recovery efficiency at three quality control (QC) concentrations, 0.5 (low), 5 (medium) and 50 (high) ng·mL(-1), was found to be >90%. Stability of LNG at various stages of the experiment, including storage, extraction and analysis, was evaluated using QC samples, and the results showed that LNG was stable under all conditions. This validated method was successfully used to study the pharmacokinetics of LNG in rats after subcutaneous injection, demonstrating its applicability in relevant preclinical studies. Copyright © 2015 Elsevier B.V. All rights reserved.
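
    The linearity and QC-recovery checks described above can be sketched numerically: fit a least-squares calibration line over the reported 0.5-50 ng/mL range, then back-calculate a QC sample from the line. The standard responses and QC response below are hypothetical, not the study's data:

```python
def linfit(x, y):
    """Ordinary least-squares slope and intercept."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
            sum((xi - mx) ** 2 for xi in x)
    return slope, my - slope * mx

conc = [0.5, 1, 5, 10, 25, 50]              # calibration standards, ng/mL
resp = [0.9, 2.1, 10.2, 19.8, 50.5, 99.0]   # hypothetical peak-area ratios
slope, intercept = linfit(conc, resp)

def back_calc(response):
    """Concentration back-calculated from the calibration line."""
    return (response - intercept) / slope

# QC at the medium level (5 ng/mL) with a hypothetical response of 9.9
qc_rec = 100.0 * back_calc(9.9) / 5.0
print(f"slope = {slope:.3f}, recovery at medium QC = {qc_rec:.1f}%")
```

    In a real validation this is repeated at the low, medium and high QC levels, and acceptance requires the back-calculated concentrations to fall within predefined limits of the nominal values.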

  16. Simultaneous determination of α-asarone and β-asarone in Acorus tatarinowii using excitation-emission matrix fluorescence coupled with chemometrics methods

    NASA Astrophysics Data System (ADS)

    Bai, Xue-Mei; Liu, Tie; Liu, De-Long; Wei, Yong-Ju

    2018-02-01

    A chemometrics-assisted excitation-emission matrix (EEM) fluorescence method was proposed for simultaneous determination of α-asarone and β-asarone in Acorus tatarinowii. Using the strategy of combining EEM data with chemometrics methods, the simultaneous determination of α-asarone and β-asarone in this complex Traditional Chinese medicine system was achieved successfully, even in the presence of unexpected interferents. A physical or chemical separation step was avoided through the use of "mathematical separation". Six second-order calibration methods were used, including parallel factor analysis (PARAFAC), alternating trilinear decomposition (ATLD), alternating penalty trilinear decomposition (APTLD), self-weighted alternating trilinear decomposition (SWATLD), and unfolded partial least-squares (U-PLS) and multidimensional partial least-squares (N-PLS) with residual bilinearization (RBL). In addition, an HPLC method was developed to further validate the presented strategy. For the validation samples, the analytical results obtained by the six second-order calibration methods were all acceptably accurate; for the Acorus tatarinowii samples, however, the results indicated a slightly better predictive ability for the N-PLS/RBL procedure than for the other methods.

  17. Incorporating High-Frequency Physiologic Data Using Computational Dictionary Learning Improves Prediction of Delayed Cerebral Ischemia Compared to Existing Methods.

    PubMed

    Megjhani, Murad; Terilli, Kalijah; Frey, Hans-Peter; Velazquez, Angela G; Doyle, Kevin William; Connolly, Edward Sander; Roh, David Jinou; Agarwal, Sachin; Claassen, Jan; Elhadad, Noemie; Park, Soojin

    2018-01-01

    Accurate prediction of delayed cerebral ischemia (DCI) after subarachnoid hemorrhage (SAH) can be critical for planning interventions to prevent poor neurological outcome. This paper presents a model using convolutional dictionary learning to extract features from physiological data available from bedside monitors. We develop and validate a prediction model for DCI after SAH, demonstrating improved precision over standard methods alone. A total of 488 consecutive SAH admissions from 2006 to 2014 to a tertiary care hospital were included. Models were trained on 80%, while 20% were set aside for validation testing. The Modified Fisher Scale was considered the standard grading scale in clinical use; baseline features also analyzed included age, sex, Hunt-Hess, and Glasgow Coma Scales. An unsupervised approach using convolutional dictionary learning was used to extract features from physiological time series (systolic blood pressure and diastolic blood pressure, heart rate, respiratory rate, and oxygen saturation). Classifiers (partial least squares and linear and kernel support vector machines) were trained on feature subsets of the derivation dataset. Models were applied to the validation dataset. The performances of the best classifiers on the validation dataset are reported by feature subset. Standard grading scale (mFS): AUC 0.54. Combined demographics and grading scales (baseline features): AUC 0.63. Kernel-derived physiologic features: AUC 0.66. Combined baseline and physiologic features with redundant feature reduction: AUC 0.71 on the derivation dataset and 0.78 on the validation dataset. Current DCI prediction tools rely on admission imaging and are advantageously simple to employ. However, using an agnostic and computationally inexpensive learning approach for high-frequency physiologic time series data, we demonstrated that we could incorporate individual physiologic data to achieve higher classification accuracy.
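
    The AUC values reported above can be read as a rank statistic: the probability that a randomly chosen positive case receives a higher classifier score than a randomly chosen negative case. A minimal sketch; the classifier scores below are hypothetical, not the study's data:

```python
def auc(scores_pos, scores_neg):
    """AUC as the Mann-Whitney probability that a positive case
    outscores a negative case (ties count half)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical classifier outputs for DCI cases vs. non-cases
print(auc([0.9, 0.7, 0.6, 0.4], [0.5, 0.3, 0.2, 0.1]))  # prints 0.9375
```

    An AUC of 0.5 corresponds to chance-level discrimination, which is why the mFS baseline of 0.54 is only marginally informative compared with the combined model's 0.78.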

  18. Content validity of the PedsQL™ 3.2 Diabetes Module in newly diagnosed patients with Type 1 diabetes mellitus ages 8-45.

    PubMed

    Varni, James W; Curtis, Bradley H; Abetz, Linda N; Lasch, Kathryn E; Piault, Elisabeth C; Zeytoonjian, Andrea A

    2013-10-01

    The content validity of the 28-item PedsQL™ 3.0 Diabetes Module has not been established in research on pediatric and adult patients with newly diagnosed Type 1 diabetes across a broad age range. This study aimed to document the content validity of three age-specific versions (8-12 years, 13-18 years, and 18-45 years) of the PedsQL™ Diabetes Module in a population of newly diagnosed patients with Type 1 diabetes. The study included in-depth interviews with 31 newly diagnosed patients with Type 1 diabetes between the ages of 8 and 45 years, as well as 14 parents and/or caregivers of child and teenage patients between 8 and 18 years of age; grounded theory data collection and analysis methods; and review by clinical and measurement experts. Following the initial round of interviews, revisions reflecting patient feedback were made to the Child and Teen versions of the Diabetes Module, and an Adult version of the Diabetes Module was drafted. Cognitive interviews of the modified versions of the Diabetes Module were conducted with an additional sample of 11 patients. The results of these interviews support the content validity of the modified 33-item PedsQL™ 3.2 Diabetes Module for pediatric and adult patients, including interpretability, comprehensiveness, and relevance suitable for all patients with Type 1 diabetes. Qualitative methods support the content validity of the modified PedsQL™ 3.2 Diabetes Module in pediatric and adult patients. It is recommended that the PedsQL™ 3.2 Diabetes Module replace version 3.0; it is suitable for measuring patient-reported outcomes in all patients with newly diagnosed, stable, or long-standing diabetes in clinical research and practice.

  19. Online registration of monthly sports participation after anterior cruciate ligament injury: a reliability and validity study.

    PubMed

    Grindem, Hege; Eitzen, Ingrid; Snyder-Mackler, Lynn; Risberg, May Arna

    2014-05-01

    The current methods measuring sports activity after anterior cruciate ligament (ACL) injury are commonly restricted to the most knee-demanding sports, and do not consider participation in multiple sports. We therefore developed an online activity survey to prospectively record monthly participation in all major sports relevant to our patient group, and aimed to assess the reliability, content validity and concurrent validity of the survey and to evaluate whether it provided more complete data on sports participation than a routine activity questionnaire. A total of 145 consecutively included ACL-injured patients were eligible for the reliability study. The retest of the online activity survey was performed 2 days after the test response had been recorded. A subsample of 88 ACL-reconstructed patients was included in the validity study. The ACL-reconstructed patients completed the online activity survey from the first to the 12th postoperative month, and a routine activity questionnaire 6 and 12 months postoperatively. The online activity survey was highly reliable (κ ranging from 0.81 to 1). It contained all the common sports reported on the routine activity questionnaire. There was substantial agreement between the two methods on return to preinjury main sport (κ=0.71 and 0.74 at 6 and 12 months postoperatively). The online activity survey revealed that significantly more patients reported participating in running, cycling and strength training, and that patients reported participating in a greater number of sports. The online activity survey is a highly reliable way of recording detailed changes in sports participation after ACL injury. The findings of this study support the content and concurrent validity of the survey, and suggest that the online activity survey can provide more complete data on sports participation than a routine activity questionnaire.

  20. Improving validation methods for molecular diagnostics: application of Bland-Altman, Deming and simple linear regression analyses in assay comparison and evaluation for next-generation sequencing.

    PubMed

    Misyura, Maksym; Sukhai, Mahadeo A; Kulasignam, Vathany; Zhang, Tong; Kamel-Reid, Suzanne; Stockley, Tracy L

    2018-02-01

    A standard approach in test evaluation is to compare results of the assay in validation to results from previously validated methods. For quantitative molecular diagnostic assays, comparison of test values is often performed using simple linear regression and the coefficient of determination (R²), using R² as the primary metric of assay agreement. However, the use of R² alone does not adequately quantify the constant or proportional errors required for optimal test evaluation. More extensive statistical approaches, such as Bland-Altman and expanded interpretation of linear regression methods, can be used to more thoroughly compare data from quantitative molecular assays. We present the application of Bland-Altman and linear regression statistical methods to evaluate quantitative outputs from next-generation sequencing (NGS) assays. NGS-derived data sets from assay validation experiments were used to demonstrate the utility of the statistical methods. Both Bland-Altman and linear regression were able to detect the presence and magnitude of constant and proportional error in quantitative values of NGS data. Deming linear regression was used in the context of assay comparison studies, while simple linear regression was used to analyse serial dilution data. The Bland-Altman statistical approach was also adapted to quantify assay accuracy, including constant and proportional errors, and precision where theoretical and empirical values were known. The complementary application of the statistical methods described in this manuscript enables more extensive evaluation of the performance characteristics of quantitative molecular assays, prior to implementation in the clinical molecular laboratory. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.
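
    As a sketch of the statistics discussed above, the following computes the Bland-Altman bias with 95% limits of agreement, and a Deming regression slope and intercept (assuming an error-variance ratio of 1, i.e., equal measurement error in both assays). The paired assay values are hypothetical, not the study's data:

```python
from math import sqrt
from statistics import mean, stdev

def bland_altman(a, b):
    """Mean bias and approximate 95% limits of agreement."""
    diffs = [x - y for x, y in zip(a, b)]
    bias, sd = mean(diffs), stdev(diffs)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

def deming(x, y, lam=1.0):
    """Deming regression; lam is the ratio of the two assays'
    error variances (lam=1.0 assumes equal error in both)."""
    mx, my = mean(x), mean(y)
    sxx = sum((xi - mx) ** 2 for xi in x)
    syy = sum((yi - my) ** 2 for yi in y)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = (syy - lam * sxx + sqrt((syy - lam * sxx) ** 2
             + 4 * lam * sxy ** 2)) / (2 * sxy)
    return slope, my - slope * mx

# Hypothetical variant allele fractions (%) from two NGS assays
old = [5.1, 10.2, 20.5, 40.8, 80.0]
new = [5.5, 10.9, 21.4, 42.1, 82.5]
bias, lo, hi = bland_altman(new, old)
slope, intercept = deming(old, new)
print(f"bias={bias:.2f}, LoA=({lo:.2f}, {hi:.2f}), slope={slope:.3f}")
```

    A nonzero bias indicates constant error, while a Deming slope different from 1 indicates proportional error; R² alone would reveal neither.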

  1. Natural language processing in pathology: a scoping review.

    PubMed

    Burger, Gerard; Abu-Hanna, Ameen; de Keizer, Nicolette; Cornet, Ronald

    2016-07-22

    Encoded pathology data are key for medical registries and analyses, but pathology information is often expressed as free text. We reviewed and assessed the use of NLP (natural language processing) for encoding pathology documents. Papers addressing NLP in pathology were retrieved from PubMed, Association for Computing Machinery (ACM) Digital Library and Association for Computational Linguistics (ACL) Anthology. We reviewed and summarised the study objectives; NLP methods used and their validation; software implementations; the performance on the dataset used and any reported use in practice. The main objectives of the 38 included papers were encoding and extraction of clinically relevant information from pathology reports. Common approaches were word/phrase matching, probabilistic machine learning and rule-based systems. Five papers (13%) compared different methods on the same dataset. Four papers did not specify the method(s) used. 18 of the 26 studies that reported F-measure, recall or precision reported values of over 0.9. Proprietary software was the most frequently mentioned category (14 studies); General Architecture for Text Engineering (GATE) was the most applied architecture overall. Practical system use was reported in four papers. Most papers used expert annotation validation. Different methods are used in NLP research in pathology, and good performances, that is, high precision and recall, high retrieval/removal rates, are reported for all of these. Lack of validation and of shared datasets precludes performance comparison. More comparative analysis and validation are needed to provide better insight into the performance and merits of these methods. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/
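
    The F-measure threshold discussed above combines precision and recall into a single score; a minimal sketch (the true-positive, false-positive and false-negative counts below are hypothetical, not drawn from the reviewed papers):

```python
def f_measure(tp, fp, fn):
    """Precision, recall, and F1 for an extraction/encoding system
    scored against expert annotation."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Hypothetical counts from comparing system output to a gold standard
p, r, f1 = f_measure(tp=92, fp=5, fn=8)
print(f"precision={p:.3f} recall={r:.3f} F1={f1:.3f}")
```

    Because F1 is the harmonic mean, a system cannot reach the 0.9 mark reported by most studies unless both precision and recall are high.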

  2. Evaluation of the Thermo Scientific SureTect Listeria species assay. AOAC Performance Tested Method 071304.

    PubMed

    Cloke, Jonathan; Evans, Katharine; Crabtree, David; Hughes, Annette; Simpson, Helen; Holopainen, Jani; Wickstrand, Nina; Kauppinen, Mikko; Leon-Velarde, Carlos; Larson, Nathan; Dave, Keron

    2014-01-01

    The Thermo Scientific SureTect Listeria species Assay is a new real-time PCR assay for the detection of all species of Listeria in food and environmental samples. This validation study was conducted using the AOAC Research Institute (RI) Performance Tested Methods program to validate the SureTect Listeria species Assay in comparison to the reference method detailed in International Organization for Standardization 11290-1:1996, including amendment 1:2004, in a variety of foods plus plastic and stainless steel. The food matrixes validated were smoked salmon, processed cheese, fresh bagged spinach, cantaloupe, cooked prawns, cooked sliced turkey meat, cooked sliced ham, salami, pork frankfurters, and raw ground beef. All matrixes were tested by Thermo Fisher Scientific, Microbiology Division, Basingstoke, UK. In addition, three matrixes (pork frankfurters, fresh bagged spinach, and stainless steel surface samples) were analyzed independently as part of the AOAC-RI-controlled independent laboratory study by the University of Guelph, Canada. Using probability of detection statistical analysis, a significant difference in favour of the SureTect assay was demonstrated between the SureTect and reference methods for high-level spiked samples of pork frankfurters, smoked salmon, cooked prawns, and stainless steel, and for low-level spiked samples of salami. For all other matrixes, no significant difference was seen between the two methods during the study. Inclusivity testing was conducted with 68 different isolates of Listeria species, all of which were detected by the SureTect Listeria species Assay. None of the 33 exclusivity isolates were detected by the SureTect Listeria species Assay. Ruggedness testing was conducted to evaluate the performance of the assay with specific method deviations outside of the recommended parameters open to variation, which demonstrated that the assay gave reliable performance. Accelerated stability testing was additionally conducted, validating the assay shelf life.
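
    The probability-of-detection (POD) analysis mentioned above compares candidate and reference methods on replicate spiked test portions. A minimal sketch of a POD point estimate with a Wilson 95% score interval; the counts below are hypothetical, and the full AOAC dPOD/LPOD methodology is more involved than this:

```python
from math import sqrt

def pod_ci(detections, trials, z=1.96):
    """Probability of detection with a Wilson 95% score interval."""
    p = detections / trials
    denom = 1 + z ** 2 / trials
    centre = (p + z ** 2 / (2 * trials)) / denom
    half = z * sqrt(p * (1 - p) / trials + z ** 2 / (4 * trials ** 2)) / denom
    return p, centre - half, centre + half

# Hypothetical: candidate vs. reference method on 20 low-level spiked portions
for label, x in (("candidate", 14), ("reference", 9)):
    p, lo, hi = pod_ci(x, 20)
    print(f"{label}: POD={p:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

    In the AOAC framework, a difference between methods is declared significant when the confidence interval on the difference in POD (dPOD) excludes zero.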

  3. Convergent Validity of Preschool Children's Television Viewing Measures among Low-Income Latino Families: A Cross-Sectional Study

    PubMed Central

    McLeod, Jessica; Chen, Tzu-An; Nicklas, Theresa A.; Baranowski, Tom

    2013-01-01

    Background: Television viewing is an important modifiable risk factor for childhood obesity. However, valid methods for measuring children's TV viewing are sparse and few studies have included Latinos, a population disproportionately affected by obesity. The goal of this study was to test the reliability and convergent validity of four TV viewing measures among low-income Latino preschool children in the United States. Methods: Latino children (n=96) ages 3–5 years old were recruited from four Head Start centers in Houston, Texas (January, 2009, to June, 2010). TV viewing was measured concurrently over 7 days by four methods: (1) TV diaries (parent reported), (2) sedentary time (accelerometry), (3) TV Allowance (an electronic TV power meter), and (4) Ecological Momentary Assessment (EMA) on personal digital assistants (parent reported). This 7-day procedure was repeated 3–4 weeks later. Test–retest reliability was determined by intraclass correlations (ICC). Spearman correlations (due to nonnormal distributions) were used to determine convergent validity compared to the TV diary. Results: The TV diary had the highest test–retest reliability (ICC=0.82, p<0.001), followed by the TV Allowance (ICC=0.69, p<0.001), EMA (ICC=0.46, p<0.001), and accelerometry (ICC=0.36–0.38, p<0.01). The TV Allowance (r=0.45–0.55, p<0.001) and EMA (r=0.47–0.51, p<0.001) methods were significantly correlated with TV diaries. Accelerometer-determined sedentary minutes were not correlated with TV diaries. The TV Allowance and EMA methods were significantly correlated with each other (r=0.48–0.53, p<0.001). Conclusions: The TV diary is feasible and is the most reliable method for measuring US Latino preschool children's TV viewing. PMID:23270534
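
    Convergent validity via Spearman correlation, as used above, reduces to a Pearson correlation computed on ranks. A minimal sketch for the no-ties case; the weekly viewing-time values below are hypothetical, not the study's data:

```python
from statistics import mean

def spearman(x, y):
    """Spearman rank correlation (no-ties case): Pearson r on ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, 1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    mx, my = mean(rx), mean(ry)
    num = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx) ** 2 for a in rx) *
           sum((b - my) ** 2 for b in ry)) ** 0.5
    return num / den

# Hypothetical weekly TV hours: parent diary vs. TV Allowance power meter
diary = [14, 7, 21, 10, 28, 5]
meter = [12, 11, 18, 8, 25, 6]
print(round(spearman(diary, meter), 2))
```

    Because only ranks enter the calculation, the statistic is robust to the skewed, non-normal distributions that motivated its use in the study.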

  4. Attitudes about Advances in Sweat Patch Testing in Drug Courts: Insights from a Case Study in Southern California

    ERIC Educational Resources Information Center

    Polzer, Katherine

    2010-01-01

    Drug courts are reinventing the drug testing framework by experimenting with new methods, including use of the sweat patch. The sweat patch is a band-aid like strip used to monitor drug court participants. The validity and reliability of the sweat patch as an effective testing method was examined, as well as the effectiveness, meaning how likely…

  5. A systematic review of validated methods for identifying anaphylaxis, including anaphylactic shock and angioneurotic edema, using administrative and claims data.

    PubMed

    Schneider, Gary; Kachroo, Sumesh; Jones, Natalie; Crean, Sheila; Rotella, Philip; Avetisyan, Ruzan; Reynolds, Matthew W

    2012-01-01

    The Food and Drug Administration's Mini-Sentinel pilot program initially aims to conduct active surveillance to refine safety signals that emerge for marketed medical products. A key facet of this surveillance is to develop and understand the validity of algorithms for identifying health outcomes of interest from administrative and claims data. This article summarizes the process and findings of the algorithm review of anaphylaxis. PubMed and Iowa Drug Information Service searches were conducted to identify citations applicable to the anaphylaxis health outcome of interest. Level 1 abstract reviews and Level 2 full-text reviews were conducted to find articles using administrative and claims data to identify anaphylaxis and including validation estimates of the coding algorithms. Our search revealed limited literature focusing on anaphylaxis that provided administrative and claims data-based algorithms and validation estimates. Only four studies identified via literature searches provided validated algorithms; however, two additional studies were identified by Mini-Sentinel collaborators and were incorporated. The International Classification of Diseases, Ninth Revision, codes varied, as did the positive predictive value, depending on the cohort characteristics and the specific codes used to identify anaphylaxis. Research needs to be conducted on designing validation studies to test anaphylaxis algorithms and estimating their predictive power, sensitivity, and specificity. Copyright © 2012 John Wiley & Sons, Ltd.

  6. Experimental Validation of Model Updating and Damage Detection via Eigenvalue Sensitivity Methods with Artificial Boundary Conditions

    DTIC Science & Technology

    2017-09-01

    EXPERIMENTAL VALIDATION OF MODEL UPDATING AND DAMAGE DETECTION VIA EIGENVALUE SENSITIVITY METHODS WITH ARTIFICIAL BOUNDARY CONDITIONS, by Matthew D. Bouwense. Approved for public release; distribution is unlimited.

  7. Brazilian Center for the Validation of Alternative Methods (BraCVAM) and the process of validation in Brazil.

    PubMed

    Presgrave, Octavio; Moura, Wlamir; Caldeira, Cristiane; Pereira, Elisabete; Bôas, Maria H Villas; Eskes, Chantra

    2016-03-01

    The need for the creation of a Brazilian centre for the validation of alternative methods was recognised in 2008, and members of academia, industry and existing international validation centres immediately engaged with the idea. In 2012, co-operation between the Oswaldo Cruz Foundation (FIOCRUZ) and the Brazilian Health Surveillance Agency (ANVISA) instigated the establishment of the Brazilian Center for the Validation of Alternative Methods (BraCVAM), which was officially launched in 2013. The Brazilian validation process follows OECD Guidance Document No. 34, where BraCVAM functions as the focal point to identify and/or receive requests from parties interested in submitting tests for validation. BraCVAM then informs the Brazilian National Network on Alternative Methods (RENaMA) of promising assays, which helps with prioritisation and contributes to the validation studies of selected assays. A Validation Management Group supervises the validation study, and the results obtained are peer-reviewed by an ad hoc Scientific Review Committee, organised under the auspices of BraCVAM. Based on the peer-review outcome, BraCVAM will prepare recommendations on the validated test method, which will be sent to the National Council for the Control of Animal Experimentation (CONCEA). CONCEA is in charge of the regulatory adoption of all validated test methods in Brazil, following an open public consultation. 2016 FRAME.

  8. A systematic review of reliability and objective criterion-related validity of physical activity questionnaires.

    PubMed

    Helmerhorst, Hendrik J F; Brage, Søren; Warren, Janet; Besson, Herve; Ekelund, Ulf

    2012-08-31

    Physical inactivity is one of the four leading risk factors for global mortality. Accurate measurement of physical activity (PA) and in particular by physical activity questionnaires (PAQs) remains a challenge. The aim of this paper is to provide an updated systematic review of the reliability and validity characteristics of existing and more recently developed PAQs and to quantitatively compare the performance between existing and newly developed PAQs. A literature search of electronic databases was performed for studies assessing reliability and validity data of PAQs using an objective criterion measurement of PA between January 1997 and December 2011. Articles meeting the inclusion criteria were screened and data were extracted to provide a systematic overview of measurement properties. Due to differences in reported outcomes and criterion methods a quantitative meta-analysis was not possible. In total, 31 studies testing 34 newly developed PAQs, and 65 studies examining 96 existing PAQs were included. Very few PAQs showed good results on both reliability and validity. Median reliability correlation coefficients were 0.62-0.71 for existing, and 0.74-0.76 for new PAQs. Median validity coefficients ranged from 0.30-0.39 for existing, and from 0.25-0.41 for new PAQs. Although the majority of PAQs appear to have acceptable reliability, the validity is moderate at best. Newly developed PAQs do not appear to perform substantially better than existing PAQs in terms of reliability and validity. Future PAQ studies should include measures of absolute validity and the error structure of the instrument.

  9. [Reliability and validity of depression scales of Chinese version: a systematic review].

    PubMed

    Sun, X Y; Li, Y X; Yu, C Q; Li, L M

    2017-01-10

    Objective: To systematically review the reliability and validity of Chinese-language versions of depression scales in adults in China, and to evaluate the psychometric properties of depression scales for different groups. Methods: Eligible studies published before 6 May 2016 were retrieved from the following databases: CNKI, Wanfang, PubMed and Embase. The HSROC model for diagnostic test accuracy (DTA) meta-analysis was used to calculate the pooled sensitivity and specificity of the PHQ-9. Results: A total of 44 papers evaluating the performance of depression scales were included. Results showed that the reliability and validity of the common depression scales were acceptable, including the Beck Depression Inventory (BDI), the Hamilton Depression Scale (HAMD), the Center for Epidemiologic Studies Depression Scale (CES-D), the Patient Health Questionnaire (PHQ) and the Geriatric Depression Scale (GDS). The Cronbach's α coefficients of most tools were larger than 0.8, while the test-retest reliability and split-half reliability were larger than 0.7, indicating good internal consistency and stability. The criterion validity, convergent validity, discriminant validity and screening validity were acceptable, though different cut-off points were recommended by different studies. The pooled sensitivity of the 11 studies evaluating the PHQ-9 was 0.88 (95% CI: 0.85-0.91) while the pooled specificity was 0.89 (95% CI: 0.82-0.94), which demonstrated the applicability of the PHQ-9 in screening for depression. Conclusion: The reliability and validity of the Chinese versions of the different depression scales are acceptable. The characteristics of the different tools and the study population should be taken into consideration when choosing a specific scale.
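
    The internal-consistency criterion above (Cronbach's α larger than 0.8) can be computed directly from item-level scores; a minimal sketch with hypothetical responses, not data from the reviewed studies:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha from item-score columns
    (each inner list = one item's scores across respondents)."""
    k = len(items)
    item_var = sum(pvariance(col) for col in items)
    totals = [sum(scores) for scores in zip(*items)]
    total_var = pvariance(totals)
    return k / (k - 1) * (1 - item_var / total_var)

# Hypothetical 4-item depression-scale responses from 5 respondents
items = [
    [2, 3, 1, 4, 3],
    [2, 2, 1, 4, 4],
    [3, 3, 2, 4, 3],
    [1, 2, 1, 3, 3],
]
print(round(cronbach_alpha(items), 2))
```

    Alpha rises when items covary strongly relative to their individual variances, which is why it is read as a measure of internal consistency.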

  10. A systematic review of reliability and objective criterion-related validity of physical activity questionnaires

    PubMed Central

    2012-01-01

    Physical inactivity is one of the four leading risk factors for global mortality. Accurate measurement of physical activity (PA) and in particular by physical activity questionnaires (PAQs) remains a challenge. The aim of this paper is to provide an updated systematic review of the reliability and validity characteristics of existing and more recently developed PAQs and to quantitatively compare the performance between existing and newly developed PAQs. A literature search of electronic databases was performed for studies assessing reliability and validity data of PAQs using an objective criterion measurement of PA between January 1997 and December 2011. Articles meeting the inclusion criteria were screened and data were extracted to provide a systematic overview of measurement properties. Due to differences in reported outcomes and criterion methods a quantitative meta-analysis was not possible. In total, 31 studies testing 34 newly developed PAQs, and 65 studies examining 96 existing PAQs were included. Very few PAQs showed good results on both reliability and validity. Median reliability correlation coefficients were 0.62–0.71 for existing, and 0.74–0.76 for new PAQs. Median validity coefficients ranged from 0.30–0.39 for existing, and from 0.25–0.41 for new PAQs. Although the majority of PAQs appear to have acceptable reliability, the validity is moderate at best. Newly developed PAQs do not appear to perform substantially better than existing PAQs in terms of reliability and validity. Future PAQ studies should include measures of absolute validity and the error structure of the instrument. PMID:22938557

  11. New Sentinel-2 radiometric validation approaches (SEOM program)

    NASA Astrophysics Data System (ADS)

    Bruniquel, Véronique; Lamquin, Nicolas; Ferron, Stéphane; Govaerts, Yves; Woolliams, Emma; Dilo, Arta; Gascon, Ferran

    2016-04-01

    SEOM is an ESA programme element, one of whose objectives is to launch state-of-the-art studies for the scientific exploitation of operational missions. In the frame of this programme, in early 2016 ESA awarded ACRI-ST and its partners Rayference and the National Physical Laboratory (NPL) an R&D study on the development and intercomparison of algorithms for validating the Sentinel-2 radiometric L1 data products beyond the baseline algorithms used operationally in the frame of the S2 Mission Performance Centre. In this context, several algorithms have been proposed and are currently in development. The first is based on the exploitation of Deep Convective Cloud (DCC) observations over ocean. This method allows inter-band radiometric validation from the blue to the NIR (typically from B1 to B8a) against a reference band already validated, for example with the well-known Rayleigh method. Due to their physical properties, DCCs appear from the remote sensing point of view as bright, cold tops, and they can be used as invariant targets to monitor the radiometric response degradation of reflective solar bands. The DCC approach is statistical, i.e., the method must be applied to a large number of measurements to derive reliable statistics and decrease the impact of perturbing contributors. The second radiometric validation method is based on the exploitation of matchups combining concomitant in-situ measurements and Sentinel-2 observations. The in-situ measurements used here are acquired in the frame of the RadCalNet network. The validation is performed for the Sentinel-2 bands similar to the bands of the instruments equipping the validation site. The measurements from the Cimel CE 318 12-filter BRDF Sun Photometer recently installed at the Gobabeb site near the Namib desert are used for this method.
A comprehensive verification of the calibration requires an analysis of MSI radiances over the full dynamic range, including low radiances, as extreme values are more subject to instrument response non-linearity. The third method developed in the frame of this project addresses this point: it is based on a comparison of Sentinel-2 observations over coastal waters, which have low radiometry, with corresponding Radiative Transfer (RT) simulations using AERONET-OC measurements. Finally, a fourth method uses RadCalNet measurements and Sentinel-2 observations to validate the radiometry of mid/low-resolution sensors such as Sentinel-3/OLCI. The RadCalNet measurements are transferred from the RadCalNet sites to Pseudo-Invariant Calibration Sites (PICS) using Sentinel-2, and these larger sites are then used to validate mid- and low-resolution sensors against the RadCalNet reference. For all the developed methods, an uncertainty budget is derived following the QA4EO guidelines. A last step of this ESA project is dedicated to an inter-comparison workshop open to entities involved in Sentinel-2 radiometric validation activities. Blind inter-comparison tests over a series of images will be proposed and the results discussed during the workshop.

  12. [Validity of expired carbon monoxide and urine cotinine using dipstick method to assess smoking status].

    PubMed

    Park, Su San; Lee, Ju Yul; Cho, Sung-Il

    2007-07-01

    We investigated the validity of the dipstick method (Mossman Associates Inc., USA) and the expired CO method for distinguishing smokers from nonsmokers. This study included 244 smokers and 50 ex-smokers who had quit for over 4 weeks, recruited from smoking cessation clinics at 4 local public health centers. We calculated the sensitivity, specificity and Kappa coefficient of each method as measures of validity, and used ROC curves, predictive values and agreement to determine the cutoff for the expired CO method. Finally, we elucidated the related factors and compared their effects using standardized regression coefficients. The dipstick method showed a sensitivity of 92.6%, a specificity of 96.0% and a Kappa coefficient of 0.79. The best cutoff value for distinguishing smokers was 5-6 ppm. At 5 ppm, the expired CO method showed a sensitivity of 94.3%, a specificity of 82.0% and a Kappa coefficient of 0.73; at 6 ppm, these values were 88.5%, 86.0% and 0.64, respectively. The dipstick method therefore had higher sensitivity and specificity than the expired CO method. Values from both methods increased significantly with increasing smoking amount. With increasing time since the last cigarette, expired CO decreased rapidly after 4 hours, whereas the dipstick method remained relatively stable beyond 4 hours. Both the dipstick and expired CO methods were good indicators of smoking status; however, the former showed higher sensitivity and specificity and remained stable for longer after smoking.
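
    The validity measures used in this record (sensitivity, specificity and the Kappa coefficient) can all be computed from a single 2x2 confusion table; a minimal sketch, with hypothetical counts rather than the study's data:

```python
def diagnostic_metrics(tp, fn, tn, fp):
    """Sensitivity, specificity and Cohen's kappa from a 2x2 confusion table.

    tp/fn: true smokers classified as smokers/nonsmokers;
    tn/fp: true nonsmokers classified as nonsmokers/smokers.
    """
    n = tp + fn + tn + fp
    sensitivity = tp / (tp + fn)          # smokers correctly identified
    specificity = tn / (tn + fp)          # nonsmokers correctly identified
    p_observed = (tp + tn) / n            # observed agreement
    # Chance agreement, from the marginal totals of the table
    p_chance = ((tp + fn) / n) * ((tp + fp) / n) + \
               ((tn + fp) / n) * ((tn + fn) / n)
    kappa = (p_observed - p_chance) / (1 - p_chance)
    return sensitivity, specificity, kappa
```

    With hypothetical counts of tp=90, fn=10, tn=80, fp=20, this returns a sensitivity of 0.90, a specificity of 0.80 and a kappa of 0.70.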

  13. OARSI Clinical Trials Recommendations for Hip Imaging in Osteoarthritis

    PubMed Central

    Gold, Garry E.; Cicuttini, Flavia; Crema, Michel D.; Eckstein, Felix; Guermazi, Ali; Kijowski, Richard; Link, Thomas M.; Maheu, Emmanuel; Martel-Pelletier, Johanne; Miller, Colin G.; Pelletier, Jean-Pierre; Peterfy, Charles G.; Potter, Hollis G.; Roemer, Frank W.; Hunter, David. J

    2015-01-01

    Imaging of the hip in osteoarthritis (OA) has seen considerable progress in the past decade, with the introduction of new techniques that may be more sensitive to structural disease changes. The purpose of this expert-opinion, consensus-driven recommendation is to detail how to apply hip imaging in disease-modifying clinical trials. It includes information on acquisition methods/techniques (including guidance on positioning for radiography and sequence/protocol recommendations and hardware for MRI); commonly encountered problems (including positioning, hardware and coil failures, and artifacts associated with various MRI sequences); quality assurance/control procedures; measurement methods; measurement performance (reliability, responsiveness, and validity); recommendations for trials; and research recommendations. PMID:25952344

  14. Genomic prediction in animals and plants: simulation of data, validation, reporting, and benchmarking.

    PubMed

    Daetwyler, Hans D; Calus, Mario P L; Pong-Wong, Ricardo; de Los Campos, Gustavo; Hickey, John M

    2013-02-01

    The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. 
In our limited simulations, most methods performed similarly for traits with a large number of quantitative trait loci (QTL), whereas variable selection methods had some advantage for traits with fewer QTL. In the real data sets examined here, all methods had very similar accuracies. We conclude that no single method can serve as a benchmark for genomic prediction. We recommend comparing the accuracy and bias of new methods to results from genomic best linear unbiased prediction and a variable selection approach (e.g., BayesB), because, together, these methods are appropriate for a range of genetic architectures. An accompanying article in this issue provides a comprehensive review of genomic prediction methods and discusses a selection of topics related to the application of genomic prediction in plants and animals.
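
    The accuracy and bias indicators this record recommends reporting can be illustrated with a toy hold-out validation. The sketch below uses simulated genotypes and ridge regression as a hypothetical stand-in for genomic BLUP; all data, dimensions and the shrinkage parameter are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: n individuals, m SNP genotypes coded 0/1/2, additive phenotype
n, m = 200, 50
X = rng.integers(0, 3, size=(n, m)).astype(float)
beta = rng.normal(0.0, 1.0, m)
y = X @ beta + rng.normal(0.0, 1.0, n)

def ridge_predict(X_train, y_train, X_test, lam=10.0):
    """Ridge regression prediction (a simple stand-in for GBLUP)."""
    p = X_train.shape[1]
    w = np.linalg.solve(X_train.T @ X_train + lam * np.eye(p),
                        X_train.T @ y_train)
    return X_test @ w

# Hold-out validation: accuracy = r(observed, predicted); bias = slope of
# regressing observed on predicted (a slope of 1 indicates no bias)
train, test = slice(0, 150), slice(150, 200)
y_hat = ridge_predict(X[train], y[train], X[test])
accuracy = np.corrcoef(y[test], y_hat)[0, 1]
bias = np.polyfit(y_hat, y[test], 1)[0]
```

    Replacing the single hold-out split with repeated k-fold splits gives the cross-validation the record describes.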

  15. Genomic Prediction in Animals and Plants: Simulation of Data, Validation, Reporting, and Benchmarking

    PubMed Central

    Daetwyler, Hans D.; Calus, Mario P. L.; Pong-Wong, Ricardo; de los Campos, Gustavo; Hickey, John M.

    2013-01-01

    The genomic prediction of phenotypes and breeding values in animals and plants has developed rapidly into its own research field. Results of genomic prediction studies are often difficult to compare because data simulation varies, real or simulated data are not fully described, and not all relevant results are reported. In addition, some new methods have been compared only in limited genetic architectures, leading to potentially misleading conclusions. In this article we review simulation procedures, discuss validation and reporting of results, and apply benchmark procedures for a variety of genomic prediction methods in simulated and real example data. Plant and animal breeding programs are being transformed by the use of genomic data, which are becoming widely available and cost-effective to predict genetic merit. A large number of genomic prediction studies have been published using both simulated and real data. The relative novelty of this area of research has made the development of scientific conventions difficult with regard to description of the real data, simulation of genomes, validation and reporting of results, and forward in time methods. In this review article we discuss the generation of simulated genotype and phenotype data, using approaches such as the coalescent and forward in time simulation. We outline ways to validate simulated data and genomic prediction results, including cross-validation. The accuracy and bias of genomic prediction are highlighted as performance indicators that should be reported. We suggest that a measure of relatedness between the reference and validation individuals be reported, as its impact on the accuracy of genomic prediction is substantial. A large number of methods were compared in example simulated and real (pine and wheat) data sets, all of which are publicly available. 
In our limited simulations, most methods performed similarly for traits with a large number of quantitative trait loci (QTL), whereas variable selection methods had some advantage for traits with fewer QTL. In the real data sets examined here, all methods had very similar accuracies. We conclude that no single method can serve as a benchmark for genomic prediction. We recommend comparing the accuracy and bias of new methods to results from genomic best linear unbiased prediction and a variable selection approach (e.g., BayesB), because, together, these methods are appropriate for a range of genetic architectures. An accompanying article in this issue provides a comprehensive review of genomic prediction methods and discusses a selection of topics related to the application of genomic prediction in plants and animals. PMID:23222650

  16. Computational identification of structural factors affecting the mutagenic potential of aromatic amines: study design and experimental validation.

    PubMed

    Slavov, Svetoslav H; Stoyanova-Slavova, Iva; Mattes, William; Beger, Richard D; Brüschweiler, Beat J

    2018-07-01

    A grid-based, alignment-independent 3D-SDAR (three-dimensional spectral data-activity relationship) approach based on simulated 13C and 15N NMR chemical shifts augmented with through-space interatomic distances was used to model the mutagenicity of 554 primary and 419 secondary aromatic amines. A robust modeling strategy supported by extensive validation, including randomized training/hold-out test set pairs, validation sets, "blind" external test sets and experimental validation, was applied to avoid over-parameterization and to build Organization for Economic Cooperation and Development (OECD 2004) compliant models. Based on an experimental validation set of 23 chemicals tested in a two-strain Salmonella typhimurium Ames assay, 3D-SDAR achieved performance comparable to 5-strain (Ames) predictions by Lhasa Limited's Derek Nexus and Sarah Nexus for the same set. Furthermore, mapping of the most frequently occurring bins onto the primary and secondary aromatic amine structures allowed the identification of molecular features associated either positively or negatively with mutagenicity. Prominent structural features found to enhance the mutagenic potential included nitrobenzene moieties, conjugated π-systems, nitrothiophene groups, and aromatic hydroxylamine moieties. 3D-SDAR was also able to capture "true" negative contributions that are particularly difficult to detect with alternative methods. These include sulphonamide, acetamide, and other functional groups, which not only make no contribution to the overall mutagenic potential, but are known to actively lower it when present in the chemical structures of what otherwise would be potential mutagens.

  17. EMDataBank unified data resource for 3DEM.

    PubMed

    Lawson, Catherine L; Patwardhan, Ardan; Baker, Matthew L; Hryc, Corey; Garcia, Eduardo Sanz; Hudson, Brian P; Lagerstedt, Ingvar; Ludtke, Steven J; Pintilie, Grigore; Sala, Raul; Westbrook, John D; Berman, Helen M; Kleywegt, Gerard J; Chiu, Wah

    2016-01-04

    Three-dimensional Electron Microscopy (3DEM) has become a key experimental method in structural biology for a broad spectrum of biological specimens from molecules to cells. The EMDataBank project provides a unified portal for deposition, retrieval and analysis of 3DEM density maps, atomic models and associated metadata (emdatabank.org). We provide here an overview of the rapidly growing 3DEM structural data archives, which include maps in EM Data Bank and map-derived models in the Protein Data Bank. In addition, we describe progress and approaches toward development of validation protocols and methods, working with the scientific community, in order to create a validation pipeline for 3DEM data. © The Author(s) 2015. Published by Oxford University Press on behalf of Nucleic Acids Research.

  18. A survey on sleep assessment methods

    PubMed Central

    Silva, Josep; Cauli, Omar

    2018-01-01

    Purpose A literature review is presented that aims to summarize and compare current methods of evaluating sleep. Methods Current sleep assessment methods have been classified according to different criteria; e.g., objective (polysomnography, actigraphy…) vs. subjective (sleep questionnaires, diaries…), contact vs. contactless devices, and need for medical assistance vs. self-assessment. A comparison of validation studies is carried out for each method, identifying the sensitivity and specificity reported in the literature. Finally, the state of the market has also been reviewed with respect to customers’ opinions about current sleep apps. Results A taxonomy that classifies the sleep detection methods. A description of each method that includes the tendencies of their underlying technologies analyzed in accordance with the literature. A comparison in terms of precision of existing validation studies and reports. Discussion In order of increasing accuracy, sleep detection methods may be arranged as follows: questionnaire < sleep diary < contactless devices < contact devices < polysomnography. A literature review suggests that current subjective methods have a sensitivity between 73% and 97.7%, while their specificity ranges from 50% to 96%. Objective methods such as actigraphy have a sensitivity higher than 90%; however, their specificity is low compared to their sensitivity, which is one of the limitations of this technology. Moreover, some factors, such as the patient’s perception of her or his sleep, can be provided only by subjective methods. Therefore, sleep detection methods should be combined to produce a synergy between objective and subjective approaches. The review of the market identifies the most valued sleep apps, but it also identifies problems and gaps; e.g., many hardware devices have not been validated and (especially software apps) should be studied before clinical use. PMID:29844990

  19. Translation, cultural adaptation and validation of the Diabetes Attitudes Scale - third version into Brazilian Portuguese 1

    PubMed Central

    Vieira, Gisele de Lacerda Chaves; Pagano, Adriana Silvino; Reis, Ilka Afonso; Rodrigues, Júlia Santos Nunes; Torres, Heloísa de Carvalho

    2018-01-01

    ABSTRACT Objective: to perform the translation, adaptation and validation of the Diabetes Attitudes Scale - third version instrument into Brazilian Portuguese. Methods: methodological study carried out in six stages: initial translation, synthesis of the initial translation, back-translation, evaluation of the translated version by the Committee of Judges (27 linguists and 29 health professionals), pre-test and validation. The pre-test and validation (test-retest) steps included 22 and 120 health professionals, respectively. The Content Validity Index and the analyses of internal consistency and reproducibility were computed using the R statistical program. Results: in the content validation, the instrument presented good acceptance among the Judges, with a mean Content Validity Index of 0.94. The scale presented acceptable internal consistency (Cronbach’s alpha = 0.60), while the correlation of the total score between the test and retest moments was considered high (Polychoric Correlation Coefficient = 0.86). The Intra-class Correlation Coefficient for the total score was 0.65. Conclusion: the Brazilian version of the instrument (Escala de Atitudes dos Profissionais em relação ao Diabetes Mellitus) was considered valid and reliable for application by health professionals in Brazil. PMID:29319739
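
    Cronbach's alpha, the internal-consistency statistic reported in this record, has a simple closed form in terms of item and total-score variances; a minimal sketch (the example score matrix in the test is hypothetical, not the study's data):

```python
import numpy as np

def cronbach_alpha(items):
    """Cronbach's alpha for an (n_respondents, k_items) score matrix.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)
    """
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of summed scores
    return (k / (k - 1)) * (1 - item_vars / total_var)
```

    Perfectly correlated items give an alpha of 1; uncorrelated items drive it toward 0 (or below).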

  20. Development of Level 1b Calibration and Validation Readiness, Implementation and Management Plans for GOES-R

    NASA Technical Reports Server (NTRS)

    Kunkee, David B.; Farley, Robert W.; Kwan, Betty P.; Hecht, James H.; Walterscheid, Richard L.; Claudepierre, Seth G.; Bishop, Rebecca L.; Gelinas, Lynette J.; Deluccia, Frank J.

    2017-01-01

    A complement of Readiness, Implementation and Management Plans (RIMPs) to facilitate management of post-launch product test activities for the official Geostationary Operational Environmental Satellite (GOES-R) Level 1b (L1b) products has been developed and documented. Separate plans have been created for each of the GOES-R sensors, including the Advanced Baseline Imager (ABI), the Extreme Ultraviolet and X-ray Irradiance Sensors (EXIS), the Geostationary Lightning Mapper (GLM), the GOES-R Magnetometer (MAG), the Space Environment In-Situ Suite (SEISS), and the Solar Ultraviolet Imager (SUVI). The GOES-R program has implemented these RIMPs in order to address the full scope of CalVal activities required for a successful demonstration of GOES-R L1b data product quality throughout the three validation stages: Beta, Provisional and Full Validation. For each product maturity level, the RIMPs include specific performance criteria and required artifacts providing evidence that a given validation stage has been reached, the timing for completing each stage, a description of every applicable Post-Launch Product Test (PLPT), the roles and responsibilities of personnel, upstream dependencies, and the analysis methods and tools to be employed during validation. Instrument-level Post-Launch Tests (PLTs) are also referenced and apply primarily to functional check-out of the instruments.

  1. On Conducting Construct Validity Meta-Analyses for the Rorschach: A Reply to Tibon Czopp and Zeligman (2016).

    PubMed

    Mihura, Joni L; Meyer, Gregory J; Dumitrascu, Nicolae; Bombel, George

    2016-01-01

    We respond to Tibon Czopp and Zeligman's (2016) critique of our systematic reviews and meta-analyses of 65 Rorschach Comprehensive System (CS) variables published in Psychological Bulletin (2013). The authors endorsed our supportive findings but critiqued the same methodology when applied to the 13 unsupported variables. Unfortunately, their commentary was based on significant misunderstandings of our meta-analytic method and results, such as thinking we used introspectively assessed criteria in classifying levels of support and that we reported only a subset of our externally assessed criteria. We systematically address their arguments that our construct label and criterion variable choices were inaccurate and that, therefore, meta-analytic validity for these 13 CS variables was artificially low. For example, the authors created new construct labels for these variables that they called "the customary CS interpretation," but neither described their methodology nor provided evidence that their labels would yield better validity than ours. They cite studies they believe we should have included; we explain how these studies did not fit our inclusion criteria and that including them would actually have reduced the relevant CS variables' meta-analytic validity. Ultimately, criticisms alone cannot change meta-analytic support from negative to positive; Tibon Czopp and Zeligman would need to conduct their own construct validity meta-analyses.

  2. Development and Validation of an Instrument Measuring Theory-Based Determinants of Monitoring Obesogenic Behaviors of Pre-Schoolers among Hispanic Mothers.

    PubMed

    Branscum, Paul; Lora, Karina R

    2016-06-02

    Public health interventions are greatly needed for obesity prevention, and planning for such strategies should include community participation. The study's purpose was to develop and validate a theory-based instrument with low-income, Hispanic mothers of preschoolers to assess theory-based determinants of maternal monitoring of children's consumption of fruits and vegetables and sugar-sweetened beverages (SSB). Nine focus groups with mothers were conducted to determine the nutrition-related behaviors that mothers considered most obesogenic for their children. Next, behaviors were operationally defined and rated for importance and changeability, and two behaviors were selected for investigation (fruit and vegetable consumption and SSB consumption). Twenty semi-structured interviews with mothers were then conducted to develop culturally appropriate items for the instrument. Afterwards, face and content validity were established using a panel of six experts. Finally, the instrument was tested with a sample of 238 mothers. Psychometric properties evaluated included construct validity (using the maximum likelihood extraction method of factor analysis) and internal consistency reliability (Cronbach's alpha). Results suggested that all scales on the instrument were valid and reliable, except for the autonomy scales. Researchers and community planners working with Hispanic families can use this instrument to measure theory-based determinants of parenting behaviors related to preschoolers' consumption of fruits and vegetables and SSB.

  3. WEC-SIM Validation Testing Plan FY14 Q4.

    DOE Office of Scientific and Technical Information (OSTI.GOV)

    Ruehl, Kelley Michelle

    2016-02-01

    The WEC-Sim project is currently on track, having met both the SNL and NREL FY14 Milestones, as shown in Table 1 and Table 2. This is also reflected in the Gantt chart uploaded to the WEC-Sim SharePoint site in the FY14 Q4 Deliverables folder. The work completed in FY14 includes code verification through code-to-code comparison (FY14 Q1 and Q2), preliminary code validation through comparison to experimental data (FY14 Q2 and Q3), presentation and publication of the WEC-Sim project at OMAE 2014 [1], [2], [3] and GMREC/METS 2014 [4] (FY14 Q3), WEC-Sim code development and public open-source release (FY14 Q3), and development of a preliminary WEC-Sim validation test plan (FY14 Q4). This report presents the preliminary Validation Testing Plan developed in FY14 Q4. The validation test effort started in FY14 Q4 and will continue through FY15. Thus far the team has developed a device selection method, selected a device, placed a contract with the testing facility, and established several collaborations including industry contacts, and it has working ideas on testing details such as scaling, device design, and test conditions.

  4. Likelihood ratio data to report the validation of a forensic fingerprint evaluation method.

    PubMed

    Ramos, Daniel; Haraksim, Rudolf; Meuwly, Didier

    2017-02-01

    The data to which the authors refer throughout this article are likelihood ratios (LRs) computed from the comparison of 5-12 minutiae fingermarks with fingerprints. These LR data are used for the validation of a likelihood ratio (LR) method in forensic evidence evaluation. They are a necessary asset for conducting validation experiments when validating LR methods used in forensic evidence evaluation and for setting up validation reports. These data can also be used as a baseline for comparing fingermark evidence in the same minutiae configuration as presented in (D. Meuwly, D. Ramos, R. Haraksim) [1], although the reader should keep in mind that different feature extraction algorithms and different AFIS systems may produce different LR values. Moreover, these data may serve as a reproducibility exercise, in order to train the generation of validation reports of forensic methods, according to [1]. Alongside the data, a justification and motivation for the methods is given. These methods calculate LRs from the fingerprint/mark data and are subject to a validation procedure. The choice of using real forensic fingerprints in the validation and simulated data in the development is described and justified. Validation criteria are set for the purpose of validating the LR methods, which are used to calculate the LR values from the data and the validation report. For privacy and data protection reasons, the original fingerprint/mark images cannot be shared; however, these images do not constitute the core data for the validation, unlike the LRs, which are shared.
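
    A score-based LR of the kind described in this record can be sketched as a ratio of densities fitted to same-source and different-source comparison scores. The scores and the Gaussian score model below are invented for illustration; they are not the published data or the authors' method:

```python
import math

def normal_pdf(x, mu, sigma):
    """Density of a normal distribution at x."""
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2 * math.pi))

def fit_normal(scores):
    """Sample mean and standard deviation of a list of scores."""
    n = len(scores)
    mu = sum(scores) / n
    var = sum((s - mu) ** 2 for s in scores) / (n - 1)
    return mu, math.sqrt(var)

# Hypothetical matcher scores from mated (same-source) and
# non-mated (different-source) fingermark/fingerprint comparisons
same_source = [7.1, 6.8, 7.5, 6.9, 7.3, 7.0]
diff_source = [2.0, 1.5, 2.4, 1.8, 2.2, 1.9]

mu_s, sd_s = fit_normal(same_source)
mu_d, sd_d = fit_normal(diff_source)

def likelihood_ratio(score):
    """LR = p(score | same source) / p(score | different source)."""
    return normal_pdf(score, mu_s, sd_s) / normal_pdf(score, mu_d, sd_d)
```

    An LR well above 1 supports the same-source proposition; well below 1, the different-source proposition. Real systems use calibrated, validated score models rather than this toy Gaussian fit.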

  5. Valid methods: the quality assurance of test method development, validation, approval, and transfer for veterinary testing laboratories.

    PubMed

    Wiegers, Ann L

    2003-07-01

    Third-party accreditation is a valuable tool to demonstrate a laboratory's competence to conduct testing. Accreditation, internationally and in the United States, has been discussed previously. However, accreditation is only one part of establishing data credibility. A validated test method is the first component of a valid measurement system. Validation is defined as confirmation by examination and the provision of objective evidence that the particular requirements for a specific intended use are fulfilled. The international and national standard ISO/IEC 17025 recognizes the importance of validated methods and requires that laboratory-developed methods or methods adopted by the laboratory be appropriate for the intended use. Validated methods are therefore required, and their use must be agreed to by the client (i.e., end users of the test results such as veterinarians, animal health programs, and owners). ISO/IEC 17025 also requires that the introduction of methods developed by the laboratory for its own use be a planned activity conducted by qualified personnel with adequate resources. This article discusses considerations and recommendations for the conduct of veterinary diagnostic test method development, validation, evaluation, approval, and transfer to the user laboratory in the ISO/IEC 17025 environment. These recommendations are based on those of nationally and internationally accepted standards and guidelines, as well as those of reputable and experienced technical bodies. They are also based on the author's experience in the evaluation of method development and transfer projects, validation data, and the implementation of quality management systems in the area of method development.

  6. Computational Prediction and Validation of an Expert's Evaluation of Chemical Probes

    PubMed Central

    Litterman, Nadia K.; Lipinski, Christopher A.; Bunin, Barry A.; Ekins, Sean

    2016-01-01

    In a decade with over half a billion dollars of investment, more than 300 chemical probes have been identified as having biological activity through NIH-funded screening efforts. We collected the evaluations of an experienced medicinal chemist on the likely chemistry quality of these probes based on a number of criteria, including the literature related to each probe and potential chemical reactivity. Over 20% of these probes were found to be undesirable. Analysis of the molecular properties of the compounds scored as desirable suggested higher pKa, molecular weight, heavy atom count and rotatable bond number. We were particularly interested in whether the human evaluation aspect of medicinal chemistry due diligence could be computationally predicted. We used a process of sequential Bayesian model building and iterative testing as we included additional probes. Following external validation of these methods and comparison of different machine learning methods, we identified Bayesian models with accuracy comparable to other measures of drug-likeness and filtering rules created to date. PMID:25244007
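
    The flavor of the Bayesian classification described in this record can be illustrated, in spirit only, with a minimal Bernoulli naive Bayes over binary structural fingerprint bits. The features, labels and smoothing choice below are invented for illustration and do not reproduce the published models:

```python
import math

def train_bernoulli_nb(X, y):
    """Bernoulli naive Bayes with add-one (Laplace) smoothing.

    X: list of binary feature vectors (e.g., structural fingerprint bits);
    y: list of labels (1 = desirable probe, 0 = undesirable).
    """
    n_features = len(X[0])
    counts = {0: [0] * n_features, 1: [0] * n_features}
    totals = {0: 0, 1: 0}
    for xi, yi in zip(X, y):
        totals[yi] += 1
        for j, v in enumerate(xi):
            counts[yi][j] += v
    priors = {c: totals[c] / len(y) for c in (0, 1)}
    # P(feature_j = 1 | class c), smoothed so no probability is 0 or 1
    probs = {c: [(counts[c][j] + 1) / (totals[c] + 2) for j in range(n_features)]
             for c in (0, 1)}
    return priors, probs

def predict(x, priors, probs):
    """Return the class with the highest posterior log-probability."""
    scores = {}
    for c in (0, 1):
        log_p = math.log(priors[c])
        for j, v in enumerate(x):
            p = probs[c][j]
            log_p += math.log(p if v else 1 - p)
        scores[c] = log_p
    return max(scores, key=scores.get)
```

    Retraining as each new batch of evaluated probes arrives mirrors the sequential model building and iterative testing the abstract describes.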

  7. Introduction, comparison, and validation of Meta-Essentials: A free and simple tool for meta-analysis.

    PubMed

    Suurmond, Robert; van Rhee, Henk; Hak, Tony

    2017-12-01

    We present a new tool for meta-analysis, Meta-Essentials, which is free of charge and easy to use. In this paper, we introduce the tool and compare its features to other tools for meta-analysis. We also provide detailed information on the validation of the tool. Although free of charge and simple, Meta-Essentials automatically calculates effect sizes from a wide range of statistics and can be used for a wide range of meta-analysis applications, including subgroup analysis, moderator analysis, and publication bias analyses. The confidence interval of the overall effect is automatically based on the Knapp-Hartung adjustment of the DerSimonian-Laird estimator. However, more advanced meta-analysis methods such as meta-analytical structural equation modelling and meta-regression with multiple covariates are not available. In summary, Meta-Essentials may prove a valuable resource for meta-analysts, including researchers, teachers, and students. © 2017 The Authors. Research Synthesis Methods published by John Wiley & Sons Ltd.
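
    The DerSimonian-Laird estimator with the Knapp-Hartung adjustment mentioned in this record can be sketched as follows. This is a minimal illustration, not Meta-Essentials' implementation; the caller must supply the critical value of Student's t with k-1 degrees of freedom, and the example effect sizes in the test are hypothetical:

```python
import math

def dl_knapp_hartung(effects, variances, t_crit):
    """Random-effects pooled estimate (DerSimonian-Laird) with a
    Knapp-Hartung confidence interval.

    effects:   per-study effect sizes
    variances: per-study sampling variances
    t_crit:    Student's t critical value with k-1 degrees of freedom
    """
    k = len(effects)
    w = [1.0 / v for v in variances]                     # fixed-effect weights
    theta_fe = sum(wi * y for wi, y in zip(w, effects)) / sum(w)
    q = sum(wi * (y - theta_fe) ** 2 for wi, y in zip(w, effects))
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (k - 1)) / c)                   # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]         # random-effects weights
    theta = sum(wi * y for wi, y in zip(w_re, effects)) / sum(w_re)
    # Knapp-Hartung variance: weighted residual variance scaled by 1/sum(w_re)
    q_re = sum(wi * (y - theta) ** 2 for wi, y in zip(w_re, effects))
    var_kh = q_re / ((k - 1) * sum(w_re))
    half = t_crit * math.sqrt(var_kh)
    return theta, (theta - half, theta + half)
```

    The Knapp-Hartung interval is typically wider than the normal-approximation interval when the number of studies is small, which is the usual motivation for the adjustment.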

  8. DOE Office of Scientific and Technical Information (OSTI.GOV)

    Marshall, William BJ J; Rearden, Bradley T

    The validation of neutron transport methods used in nuclear criticality safety analyses is required by consensus American National Standards Institute/American Nuclear Society (ANSI/ANS) standards. In the last decade, there has been an increased interest in correlations among critical experiments used in validation that have shared physical attributes and which impact the independence of each measurement. The statistical methods included in many of the frequently cited guidance documents on performing validation calculations incorporate the assumption that all individual measurements are independent, so little guidance is available to practitioners on the topic. Typical guidance includes recommendations to select experiments from multiple facilities and experiment series in an attempt to minimize the impact of correlations or common-cause errors in experiments. Recent efforts have been made both to determine the magnitude of such correlations between experiments and to develop and apply methods for adjusting the bias and bias uncertainty to account for the correlations. This paper describes recent work performed at Oak Ridge National Laboratory using the Sampler sequence from the SCALE code system to develop experimental correlations using a Monte Carlo sampling technique. Sampler will be available for the first time with the release of SCALE 6.2, and a brief introduction to the methods used to calculate experiment correlations within this new sequence is presented in this paper. Techniques to utilize these correlations in the establishment of upper subcritical limits are the subject of a companion paper and will not be discussed here. Example experimental uncertainties and correlation coefficients are presented for a variety of low-enriched uranium water-moderated lattice experiments selected for use in a benchmark exercise by the Working Party on Nuclear Criticality Safety Subgroup on Uncertainty Analysis in Criticality Safety Analyses.
The results include studies on the effect of fuel rod pitch on the correlations, and some observations are also made regarding difficulties in determining experimental correlations using the Monte Carlo sampling technique.
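
    The Monte Carlo sampling idea behind such experiment correlations can be illustrated with a toy model: an uncertain parameter shared by two experiments (a common-cause uncertainty such as fuel enrichment) is sampled once per realization and applied to both, inducing a predictable correlation between their computed keff values. All magnitudes below are invented for illustration and do not represent SCALE/Sampler output:

```python
import numpy as np

rng = np.random.default_rng(42)

n_samples = 5000
# Common-cause perturbation on keff, sampled once and applied to both
# experiments; independent noise models experiment-specific uncertainties
shared = rng.normal(0.0, 0.004, n_samples)
keff_a = 1.0 + shared + rng.normal(0.0, 0.002, n_samples)
keff_b = 1.0 + shared + rng.normal(0.0, 0.002, n_samples)

corr = np.corrcoef(keff_a, keff_b)[0, 1]
# Analytically, corr = 0.004**2 / (0.004**2 + 0.002**2) = 0.8 for this toy case
```

    Sampling many uncertain parameters from their distributions and recomputing keff for each realization, as Sampler does, generalizes this to full correlation matrices over experiment suites.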

  9. The measurement of patient attitudes regarding prenatal and preconception genetic carrier screening and translational behavioral medicine: an integrative review.

    PubMed

    Shiroff, Jennifer J; Gregoski, Mathew J

    2017-06-01

    Measurement of attitudes toward recessive carrier screening related to conception and pregnancy is necessary to determine current acceptance and whether behavioral intervention strategies are needed in clinical practice. To evaluate quantitative survey instruments measuring patient attitudes regarding genetic carrier testing prior to conception and pregnancy, databases were searched for studies published from 2003 to 2013, yielding 344 articles; eight studies with eight instruments met the criteria for inclusion. Data abstraction on theoretical framework, subjects, instrument description, scoring, method of measurement, reliability, validity, feasibility, level of evidence, and outcomes was completed. Reliability information was provided in five studies, with internal consistency of Cronbach's α >0.70. Information pertaining to validity was presented in three studies and included construct validity via factor analysis. Despite limited psychometric information, these questionnaires are self-administered and can be completed quickly, making them a feasible method of evaluation.

  10. Insights on Antioxidant Assays for Biological Samples Based on the Reduction of Copper Complexes—The Importance of Analytical Conditions

    PubMed Central

    Marques, Sara S.; Magalhães, Luís M.; Tóth, Ildikó V.; Segundo, Marcela A.

    2014-01-01

    Total antioxidant capacity (TAC) assays are recognized as instrumental for establishing the antioxidant status of biological samples; however, varying experimental conditions lead to conclusions that may not be transposable to other settings. After selection of the complexing agent, reagent addition order, and buffer type and concentration, copper reducing assays were adapted to a high-throughput scheme and validated using model biological antioxidant compounds: ascorbic acid, Trolox (a soluble analogue of vitamin E), uric acid, and glutathione. A critical comparison was made based on real samples, including the NIST-909c certified human serum sample and five study samples. The validated method provided a linear range up to 100 µM Trolox (limit of detection 2.3 µM; limit of quantification 7.7 µM), with recovery results above 85% and precision <5%. With its increased sensitivity, the validated method is a sound choice for assessment of TAC in serum samples. PMID:24968275
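Detection and quantification limits like the 2.3 µM and 7.7 µM figures above are commonly estimated from the calibration curve as LOD = 3.3·σ/S and LOQ = 10·σ/S, where σ is the residual standard deviation and S the slope (the ICH-style approach; the paper's exact procedure is not specified here). A sketch with invented calibration data:

```python
def linear_fit(x, y):
    # Ordinary least-squares slope and intercept
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((a - mx) ** 2 for a in x)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    slope = sxy / sxx
    return slope, my - slope * mx

def lod_loq(x, y):
    # LOD = 3.3*sigma/slope, LOQ = 10*sigma/slope, with sigma taken as
    # the residual standard deviation of the calibration line
    slope, intercept = linear_fit(x, y)
    resid = [b - (slope * a + intercept) for a, b in zip(x, y)]
    sigma = (sum(r * r for r in resid) / (len(x) - 2)) ** 0.5
    return 3.3 * sigma / slope, 10.0 * sigma / slope

# Hypothetical Trolox calibration: concentration (uM) vs. absorbance
conc = [0, 20, 40, 60, 80, 100]
absorbance = [0.02, 0.21, 0.40, 0.62, 0.80, 1.01]
lod, loq = lod_loq(conc, absorbance)
```

By construction LOQ/LOD is fixed at 10/3.3, so a method's limits move together as calibration scatter changes.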

  11. Development of Modal Test Techniques for Validation of a Solar Sail Design

    NASA Technical Reports Server (NTRS)

    Gaspar, James L.; Mann, Troy; Behun, Vaughn; Wilkie, W. Keats; Pappa, Richard

    2004-01-01

    This paper focuses on the development of modal test techniques for validation of a solar sail gossamer space structure design. The major focus is on validating and comparing the capabilities of various excitation techniques for modal testing of solar sail components. One triangular-shaped quadrant of a solar sail membrane was tested in a 1 Torr vacuum environment using various excitation techniques, including magnetic excitation and surface-bonded piezoelectric patch actuators. Results from modal tests performed on the sail using piezoelectric patches at different positions are discussed. The excitation methods were evaluated for their applicability to in-vacuum ground testing and to the development of on-orbit flight test techniques. The solar sail membrane was tested in the horizontal configuration at various tension levels to assess the variation in frequency with tension in a vacuum environment. A segment of a solar sail mast prototype was also tested in ambient atmospheric conditions using various excitation techniques, and these methods are likewise assessed for their ground test capabilities and applicability to on-orbit flight testing.
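The reported variation of membrane frequency with tension follows the ideal-membrane relation: natural frequencies scale with the transverse wave speed c = √(T/ρs), where T is tension per unit length and ρs is areal density. As a hedged illustration, here is the textbook rectangular-membrane mode formula (not the sail's triangular quadrant geometry, and all numbers are invented):

```python
import math

def membrane_mode_freq(m, n, a, b, tension, areal_density):
    # Natural frequency (Hz) of mode (m, n) of an ideal rectangular
    # membrane of dimensions a x b:
    #   f_mn = (c / 2) * sqrt((m/a)**2 + (n/b)**2),  c = sqrt(T / rho_s)
    # where T is tension per unit length (N/m) and rho_s areal density (kg/m^2).
    c = math.sqrt(tension / areal_density)
    return (c / 2.0) * math.sqrt((m / a) ** 2 + (n / b) ** 2)

# Doubling the tension raises every mode frequency by a factor of sqrt(2)
f_base = membrane_mode_freq(1, 1, 1.0, 1.0, tension=10.0, areal_density=0.1)
f_doubled = membrane_mode_freq(1, 1, 1.0, 1.0, tension=20.0, areal_density=0.1)
```

This √T scaling is why testing at several tension levels, as in the paper, maps out the frequency-tension trend directly.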

  12. Workshop Report: Crystal City VI-Bioanalytical Method Validation for Biomarkers.

    PubMed

    Arnold, Mark E; Booth, Brian; King, Lindsay; Ray, Chad

    2016-11-01

    With the growing focus on translational research and the use of biomarkers to drive drug development and approvals, biomarkers have become a significant area of research within the pharmaceutical industry. However, until the US Food and Drug Administration's (FDA) 2013 draft guidance on bioanalytical method validation included consideration of biomarker assays using LC-MS and ligand-binding assays (LBA), those assays were created, validated, and used without standards of performance. This lack of expectations resulted in the FDA receiving data from assays of varying quality in support of efficacy and safety claims. The AAPS Crystal City VI (CC VI) Workshop in 2015 was held as the first forum for industry-FDA discussion of the general issues around biomarker measurements (e.g., endogenous levels) and the strengths and weaknesses of specific technologies. The 2-day workshop served to develop a common understanding among the industrial scientific community of the issues around biomarkers, informed the FDA of the current state of the science, and will serve as a basis for further dialogue as experience with biomarkers expands within both groups.

  13. Validity and reliability of bioelectrical impedance analysis and skinfold thickness in predicting body fat in military personnel.

    PubMed

    Aandstad, Anders; Holtberget, Kristian; Hageberg, Rune; Holme, Ingar; Anderssen, Sigmund A

    2014-02-01

    Previous studies show that body composition is related to injury risk and physical performance in soldiers. Thus, valid methods for measuring body composition in military personnel are needed. The frequently used body mass index method is not a valid measure of body composition in soldiers, but the reliability and validity of alternative field methods are less investigated in military personnel. Thus, we carried out test and retest of skinfold (SKF), single-frequency bioelectrical impedance analysis (SF-BIA), and multifrequency bioelectrical impedance analysis measurements in 65 male and female soldiers. Several validated equations were used to predict percent body fat from these methods. Dual-energy X-ray absorptiometry was also measured and acted as the criterion method. Results showed that SF-BIA was the most reliable method in both genders. In women, SF-BIA was also the most valid method, whereas SKF or a combination of SKF and SF-BIA produced the highest validity in men. Reliability and validity varied substantially among the equations examined. The best methods and equations produced test-retest 95% limits of agreement below ±1% points, whereas the corresponding validity figures were ±3.5% points. Each investigator and practitioner must consider whether such measurement errors are acceptable for their specific use. Reprint & Copyright © 2014 Association of Military Surgeons of the U.S.
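The "95% limits of agreement" above are Bland-Altman limits: the mean of the paired differences ± 1.96 × their standard deviation. A minimal sketch with invented body-fat readings (the study's actual data are not reproduced here):

```python
import statistics

def limits_of_agreement(method_vals, criterion_vals):
    # Bland-Altman 95% limits of agreement:
    # mean difference +/- 1.96 * SD of the paired differences
    diffs = [m - c for m, c in zip(method_vals, criterion_vals)]
    bias = statistics.mean(diffs)
    sd = statistics.stdev(diffs)
    return bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical percent-body-fat readings: field method vs. DXA criterion
bia = [18.2, 22.5, 15.1, 27.8, 20.3, 24.9]
dxa = [19.0, 21.8, 16.0, 28.5, 20.0, 25.5]
lower, upper = limits_of_agreement(bia, dxa)
```

The midpoint of the two limits is the systematic bias of the field method; the width reflects random disagreement, which is what the ±1% and ±3.5% point figures in the abstract summarize.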

  14. Quality assessment of two- and three-dimensional unstructured meshes and validation of an upwind Euler flow solver

    NASA Technical Reports Server (NTRS)

    Woodard, Paul R.; Batina, John T.; Yang, Henry T. Y.

    1992-01-01

    Quality assessment procedures are described for two-dimensional unstructured meshes. The procedures include measurement of minimum angles, element aspect ratios, stretching, and element skewness. Meshes about the ONERA M6 wing and the Boeing 747 transport configuration are generated using an advancing-front-method grid generation package. Solutions of Euler's equations for these meshes are obtained at low angle-of-attack, transonic conditions. Results for these cases, obtained as part of a validation study, demonstrate the accuracy of an implicit upwind Euler solution algorithm.
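Per-element quality metrics such as minimum interior angle and aspect ratio can be computed directly from a triangle's vertex coordinates. A sketch of two common definitions (the paper's exact metric definitions may differ; the aspect ratio here is longest edge over shortest altitude):

```python
import math

def triangle_quality(p1, p2, p3):
    # Two common unstructured-mesh quality metrics for a 2-D triangle:
    # minimum interior angle (degrees) and aspect ratio, defined here
    # as longest edge divided by the shortest altitude.
    def dist(u, v):
        return math.hypot(u[0] - v[0], u[1] - v[1])
    a, b, c = dist(p2, p3), dist(p1, p3), dist(p1, p2)
    # Interior angles from the law of cosines
    angles = [
        math.degrees(math.acos((b * b + c * c - a * a) / (2 * b * c))),
        math.degrees(math.acos((a * a + c * c - b * b) / (2 * a * c))),
        math.degrees(math.acos((a * a + b * b - c * c) / (2 * a * b))),
    ]
    # Area from the cross product; shortest altitude is opposite the longest edge
    area = abs((p2[0] - p1[0]) * (p3[1] - p1[1])
               - (p3[0] - p1[0]) * (p2[1] - p1[1])) / 2.0
    shortest_altitude = 2.0 * area / max(a, b, c)
    return min(angles), max(a, b, c) / shortest_altitude

# An equilateral triangle is the ideal element: 60-degree minimum angle
min_angle, aspect = triangle_quality((0, 0), (1, 0), (0.5, math.sqrt(3) / 2))
```

Sliver elements drive the minimum angle toward zero and the aspect ratio toward infinity, which is why both are flagged in mesh quality assessment.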

  15. Supersonic Coaxial Jet Experiment for CFD Code Validation

    NASA Technical Reports Server (NTRS)

    Cutler, A. D.; Carty, A. A.; Doerner, S. E.; Diskin, G. S.; Drummond, J. P.

    1999-01-01

    A supersonic coaxial jet facility has been designed to provide experimental data suitable for the validation of CFD codes used to analyze high-speed propulsion flows. The center jet is of a light gas, the coflow jet is of air, and the mixing layer between them is compressible. Various methods have been employed to characterize the jet flow field, including schlieren visualization; pitot, total-temperature, and gas-sampling probe surveys; and RELIEF velocimetry. A Navier-Stokes code has been used to calculate the nozzle flow field, and the results are compared to the experiment.

  16. Construct Validity and Scoring Methods of the World Health Organization: Health and Work Performance Questionnaire Among Workers With Arthritis and Rheumatological Conditions.

    PubMed

    AlHeresh, Rawan; LaValley, Michael P; Coster, Wendy; Keysor, Julie J

    2017-06-01

    To evaluate construct validity and scoring methods of the World Health Organization Health and Work Performance Questionnaire (HPQ) for people with arthritis. Construct validity was examined through hypothesis testing using the recommended guidelines of the consensus-based standards for the selection of health measurement instruments (COSMIN). The HPQ using the absolute scoring method showed moderate construct validity, as four of the seven hypotheses were met. The HPQ using the relative scoring method had weak construct validity, as only one of the seven hypotheses was met. The absolute scoring method for the HPQ is superior in construct validity to the relative scoring method in assessing work performance among people with arthritis and related rheumatic conditions; however, more research is needed to further explore other psychometric properties of the HPQ.

  17. Prospective validation of pathologic complete response models in rectal cancer: Transferability and reproducibility.

    PubMed

    van Soest, Johan; Meldolesi, Elisa; van Stiphout, Ruud; Gatta, Roberto; Damiani, Andrea; Valentini, Vincenzo; Lambin, Philippe; Dekker, Andre

    2017-09-01

    Multiple models have been developed to predict pathologic complete response (pCR) in locally advanced rectal cancer patients. Unfortunately, validation of these models normally omits the implications of cohort differences for prediction model performance. In this work, we perform a prospective validation of three pCR models, including an assessment of whether this validation targets transferability or reproducibility (cohort differences) of the given models. We applied a novel methodology, the cohort differences model, to predict whether a patient belongs to the training or to the validation cohort. If the cohort differences model performs well, it suggests a large difference in cohort characteristics, meaning we would validate the transferability of the model rather than its reproducibility. We tested our method in a prospective validation of three existing models for pCR prediction in 154 patients. Our results showed a large difference between the training and validation cohorts for one of the three tested models [area under the receiver operating curve (AUC) of the cohort differences model: 0.85], signaling that the validation leans towards transferability. Two of the three models had a lower AUC on validation (0.66 and 0.58); one model showed a higher AUC in the validation cohort (0.70). We have successfully applied a new methodology in the validation of three prediction models, which allows us to indicate whether a validation targeted transferability (large differences between training/validation cohorts) or reproducibility (small cohort differences). © 2017 American Association of Physicists in Medicine.
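The cohort-differences idea amounts to training a classifier to distinguish training-cohort from validation-cohort patients and reporting its AUC; a high AUC (like the 0.85 above) signals large cohort differences. Given membership scores from any such classifier, the AUC itself is the Mann-Whitney statistic. A sketch with invented scores (not the paper's data or classifier):

```python
def auc(pos_scores, neg_scores):
    # Mann-Whitney estimate of the area under the ROC curve: the
    # probability that a random positive ("validation cohort") score
    # outranks a random negative ("training cohort") score; ties count half.
    wins = 0.0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(pos_scores) * len(neg_scores))

# Hypothetical membership-classifier scores (probability that a patient
# belongs to the validation cohort)
validation_scores = [0.90, 0.80, 0.75, 0.60, 0.85]
training_scores = [0.40, 0.30, 0.55, 0.20, 0.65]
result = auc(validation_scores, training_scores)  # 24/25 = 0.96
```

An AUC near 0.5 would mean the cohorts are indistinguishable (reproducibility is being tested); an AUC near 1.0 means the validation is really probing transferability to a different population.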

  18. Development, Construction, and Content Validation of a Questionnaire to Test Mobile Shower Commode Usability

    PubMed Central

    Theodoros, Deborah G.; Russell, Trevor G.

    2015-01-01

    Background: Usability is an emerging domain of outcomes measurement in assistive technology provision. Currently, no questionnaires exist to test the usability of mobile shower commodes (MSCs) used by adults with spinal cord injury (SCI). Objective: To describe the development, construction, and initial content validation of an electronic questionnaire to test mobile shower commode usability for this population. Methods: The questionnaire was constructed using a mixed-methods approach in 5 phases: determining user preferences for the questionnaire’s format, developing an item bank of usability indicators from the literature and judgement of experts, constructing a preliminary questionnaire, assessing content validity with a panel of experts, and constructing the final questionnaire. Results: The electronic Mobile Shower Commode Assessment Tool Version 1.0 (eMAST 1.0) questionnaire tests MSC features and performance during activities identified using a mixed-methods approach and in consultation with users. It confirms that usability is complex and multidimensional. The final questionnaire contains 25 questions in 3 sections. The eMAST 1.0 demonstrates excellent content validity as determined by a small sample of expert clinicians. Conclusion: The eMAST 1.0 tests usability of MSCs from the perspective of adults with SCI and may be used to solicit feedback during MSC design, assessment, prescription, and ongoing use. Further studies assessing the eMAST’s psychometric properties, including studies with users of MSCs, are needed. PMID:25762862

  19. Determination of sulfonamide antibiotics and metabolites in liver, muscle and kidney samples by pressurized liquid extraction or ultrasound-assisted extraction followed by liquid chromatography-quadrupole linear ion trap-tandem mass spectrometry (HPLC-QqLIT-MS/MS).

    PubMed

    Hoff, Rodrigo Barcellos; Pizzolato, Tânia Mara; Peralba, Maria do Carmo Ruaro; Díaz-Cruz, M Silvia; Barceló, Damià

    2015-03-01

    Sulfonamides are widely used in human and veterinary medicine. The presence of sulfonamide residues in food is an issue of great concern. In the present work, a method for the targeted analysis of residues of 16 sulfonamides and metabolites in the liver of several species has been developed and validated. Extraction and clean-up have been statistically optimized using central composite design experiments. Two extraction methods have been developed, validated, and compared: i) pressurized liquid extraction, in which samples were defatted with hexane and subsequently extracted with acetonitrile, and ii) ultrasound-assisted extraction with acetonitrile followed by liquid-liquid extraction with hexane. Extracts have been analyzed by liquid chromatography-quadrupole linear ion trap-tandem mass spectrometry. The validation procedure was based on Commission Decision 2002/657/EC and included the assessment of parameters such as decision limit (CCα), detection capability (CCβ), sensitivity, selectivity, accuracy, and precision. The method's performance was satisfactory, with CCα values within the range of 111.2-161.4 µg kg(-1), limits of detection of 10 µg kg(-1), and accuracy values around 100% for all compounds. Copyright © 2014 Elsevier B.V. All rights reserved.

  20. Optimization of detection conditions and single-laboratory validation of a multiresidue method for the determination of 135 pesticides and 25 organic pollutants in grapes and wine by gas chromatography time-of-flight mass spectrometry.

    PubMed

    Dasgupta, Soma; Banerjee, Kaushik; Dhumal, Kondiba N; Adsule, Pandurang G

    2011-01-01

    This paper describes the single-laboratory validation of a multiresidue method for the determination of 135 pesticides, 12 dioxin-like polychlorinated biphenyls, 12 polyaromatic hydrocarbons, and bisphenol A in grapes and wine by GC/time-of-flight MS in a total run time of 48 min. The method is based on extraction with ethyl acetate at a sample-to-solvent ratio of 1:1, followed by selective dispersive SPE cleanup for grapes and wine. The GC/MS conditions were optimized for chromatographic separation and to achieve the highest S/N for all 160 target analytes, including temperature-sensitive compounds, such as captan and captafol, that are prone to degradation during analysis. An average recovery of 80-120% with RSD < 10% could be attained for all analytes except 17, for which the average recoveries were 70-80%. LOQ ranged within 10-50 ng/g, with < 25% expanded uncertainties, for 155 compounds in grapes and 151 in wine. In the incurred grape and wine samples, residues of buprofezin, chlorpyriphos, metalaxyl, and myclobutanil were detected, with an RSD of < 5% (n = 6); the results were statistically similar to those of previously reported validated methods.
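Recovery and RSD figures like those above are computed from replicate spiked-sample results: mean recovery is the mean measured concentration as a percentage of the spiked level, and %RSD is the standard deviation over the mean. A sketch with invented replicate data:

```python
import statistics

def recovery_and_rsd(measured, spiked_level):
    # Mean recovery (%) relative to the spiked concentration, and
    # relative standard deviation (%RSD) of the replicate measurements.
    mean_measured = statistics.mean(measured)
    mean_recovery = 100.0 * mean_measured / spiked_level
    rsd = 100.0 * statistics.stdev(measured) / mean_measured
    return mean_recovery, rsd

# Hypothetical replicate results (ng/g) for a 50 ng/g spike, n = 6
replicates = [47.0, 49.5, 48.2, 50.1, 46.8, 48.9]
recovery, rsd = recovery_and_rsd(replicates, 50.0)
```

Acceptance windows like the 80-120% recovery and RSD < 10% cited above are then applied per analyte to decide which compounds pass validation.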
